In a year filled with significant global elections, notably in India, South Korea, and the United States, Microsoft’s Threat Analysis team has issued a warning about China’s use of AI-generated content. The team assesses that China aims to leverage artificial intelligence to create and disseminate content strategically designed to advance its geopolitical interests.
Although this AI-generated content has so far had little measurable impact on election results, China is expected to keep using it, and its fabricated memes, videos, and audio are becoming more convincing and could prove more influential over time.
Microsoft also reported that China is operating fake social media accounts that poll users on the issues that divide them most, with the aim of deepening those divisions and potentially swaying the outcome of the US presidential election in its favor.
In a recent blog post, Microsoft stated, “China has also increased its use of AI-generated content to further its goals around the world. North Korea has increased its cryptocurrency heists and supply chain attacks to fund and further its military goals and intelligence collection. It has also begun to use AI to make its operations more effective and efficient.”
The company also highlighted the tactics of Chinese Communist Party (CCP)-affiliated actors, noting, “Deceptive social media accounts by CCP-affiliated actors have already started to pose contentious questions on controversial US domestic issues to better understand the key issues that divide US voters.”
Microsoft has raised concerns that the purpose behind the proliferation of deceptive social media practices may be to “gather intelligence and precision on key voting demographics ahead of the US presidential election,” as stated in their warning.
While China’s geopolitical aims remain consistent, the country has intensified its focus and increased the sophistication of its influence operations (IO). Notably, during Taiwan’s presidential election in January, there was a significant uptick in AI-generated content deployed by China-affiliated actors.
Highlighting a new trend in digital interference, the Microsoft Threat Intelligence team noted, “This was the first time that Microsoft Threat Intelligence has witnessed a nation-state actor using AI content in attempts to influence a foreign election.”