Alphabet’s Google recently announced plans to restrict the election-related queries for which its AI tools will generate responses in the run-up to the 2024 U.S. Presidential election. These restrictions, set to take effect by early 2024, will apply to Google’s chatbot Bard and its Search Generative Experience. The move aims to curb the potential spread of misinformation during critical political periods.
Notably, Google acknowledges the global significance of 2024, with major elections anticipated in countries including India and South Africa. In preparation, the tech giant has committed to an increased focus on the role of artificial intelligence (AI) in serving voters and election campaigns worldwide.
Following suit, Meta, the owner of Facebook, announced in November that it would prohibit political campaigns and advertisers in regulated industries from using its new generative AI advertising products. Additionally, advertisers on Meta platforms must disclose the use of AI or other digital methods in altering or creating political, social, or election-related advertisements on Facebook and Instagram.
By contrast, Elon Musk’s social media platform X, currently under investigation by the European Union, has reversed its previous global ban on political ads. As of August, X permits political advertising in the U.S. from candidates and political parties, and has expanded its safety and elections team ahead of the U.S. election.
Governments worldwide are increasingly focused on regulating AI due to concerns about its potential to spread misinformation. In response, the European Union is set to implement new rules requiring Big Tech firms to clearly label political advertising on their platforms, disclose who paid for it, and specify which elections are being targeted.
Collectively, these measures reflect the tech industry’s evolving approach to AI and political content, with companies taking proactive steps to mitigate potential risks and improve transparency online.