Artificial intelligence-powered image creation tools, including those developed by OpenAI and Microsoft, have been identified by researchers as potential sources of election-related disinformation. Despite policies against misleading content, these tools can generate images that could fuel false narratives about elections. The Center for Countering Digital Hate (CCDH) tested several generative AI tools and found that AI-generated images could accelerate the spread of false claims, particularly ahead of the U.S. presidential election in November.
Testing AI Image Generation Tools
CCDH used several AI image generation tools, including OpenAI's ChatGPT Plus and Microsoft's Image Creator, to assess how easily they could be prompted to produce misleading content. The report highlights how readily these tools can be manipulated into creating images depicting scenarios such as President Joe Biden in a hospital bed or election workers destroying voting machines. Such images, if presented as genuine, could significantly undermine public confidence in elections.
Vulnerabilities and Policy Updates
According to the report, the AI tools tested by CCDH showed vulnerabilities, particularly when prompted to create images related to election fraud. Some tools blocked prompts involving candidates such as Biden and former President Donald Trump, but others, notably Midjourney, were more prone to generating misleading images. CCDH also noted that certain Midjourney images are publicly available, raising concerns about their potential misuse for political disinformation.
Industry Response and Mitigation Efforts
Following the report's findings, efforts are underway to address the misuse of AI-generated content. Some companies, including Midjourney and Stability AI, have pledged to update their policies to prohibit the creation or promotion of disinformation. Midjourney's founder pointed to forthcoming updates aimed at addressing concerns related to the upcoming U.S. election, and Stability AI emphasized its commitment to preventing fraud and disinformation through policy enhancements. OpenAI said it is working to prevent abuse of its tools, while Microsoft did not provide a comment in response to the report.
As the use of AI technology continues to evolve, the challenge of combating misinformation and safeguarding the integrity of elections remains paramount. Collaborative efforts between tech companies, researchers, and policymakers are essential to mitigate the risks posed by AI-generated content and uphold the democratic process.