Google Gemini AI Halts Image Generation Amid Accusations of Racism


Google introduced a new chatbot named Gemini on February 8th, which replaced its previous chatbot, Bard. Gemini was touted to offer enhanced capabilities compared to its predecessor.

Google has opted to temporarily suspend the Gemini image generator’s ability to create images of people. This decision follows criticisms regarding the program’s misrepresentation of individuals’ races within historical contexts, leading to discussions about the technology’s impact on shaping perceptions of history and diversity.

Billionaire Elon Musk has entered the fray, criticizing Google’s AI tool as “woke” and “racist” amidst the controversy. The tool’s struggle to navigate the complexities of balancing inclusivity with historical accuracy has been brought to the forefront of discussions.


Google moved quickly to disable the generation of images of people in Gemini and promised an improved version in the future, signaling that the company recognizes the model’s problems and intends to improve the accuracy and fairness of its AI technologies.

Jack Krawczyk, Google’s product director for Gemini Experiences, further confirmed the challenges faced by the chatbot in a post on X. He acknowledged that the chatbot was encountering difficulties, particularly in “offering inaccuracies in some historical image generation depictions.” This acknowledgment demonstrates Google’s transparency about the limitations and ongoing efforts to refine the technology to deliver more reliable results, especially in contexts as sensitive as historical depictions.


Like Bard before it, Gemini was positioned as a direct competitor to OpenAI’s ChatGPT, and it can also generate images, much as OpenAI’s own tools do.

However, shortly after its launch, users began to notice inaccuracies in the images generated by Gemini, particularly in historical contexts. For instance, when prompted for a “US senator from the 1800s,” Gemini produced images of black women, even though the first black woman senator was not elected until 1992. Similarly, it depicted women and black men in World War II German uniforms, another historical inaccuracy.

The Verge reported on these discrepancies, and Google acknowledged that the images generated by Gemini were indeed misleading. This highlights the challenges that Gemini faced in accurately representing historical figures and contexts, despite its capabilities in image generation.

The criticism faced by Gemini marks the first time an AI tool has been called out for depicting too few white people, an unusual inversion of the bias typically attributed to image generators. Other tools, such as Stable Diffusion XL, have been accused of producing predominantly images of white people when prompted to depict “attractive” or “productive” individuals, as reported by the Washington Post, while prompts related to social services yielded mostly people of color. Together, these cases illustrate how bias in AI image generation can cut in more than one direction.
