Techpadi

Gemini AI’s ability to generate images of people paused after inaccuracies

Google has announced it is temporarily pausing the ability of its new AI system, Gemini, to generate images of people. The decision comes after users highlighted inaccuracies with Gemini’s depictions of certain historical figures and groups.

Gemini is Google’s latest generative AI tool, introduced earlier this month as a competitor to ChatGPT and other AI systems. A key feature of Gemini is its ability to create unique images based on text prompts provided by the user.

However, some users began testing the limits of this feature by asking Gemini to generate images of specific figures from history. In many cases, the AI produced depictions that were highly inaccurate and reflected modern diversity rather than historical reality.

For example, when prompted to create an image of a 1943 German soldier, Gemini generated soldiers of various non-white ethnicities wearing Nazi uniforms. Other requests yielded Founding Fathers and Popes depicted as people of color.

The images spread rapidly on social media, drawing criticism and mockery over Gemini’s apparent disregard for historical accuracy. In response, Google acknowledged the AI’s shortcomings in properly depicting certain groups and time periods.

In a statement, the company announced it is pausing Gemini’s ability to generate images of people while engineers work quickly to improve the technology. Google stated it aims to “re-release an improved version soon” that will hopefully avoid the same pitfalls.

Experts note that AI systems like Gemini are only as good as their training data. If the data lacks diversity or contains biases, the AI will replicate and amplify those issues. Google likely tried to add diversity to Gemini’s dataset, but failed to account for appropriate historical context.


The controversy comes shortly after Google rebranded its AI chatbot from Bard to Gemini in an effort to compete with ChatGPT’s exploding popularity. However, this apparent lack of historical awareness presents another early stumble for Google in the AI space.

For now, Gemini will refuse all requests to generate images of people, though it remains functional for image prompts not involving humans. Google has not provided an exact timeline for when it will restore the feature after addressing the underlying problems.

As AI becomes more of a household utility, debates over how to balance diversity in generated imagery with historical accuracy are expected to arise again and again. Gemini’s errors highlight the difficulty tech companies face in programming societal nuance into AI systems.
