Google has temporarily suspended its Gemini AI's ability to generate images of people. The decision follows the discovery of historical inaccuracies in images produced by the model.

The problem came to light in recent days, when users reported on social media that Gemini was producing images of historical figures with ethnicities and genders that did not match the real people. This raised concerns that artificial intelligence could distort historical reality.

“We are aware that Gemini has some inaccuracies in the depiction of historical figures,” Google said in a statement. “For this reason, we have decided to temporarily pause the generation of images of people and will implement improvements to ensure greater historical accuracy.”

This move is a new stumbling block for Google in the AI race. In recent months, the company has been trying to catch up with rivals OpenAI and Microsoft, launching Bard, a chatbot based on generative AI, and later rebranding it as Gemini. However, Bard's launch was marred by inaccurate information presented in a promotional video.

The company says it is working to “further adjust” Gemini to account for the nuances of historical contexts and ensure a more faithful representation of reality. It is not yet clear when the generation of images of people will be available again; for now, Google says only that it will update users “as soon as possible.”

This case raises important questions about ethics and responsibility in the use of artificial intelligence. It underscores the need to develop and deploy these technologies responsibly, without distorting reality or perpetuating harmful stereotypes.