Gemini Is Biased (and Lies About It)

What does the future of AI look like if it’s not committed to telling the truth?


John Stonestreet

Jared Hayden

Google continues to face criticism for the implicit bias evident in its AI platform, Gemini. When asked to create images of Vikings or nineteenth-century U.S. senators, Gemini generated images of people of all races except the ones that actual Vikings and nineteenth-century U.S. senators were.

When asked by a reporter to generate content related to the abortion debate, Gemini refused to produce anything arguing against abortion and for life, claiming it could not generate images or content on “sensitive or controversial topics.” Yet when asked to depict pro-abortion rallies or to praise pro-abortion writers, Gemini quickly generated the requested content.

Not only is Gemini biased, but it also lies about being biased. And the problem goes beyond programming. Gemini’s flubs reveal that AI is not necessarily “intelligent” or committed to telling the truth.

For more about the perils and promise of AI, watch the latest Breakpoint Forum at the Colson Center’s YouTube channel. 

