Inaccurate images generated by AI chatbot were ‘unacceptable’, says Google boss

The tech giant said it had been working ‘around the clock’ to address the issue.
Martyn Landi, 28 February 2024

The historically inaccurate images generated by Google’s Gemini AI chatbot were “unacceptable”, chief executive Sundar Pichai has said in a memo to staff.

Last week, users of Gemini began flagging that the chatbot was generating images showing a range of ethnicities and genders, even when doing so was historically inaccurate – for example, prompts to generate images of certain historical figures, such as the US founding fathers, returned images depicting women and people of colour.

Some critics accused Google of anti-white bias, while others suggested the company had over-corrected in response to longstanding concerns about racial bias in AI technology, which had previously seen facial recognition software struggle to recognise, or mislabel, black faces, and voice recognition services fail to understand accented English.

Following the Gemini image generation incident, Google apologised, paused the image tool and said it was working to fix it.

But issues were then also flagged with some text responses, including one incident in which Gemini said there was “no right or wrong answer” to a question equating Elon Musk’s influence on society with Adolf Hitler’s.

Now Mr Pichai has addressed the issue with staff for the first time and promised changes.

In his memo, Mr Pichai said the image and text responses were “problematic” and that Google had been working “around the clock” to address the issue.

“I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” he said.

“No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.”

He said Google had “always sought to give users helpful, accurate and unbiased information” in its products and this was why “people trust them”.

“This has to be our approach for all our products, including our emerging AI products”, he added.

Going forward, Mr Pichai said “necessary changes” would be made inside the company to prevent similar issues from recurring.

“We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals (sic) and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes,” he said.
