The integration of Generative Artificial Intelligence (Gen-AI) into a critical search platform like Google enables users to get summaries from image searches through the Google Lens feature.
This feature, similar to Google Reverse Image Search, has been a critical resource, especially for fact-checkers and Media and Information Literacy (MIL) enthusiasts, as it can trace the origins of videos and images and provide context about when the content was first shared online.
Thanks to Google’s Gen-AI feature, AI Overviews, users of Google Lens get summaries of their image searches with related links.
“AI Overviews provide a snapshot of key information about a topic or question with links so you can easily explore more on the web,” Google says of the feature.
In the midst of the August 6 helicopter crash in Ghana, GhanaFact’s analysis of misinformation around the incident brought to light a classic case of AI hallucination.
AI hallucination – What is it?
AI hallucinations, according to Google Cloud, “are incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.”
“AI hallucinations can be a problem for AI systems that are used to make important decisions, such as medical diagnoses or financial trading,” the definition added.
Hallucinations happen because, unlike Google Search, which retrieves information from the web, Large Language Models (LLMs) do not look up information at all. Instead, they predict which words are most likely to come next based on the user’s input.
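To make that distinction concrete, here is a minimal, purely illustrative Python sketch; the toy vocabulary and probabilities below are invented for this example and are not drawn from any real system. A retrieval system either returns a stored record or nothing, while a predictive model always produces the most probable next word, whether or not it matches a fact.

```python
# Toy illustration: retrieval vs. next-word prediction.
# All records and probabilities here are invented for demonstration.

# A retrieval system (like a search index) either finds a stored
# record or returns nothing; it cannot invent an answer.
index = {
    "bonsukrom 2024 crash": "Image posted by GNFS on X on March 20, 2024",
}

def retrieve(query: str):
    # Returns the stored record, or None when no record exists.
    return index.get(query)

# A language model, by contrast, emits the statistically most likely
# next word given the words so far, whether or not it is true.
next_word_probs = {
    ("helicopter", "crashed", "in"): {"Adansi": 0.6, "Bonsukrom": 0.4},
}

def predict_next(context: tuple):
    # Greedy decoding: pick the highest-probability continuation.
    probs = next_word_probs[context]
    return max(probs, key=probs.get)

print(retrieve("adansi 2025 crash"))                  # None: no stored fact
print(predict_next(("helicopter", "crashed", "in")))  # "Adansi": fluent, not verified
```

The point of the sketch is that a purely predictive model has no built-in way to say "I don't know"; it will confidently complete the sentence either way.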
AI Overview misfires on the August 6 helicopter crash image
In a GhanaFact report busting misinformation around the August 6 crash, we observed the spread of an image of a crashed Ghana Air Force helicopter lying upside down in a bushy enclave, covered in a foamy substance, with a military officer on site.
Several social media users, especially on Facebook (here, here, here, and here), circulated the image with the narrative that it was from the crash site. Some international websites (here, here) have since used it as the featured image on reports about the incident in Ghana.
A GhanaFact report revealed that the image was initially posted online by the Ghana National Fire Service (GNFS) on X on March 20, 2024, from a helicopter crash scene at Bonsukrom, Agona Nkwanta in the Western Region.
Google Lens, however, via the AI Overview feature, presented false information, linking the 2024 image to the recent crash of August 6, 2025.
The three false claims associated with the image were:
- The image shows a Ghana Air Force helicopter that has reportedly crashed in the Adansi Akrofuom District of the Ashanti Region.
- Preliminary reports suggest some individuals on board sustained injuries.
- There was no official statement from the army or relevant authorities.
All the points gleaned from the image were linked to the Modern Ghana news publication, which had recycled the image to portray scenes from the August 6 crash without any clarification or clear labelling of the image’s original source.
Full Fact speaks to Google rep about AI hallucination
UK-based Full Fact, in a report on Google AI’s misleading outputs, spoke to a representative of the global tech giant who said the AI’s “search results surface web sources and social media posts that combine the visual match with false information, which then informs the AI overview…”
“We aim to surface relevant, high-quality information in all our Search features, and we continue to raise the bar for quality with ongoing updates and improvements. When issues arise—like if our features misinterpret web content or miss some context—we use those examples to improve and take appropriate action under our policies,” the Google rep added.
What Google says about AI platforms and mistakes
According to Google support, AI platforms can make mistakes at two levels, hence the need to treat their output with caution.
“Because generative AI is experimental and a work in progress, it can and will make mistakes: It may make things up. When generative AI invents an answer, it’s called a hallucination,” the guidance reads. It adds: “It may misunderstand things. Sometimes, generative AI products misinterpret language, which changes the meaning.”
Google also encourages users to evaluate responses they get from AI with the following steps:
- Think critically about the responses you get from generative AI tools.
- Use Google and other resources to check information that’s presented as fact.
- If you come across something that isn’t right, report it.
“Many of our generative AI products have reporting tools. Your feedback helps us refine the models to improve generative AI experiences for everyone,” Google support added.
Conclusion
Google’s public concession that its AI platforms can and will make mistakes deepens the urgency of fact-checking and media literacy education.
Many users of the platforms will likely never notice the disclaimer under AI Overviews, which reads, “AI responses may include mistakes.” That leaves them exposed to potentially false and misleading information, with all the consequences that come with it.