Google Photos

Google is developing a new tool for Google Photos that will let users determine whether an image was created or altered using artificial intelligence. According to a recent Android Authority report, the photo-sharing and storage app will soon display tags indicating whether an image is AI-generated or digitally edited, an effort to curb the spread of deepfakes. The functionality was discovered in the current version of Google Photos (7.3), but it is not yet available to users.

According to the report, the feature will rely on new resource tags in the app's metadata, specifically identifiers such as "ai_info" and "digital_source_type," to indicate whether an image was generated by an AI tool and possibly to name the exact model used, such as Gemini or Midjourney.
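To illustrate how such tags might be surfaced, here is a minimal Python sketch. The tag names "ai_info" and "digital_source_type" come from the report above; the dictionary layout, the helper function, and the specific tag values are assumptions (the value names are borrowed from the IPTC Digital Source Type vocabulary, which defines terms like "trainedAlgorithmicMedia" for fully AI-generated media), not confirmed details of Google's implementation.

```python
# Hypothetical sketch: turning reported AI-provenance tags into a
# user-facing label. Tag names ("ai_info", "digital_source_type")
# are from the Android Authority report; everything else is assumed.

def describe_ai_provenance(metadata: dict) -> str:
    """Return a display label based on AI-related metadata tags."""
    ai_info = metadata.get("ai_info")          # e.g. the model credited
    source_type = metadata.get("digital_source_type")

    if ai_info is None and source_type is None:
        return "No AI information available"

    parts = []
    # Values below follow the IPTC Digital Source Type vocabulary.
    if source_type == "trainedAlgorithmicMedia":
        parts.append("AI-generated")
    elif source_type == "compositeWithTrainedAlgorithmicMedia":
        parts.append("Edited with AI")
    if ai_info:
        parts.append(f"created with {ai_info}")

    return ", ".join(parts) if parts else "Digitally edited"

# Example with a hypothetical AI-generated image's metadata:
sample = {"ai_info": "Gemini",
          "digital_source_type": "trainedAlgorithmicMedia"}
print(describe_ai_provenance(sample))  # → AI-generated, created with Gemini
```

A real implementation would read these tags from the image file's embedded metadata rather than a plain dictionary, but the lookup-and-label logic would be similar.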

This decision comes amid growing concerns about deepfakes, a type of digital manipulation that employs artificial intelligence to create realistic but misleading media. Deepfakes, which include photos, videos, and audio snippets, are frequently used to propagate misinformation or fool audiences. Recently, Bollywood actor Amitabh Bachchan sued a company for allegedly employing deepfake technology in commercials without his permission, highlighting the dangers of such manipulation.

Currently, it is unclear how Google intends to present this AI-related data. One option is to embed the information in the image's EXIF data, which would make it more tamper-resistant but less visible to users, who would have to inspect the image's metadata to see it. Alternatively, Google Photos might take a more direct approach, such as an on-image badge indicating AI involvement, similar to what