To counter the threat of deepfake videos, Google is running tests to identify the safety and security risks posed by emerging forms of AI-generated synthetic audio and video, commonly known as ‘synthetic media.’ While recognizing the technology’s useful applications, Google acknowledges concerns about its potential use in disinformation campaigns and other malicious activity, particularly through deepfakes that spread false narratives and manipulated content. Google plans to take up these risks with the Indian government at the upcoming Global Partnership on Artificial Intelligence (GPAI) Summit.
Google has introduced protective measures against fake images, including SynthID, a solution that embeds an imperceptible watermark and metadata labels into images created with Google’s text-to-image generator, Imagen, so they can later be identified as AI-generated. Additionally, Google uses a combination of machine learning and human reviewers to quickly identify and remove content that violates its guidelines, and is improving the accuracy of its moderation systems so they can respond more effectively to misleading or harmful visual content.
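SynthID’s pixel-level watermark is proprietary and not publicly documented, but the metadata-labeling half of such an approach can be illustrated with a minimal sketch. The Python snippet below is an illustrative assumption, not Google’s actual method: it writes and reads back a provenance tag in a PNG file using the Pillow library, and the tag names ("ai_generated", "generator") are hypothetical.

```python
# Illustrative sketch only: SynthID's real watermark lives in the image
# pixels and is proprietary. This shows the simpler metadata-labeling idea
# using standard PNG text chunks; the tag names are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Label a stand-in image with provenance metadata at save time.
image = Image.new("RGB", (64, 64), "white")  # stand-in for a generated image
metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-text-to-image-model")
image.save("labeled.png", pnginfo=metadata)

# A downstream checker can read the label back from the file.
reopened = Image.open("labeled.png")
print(reopened.text.get("ai_generated"))  # -> "true"
```

Metadata like this is trivially stripped by re-encoding or screenshotting the image, which is why watermarking schemes such as SynthID also embed an identifying signal into the pixels themselves.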
As part of its commitment to responsible AI development, Google is contributing $1 million in grants to the Indian Institute of Technology, Madras, to establish a multidisciplinary center for Responsible AI. This initiative aims to bring together researchers, domain experts, developers, community members, policymakers, and others to collaboratively ensure the responsible development and localization of AI in the Indian context.
For YouTube, Google is introducing disclosure requirements for creators who use altered or AI-generated content. Creators will be required to inform viewers by adding labels to the description panel and the video player, promoting transparency about the use of such content on the platform.