Digital Watermarking: Enhancing AI Transparency and Supporting Good Actors
In an era of deepfakes, bot-generated books, and AI images created in the style of famous artists, digital watermarks have been hailed as a potential solution for identifying AI-generated content and improving AI transparency.
Recently, several leading AI companies pledged to President Biden that they would take steps to enhance AI safety, including the use of watermarking technology. In August, Google DeepMind introduced a beta version of SynthID, a tool that embeds an imperceptible digital watermark into AI-generated images so they can later be identified as such.
However, researchers have shown that current digital watermarks, whether visible or invisible, are not foolproof against bad actors. A computer science professor at the University of Maryland reported that existing watermarking methods can be easily bypassed: attackers can strip watermarks out, or even add them to human-created images, producing false positives.
Despite these challenges, digital watermarks hold significant value for enabling and supporting good actors. Margaret Mitchell, a computer scientist and AI ethics researcher at Hugging Face, emphasized that digital watermarks can serve as a kind of embedded 'nutrition label' for AI content. Provenance, the lineage of AI-generated content, is crucial for tracking consent, credit, and compensation, and for understanding what went into a model.
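As a rough illustration of what such a 'nutrition label' might contain, a provenance record could bundle exactly the kinds of fields Mitchell describes. The schema below is a minimal sketch; the field names are hypothetical and not drawn from any published standard such as C2PA Content Credentials:

```python
import json

# Hypothetical provenance "nutrition label" for a generated image.
# Field names are illustrative only; real systems define their own schemas.
provenance = {
    "model": "example-image-model-v1",               # which system produced the content
    "training_data_consent": "licensed",             # consent status of training inputs
    "credit": ["Example Artist"],                    # who should be credited
    "compensation": "royalty pool",                  # how contributors are compensated
    "inputs": {"prompt": "a watercolor landscape"},  # what went into the model
    "created": "2023-10-01T12:00:00Z",
}

print(json.dumps(provenance, indent=2))
```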
Mitchell believes that while digital watermarks may not work for all users, they have a lot to offer the majority of users. The subset of bad actors who possess the technical know-how to manipulate watermarks is relatively small compared to the larger user base that benefits from watermarking technology.
Hugging Face, an open-access AI platform, has introduced new functions in collaboration with Truepic, a provider of authenticity infrastructure. These functions allow users to automatically add responsible provenance metadata to AI-generated images. By combining provenance credentials and invisible watermarking, users can further enhance the authenticity and traceability of AI content.
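A highly simplified sketch of that combination, attaching a provenance record to an image file and hiding a small invisible payload in its pixels, might look like the following. This is not Truepic's or SynthID's actual technique; the helper names, the metadata key, and the least-significant-bit scheme are all assumptions for illustration only:

```python
# Minimal sketch: store provenance as a PNG text chunk and embed a short
# invisible payload in the least-significant bits of the pixel data.
# All helpers and field names here are hypothetical.

import json
import numpy as np
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def attach_provenance(img: Image.Image, record: dict, out_path: str) -> None:
    """Save the image with a provenance record stored as a PNG text chunk."""
    meta = PngInfo()
    meta.add_text("ai_provenance", json.dumps(record))
    img.save(out_path, pnginfo=meta)


def embed_lsb(img: Image.Image, payload: bytes) -> Image.Image:
    """Hide a byte payload in the least-significant bit of each pixel channel."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    pixels = np.array(img.convert("RGB"), dtype=np.uint8)
    flat = pixels.reshape(-1)
    if bits.size > flat.size:
        raise ValueError("payload too large for image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return Image.fromarray(flat.reshape(pixels.shape))


if __name__ == "__main__":
    record = {"generator": "example-diffusion-model",   # hypothetical label fields
              "created": "2023-10-01T00:00:00Z"}
    img = Image.new("RGB", (256, 256), "white")          # stand-in for a generated image
    watermarked = embed_lsb(img, b"WM1")                  # invisible in-pixel payload
    attach_provenance(watermarked, record, "labeled.png") # tool-readable metadata
```

Production systems use cryptographically signed credentials and far more robust, tamper-resistant watermarking schemes, but the sketch shows the two layers, tool-readable metadata and an in-pixel signal, working together.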
While some may view watermarking tools as a small step in addressing the ocean of AI-generated content, Mitchell sees it as a significant advancement. Watermarking has garnered consensus among experts in both AI ethics and AI safety, which is evident from its inclusion in the White House’s voluntary commitments.
In conclusion, digital watermarking is a valuable tool for improving AI transparency, enabling provenance tracking, and supporting good actors in the AI landscape. Despite its limitations, the broad recognition and adoption of watermarking technology suggest it will play a growing role in how AI-generated content is verified and authenticated.