
Watermarking Can Help Track AI-Generated Content, Build Trust

As technologies evolve, AI-created text, images, videos and audio are often indistinguishable from those created by humans.

Watermarking allows developers to control the watermark encryption, ensuring better content detection and authenticity of AI-generated content. (Source: rawpixel.com/Freepik)

As artificial intelligence technologies become more advanced, concerns are growing about the source and authenticity of digital content. There is a need to distinguish AI-generated content from human-created material so that consumers can tell the two apart and make informed decisions, according to a new report by EY and the Federation of Indian Chambers of Commerce & Industry (FICCI).

It has become increasingly challenging to differentiate between human-created and machine-generated content, leading to potential issues such as misinformation, copyright infringement and a loss of credibility in digital content. According to the report, titled "Identifying AI Generated Content in the Digital Age: The Role of Watermarking", AI content detection tools such as watermarking are essential for establishing authenticity and maintaining the integrity of AI-created content.

Key Issues In AI-Generated Content

As technologies evolve, AI-created text, images, videos and audio are often indistinguishable from those created by humans. There is growing concern about the authenticity and source of such content, which can give rise to deepfakes, copyright infringement, fake news, social manipulation and false attribution, especially where no content detection mechanism exists.

AI-generated content is significantly influencing public decisions and opinions, the report said. 


Building Trust In AI With Watermarking

One approach to establishing authenticity and building an AI content detection mechanism is watermarking. It allows developers to embed encrypted watermarks in content created by AI, supporting detection and verification of authenticity, and fostering an ecosystem of accountability, transparency and reliability that builds trust in AI systems.
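To make the idea concrete, the following is a minimal, hypothetical sketch in Python, not drawn from the report: it appends a key-dependent HMAC tag, hidden in zero-width Unicode characters, to a piece of generated text, and a verifier holding the same key can later check whether the text and its tag still match. The key, function names and encoding choice are illustrative only; production watermarking schemes are considerably more robust against tampering and reformatting.

```python
# Toy text-watermarking sketch (illustrative only, not from the EY-FICCI report).
# The watermark is an HMAC-SHA256 tag of the visible text, hidden in
# zero-width Unicode characters appended to the end of the string.
import hmac
import hashlib

ZERO = "\u200b"  # zero-width space encodes bit 0
ONE = "\u200c"   # zero-width non-joiner encodes bit 1

def _tag_bits(text: str, key: bytes) -> str:
    """Return the HMAC-SHA256 tag of the visible text as a bit string."""
    digest = hmac.new(key, text.encode("utf-8"), hashlib.sha256).digest()
    return "".join(f"{byte:08b}" for byte in digest)

def embed(text: str, key: bytes) -> str:
    """Append an invisible, key-dependent tag to the generated text."""
    hidden = "".join(ONE if bit == "1" else ZERO for bit in _tag_bits(text, key))
    return text + hidden

def verify(marked: str, key: bytes) -> bool:
    """Strip the invisible tail and check it matches the recomputed tag."""
    visible = marked.rstrip(ZERO + ONE)
    hidden = marked[len(visible):]
    bits = "".join("1" if ch == ONE else "0" for ch in hidden)
    return hmac.compare_digest(bits, _tag_bits(visible, key))

if __name__ == "__main__":
    key = b"shared-secret"  # hypothetical key held by the AI developer
    out = embed("This paragraph was produced by a generative model.", key)
    print(verify(out, key))             # True: watermark intact
    print(verify(out + "edited", key))  # False: tampering breaks the tag
```

A scheme like this only demonstrates the accountability loop the report describes; real deployments would need watermarks that survive copying, paraphrasing and format conversion, which is precisely the robustness challenge the report highlights.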

Various governments around the world have recognised the potential of watermarking technologies, and India can similarly shape a domestic digital ecosystem that is secure, transparent and trustworthy, the report suggested. 

“Robust watermarking must resist tampering, and detection systems should maintain low false-positive rates while functioning across different gen AI platforms. India aims to lead in this area by promoting the development and adoption of advanced watermarking techniques to ensure secure and authentic AI content creation,” said Rajnish Gupta, partner, Tax and Economic Policy Group, EY India.

Need For Tech Advancements And Frameworks

Global leaders have acknowledged the need for frameworks and have initiated watermarking and other content detection measures. According to the report, policies are being formulated to promote the adoption of watermarking and to establish global standards for its implementation. Developing open-source tools and digital public infrastructure to support watermarking initiatives is also important.

The report noted that India and other countries need to strengthen frameworks, ensuring legal recognition and governance for digital watermarks across jurisdictions. Ethical implications should also be considered, striking a balance between protecting content and respecting individual privacy.

"As generative AI reshapes the digital landscape, we as a nation must come together and establish robust safeguards to trace content origins, ensuring transparency and trust in the AI-driven world," said Jyoti Vij, director general, FICCI.
