Google makes its AI text-watermarking technology, SynthID, open-source for AI detection
Going online these days means wading through human chatter, bot activity, and an overwhelming amount of AI-generated content, and it's becoming increasingly difficult to tell the three apart.
Google DeepMind has recognized this problem and decided to make its AI watermarking tool, SynthID, open-source, making it easier to detect which text is AI-generated and which isn't.
SynthID was already integrated into Gemini earlier this year. By open-sourcing it through the Google Responsible Generative AI Toolkit, Google gives developers access to the technology, enabling easy integration into their own services and promoting more responsible use of AI.
SynthID operates as part of the text-generation process itself. When integrated into a generative AI model, every piece of text that model produces carries an invisible digital watermark, embedded without compromising the quality or accuracy of the output. On the detection side, the system looks for these watermarks to determine whether a given text was generated by AI.
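SynthID's actual algorithm is more sophisticated than can be shown here, but the general idea of generation-time watermarking can be illustrated with a simplified sketch. The scheme below (a toy "green list" approach, not Google's implementation; all function names and parameters are hypothetical) biases each token choice toward a pseudo-random subset of the vocabulary seeded by the preceding token, and detection then measures how often that bias appears:

```python
import hashlib
import random

def greenlist(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Derive a pseudo-random 'green' subset of the vocabulary, seeded
    deterministically by the previous token (illustrative simplification)."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(vocab) * fraction)
    return set(rng.sample(vocab, k))

def detect_score(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens that fall in their predecessor's green list.
    Watermarked text scores well above the ~0.5 expected by chance."""
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in greenlist(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)
```

Because the watermark lives in the statistics of word choices rather than in visible characters, readers can't see it, but it also explains why heavy rewriting or translation, which replaces those word choices, can wash it out.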
This makes it a welcome addition to ongoing efforts to curb the misinformation and disinformation being pushed out via AI.
But Google made it clear that SynthID is not a "silver bullet," meaning it won't be accurate all the time. As The Verge pointed out, the system has trouble detecting content that has been generated and then rewritten, since rewriting strips away the watermark. The same goes for text that has been translated.
Even so, this remains a solid step forward in addressing the issues raised by AI-generated content. AI is here to stay, so the ability to tell what is AI-generated and what isn't could at least help people make more informed decisions.
For now, this feature is limited to text detection. While SynthID can also detect AI-generated images and videos, it’s unclear if that part of the system will be publicly available soon.
Many social media platforms have been pushing back against the volume of bots and AI-generated content on their services, and this tool could help them improve detection.
Meta recently announced improvements to its AI labelling system after it came under criticism for labelling lightly edited photos as AI-generated. TikTok also reaffirmed its efforts to crack down on AI content, showing how much companies are striving to make their platforms AI-free, or at least "AI-responsible."