AI Watermarking
According to TechTarget, AI watermarking embeds a "unique, recognizable signal" into the output of an AI model, such as text or images.
In general, a watermark should be both:
- Invisible to the naked eye
- Extractable using specialized software or algorithms
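To make those two properties concrete, here is a minimal sketch of embedding and extracting an invisible payload by rewriting the least-significant bit (LSB) of each pixel channel. This is a toy illustration under assumed parameters (the payload, cover image, and bit layout are all made up), not any production watermarking scheme.

```python
# Toy "invisible but extractable" watermark via least-significant bits.
# Illustrative only: the payload and cover image are made-up assumptions.
import numpy as np
from PIL import Image

def embed(img: Image.Image, payload: bytes) -> Image.Image:
    """Hide payload bits in the LSB of each pixel channel."""
    arr = np.array(img, dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = arr.ravel()
    if bits.size > flat.size:
        raise ValueError("payload too large for this image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return Image.fromarray(arr)

def extract(img: Image.Image, n_bytes: int) -> bytes:
    """Read the first n_bytes of the hidden payload back out."""
    flat = np.array(img, dtype=np.uint8).ravel()
    return np.packbits(flat[: n_bytes * 8] & 1).tobytes()

# Demo on a random RGB image so the snippet is self-contained.
rng = np.random.default_rng(0)
cover = Image.fromarray(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))
marked = embed(cover, b"AI-GENERATED")
print(extract(marked, 12))  # b'AI-GENERATED'
```

Flipping an LSB changes a pixel value by at most 1, which is invisible to the naked eye, while `extract` recovers the payload exactly: the two properties above. Note the fragility, though: LSB data survives lossless formats like PNG but is destroyed by JPEG compression or resizing, which is one reason deployed systems spread the signal statistically instead.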
The technique is relatively new, but it is attracting interest as people want to know whether images are genuine or AI-generated.
Benefits of AI watermarking include:
- Preventing the spread of AI-generated misinformation
- Indicating authorship
- Establishing authenticity
- Helping online art communities (e.g., fan-art communities) spot AI-generated work passed off as original and call it out on social media
The primary current issues are a high false positive rate and the general difficulty of implementation. If the tool that generated the content did not embed a watermark in the first place, proving that something is AI-generated becomes an end-user problem, and third-party companies are stepping in with proprietary detectors of varying accuracy. Some students have already been burned by these: their papers were misidentified as AI-generated when they had simply written them themselves, as a human does.
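To see where the false positives come from, here is a simplified sketch of the statistical test behind many text-watermark detectors, in the spirit of the "green list" scheme of Kirchenbauer et al. (2023): a watermarked generator prefers a pseudorandom "green" half of the vocabulary, and the detector counts how often the text lands in it. The hashing rule, threshold, and sample text below are illustrative assumptions, not any real product's detector.

```python
# Sketch of a green-list watermark detector: a one-sided z-test on how
# many token pairs land in a pseudorandom "green" set. The partition
# rule and parameters are assumptions for illustration.
import hashlib
import math

GAMMA = 0.5  # assumed fraction of the vocabulary that is "green"

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly partition tokens, seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GAMMA

def z_score(tokens: list[str]) -> float:
    """How far the green count sits above the unwatermarked mean."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

text = "students write their papers normally as a human does".split()
print(f"z = {z_score(text):.2f}")  # flag as AI only if z exceeds a threshold
```

Human text hits green tokens about half the time by pure chance, so a short passage can cross a lax threshold by luck; that is exactly the false positive that snares the students above. Raising the threshold cuts false positives but lets lightly edited AI text slip through.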
Moreover, an MIT Technology Review article reported that researchers could spoof AI watermarks with roughly an 80% success rate and reverse-engineer them with roughly an 85% success rate, showing that these watermarks are vulnerable and need significantly more research before they can be relied on.
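As a rough illustration of what "spoofing" means here: an attacker who learns or infers the green/red partition can rewrite ordinary human text to inflate its detector score, so it is falsely attributed to a watermarked model. This sketch reuses `is_green` and `z_score` from the detector above; the synonym table is an invented assumption.

```python
# Hypothetical spoofing attack: bias human text toward "green" tokens so
# the detector above wrongly flags it as watermarked AI output.
def spoof(tokens: list[str], synonyms: dict[str, list[str]]) -> list[str]:
    """Greedily swap each token for a green alternative when one exists."""
    out = [tokens[0]]
    for tok in tokens[1:]:
        candidates = [tok] + synonyms.get(tok, [])
        # Keep the first candidate that lands in the green list.
        out.append(next((c for c in candidates if is_green(out[-1], c)), tok))
    return out

human = "the quick brown fox jumps over the lazy dog".split()
swaps = {"quick": ["fast", "speedy"], "jumps": ["leaps", "hops"],
         "lazy": ["idle", "sleepy"], "dog": ["hound", "pup"]}
print(z_score(human), z_score(spoof(human, swaps)))  # spoofed score is typically higher
```

Reverse engineering works in the other direction: recovering enough of the partition from watermarked samples to scrub the signal out of genuine AI text.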