What is the AI News project?
The project will start with collecting AI-related articles posted elsewhere and slowly move into personal analysis and opinion articles.
Start date: 25/7/2023
AI NEWS ARTICLES
Watermarks on AI-created content to increase safety
Source: Reuters
Short Comment: Watermarking to distinguish AI-generated content is a feature that likely should have been implemented before the technology was released into the wild. The watermarking feature should be:
- hidden in all content
- not optional
- as impervious to manipulation as possible
Blockchain technology could likely have been used to achieve this; a rough sketch of the idea follows this comment.
It should also follow a common format, much as IP addresses are used worldwide to identify internet users, so that any AI-created content can be traced as such by software bots and humans alike and its source can be located.
A second, easily visible watermark should also be offered so that users know that what they are consuming is AI-generated.
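To make this concrete, here is a minimal sketch of such a mandatory, traceable registration scheme, assuming a blockchain-style append-only log. All names here (Record, register_content, verify_content, the in-memory ledger) are hypothetical and purely illustrative; no real provider works this way.

```python
# Toy sketch: every generation call registers a tamper-evident
# fingerprint of the output in an append-only, chained log.
import hashlib
import json
import time
from dataclasses import dataclass

@dataclass
class Record:
    content_hash: str  # SHA-256 fingerprint of the generated content
    model_id: str      # identifies the source model, as an IP identifies a host
    timestamp: float
    prev_hash: str     # hash of the previous record, chaining the log together

ledger: list[Record] = []  # stand-in for a distributed ledger

def _record_hash(record: Record) -> str:
    payload = json.dumps(record.__dict__, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def register_content(content: bytes, model_id: str) -> Record:
    """Runs automatically at generation time; the user cannot opt out."""
    prev = _record_hash(ledger[-1]) if ledger else "genesis"
    record = Record(hashlib.sha256(content).hexdigest(), model_id, time.time(), prev)
    ledger.append(record)
    return record

def verify_content(content: bytes) -> Record | None:
    """Lets any human or crawler check whether content was registered as AI-made."""
    fingerprint = hashlib.sha256(content).hexdigest()
    return next((r for r in ledger if r.content_hash == fingerprint), None)

# The provider registers output as it is generated...
output = b"an AI-written paragraph"
register_content(output, model_id="example-model-v1")
# ...and a downstream reader can later trace it back to its source.
assert verify_content(output) is not None
assert verify_content(b"human-written text") is None
```

One caveat the sketch exposes: a plain hash match fails as soon as the content is edited even slightly, which is why a watermark embedded inside the content itself is the more robust half of the scheme.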
Google will put invisible watermarks on AI-generated images
Source: Interesting Engineering
Short Comment: A tool that detects and watermarks AI-generated content would be useful. Nevertheless, it is a tool created to address the problem of AI content not being easily identifiable as such.
Usually, such tools spring up only after a new tool or service has created a problem, for example, after AI-generated content had been used to spread false, malicious information. In this case, detection and watermarking tools are being offered almost simultaneously with the new service (the AI models), before any major case of manipulation has been publicly acknowledged and criticized.
Problem: These tools are made by the same companies that created the generative AI models, and they are not foolproof.
A watermark that is applied automatically whenever an AI model is used, that is mandatory, and that cannot be manipulated would offer better protection against misuse. In theory, the companies that released generative AI models into the wild could have bundled such a function with the initial models; a toy illustration follows below.
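As a toy illustration of what an invisible, machine-detectable image watermark does, the sketch below hides a marker string in the least significant bits of the pixels. This is a deliberately simplistic stand-in, assuming nothing about Google's actual method, which is proprietary and far more robust; the marker text here is made up.

```python
# Toy invisible watermark: hide a marker in the least significant
# bit (LSB) of each pixel, imperceptible to the eye but readable by software.
import numpy as np

def embed(pixels: np.ndarray, message: str) -> np.ndarray:
    """Hide `message` in the lowest bit of the first len(message)*8 pixels."""
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    flat = pixels.flatten()  # flatten() returns a copy, the original stays intact
    if bits.size > flat.size:
        raise ValueError("image too small to hold the message")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite the lowest bit
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, length: int) -> str:
    """Read `length` characters back out of the lowest bits."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode()

# Demo on a random 64x64 grayscale "image".
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed(image, "AI-GENERATED")
assert extract(marked, len("AI-GENERATED")) == "AI-GENERATED"
# Invisible in practice: no pixel value changed by more than 1.
assert int(np.abs(marked.astype(int) - image.astype(int)).max()) <= 1
```

An LSB mark like this is trivially destroyed by re-encoding, resizing, or cropping, which illustrates why the "as impervious to manipulation as possible" requirement above is the hard part.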
It was also very predictable that companies would prefer to safeguard their proprietary information and would therefore shy away from using those models. Now, of course, AI companies are offering that option (keeping customer data out of training) too.
So why did the companies offering AI tools fail to offer either feature with their initial models?
Were they hoping that mass usage would give them access to a bottomless well of information, known or secret?
Or were they just excited to share this cool new thing they made?
Is enforced regulation the only way to get such a universal, mandatory watermarking feature?
Would there be any downsides to such regulation?