
Emerging Technology
OpenAI Moves To Improve GenAI Content Transparency While Calling For AI Safety Measures
Updated on Mon, May 13, 2024
With their impressive capabilities, GenAI tools make it easy for people to create content that can be used deceptively. For instance, users can generate fake images and try to pass them off as real ones.
However, OpenAI, one of the leading GenAI companies and arguably the most popular, is looking to solve this issue, as outlined in a recent announcement on its website.
So, what is the AI company planning to do? Let’s explore!
What Did OpenAI Announce?
In a release published on its website, OpenAI announced it was making moves to help users learn more about the source of images, videos and audio generated by AI tools.
The idea is to enable AI researchers to determine content authenticity.
As per the release, “As generated audiovisual content becomes more common, we believe it will be increasingly important for society as a whole to embrace new technology and standards that help people understand the tools used to create the content they find online.”
The company is addressing this challenge by joining the Steering Committee of the Coalition for Content Provenance and Authenticity (C2PA), which maintains a widely used standard for certifying digital content.
The C2PA standard is developed and adopted by software companies, camera manufacturers, online platforms and others, and is used to “prove the content comes from a particular source”.
As a result, OpenAI has begun adding C2PA metadata to all images created and edited by DALL-E 3, its latest image generation model, in ChatGPT and the OpenAI API.
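OpenAI's announcement doesn't detail the embedding mechanics, but under the C2PA specification, JPEG files carry the provenance manifest in APP11 marker segments as JUMBF boxes. As a rough illustrative sketch (not a full or robust parser), one could scan a JPEG for such a segment:

```python
def has_c2pa_manifest(path):
    """Heuristically check a JPEG for an embedded C2PA (JUMBF) manifest.

    Simplified sketch: walks marker segments from after the SOI marker
    and looks for an APP11 (0xFFEB) segment containing a JUMBF box tag.
    """
    with open(path, "rb") as f:
        data = f.read()
    i = 2  # skip the 2-byte SOI marker (0xFFD8)
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        length = int.from_bytes(data[i + 2:i + 4], "big")  # includes length bytes
        if marker == 0xEB and b"jumb" in data[i + 4:i + 2 + length]:
            return True  # APP11 segment carrying a JUMBF box
        if marker == 0xDA:  # start of scan; metadata segments end here
            break
        i += 2 + length
    return False
```

This only detects the presence of a manifest; verifying its signatures requires a real C2PA implementation, such as the open-source tooling published by the Content Authenticity Initiative.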
Additionally, the C2PA metadata will be integrated into Sora, its video generation model, once the tool is launched broadly.
While this move won’t stop people from creating deceptive content without this metadata or from stripping it out entirely, it will prevent them from easily faking or altering such information.
The company believes that as the standard’s adoption increases, the information can stick with content through its sharing, modification and reuse lifecycle. As the company put it, “Over time, we believe this kind of metadata will be something people come to expect, filling a crucial gap in digital content authenticity practices.”
OpenAI is also building tools to help users identify content created by its models, including tamper-resistant watermarking and AI-powered detection classifiers that assess whether content originated from GenAI models.
Furthermore, OpenAI and its primary backer, Microsoft, are working together to support AI education and understanding through a $2 million fund. This will include working with organizations such as Older Adults Technology Services from AARP, International IDEA and Partnership on AI.
What Else Is OpenAI Planning?
In a recent podcast, OpenAI’s CEO, Sam Altman, voiced his concerns about AI safety and stressed the importance of putting an international agency in place to regulate and monitor powerful AI tools to ensure “reasonable safety”.
Altman said, “I think there will come a time in the not-so-distant future, like we're not talking decades and decades from now, where frontier AI systems are capable of causing significant global harm.”
As per the CEO, an international agency would work better than leaving oversight to the inflexible laws of any one country.
“I'd be super nervous about regulatory overreach here. I think we get this wrong by doing way too much or a little too much. I think we can get this wrong by doing not enough,” said Altman, adding, “The reason I've pushed for an agency-based approach for kind of like the big-picture stuff and not like a write-it-in-law is in 12 months it will all be written wrong.”
Do you think OpenAI’s move will benefit AI technology by allowing users to determine the authenticity of content? Do you think Sam Altman has a point about the need for an international AI regulatory agency?
Let us know in the comments below!
First published on Mon, May 13, 2024