Posted by Dave in News

OpenAI releases deepfake detector in limited beta

Cade Metz and Tiffany Hsu writing in the New York Times:

As experts warn that images, audio and video generated by artificial intelligence could influence the fall elections, OpenAI is releasing a tool designed to detect content created by its own popular image generator, DALL-E. But the prominent A.I. start-up acknowledges that this tool is only a small part of what will be needed to fight so-called deepfakes in the months and years to come.

While pitched as a tool for disinformation researchers, it will clearly be of use to newsrooms facing a deluge of fakery:

OpenAI said its new detector could correctly identify 98.8 percent of images created by DALL-E 3, the latest version of its image generator. But the company said the tool was not designed to detect images produced by other popular generators like Midjourney and Stability.