Microsoft’s Deepfake Tool Can Detect AI-Manipulated Media

With the increasing use of artificial intelligence (AI) in photos and videos, "deepfake" content has become quite common on the internet. Although much of this content is made for fun and entertainment, it can still become a source of misinformation for less-informed audiences. So, to fight the spread of misinformation and fake news through "deepfake" images and videos, Microsoft has made a tool that can detect AI-generated images and videos.

The "Video Authenticator" tool, developed by the R&D division of the Redmond-based software giant, can analyze images and videos and detect whether they are artificial or real. The tool studies the videos and images and assigns them "a percentage chance, or confidence score".

"Today, we're announcing Microsoft Video Authenticator. Video Authenticator can analyze a still photo or video to provide a percentage chance, or confidence score, that the media is artificially manipulated," wrote Microsoft's executives in an official blog post.


According to Microsoft, when analyzing videos, the tool shows the "percentage chance" for each frame of the video in real time.

Although detecting AI-manipulated media can be tough, the company says that the tool "works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye."
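Video Authenticator's internals are not public, so the following is only an illustrative sketch of the *shape* of such a pipeline: a classifier scores each frame, and the caller reads out a per-frame percentage chance of manipulation, as the article describes. The `manipulation_confidence` heuristic here is a made-up placeholder, not Microsoft's method.

```python
def manipulation_confidence(frame):
    """Placeholder classifier: return a 0.0-1.0 confidence that a frame
    is synthetic. A real detector would inspect blending boundaries and
    greyscale artifacts; this toy stands in with a dummy heuristic on
    pixel-intensity variance, purely for illustration."""
    if not frame:
        return 0.0
    mean = sum(frame) / len(frame)
    variance = sum((p - mean) ** 2 for p in frame) / len(frame)
    # Toy rule: unnaturally uniform pixels -> higher "synthetic" score.
    return max(0.0, min(1.0, 1.0 - variance / 1000.0))

def score_video(frames):
    """Yield (frame_index, percentage) pairs, one per frame, mirroring
    the real-time per-frame confidence score the article describes."""
    for i, frame in enumerate(frames):
        yield i, round(manipulation_confidence(frame) * 100, 1)

# Two fake 8-pixel "frames" as flat intensity lists.
frames = [
    [120, 121, 119, 120, 122, 118, 120, 121],  # very uniform frame
    [10, 200, 40, 230, 90, 250, 5, 170],       # high-variance frame
]
for idx, pct in score_video(frames):
    print(f"frame {idx}: {pct}% chance of manipulation")
```

The point of the sketch is the interface, not the heuristic: a consumer of such a tool sees a stream of per-frame scores it can threshold or display.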

Now, coming to some technical details, the "Video Authenticator" tool was developed using a public dataset from FaceForensics++. It was then tested on the DeepFake Detection Challenge Dataset. As per Microsoft, these two are the "leading models for training and testing deepfake detection technologies".

The Redmond-based giant is collaborating with the San Francisco-based AI Foundation to distribute the tool. With this introduction, the company aims to prevent the spread of misinformation, especially ahead of the US elections.

So, the tool "will initially be available only through RD2020 [Reality Defender 2020], which will guide organizations through the limitations and ethical considerations inherent in any deepfake detection technology." Hence, it will be distributed to election campaign organizers and to news and media outlets covering political news.
