A team of researchers has put together a new initiative, complete with open-source code, to help better detect deepfakes and other images that have been edited to remove watermarks, with the goal of preventing the spread of misinformation.
Inpainting, known to Photoshop users as “Content-Aware Fill,” is a method that uses machine-learning models to reconstruct missing pieces of an image or to remove unwanted objects. Although it is generally used by creatives to “clean up” an image for a more polished result, the technology can also be used with malicious intent, such as removing watermarks, altering reality by removing people or objects from photos, adding false information, and more.
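For readers curious about the mechanics, the snippet below is a minimal sketch of inpainting in Python using OpenCV’s classical (non-machine-learning) algorithm; the file names are placeholders, and the ML-based inpainters discussed here produce far more convincing results.

```python
import cv2

# Placeholder file names: a photo and a binary mask marking the region to fill
# (for example, the pixels covered by a watermark).
image = cv2.imread("photo.jpg")
mask = cv2.imread("watermark_mask.png", cv2.IMREAD_GRAYSCALE)

# Classical inpainting: OpenCV reconstructs the masked pixels from the
# surrounding content. ML-based inpainters do the same job, only better.
result = cv2.inpaint(image, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("inpainted.jpg", result)
```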
This type of technology has developed rapidly in recent years, with the notable example of NVIDIA’s AI-powered take on “Content-Aware Fill,” which goes a step further than Photoshop’s already advanced tools. Manipulating images with malicious intent can not only cause lost revenue from image theft when watermarks or other visual copyright identifiers are removed, but it can also lead to the spread of misinformation, whether by removing a person from a crime-scene photo, scamming people or businesses, or even destabilizing politics, as in a case previously reported by PetaPixel.
To make inpainting abuse more difficult, a team of researchers (David Khachaturov, Ilia Shumailov, Yiren Zhao, Nicolas Papernot, and Ross Anderson) has put together an initiative called “Markpainting,” as spotted by Light Blue Touchpaper. It is a novel tool that can be used as “a manipulation alarm that becomes visible in the event of inpainting.”
This tool, described in detail in the team’s paper, uses “adversarial machine-learning techniques to fool the inpainter into making its edits evident to the naked eye,” whereby the “image owner can modify their image in subtle ways which are not themselves very visible, but will sabotage any attempt to inpaint it by adding visible information determined in advance by the markpainter.”
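At a high level, this works like other adversarial-example attacks: the pixels of the original image are nudged very slightly so that, if anyone later runs an inpainter over part of it, the filled-in region drifts toward a pattern chosen in advance. The sketch below illustrates that general idea in PyTorch under stated assumptions; `inpaint_model` is a hypothetical stand-in for a differentiable inpainting network, and the loop is a simplified approximation rather than the team’s actual code.

```python
import torch

def markpaint_sketch(image, mask, target, inpaint_model,
                     steps=100, eps=8 / 255, lr=1 / 255):
    """Find a subtle perturbation of `image` so that inpainting the masked
    region produces the predetermined `target` pattern.

    `inpaint_model` is a hypothetical differentiable inpainting network that
    takes a masked image plus the mask and returns a filled-in image. Tensors
    are assumed to be float images in [0, 1]; `mask` is 1 inside the region
    an attacker might later erase.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Simulate the attack: erase the masked region and let the model fill it.
        filled = inpaint_model((image + delta) * (1 - mask), mask)
        # Push the filled-in pixels toward the visible "alarm" pattern.
        loss = ((filled - target) * mask).pow(2).mean()
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # small signed gradient step
            delta.clamp_(-eps, eps)          # keep the perturbation subtle
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```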
ML makes it easy to manipulate images. In a twist on adversarial ML, @DavidobotGames @iliaishacked prevent malicious applications of “inpainting” (filling in a missing portion of an image) by adding an adversarial example perturbation to an image. pic.twitter.com/09O6SAn3nB
— Nicolas Papernot (@NicolasPapernot) June 9, 2021
Which of the images was inpainted? Modern ML makes it easy to manipulate media, helping misinformation campaigns. How can one stop it? We developed Markpainting to make it harder. https://t.co/fKj8HQFGKf #ICML2021 w/ @rossjanderson @NicolasPapernot pic.twitter.com/78gFUJJ9cI
— Ilia Shumailov (@iliaishacked) June 7, 2021
This research, which is supported by CIFAR (through a Canada CIFAR AI Chair), EPSRC, Apple, Bosch Research Foundation, NSERC, and Microsoft, gives creators, companies, and agencies new ways to better protect their digital assets in the future. Making watermarks harder to remove offers greater security and protects revenue, while other images could be treated so that any future manipulation, such as object removal, becomes easier to detect.
The idea of detecting manipulated images and video is not new, and research and development in this area is ongoing. However, it remains to be seen how and when this technology will catch up enough to successfully stop manipulation attempts.
The full details of the technique and the tests the team performed can be found in its research paper, “Markpainting: Adversarial Machine Learning meets Inpainting.”