This whole deepfake thing has me concerned and bordering on frightened. And I'm not the only one.
I wrote earlier about how neural networks can make realistic-looking faces. They are getting better. A whole collection of such faces is now available for free. These are faces of people who do not exist at all.
Samsung researchers have created a method to animate a single photo of a face into a convincing video. Consider someone creating a video to catfish someone (catfishing is creating a fake identity to deceive): the face in the video could belong to someone who never existed at all.
A thief faked the voice of a worker's boss to convince the worker to transfer money to the thief.
Deepfake tools can convincingly alter the dialog in a video.
Tools like these could be used to create royalty-free video for learning. They could help designers update existing materials rapidly, thus saving money in training budgets. They could be used for movies and television production. High-priced actors could be replaced with digital artists.
But the tools could be used for less noble purposes, too. We have been taught to trust what we see. Users of social networks often demand pictures or videos - because they convincingly document life. If a politician or bad actor were to - quite literally - put words into an opponent's mouth, countering that would be virtually impossible.
The core issue here is that the sounds, images, and videos that we have learned to rely on cannot be trusted. We've been going down this road for a while, of course. Tools such as Photoshop and GIMP let users create believably altered images. And sometimes, when images seem outlandish, we question them. People have altered video and audio in the past, too, but now it is far more convincing. This is far more than putting a person's face on a dog.
These deepfakes are so convincing that Google has released a collection of them in the hopes that researchers can develop tools to detect them. Others are doing, or will do, the same. The problem is that once a tool is developed to detect one telltale aspect of a particular kind of fake, that aspect can be corrected and the process will need to be restarted. One current tool is around 96% accurate at detecting fake videos by analyzing head movements. How long will it be before fakers correct for that telltale sign? Facebook, Amazon, and others have teamed up to address this issue, too. If you want, you can join a competition to detect deepfakes.
[sidebar_cta header="This isn't the only cyber security threat to worry about!" color="white" icon="" btn_href="https://www.learningtree.com/training-directory/cyber-security-training/" btn_href_en="https://www.learningtree.com/training-directory/cyber-security-training/" btn_href_ca="https://www.learningtree.ca/training-directory/cyber-security-training/" btn_href_uk="https://www.learningtree.co.uk/training-directory/cyber-security-training/" btn_href_se="https://www.learningtree.se/training-directory/cyber-security-training/" btn_text="Get Cyber Security Training HERE!"]
If you know anything about cyber security, you are probably thinking something along the lines of, "but we have hashes to detect modified data." That's true, and it's good. But can a politician convince the electorate that a video or audio clip is faked by waving a big long number around and shouting, "but the hash values are different!"? I doubt it. I also suspect it will be a long, long time before the average voter has any idea about hashes and digital signatures, if she ever does. And that means the video would need to be signed in the first place. That's unlikely for real videos and impossible for fake ones.
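To make the hash idea concrete, here is a minimal sketch using Python's standard hashlib module. The byte strings stand in for the contents of a real video file and a tampered copy; they are illustrative placeholders, not real data. Changing even one byte produces a completely different digest.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-ins for a genuine video's bytes and a tampered copy.
original = b"frame data of the genuine video"
tampered = b"frame data of the genuine videoX"  # one byte appended

print(sha256_of(original) == sha256_of(original))  # True: an untouched copy matches
print(sha256_of(original) == sha256_of(tampered))  # False: any change alters the hash
```

In practice you would hash the file itself (reading it in chunks) and compare against a digest published by the video's creator; that, of course, is exactly the step that rarely happens.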
This is one of those cases where the question, "is it good or bad?" is accurately answered with "yes" or "it depends". The possibilities are endless. For the record, the picture of me on this blog is really me, but my hair is a bit greyer now...
Update: The US Senate passed the Deepfake Report Act on October 30th in order to begin to address the related issues.