Telltale signs to spot deepfakes are disappearing at an alarming rate, but a UNSW academic says these advances in AI also provide opportunities for a better, more accessible internet.
The boom in generative AI, with programs for creating "deepfakes" becoming freely available to regular people, is causing just as many problems as it's creating opportunities.
AI generators used to be awful at drawing hands (ironic, given how many humans struggle with it). That weird, plasticky sheen over generated photos? Those telltale signs are fading fast. In one survey of more than 1000 people, 60 per cent of respondents thought a video made by OpenAI's Sora program was real.
Cybersecurity and AI expert Professor Sanjay Jha says it's fast moving beyond regular human ability to tell what's real and what isn't.
"We couldn't foresee what was coming out of AI 10 years back. If you interviewed anyone like me, they wouldn't be able to tell you."
And while there's danger in that, there's also the potential for the technology to make the internet more accessible for everyone.
Prof. Jha says he's fully on board. He's pressing on with his own technology, patent pending, that could break down barriers for people with disability.
Sanjayās technology
During COVID, Prof. Jha developed lower back nerve pain and couldn't teach students to the best of his abilities.
"I was thinking… is there an alternate way that I can generate content so that it can be automated, and then I could probably use it for teaching by not spending hours on recording the videos."
He started working on a prototype program that could make a digital version of himself without needing lots of source material (hours of footage, audio files), saving both time and energy usage.
I volunteered as a guinea pig for this article. Prof. Jha's student, Wenbin Wang, took a minute of me speaking into a webcam and created a digital clone of me teaching one of the professor's subjects.
A video comparing footage of the real me and the AI clone version. The clone is presenting lecture slides for a computer science class.
In the video above, you'll see a side-by-side comparison of myself and the AI version. Up close on the clone, you can see the unnatural sheen over my skin, the eyes flicker in and out, and the mouth movements don't match. The voice, while capturing my natural way of speaking, doesn't have the same highs and lows that come with human speech.
Those things may be obvious, but when the video shrinks to fit with the lecture slides and the audio is compressed by the recording, it becomes harder to tell.
The clone only took three minutes to process.
"If we had hours of your recording, then we can definitely do a lot better," Prof. Jha says, "but I would like to highlight that it was your voice and you were listening… an average person meeting you in casual settings would not be able to pick it up."
Prof. Jha sees no problem with this tech if people are honest about it.
Musician FKA twigs recently revealed she had created an "AI twigs" that will interact with fans while she focuses on her art, but her written statement doesn't say whether people will know when they're talking to the real her.
"You should tell your fans, for example, that this is my persona created by an AI agent," Prof. Jha says, "and I'm not sure whether that's going to be terribly popular, but time will tell."
The bright side of AI clones
Prof. Jha's technology was made with accessibility in mind. He sees having a virtual clone as a game-changer for people who struggle with public speaking and presenting.
"There are people I know personally who have speech stutters. They're fantastic researchers and they could be great teachers if they don't have to speak for an hour or two in front of the class because they don't feel that confident. By using our technology they could produce automated lecturing.
"I think with disability also there are possibilities of using this tech for, say, sign language. There are numerous opportunities for multilingual capabilities. It could be doing translation of my speech in Mandarin or Hindi or German. And when they ask questions it could translate back to me in English."
This idea was tested out in this year's Indian election. Prime Minister Narendra Modi's party used AI to translate his speeches into several languages in a country that has more than 800 official and unofficial dialects.
Defending against the dark arts of deepfakes
When it comes to tips for spotting malicious deepfakes so you don't get tricked or scammed, Prof. Jha says there's no longer much point.
"We need tools and techniques to detect that rather than relying on people."
Australians are being scammed by deepfakes. UK engineering company Arup lost millions in an elaborate video-conference scam. Last year, a fake photo went viral and caused a stock market dip.
Detection tools are available and improving, but they're scattered. Research has found you can fool some of them by editing the content: lowering the resolution or cropping the image before submitting it for analysis (like how my video became more believable when it shrank to fit the lecture slides).
An internet industry body called the Content Authenticity Initiative has developed a watermarking system: voluntary tags that show details of how content was made and its edit history. Some video platforms have started labelling videos made with AI, and Instagram now asks users to label things made with AI before they post.
From the legal side, the Australian government is hoping to criminalise the sharing of non-consensual AI pornography, with penalties of up to seven years in prison.
But audio might be the toughest nut to crack of them all. Deepfake audio is cheap to produce, and detection tools for it aren't great. On top of that, a deepfake phone call scam has the element of surprise.
"Your reaction time [in this situation] is impulsive, you're not going to search [for answers] and find out when you're panicking," Prof. Jha says.
"We are in an era of active research in this kind of area and it's a cat-and-mouse game as usual."
So what can you do? Prof. Jha's best advice is to be wary when on the internet and know that urgency in any online request is a good sign something is off.
"Say if your boss is calling you and asking you to transfer $200,000 into some account and you are the accountant in charge of the money… ask some questions and so forth to make sure that you get more context of it.
"Be vigilant. I would never ask people not to pay attention, always be suspicious, and if you have any doubts, do due diligence.
"Like any powerful tool, AI can be used for construction or destruction. The excitement of innovation must be paired with critical thinking."
Media enquiries
For enquiries about this story or to arrange interviews, please contact Jacob Gillard:
Email: jacob.gillard@unsw.edu.au
Phone: +61 2 9348 2511