Artificial Intelligence (AI) is a red-hot topic, passionately debated in our society. Figures like Elon Musk have been sounding the alarm about the dangers of this new technology, which, they warn, could include the very real potential to destroy our civilization.
Among the nefarious applications of AI implemented so far, one of the most controversial is deepfake technology, a capability increasingly being used for a range of criminal purposes.
Deepfake is a portmanteau of ‘deep learning’ and ‘fake’. It refers to artificial intelligence used to create convincing but false images, audio, and video, which can serve many purposes, including hoaxes.
Its algorithms can generate lifelike people and animals, imitate the likeness of an individual, or manipulate footage of real people so they appear to do or say things they never did.
In its present state, it often transforms existing source content in a way where one person is swapped for another.
The implications for this technology are as clear as they are problematic.
Deepfakes, in general, are not illegal, as long as they are explicitly labeled as deepfake videos and not used to mislead people. But misleading people is exactly what criminals are using them for.
Infosecurity Magazine reports:
“Deepfake technology will become a serious threat to businesses as our world adopts remote work and online ID verification as standard.”
We are faced with the emerging problem of “being able to tell fake content from reality and how this applies to our online identities – especially in a world that is becoming increasingly remote.”
“In the past five years, the possibilities of deepfakes have expanded far beyond the boundaries of entertainment or fake pornography. Cyber-criminals are also adopting the technology for elaborate scams, including fraud and identity theft.”
We are not talking about mere possibilities. Recently, a video circulated online showing Elon Musk promoting an apparent cryptocurrency scam called BitVex. It was a deepfake.
“In the poorly-rendered (yet still likely convincing to some) video, Musk speaks in footage from an April interview the billionaire gave with TED head Chris Anderson. The video syncs each person’s lips to a script delivered by a software-generated voice that sounds somewhat like Musk’s. The fake Musk claims that BitVex is a project he created to ensure Bitcoin is widely adopted and promises 30 percent returns every day over three months on any crypto deposited. Fake Anderson chimes in, adding that they’ve tried it themselves and promising it would work for others as well.”
While not many people have fallen for it this time around, the game is on, and scammers are out to get us.
The trouble is that deepfakes don’t have to be technically perfect to be believable or to have a substantial impact.
“Though deepfake technology can theoretically be used for any kind of content – anything from joking satire to malicious political disinformation campaigns – overwhelmingly, the tech is being used to create nonconsensual porn. According to a 2019 report, 96% of deepfake material online is pornographic.”
As The Guardian reports, demand for deepfake pornography is exploding. Suddenly, women and men who have never performed in pornography are featured in it.
“In 2018, fewer than 2,000 videos had been uploaded to the best-known deepfake streaming site; by 2022, that number had ballooned to 13,000, with a monthly view count of 16m. As deepfake revenge porn becomes more popular, the barrier to access is quite low: the app […] charges just $8 per week.”
“[…] For now, the women and others who are targeted by deepfake revenge porn have few avenues of legal recourse. Most states have laws punishing revenge porn, but only four – California, New York, Georgia and Virginia – ban nonconsensual deepfakes.”
While the dangers of deepfakes may not be as grave as the existential threat some say the AI field as a whole now poses, their moral and ethical implications remain a deep concern to our society, raising alarming questions about privacy and consent in the digital future.