8 Oct, 2019 19:51

Fake or Deepfake? Why modern technology leaves us with little choice but to revive critical thinking

Deepfakes are keeping people from all walks of life up at night – just one photo can place anyone in a compromising situation. How can we judge reality in a post-deepfake world, when what we see is no longer what we get?

Rooted in the porn industry, deepfakes – computer-generated video forgeries – are pouring into the mainstream. There are 14,678 deepfake videos online, according to a report published last month by DeepTrace, a firm that monitors “synthetic media” cyber-threats. Just four percent of those videos are not porn, but that share is bound to grow as the tools for manufacturing deepfakes become more widely available and the rewards for making them increase.

Many social media users are shocked by how far deepfake technology has advanced since a faux Barack Obama appeared on YouTube in 2017 – but they shouldn't be. They've been helping its makers along for years with every photo they upload to Facebook and to apps like FaceApp, which transforms a photo subject into an older, younger, male or female version of themselves. FaceApp uses generative adversarial networks to age photos – the same technology used by deepfake producers like Face2Face to make photos move.

The more faces “fed” to an AI, the more skilled it becomes at making faces move – or at generating entirely new ones. Intelligence services running fake social media profiles no longer have to steal other people’s profile photos – they can synthesize their own. Scammers have already taken advantage, creating bogus LinkedIn accounts to con people out of their personal information for espionage and self-enrichment.
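For the technically curious, the “adversarial” idea behind those networks can be sketched in a few lines of code. The toy below only illustrates the general technique – it is not FaceApp’s or Face2Face’s actual code, and the network sizes, 64x64 image format and PyTorch framework are all assumptions made for the example:

```python
# Minimal sketch of a generative adversarial network (GAN) of the kind described
# above. Illustrative toy only: sizes, data and framework are assumptions.
import torch
import torch.nn as nn

latent_dim = 100          # size of the random "seed" vector the generator starts from
image_pixels = 64 * 64    # toy 64x64 grayscale face images, flattened

# Generator: turns random noise into a fake image
generator = nn.Sequential(
    nn.Linear(latent_dim, 512), nn.ReLU(),
    nn.Linear(512, image_pixels), nn.Tanh(),
)

# Discriminator: guesses whether an image is a real photo or a generated fake
discriminator = nn.Sequential(
    nn.Linear(image_pixels, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    """One round of the adversarial game: the discriminator learns to spot fakes,
    then the generator learns to fool it."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator on real faces and freshly generated fakes
    noise = torch.randn(batch, latent_dim)
    fakes = generator(noise)
    d_loss = loss(discriminator(real_images), real_labels) + \
             loss(discriminator(fakes.detach()), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator so the discriminator calls its fakes "real"
    g_loss = loss(discriminator(fakes), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Toy usage: a batch of 16 random tensors stands in for real face photos
training_step(torch.randn(16, image_pixels))
```

Pitted against each other over millions of such rounds, the forger (generator) and the detective (discriminator) improve in lockstep – which is why every face uploaded to social media makes the forgeries better.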

Facebook, which introduced a new global facial recognition setting shortly after deepfakes debuted in 2017, has a whole library of faces to draw on, and Google recently found itself in hot water after a third-party contractor paid homeless black men $5 to give up their faces for its own facial recognition database. Social media users would be wise to remember: if you're not paying for a service, you are the product.


Panic in Washington

US politicians are scrambling to regulate deepfakes, very much aware that one computer-generated scandal could be the end of their careers. Several bills have already been introduced either mandating the labeling of deepfakes or making their manufacture a federal crime – neither of which will do anything to stop the production of deceptive fakes, especially since the bills include a loophole for “national security purposes.” 

In a colossal irony, one of the congressmen who wrote to then-Director of National Intelligence Dan Coats last year warning that deepfakes would “undermine public trust in recorded images or videos as objective depictions of reality” was Adam Schiff, the California Democrat who recently gave a dramatic reading of a wholly fictional phone call between President Donald Trump and Ukrainian President Volodymyr Zelensky.

The Pentagon, too, is joining the battle against deepfakes – or so it claimed in a presentation soliciting examples to train its detection algorithms. In reality, the Defense Advanced Research Projects Agency (DARPA) has declared war on “polarizing viral content,” which includes memes and other “malicious dissent.” As it did with terrorism, the Pentagon is using a poorly understood threat to further roll back Americans' civil liberties. 


That does not mean deepfakes aren't a threat. They complete the destruction of a fundamental principle of the internet age – “pics or it didn't happen” – that Photoshop manipulation began to erode. We can no longer believe our eyes – at least, not without backup. Internet users will have to apply to video the same kind of critical thinking and analysis long used to evaluate still photographs, and knowing which sources to trust will become extremely important.

Fool me twice, shame on me…

Cultures where critical thinking is highly valued have the potential to adjust quickly to this brave new world. In the US, however, viewers of mainstream media are bombarded with absurd statements on a daily basis and all but required to suspend their critical faculties when tuning in. Whether it's tear-stained testimony that the Iraqi army was throwing babies out of incubators, an abnormally well-spoken Syrian girl begging the US to bomb Syria, or a journalist insisting Donald Trump has been a Russian agent since 1987, many of the stories Americans are required to digest are incompatible with critical thinking. US media consumption habits must change dramatically if Americans are to avoid falling prey to the deepfake makers. The adjustment period will be difficult.

Already, “shallowfakes” like the slowed-down ‘drunk Nancy Pelosi’ video and a doctored broadcast of Trump sticking out his tongue, created without the aid of deepfake technology, have fooled viewers who uncritically accept whatever they're shown on TV. Audiences who place such trust in a medium that has lied to them about the rationale for going to war in Iraq, about “Russian collusion,” about the necessity for regime change in whatever enemy country is being vilified this week, don't stand a chance against well-made deepfakes.

Propaganda has evolved significantly since the days of “babies on bayonets,” but the desire the average news-consuming American has to believe the worst about their designated enemy has not ebbed. Venezuelans eating rats? Assad gassing his own people? Saddam stockpiling weapons of mass destruction? Sure, why not?


“The nation that cannot tell factual events from fake ones has no right to elect their own leaders,” a rather unconvincing deepfake of Vladimir Putin recently told an audience at the Massachusetts Institute of Technology.

Collusion enthusiast Adam Schiff – along with a handful of other congressmen – has been screaming to the skies that deepfakes will disrupt the 2020 election, and for once, he's not completely wrong (although he blames Russia when he should be looking closer to home). Having viewed a convincing deepfake, he explained, “you will never completely shed the negative lingering impression you have.” In an era when just one careless racial slur can get a celebrity “cancelled,” deepfakes can wreak unimaginable destruction on a credulous populace.

The dawn of the post-truth era

In the same way that deepfakes allow anyone to be smeared, they also provide a seemingly ironclad defense against such smears. Malaysian Minister of Economic Affairs Azmin Ali blamed an expertly crafted deepfake after a sex tape surfaced in June that apparently showed him engaged in homosexual activity with a rival minister's aide. Experts examining the video saw no signs of fakery, but that kind of doubt will now shadow any incriminating video. Going forward, “truth” will be the province of whoever has the better story.

This is most likely why already-powerful entities like Google and the Pentagon are encouraging the creation of deepfakes, ostensibly to give them practice detecting and defeating the technology but with the added benefit of forcing the average citizen to mistrust their perceptions. With near-total control of the American information apparatus, they can weaponize deepfakes and the attendant self-doubt to set themselves up as the ultimate arbiters of reality. Who are you going to believe – them or your lying eyes?

By Helen Buyniski, RT


The statements, views and opinions expressed in this column are solely those of the author and do not necessarily represent those of RT.
