
Parenting in the Deepfake Era: A Crash Course in AI Defence

Written by Amplify for AI Shield Canada

In a digital era where reality seamlessly intertwines with technology, the rise of deepfakes has taken the art of deception to unprecedented levels. Buckle up, as we delve into the riveting landscape of deepfakes, uncovering the mechanics behind their creation, instances of real-life mischief, and essential insights on how to safeguard against this captivating yet potentially harmful phenomenon.


Join us on this journey to demystify the world of deepfakes, where the virtual and the tangible collide, leaving us all to question what we see and hear in an increasingly sophisticated digital age.

What is a deepfake?

A deepfake is a synthetic image or video crafted through a form of machine learning known as deep learning. The technology manipulates visual elements using digital software, machine learning, and face-swapping techniques. Essentially, deepfakes are digitally fabricated videos in which images are seamlessly melded together, portraying events, statements, or actions that never occurred in reality. The result is often remarkably convincing.

Source: Reddit, r/BeAmazed, u/globeworldmap

The fundamental idea behind this technology is facial recognition, a feature familiar to Snapchat users through face swaps and filters. Deepfakes, although similar, are much more lifelike. They rely on a machine learning method known as a “generative adversarial network,” or GAN. For instance, a GAN can analyze numerous photos of a person and generate a new image resembling those photos without replicating any single one. GANs can also create fresh audio from existing audio, or new text from existing text, showcasing their versatility. Deepfake software works by mapping faces using key landmark points, such as the corners of the eyes and mouth, the nostrils, and the contour of the jawline.
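
For readers curious about what a generative adversarial network actually looks like in code, here is a minimal, illustrative sketch in PyTorch: a generator learns to produce fakes while a discriminator learns to tell them apart, and the two improve by competing. It trains on random stand-in data rather than real faces, and every layer size, learning rate, and step count is an assumption chosen purely for illustration; real deepfake pipelines add face detection, landmark alignment, and far larger models.

```python
# Minimal GAN sketch (PyTorch). Toy data only: random vectors stand in for images.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM, BATCH = 64, 16, 32   # illustrative sizes, not real-world values

# Generator: turns random noise into a fake "image" (here, just a 64-number vector).
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, IMG_DIM), nn.Tanh(),
)

# Discriminator: outputs a probability that its input is real rather than generated.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.rand(BATCH, IMG_DIM)  # stand-in for real photos of a person

for step in range(200):
    # 1) Train the discriminator to label real data 1 and generated data 0.
    fake_batch = generator(torch.randn(BATCH, NOISE_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake_batch), torch.zeros(BATCH, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to make fakes the discriminator labels as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(BATCH, NOISE_DIM))),
                     torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The adversarial loop is the whole trick: as the discriminator gets better at spotting fakes, the generator is forced to produce ever more convincing ones, which is why mature deepfakes can be so hard to detect by eye.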

Understanding the risk

The real danger of false information and deepfake technology is that they create mistrust or apathy about what we see or hear online. If deepfakes erode trust in video, the challenges of false information and conspiracy theories may escalate, posing concerns for the next generation and their perception of reality.


Beyond mere perception, the accessibility of deepfake creation to anyone with an internet connection poses its own set of risks. The ease with which individuals can generate deepfakes underscores the need for responsible use of this technology.


As the saying goes, “with great power comes great responsibility.”

The widespread accessibility of deepfake technology has unfortunately opened the door to various forms of misuse, notably in the alarming realms of revenge porn and cyberbullying.


One prevalent misuse is the creation and dissemination of revenge porn, where individuals use deepfake techniques to fabricate explicit content of victims in malicious and harmful ways. The technology is also increasingly being exploited for cyberbullying, with deepfake nude images of minors being created and circulated among peers.


This disturbing trend poses serious threats to privacy and mental well-being, and underscores the urgent need for comprehensive measures to address and prevent such misuse, so that vulnerable individuals, especially children, are protected from the harms inflicted by these malicious practices.

Other forms of deepfake misuse include the manipulation of text, audio, or personal data to commit fraud or identity theft. By skillfully forging convincing text or audio content, malicious actors can deceive individuals, organizations, or even automated systems, leading to financial scams, unauthorized access, or compromise of sensitive information.


This multifaceted misuse underscores the versatility and potential harm of deepfake technology, emphasizing the pressing need for robust security measures and public awareness campaigns to safeguard against the diverse threats posed by fraudulent activities and identity theft in the digital age.

Real-life cases


In the small town of Almendralejo, Spain, deepfake porn has recently become a prevalent problem, especially in high schools. More than 20 local teenagers reported receiving AI-generated naked images of themselves on their mobile phones.


In the original photos, the teenagers were fully clothed. These images, stolen from their Instagram accounts, were altered using an artificial intelligence app and then circulated in WhatsApp groups.


Boasting an eerily realistic ability to “undress anybody,” ClothOff is the AI application responsible for these crimes. For €10 (nearly 15 CAD), users can “unclothe” 25 pictures.


Despite the artificial nature of the nudity, the distress the girls felt upon seeing these images was undoubtedly very real. The most unsettling part: the perpetrators of this sexual harassment were also teenagers, known to the girls.

Read more about it here.

This issue isn’t unique to Spain. More recently, in November 2023, the same controversy surfaced in New Jersey. While police are investigating and schools are taking whatever action their boards deem appropriate, a permanent resolution is well out of their control. But we’ll get to that later.


According to Sensity AI, a deepfake detection service, 96% of all deepfakes on the internet are pornography. In fact, deepfake child pornography in particular is a growing problem, as online AI tools are increasingly available for free with little more than a Google search.

So what do we do?

As artificial intelligence grows, so will technologies like deepfakes. The possibility of eliminating deepfakes altogether is slim, so it is imperative to foster digital literacy. This can be achieved by being vigilant about our online presence, practicing critical judgement, and protecting our data.


DIGITAL LITERACY


Exercise critical judgement.

  • Talk to your family and friends about the risks of posting images on social media.
  • Educate yourself and others on how to spot a deepfake.

Be prudent with your online presence.

  • Stop posting photos of yourself or your family on unprotected social media accounts.
  • Always use caution when uploading photos of yourself to public sites.
  • Make sure you maintain a high level of security on all your electronic devices.

Deepfake videos are still at a stage where you can spot the signs yourself.


Look for the following characteristics of a deepfake video:

  • jerky movement
  • shifts in lighting from one frame to the next
  • shifts in skin tone
  • strange blinking or no blinking at all
  • lips poorly synched with speech
  • digital artifacts in the image


But as deepfakes get better, you'll get less help from your own eyes and more from a good cyber-security program.

Source: Kaspersky
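
For technically inclined readers, automated detectors look for many of the same cues listed above. As one hedged illustration, the sketch below counts blinks in a video using the widely available dlib 68-point facial-landmark model and OpenCV: an unusually low blink count is one possible warning sign, though by itself it proves nothing. The filename, the 0.2 eye-openness threshold, and the library choices are assumptions made for illustration, not a substitute for a dedicated detection or security product.

```python
# Rough blink-counting sketch (OpenCV + dlib). Requires the dlib landmark model file
# "shape_predictor_68_face_landmarks.dat", downloadable separately from dlib.net.
import cv2
import dlib
from scipy.spatial import distance

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts):
    # Ratio of eye height to eye width; it drops sharply when the eye closes.
    a = distance.euclidean(pts[1], pts[5])
    b = distance.euclidean(pts[2], pts[4])
    c = distance.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

cap = cv2.VideoCapture("suspect_video.mp4")   # hypothetical input file
blinks, eyes_closed = 0, False

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        left = [(shape.part(i).x, shape.part(i).y) for i in range(36, 42)]
        right = [(shape.part(i).x, shape.part(i).y) for i in range(42, 48)]
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        if ear < 0.2 and not eyes_closed:        # 0.2 is an assumed threshold
            blinks, eyes_closed = blinks + 1, True
        elif ear >= 0.2:
            eyes_closed = False

cap.release()
print(f"Blinks detected: {blinks}")  # people normally blink roughly 15-20 times a minute
```

Checks like this are fragile, which is the point of the advice above: as deepfakes improve, dedicated cyber-security tools will do this kind of analysis far more reliably than a script or the naked eye.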


Protect your data.

  • Complete regular backups to protect your data against ransomware and to give you the ability to restore damaged data.
  • Use strong passwords, and use a different one for every account.
  • Be vigilant about who has access to your devices and accounts.

Remember, it's not about believing a lie; it's about doubting the truth.

The number of deepfake videos on the internet nearly doubled between 2018 and 2019, reaching 14,678. Today, in 2023, a deepfake video can be produced in mere seconds, so that number has only grown exponentially; experts estimate over one million deepfake videos are in circulation today. Some are just for fun, while others are trying to manipulate your opinions.


The area of non-consensual porn remains a distressing problem: according to Sensity AI, 96% of deepfakes are pornographic and are used to target women, with the number of deepfake porn clips doubling every six months. Halting the tide of deepfakes will require a combination of investment and ongoing effort from tech companies and legislators if we want to see any real progress. In addition to making the detection of deepfakes a priority, tech companies must start removing them from their platforms and holding perpetrators to account for their actions (a lifetime ban, for instance).


From a legal perspective, change is necessary to prevent technology companies from washing their hands of the issues that can arise when platform users are free to share any content they wish. The development of such policy is crucial, and this can only be done with the help of legislators. As such, it is time to reach out to our local legislators and push for the protection of our children.


Not sure how? We’ve got you covered. Read our blog post about contacting local legislators here!

SOURCES

Britt, Kaeli. “How Are Deepfakes Dangerous?” University of Nevada, Reno, 31 Mar. 2023, www.unr.edu/nevada-today/news/2023/atp-deepfakes.


Ajder, Henry, Giorgio Patrini, Francesco Cavalli, and Laurence Cullen. The State of Deepfakes: Landscape, Threats, and Impact, Sept. 2019.

“Explained: What Are Deepfakes?” Webwise.ie, 15 July 2022, www.webwise.ie/news/explained-what-are-deepfakes/


Want to see more of our blog posts?


Click here!


Created by Amplify for AI SHIELD CANADA

Find us on socials
Instagram
Facebook