Sameer Hinduja and Justin W Patchin
In the summer of 2020, a 50-year-old woman used technology to target some of her daughter's peers. The most interesting twist was not the age difference between the aggressor and the targets, but the fact that software was used to alter images found online to make it seem as though the other girls – who belonged to a cheerleading club that her daughter had previously attended – were nude, drinking underage or using vaping products. These "deepfakes" were spread via text messages from phone numbers the girls did not recognise, and they are an example of a newer trend that parents should be aware of.
The term "deepfake" ("deep learning + fake") seems to have originated when online communities of users began sharing fake celebrity pornography with each other. To create these, artificial intelligence software is used to produce incredibly realistic-looking fabricated content (e.g. photos and videos) intended to come across as real. Learning models are created by using computing power to analyse significant amounts of content (e.g. hours of video of a person, thousands of pictures of a person) with specific attention to key facial features and body language/position.
Next, what is learned is algorithmically applied to the images or frames one wants to manipulate or create – for example, superimposing new lip movements onto original footage and dubbing in sound so that a person appears to say something they never actually said. Additional techniques, such as adding artifacts ("glitching" that appears normal or incidental) or using masking and editing to improve realism, are also employed, and the resulting products are surprisingly convincing. If you perform a web search for deepfake examples, you will probably be surprised at how authentic they seem. Below are some important points to know as you seek to safeguard your child from any possible deepfake victimisation and equip them to separate fact from fiction.
While deepfakes are becoming increasingly realistic as technology advances, they can often be detected by looking carefully for telltale details in the photo or video content (for example, eyes that don't seem to blink naturally). It can be very helpful to zoom in and look for unnatural or blurred edges around the mouth, neck/collar or chest. This is often where misalignments and mismatches between the original content and the superimposed content can be seen.
With videos, you can slow down the clip and watch for visual inconsistencies such as lips that are out of sync with the audio or jitter around the face. Also keep an eye out for moments where the subject shows no emotion when the words being spoken call for it, seems to mispronounce a word or is involved in any other odd discrepancy. Finally, running a reverse image search on a photo (or a screenshot from a video) can point you towards the original content before it was doctored; at that point, carefully compare the two pieces of content to determine which one has been manipulated. The bottom line is to trust your senses: when we slow down and look at and listen to content very carefully, we can generally sense when something is amiss.
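For readers who are comfortable with a little code, the blink cue mentioned above can even be roughed out in software. The sketch below is only an illustration, not a deepfake detector: it assumes Python with the OpenCV library installed, uses a placeholder file name for the clip being checked, and simply counts how often open eyes are visible in the frames where a face is found. A clip in which the eyes never appear to close may deserve a second look.

```python
# Rough heuristic sketch: step through a video and count frames in which open
# eyes are detected inside a detected face, using OpenCV's bundled Haar
# cascades. People normally blink every few seconds, so "eyes open" being
# reported on nearly every frame can be one signal worth investigating.
# Assumptions: OpenCV (cv2) is installed and "suspect_clip.mp4" is a
# placeholder for a real local video file.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

video = cv2.VideoCapture("suspect_clip.mp4")  # placeholder file name
frames_with_face = 0
frames_with_open_eyes = 0

while True:
    ok, frame = video.read()
    if not ok:
        break  # end of clip
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces[:1]:  # examine only the first detected face
        frames_with_face += 1
        face_region = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(face_region,
                                            scaleFactor=1.1, minNeighbors=5)
        if len(eyes) >= 1:
            frames_with_open_eyes += 1

video.release()
if frames_with_face:
    ratio = frames_with_open_eyes / frames_with_face
    print(f"Open eyes detected in {ratio:.0%} of frames containing a face")
    print("A value near 100% with no dips may mean the subject never blinks.")
```

Dedicated detection tools do this far more rigorously; the point here is simply that the signs described above are concrete enough for software to look for.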
It is important to remind teens that everything they post online could be used to create a deepfake. On their social media accounts, they are likely to have created a library of content that others can access and manipulate without their consent. Their face, movements, voice and other characteristics might be appropriated and then superimposed onto the likeness of someone else – someone engaged in a behaviour that may cause extensive reputational harm. To facilitate dialogue in this regard, here are a few questions to ask them in a nonjudgmental and understanding way:
Deepfakes have the potential to compromise the well-being of teens when one considers the emotional, psychological and reputational damage they can inflict. While aural, visual and temporal inconsistencies may escape detection by the human eye and ear, software is being refined to identify and flag such non-uniformities in image and video content. As these technologies continue to improve, parents, caregivers and other youth-serving adults must raise awareness about the reality of deepfakes and work to prevent the consequences of their creation and distribution. At the same time, regularly remind your teen that you are always there to help them find their way out of any deepfake situation (and, of course, any other online harm they experience).