Protecting Your Family from the Dangers of Deepfakes
The term deepfake refers to a photo, video or audio recording that has been digitally altered using artificial intelligence (AI). The technology draws on large amounts of data to mimic something human, such as holding a conversation or creating an illustration.
This type of online content is often used to harm the reputation of the person in the photo or video by portraying them in a negative way. Deepfakes also use images of famous or influential people to spread misinformation. To strengthen their credibility, these messages are often placed among factually accurate details that make the falsehoods more difficult to detect.
Deepfake technology can also be used in real time to replicate someone’s voice, image and movements during a telephone call or virtual meeting. It is so advanced that a single photo or a few seconds of audio can be used to create a deepfake video or clone someone’s voice. Social media sites such as Facebook, Instagram and YouTube make images and audio easily accessible to those with malicious intent.
Deepfakes have proliferated due to the accessibility of mobile and computer-based AI apps, such as ChatGPT. This means users without sophisticated technical skills can easily create and distribute deepfakes to intentionally harm others.
For example, you could receive a fake call from someone who sounds exactly like a family member pleading for help or asking for money. Using basic software, fraudsters have used deepfakes to swindle money, harass or bully, influence voters, gain insider information, tamper with evidence, and run catfishing and phishing scams.
How do you protect yourself and your family from deepfakes? An important first step is to spot the signs of false information online. When judging how trustworthy online content is, consider these questions:
Who or what is the source, and is it trustworthy?
Is the information up to date, and is it fact or opinion?
What are the possible motives for this information, and does it seek to alter your behavior?
Is there a call to action, and what might happen if you respond?
Does the text look genuine and accurate?
Fortunately, a variety of online fact-checking resources are available to verify suspicious information. It may also help to seek the opinion of a subject-matter expert or trusted professional. In the case of scam emails, look for spelling errors or poor grammar that suggest the message didn’t come from a real company or website.
While some children and young people understand that images and audio can be manipulated in a harmful way, others do not. This makes it especially important for parents to speak with their children about the issue of deepfakes and come up with ways to protect their families.
Some security experts recommend that a simple way for families to guard against deepfakes is to create a secret code word. The word should be simple and easy to remember for every family member, but one that criminals will not guess. If someone claiming to be a relative phones for help, you can ask for the code word and determine if you are hearing from an impersonator or a real loved one.
There are also clues to observe on video calls, such as if the other person is blinking unnaturally, has eyebrows or hair that appear odd or out of place, or appears to have skin that doesn’t match their age. You can also ask the other person to turn their head and put a hand in front of their face to see if these movements look realistic. Ultimately, the best way to avoid deepfakes is to meet in person and not rely on technology.
Fortunately, people’s real and growing experience with AI has made them more likely to question what is real and what is fake, and to look more critically at what they are seeing and hearing. The best protection is to educate yourself, discuss the dangers of deepfakes with your family and ensure all electronic devices have the optimal safety and security settings enabled.
Pacific Federal is a Zenith American company and subsidiary of Harbour Benefit Holdings, Inc.