
Cybersecurity - Ten Clues to a Deepfake: How to determine whether a video is real or manipulated

Dr. Frank Kardasz (MPA, Ed.D.)
March 16, 2019. Updated May 12, 2024.

INTRODUCTION

Deepfake technology involves manipulating images, and sometimes audio, to produce deceptive media that is almost indistinguishable from the true original.

Deepfake can be defined as believable media generated by a deep neural network (Mirsky & Lee, 2020). The digital counterfeiting technology has advanced to the point where it is difficult to tell original images from manufactured ones. Deepfake digital disinformation has challenging implications for investigators, digital forensic analysts, and jurisprudence. This work briefly describes deepfake technology, discusses the challenge to justice, and reviews possible methods for identifying Deepfakes.

WHAT IS DEEPFAKE TECHNOLOGY?

Techtarget.com defines Deepfake as follows:

An artificial intelligence-based technology used to produce or alter video content so that it presents something that did not occur. The word, which applies to both the technologies and the videos created with it, is a portmanteau of deep learning and fake.

WHAT IS THE DEEPFAKE THREAT?

Public awareness and governmental concern about Deepfakes is increasing. In September 2018, three members of Congress sent a letter to Director of National Intelligence Daniel R. Coats asking him to conduct research into deepfake technology. The letter stated, in part:

“As deep fake technology becomes more advanced and more accessible, it could pose a threat to the United States public discourse and national security, with broad and concerning implications for offensive active measures campaigns targeting the United States” (Chappellet-Lanier, 2018).

The US Department of Justice has also noted the future ramifications of Deepfakes. In a February 2019 speech, Deputy US Attorney General Rod Rosenstein said:

“Soon, we will need to come to terms with ‘deep fake’ videos, which may defeat our ability to rely on things that we see and hear directly. The speed and volume of technological advances exceeds the capacity of most people to comprehend the risks, let alone to protect against them. Criminals are early adopters. They will deploy smarter, adaptive malware capable of thwarting existing defenses. They will use impenetrable communications platforms that defeat our ability to detect and prevent crimes” (Rosenstein, 2019).

An informative video titled: Deep Video Portraits-SIGGRAPH 2018, posted to YouTube by Christian Theobalt, describes some recent (2018) advancements in Deepfake technology and can be found at https://www.youtube.com/watch?time_continue=277&v=qc5P2bvfl44

Prescient viewers of the video added the following comments:

  • “Who's ready for our legal systems to become completely paralyzed by this technology?”
  • “Thank you for inventing the horrible future of fake news”
  • “We'll see a lot of porn made with this technology”
  • “The nuclear bomb of software: Great technology; never want to see it used.”
  • “I had a bad feeling about this”
  • “Ya'll about to be catfished.”
  • “Incredible! Now we need a system that can differentiate the real and fake videos”

MARKET MANIPULATION

In August 2022, the Chief Communications Officer of Binance, a cryptocurrency exchange, began receiving thank-you messages for online meetings that he had never attended. He learned that fraudsters had impersonated him using Deepfake technology, copying his likeness from past videos that he had posted online (Barr, 2022).

REVENGE DEEPFAKES

In March 2021, police in Pennsylvania arrested a woman for sending her teenage daughter's cheerleading coach fake photos and videos that depicted her daughter's rivals naked, drinking, or smoking, in an attempt to get the rivals kicked off the cheerleading squad. The woman also sent harassing text messages to the victims, who were all members of a traveling cheerleading group, the Victory Vipers, based in Doylestown (Benscoter, 2021).

JOB APPLICATIONS

The Internet Crime Complaint Center (IC3) warns that deepfakes and stolen Personally Identifiable Information (PII) are being used to fraudulently apply for remote-work positions. Deepfaked video is used in conjunction with voice spoofing to misrepresent the identity of the true job applicant (FBI IC3, 2022).

PORNOGRAPHIC DEEPFAKES

Korean authorities have reported pornographic deep fakes targeting victims who are government officials. The scheme involved an elaborate network of chat rooms organized into levels and tiers of participants, a structure designed to avoid detection (Jung-Youn, 2022).

A British woman who campaigns against pornography was herself targeted by Deepfake manufacturers, who made disturbing images of her (McDermott, 2022).

HOW TO MAKE A DEEPFAKE

Deepfake videos are created using specialized software that, in effect, blends images together. The process is time-consuming and typically requires significant computer processing power.

The Deepfake software is first trained on the target face. Deepfakes often use original images of people who have been widely photographed from many angles, including actors, models, and politicians. A wide variety of images and poses helps give the software the ability to recognize and emulate various facial expressions. The software manipulates each individual frame of the source video. It is estimated that a minimum of 500 images are needed for the deepfake algorithm to learn the source face and subsequently produce a believable fake.
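To illustrate what the data-gathering step can look like in practice, the following minimal Python sketch uses OpenCV's stock Haar-cascade face detector to harvest face crops from a video. It is not taken from any particular deepfake tool; the file paths, crop size, and sampling interval are assumptions for the example.

# extract_faces.py - minimal sketch: harvest face crops from a video for use as training data.
# Assumes OpenCV (pip install opencv-python); paths and parameters are illustrative only.
import os
import cv2

VIDEO_PATH = "source_video.mp4"   # hypothetical input video
OUTPUT_DIR = "face_crops"         # hypothetical output folder
CROP_SIZE = (256, 256)            # square crops, as favored by early deepfake tools
FRAME_STEP = 10                   # sample every 10th frame to reduce near-duplicates

os.makedirs(OUTPUT_DIR, exist_ok=True)

# OpenCV ships a pre-trained frontal-face Haar cascade.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(VIDEO_PATH)
frame_idx, saved = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % FRAME_STEP == 0:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            crop = cv2.resize(frame[y:y + h, x:x + w], CROP_SIZE)
            cv2.imwrite(os.path.join(OUTPUT_DIR, f"face_{saved:05d}.png"), crop)
            saved += 1
    frame_idx += 1
cap.release()
print(f"Saved {saved} face crops from {frame_idx} frames.")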

Audio dubbing and manipulation is accomplished using voice actors and/or separate audio-altering software. The video/audio editing process typically involves video running along one horizontal track while audio runs along a separate horizontal track. Misalignment of the audio and video tracks can create a syncing issue in which the visual movement of the lips does not match the timing of the sounds heard.

THE DEEPFAKE SOCIETY

A short tutorial, along with links to example videos and the software needed to create Deepfakes, can be found at the web site of The Deepfake Society (www.thedeepfakesociety.com). Github repositories exist from which the open-source software can be downloaded and used to create Deepfakes. Notably, one link at The Deepfake Society leading to the FakeApp desktop app was blocked by Reddit with the accompanying statement: “This subreddit was banned due to a violation of our content policy, specifically our policy against involuntary pornography.” Some of the videos previously available at The Deepfake Society website are no longer posted, perhaps due to copyright infringement issues. According to the following information posted at The Deepfake Society website, the process of creating a Deepfake may take from 9-13 hours:
Times vary by hardware quality, but generally speaking the pipeline is:

  • Extraction: Producing uniform training data of a model’s face (5-20 min)
  • Training: Running a neural network to learn to emulate this face (8-12 hours)
  • Conversion: Using the neural network to project the target face onto the original face in a video frame-by-frame (5-20 min).
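
The training step above is commonly implemented as an autoencoder with one shared encoder and a separate decoder for each identity; swapping decoders at conversion time is what projects one face onto the other. The following PyTorch sketch is a deliberately simplified illustration of that idea, not the code of FakeApp or faceswap; the layer sizes and the 64 x 64 crop size are assumptions chosen for brevity.

# deepfake_autoencoder_sketch.py - illustrative only; not the code of any specific tool.
# One shared encoder learns features common to both faces; each identity gets its own decoder.
# At conversion time, face A is encoded and then decoded with B's decoder, producing the swap.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(            # 3 x 64 x 64 face crop in
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 512),     # shared latent face representation
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),  # reconstructed 3 x 64 x 64 crop
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

def training_step(batch_a, batch_b, loss_fn=nn.L1Loss()):
    # Each identity is reconstructed through the SAME encoder but its OWN decoder.
    loss_a = loss_fn(decoder_a(encoder(batch_a)), batch_a)
    loss_b = loss_fn(decoder_b(encoder(batch_b)), batch_b)
    return loss_a + loss_b

def convert(face_a):
    # The swap: encode face A, decode with identity B's decoder.
    return decoder_b(encoder(face_a))

Production tools typically add face alignment, masking, and more elaborate loss functions, which is part of why the training stage runs for the hours quoted above.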

THE CHALLENGE TO JUSTICE

Videos, still images and audio are often relied upon for evidentiary purposes. In the past, few questions were asked about the authenticity of such evidence. Although image-altering techniques have been available for many years, the technology was often expensive and time-consuming. In the future, the integrity of video and audio evidence will face additional scrutiny as the falsification process becomes easier and cheaper to complete.

THE DEEPFAKE DEFENSE

Future defendants will inevitably dispute video “evidence” by claiming that they were “deepfaked.” Investigators and attorneys will be left to prove, by a preponderance of the evidence or beyond a reasonable doubt, that the images were not manufactured. Such proof will likely involve digital forensic examination of the source videos and other corroborative information.

CHAIN OF POSSESSION

The rise of Deepfakes will threaten the credibility of graphic imagery as evidence. Those who present video and audio evidence should prepare to defend the integrity of the imagery against allegations of manipulation and alteration. This will mean closely following the chain of possession of evidence to help show that a video was not altered before it could be entered into evidence at the judicial proceeding.
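
One concrete way to support that chain is to record a cryptographic hash of the file at the moment it is collected and to re-verify the hash before the proceeding; any alteration of the file changes the hash. Below is a minimal sketch using Python's standard hashlib module; the evidence file name is illustrative.

# hash_evidence.py - minimal sketch: fingerprint a video file so later alteration is detectable.
# hashlib is part of the Python standard library; the file name is illustrative.
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest, reading in chunks so large videos fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record this value in the evidence log at collection time...
original_hash = sha256_of_file("exhibit_41_video.mp4")
print("SHA-256:", original_hash)

# ...and recompute it before trial; a mismatch means the file has changed.
assert sha256_of_file("exhibit_41_video.mp4") == original_hash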

CORROBORATIVE EVIDENCE

Corroborative Evidence is information that is supplementary to that already given and tending to strengthen or confirm it, for example, additional evidence of a different character but to the same point (Law Dictionary, n.d.).

In cases involving suspected deepfakes, corroborating evidence may come in the form of digital forensic artifacts from computers or other digital devices. It may also originate from interviews and interrogations of witnesses, suspects, and victims. Corroborating the creation of a deepfake may involve finding, on the creator's computer, the images and/or software used to make the forgery.

HOW TO DISCERN A DEEPFAKE FROM REALITY

As technology evolves, the following list of indicators will become less and less useful. None of the following is, by itself, definitive proof of a Deepfake, but each is an indicator that a video may have been forged:

No Audio
Deepfake visual-creation software works with images but not sound. Sound manipulation requires a voice impersonator and audio-manipulation software separate from the deepfake image software. A video with no accompanying audio is therefore an indicator of a possible deepfake.
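
One quick way to check whether a file carries any audio at all is to ask ffprobe (part of the FFmpeg suite) to list its audio streams. The sketch below assumes ffprobe is installed and on the PATH; the file name is illustrative.

# has_audio.py - minimal sketch: report whether a video file contains any audio stream.
# Assumes the ffprobe command-line tool (part of FFmpeg) is installed and on the PATH.
import subprocess

def has_audio_stream(path):
    result = subprocess.run(
        ["ffprobe", "-v", "error",
         "-select_streams", "a",               # look only at audio streams
         "-show_entries", "stream=codec_type",
         "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True)
    return "audio" in result.stdout

print(has_audio_stream("suspect_clip.mp4"))    # False is one more reason for suspicion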

Poorly Timed Lip Syncing
When the timing of the spoken audio does not match the lip movements of the person depicted in the video, it is an indication of a possible fake (Baker, 2019).

Suspicious Sources
There are more than 60 well-known satire and fake-news websites. A partial list can be found here: https://blog.feedspot.com/satire_blogs/. Videos sourced from, or posted at, the satire sites should raise suspicion of a fake.

Short Duration Scenes
Because long-duration deep fakes are time-consuming to create and perfect, they are often produced in short episodes of only a few seconds. A short duration video may be a fake.

Background Distortion
Older deep fake technology had difficulty normalizing backgrounds that abutted the face of the person depicted. Look for some distortion close to the face of the targeted subject. The distortion may be an indication of a fake.

Portrait Style in 256 x 256 pixels
One implementation of deepfake software works best when the portrait-style image is sized at 256 x 256 pixels. Larger, rectangular (rather than square) sizes can result in distorted images. A portrait-style headshot video in the 256 x 256 pixel format is therefore a possible indicator of a fake (Theobalt, 2018).
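
Frame dimensions are easy to inspect programmatically. The sketch below uses OpenCV to read a video's width and height and flag the square 256 x 256 case; the file name is illustrative.

# frame_size_check.py - minimal sketch: flag videos whose frames are exactly 256 x 256.
# Assumes OpenCV (pip install opencv-python); the file name is illustrative.
import cv2

cap = cv2.VideoCapture("suspect_clip.mp4")
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
cap.release()

print(f"Frame size: {width} x {height}")
if (width, height) == (256, 256):
    print("Square 256 x 256 portrait format - one possible indicator of a fake.")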

Suspicious (or no) Blink Rate
The average person blinks 15-20 times per minute (Nakano et al., 2013). A minute-long video in which the subject never blinks might be suspected of being a deepfake. A video posted at the MIT Technology Review web site shows the real President Donald Trump side-by-side with the face of Nicolas Cage deepfaked onto the President's head. In the video, Cage-Trump never blinks (Knight, 2018).
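
Blink counting can be automated with the widely used eye-aspect-ratio (EAR) approach: the EAR, computed from six landmark points around each eye, drops sharply when the eye closes. The sketch below shows only the EAR arithmetic and a simple blink counter; obtaining the landmarks (for example with dlib or mediapipe) is omitted, and the 0.2 threshold is a common rule of thumb rather than a calibrated value.

# blink_rate.py - minimal sketch of an eye-aspect-ratio (EAR) blink counter.
# Landmark detection (e.g., dlib or mediapipe) is omitted; each eye is six (x, y) points
# ordered p1..p6, as in the Soukupova & Cech EAR formulation.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(p):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); roughly 0.3 when open, near 0 when closed.
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))

def count_blinks(ear_per_frame, threshold=0.2, min_closed_frames=2):
    """Count dips of the EAR below the threshold lasting at least min_closed_frames frames."""
    blinks, closed = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            closed += 1
        else:
            if closed >= min_closed_frames:
                blinks += 1
            closed = 0
    return blinks

# A one-minute clip with zero counted blinks falls well below the expected 15-20 per minute.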

Robotic Subject
The deepfake character’s face and mouth may have a robotic appearance. The robot-like appearance comes from the fact that it is easier to manufacture an image where there are few facial expressions (Baker, 2019).

Software on the Source Computer
The creator's computer may contain software used to create Deepfakes, including FakeApp, TensorFlow, Faceswap, FaceApp, Photoshop, and others. Do a search-engine check at the time of your review to learn of the newest deepfake, artificial intelligence, and faceswap software.

Fact-Checking Sites
The hue and cry of “Fake news!” has become popular in recent years. Web sites have evolved where people research the veracity of news and information posted in Cyberspace or seen in the media. Two such sites are:

A. Politifact (www.politifact.com)
Politifact at the Poynter Institute in Florida is a 501(c)(3) non-profit that conducts fact-checking on mostly political issues.

B. Snopes (www.snopes.com)
The web site Snopes.com, run by Snopes Media Group, purports to conduct fact-checking and verification of popular stories involving folklore, urban legends, and hoaxes.

Echoing some of the factors listed above, researchers Gallagher and Ross (2022) recommend that viewers look closely at an image for irregularities in the following areas to help determine authenticity: blending, color, focus, discontinuity, and blinking.

DIGITAL FORENSICS TOOLS FOR IDENTIFYING DEEPFAKES

The US Department of Defense's Defense Advanced Research Projects Agency (DARPA) Media Forensics (MediFor) program is working to identify Deepfakes. According to Dr. Matt Turek's (n.d.) DARPA-sponsored web page:

“If successful, the MediFor platform will automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video.”

In an MIT Technology Review article, Dr. Turek is quoted as saying, “We've discovered subtle cues in current GAN-manipulated images and videos that allow us to detect the presence of alterations” (Knight, 2018). The article goes on to discuss the lack of blinking in some deepfake videos as an indicator.

Deepfake Spotting Tool 
Computer scientists at the University at Buffalo developed a tool that identifies deepfakes with 94% effectiveness by analyzing light reflections in the eyes. According to a report in SciTechDaily (March 12, 2021), “The cornea is almost like a perfect semisphere and is very reflective,” says the paper's lead author, Siwei Lyu, PhD, SUNY Empire Innovation Professor in the Department of Computer Science and Engineering. “So, anything that is coming to the eye with a light emitting from those sources will have an image on the cornea. The two eyes should have very similar reflective patterns because they're seeing the same thing. It's something that we typically don't notice when we look at a face.”
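
The underlying idea, comparing the specular highlights in the two corneas and flagging large disagreements, can be illustrated with a deliberately simplified sketch. This is a toy demonstration of the concept, not the University at Buffalo tool; the eye crops are assumed to have been located already (for example with a landmark detector), and the brightness threshold is arbitrary.

# corneal_reflection_toy.py - toy illustration of comparing highlight patterns in the two eyes.
# NOT the University at Buffalo tool. Eye crops are assumed pre-located and equal in size.
import cv2
import numpy as np

def highlight_mask(eye_bgr, brightness_threshold=220):
    """Binary mask of the brightest pixels, a crude stand-in for the corneal reflection."""
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    return (gray >= brightness_threshold).astype(np.uint8)

def reflection_similarity(left_eye, right_eye):
    """Intersection-over-union of the two highlight masks; values near 1.0 mean similar patterns."""
    a = highlight_mask(left_eye)
    b = highlight_mask(cv2.flip(right_eye, 1))        # mirror the right eye for comparison
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 1.0 if union == 0 else inter / union

# Example usage (coordinates are illustrative and would come from a landmark detector):
# frame = cv2.imread("suspect_frame.png")
# left, right = frame[220:260, 300:350], frame[220:260, 420:470]
# print(reflection_similarity(left, right))           # very low values invite closer scrutiny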

CONCLUSION

Deepfake technology is rapidly evolving and improving. Nefarious use of the technology has the potential to wrongfully influence the media, public opinion, politics, and the justice system. Video and audio evidence that was once unquestioned will soon be closely scrutinized and challenged. This work discussed how Deepfakes are made, the challenges they pose, and some possible indicators of a Deepfake. Although there are several indicators, and a fledgling defense-sponsored forensic solution, that might reveal a forgery, none of the current indicators is perfect. The old saying, “A picture is worth 1,000 words,” should be replaced with, “A picture is worth 1,000 skeptics.”

REFERENCES

Agarwal, Anuj. (2019). Feedspot. Top 60 Satire Websites and Blogs on the Web. Retrieved from https://blog.feedspot.com/satire_blogs/.

Artificial Intelligence (AI). (n.d.). Techopedia. Retrieved from https://www.techopedia.com/definition/190/artificial-intelligence-ai

Baker, Hazel. (March 11, 2019). Making a 'deepfake': How creating our own synthetic video helped us learn to spot one. Reuters. Retrieved from https://in.reuters.com/article/rpb-deepfake/making-a-deepfake-how-creating-our-own-synthetic-video-helped-us-learn-to-spot-one-idINKBN1QS2FO
 
Barr, Kyle. (August 23, 2022). Hackers Use Deepfakes of Binance Exec to Scam Multiple Crypto Projects. Gizmodo. https://gizmodo.com/crypto-binance-deepfakes-1849447018
 
Benscoter, J., (March 12, 2021). Pa. woman created 'deepfake' videos to force rivals off daughter's cheerleading squad: Police. Penn Live Patriot-News. https://www.pennlive.com/news/2021/03/pa-woman-created-deepfake-videos-to-force-rivals-off-daughters-cheerleading-squad-police.html

Chappellet-Lanier, Tajha. (September 17, 2018). Congress wants the Intelligence Community to weigh in on how to counter ‘deepfakes’. Fedscoop, Tech. Retrieved from https://www.fedscoop.com/congress-dan-coats-letter-deepfakes/

Deepfake: (n.d.). Techtarget.com. Retrieved from https://whatis.techtarget.com/definition/deepfake

FBI IC3. (June 28, 2022).  Deepfakes and Stolen PII Utilized to Apply for Remote Work Positions. Alert Number I-062822-PSA. https://www.ic3.gov/Media/Y2022/PSA220628

Gallagher, S. and Ross, D. (2022). What are Deepfakes, and How Can We Detect Them? Software Engineering Institute | Carnegie Mellon University. https://www.youtube.com/watch?v=EMpPWxZ5xEY

Jung-Youn, Lee. (September 23, 2022). Sex crime chat room still near us, this time using deepfakes. The Korea Herald. https://asianews.network/sex-crime-chat-room-still-near-us-this-time-using-deepfakes/

Knight, Will. (August 7, 2018). The Defense Department has produced the first tools for catching deepfakes. MIT Technology Review. Retrieved 16 March 2019, from https://www.technologyreview.com/s/611726/the-defense-department-has-produced-the-first-tools-for-catching-deepfakes/

Law Dictionary. (n.d.). What is corroborating evidence? Gildersleeve v. Atkinson, 0 N. M. 250, 27 Pac. 477; Mills v. Comm., 93 Va. 815, 22 S. E. 803; Code Civ. Proc. Cal. 1903. Retrieved from https://thelawdictionary.org/corroborating-evidence/

McDermott, Sarah. (October 21, 2022.). Deepfaked: They put my face on a porn video. BBC.   https://www.bbc.com/news/uk-62821117

Mirsky, Y., & Lee, W. (2020). The Creation and Detection of Deepfakes: A Survey. ACM Computing Surveys (CSUR). https://doi.org/10.48550/arXiv.2004.11138

Nakano, T., Kato, M., et al. (January 8, 2013). Blink-related momentary activation of the default mode network while viewing videos. PNAS, 110(2), 702-706. https://doi.org/10.1073/pnas.1214804110. Retrieved from https://www.pnas.org/content/110/2/702

Politifact. (2019). Politifact at the Poynter Institute. Retrieved from https://www.politifact.com/

Rosenstein, Rod. J. (February 21, 2019). Deputy Attorney General Rod J. Rosenstein Delivers Remarks at the Wharton School’s Legal Studies and Business Ethics Lecture Series. Justice News. Remarks as prepared for delivery. Philadelphia, PA. Retrieved from https://www.justice.gov/opa/speech/deputy-attorney-general-rod-j-rosenstein-delivers-remarks-wharton-school-s-legal-studies

Snopes.com. (2019). Snopes Media Group. Retrieved from https://www.snopes.com/

The Deepfake Society. (n.d.). How to create Deepfakes. Retrieved from https://www.thedeepfakesociety.com/

Theobalt, C. (May 17, 2018). Deep Video Portraits - SIGGRAPH 2018. H. Kim, P. Garrido, A. Tewari, W. Xu, J. Thies, M. Nießner, P. Pérez, C. Richardt, M. Zollhöfer, C. Theobalt, Deep Video Portraits, ACM Transactions on Graphics (SIGGRAPH 2018). Retrieved from https://www.youtube.com/watch?time_continue=277&v=qc5P2bvfl44

Turek, Matt. (n.d.). Media Forensics (MediFor). Defense Advanced Research Projects Agency. (DARPA). US Department of Defense Retrieved from https://www.darpa.mil/program/media-forensics

University at Buffalo. (March 12, 2021). New Deepfake Spotting Tool Proves 94% Effective – Here’s the Secret of Its Success. SciTechDaily. https://scitechdaily.com/new-deepfake-spotting-tool-proves-94-effective-heres-the-secret-of-its-success/

copyright © Frank Kardasz 2019

Bit.ly link:
http://bit.ly/10deepfakeclues
