Tech CEO Shows Shocking Deepfake Of Kari Lake At Hearing On AI Impact On Elections

Forbes Breaking News
18 Apr 2024 · 08:40

TLDR: In a hearing on the impact of AI on elections, Tech CEO Riddhiman Gupta, founder of Deep Media, discusses the alarming rise of deepfakes and their potential to disrupt society. He explains that deepfakes are AI-manipulated media that can mislead or harm, and emphasizes the rapid improvement and decreasing cost of producing such content. Gupta highlights the need for collaboration among government, AI companies, platforms, journalists, and deepfake detection companies to address the issue. He showcases his company's efforts in developing solutions to detect deepfakes, including working with major news outlets and participating in initiatives to label and authenticate content. The presentation includes an example of a high-quality deepfake video of Kari Lake, demonstrating the sophistication of current technology and the importance of staying ahead in the fight against misinformation.

Takeaways

  • 🧑‍💼 The speaker, Riddhiman Gupta, is a tech entrepreneur focused on addressing the deepfake problem, having founded Deep Media in 2017.
  • 📚 Deepfakes are synthetically manipulated AI images, audio, or video that can mislead or harm society; the problem is not limited to text-based content.
  • 🧠 The human mind is particularly susceptible to manipulation by image, audio, and video, which makes deep fakes a potent threat to society.
  • 💡 Key technologies behind generative AI include the Transformer architecture, Generative Adversarial Networks (GANs), and diffusion models.
  • 💻 These technologies require significant computational resources and large datasets, which are becoming more accessible and affordable.
  • ⏳ The quality of deep fakes is improving rapidly, with costs dropping from 10 cents to potentially 1 cent per minute of video.
  • 📈 It is projected that by 2030, up to 90% of online content could be deep fakes, which poses a significant challenge for society.
  • 🗳️ Deep fakes have already impacted elections, with manipulated videos of political figures being used for political assassination or to sway public opinion.
  • 🚨 The greater threat may be the erosion of trust in real content, as the prevalence of fake content leads to plausible deniability and a crisis of authenticity.
  • 🤝 Gupta emphasizes the need for a collaborative approach involving government, generative AI companies, platforms, journalists, and deepfake detection companies.
  • 🛡️ Deep Media is involved in various initiatives to combat deep fakes, including partnerships with media outlets, participation in research programs, and developing detection technologies.
  • 📈 The company's platform aims to deliver scalable solutions for detecting deep fakes across various media types, maintaining a low false positive and false negative rate.

Q & A

  • What is the primary concern expressed by the speaker, Riddhiman Gupta, about deepfakes?

    -Riddhiman Gupta's primary concern is that deepfakes have the potential to harm or mislead, and can completely dismantle society by manipulating image, audio, or video content. He emphasizes the rapid improvement and decreasing cost of producing deepfakes, which poses a significant threat to the authenticity of digital content and public trust.

  • What are the three fundamental technologies that Riddhiman Gupta asks legislators to keep in mind when discussing generative AI?

    -The three fundamental technologies mentioned by Riddhiman Gupta are the Transformer, which is a type of architecture; the Generative Adversarial Network (GAN); and the Diffusion model. These technologies cover about 90% of generative AI and require massive amounts of compute resources and data.

  • How does the speaker describe the current state and future projection of deepfakes on online platforms?

    -The speaker describes that the quality of deepfakes is nearly perfect now and they are becoming increasingly affordable to produce, with costs dropping from 10 cents per minute to potentially 1 cent. He also projects that by 2030, up to 90% of the content on online platforms could be deepfakes.

  • What are the two ways in which deepfakes have been used in the political context according to the transcript?

    -Deepfakes have been used for political assassination, such as fake videos of President Biden announcing the draft or President Trump getting arrested. They have also been used to manufacture support, such as the deepfakes of President Trump with Black voters, intended to make politicians seem more relatable.

  • What is the concept of 'plausible deniability' as it relates to deepfakes?

    -Plausible deniability refers to the situation where anyone, including politicians or business figures, could claim an image, audio clip, or video is a deepfake, thereby casting doubt on the authenticity of real content. This is fundamentally dangerous because it erodes trust in genuine content.

  • What is the solution approach proposed by Riddhiman Gupta to combat the deepfake problem?

    -Riddhiman Gupta proposes a collaborative solution involving five groups: government stakeholders, generative AI companies, platforms, investigative journalists, and deepfake detection companies. These groups need to work together to solve the problem, supported by technologies and initiatives such as DARPA's SemaFor and AI FORCE programs and the Content Authenticity Initiative.

  • How does Riddhiman Gupta's company, Deep Media, contribute to the detection and reporting of deepfakes?

    -Deep Media has helped journalists such as Donie O'Sullivan at CNN, Geoffrey Fowler at the Washington Post, and Amanda Floran at Forbes detect and report on deepfakes. They are part of the WITNESS organization, which aids reporters in deepfake detection, and are involved in the DARPA SemaFor and AI FORCE programs, as well as the Content Authenticity Initiative to label real and fake content.

  • What is the significance of the deepfake of Kari Lake shown in the hearing?

    -The deepfake of Kari Lake is significant as it demonstrates the high quality of current deepfake technology. It was produced using advanced generative models that were not publicly released, showcasing how convincing deepfakes have become and the importance of robust detection methods.

  • What is the role of generative AI technology in Deep Media's strategy to combat deepfakes?

    -Deep Media uses generative AI technology internally to improve their deepfake detection capabilities. They do not release this technology to the public but instead use it to train their detectors, setting a high standard in the field of deepfake detection.

  • How does Riddhiman Gupta view the potential societal impact of deepfakes if not properly addressed?

    -Riddhiman Gupta suggests that if not addressed properly, deepfakes could lead to a society resembling George Orwell's '1984', where trust in information is eroded, and manipulation becomes rampant. He believes that AI has the potential to create a negative externality in the form of fraud and misinformation, which can be mitigated through proper legislation.

  • What does Riddhiman Gupta believe about the role of the free market and AI in solving the deepfake problem?

    -Riddhiman Gupta is a believer in the free market and fundamentally thinks that AI can be used for good. He sees deepfakes as a market failure and a tragedy of the commons. He believes that through proper legislation, the negative externalities associated with deepfakes can be internalized, leading to a flourishing AI ecosystem.

  • What are the key points that an AI tracks when analyzing a person's face in a video?

    -When analyzing a video, the AI tracks a set of key points (facial landmarks) on the subject's face and follows how they move from frame to frame. These points help the system determine the authenticity of the video and distinguish real footage from deepfake content; a minimal landmark-tracking sketch follows below.
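
As an illustration only (not the system demonstrated at the hearing), here is a minimal sketch of facial key-point tracking using the open-source MediaPipe Face Mesh. The video path is a placeholder, and a real detector would feed these landmarks into further analysis rather than simply collecting them.

```python
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

def extract_landmarks(video_path: str):
    """Yield (frame_index, [(x, y, z), ...]) for each frame where a face is found."""
    cap = cv2.VideoCapture(video_path)
    with mp_face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1) as face_mesh:
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_face_landmarks:
                points = results.multi_face_landmarks[0].landmark
                yield idx, [(p.x, p.y, p.z) for p in points]
            idx += 1
    cap.release()

# Hypothetical usage: inspect how the landmarks move across frames of "clip.mp4".
for frame_idx, landmarks in extract_landmarks("clip.mp4"):
    print(frame_idx, len(landmarks))  # 468 landmarks per detected face (default settings)
```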

Outlines

00:00

💡 Introduction to Deep Fakes and Their Impact

Riddhiman Gupta, the founder of Deep Media, introduces himself as a tech entrepreneur with a background in machine learning. He explains the concept of deep fakes, which are AI-manipulated images, audio, or video created to deceive or harm. Gupta emphasizes the rapid advancement and decreasing cost of creating deep fakes, which poses a significant threat to society. He outlines the importance of understanding three key technologies behind generative AI: Transformers, Generative Adversarial Networks (GANs), and diffusion models. Gupta also discusses the societal harms caused by deep fakes, including political manipulation and the potential for plausible deniability, which could undermine trust in genuine content. He concludes by stressing the need for a collaborative approach involving government, AI companies, platforms, journalists, and deep fake detection companies to address the issue.

05:01

🛠️ Solutions to the Deep Fake Problem

Gupta presents a solution-oriented approach to tackling the deep fake problem. He believes in the potential of AI to be a force for good and views deep fakes as a market failure that requires legislative action to correct. He demonstrates how his platform can provide scalable solutions across various media types, emphasizing the importance of minimizing false positives and negatives in deep fake detection. Gupta illustrates the AI's perspective on media, showing how the system processes and detects both real and fake audiovisual content. He showcases the platform's capability by presenting an example of a high-quality deep fake video that was correctly identified by their system. Gupta highlights Deep Media's role in setting the gold standard for deep fake detection and their commitment to keeping their generative AI technology internal for training detectors, not for public release. He concludes by offering to answer questions and provide further information on technological solutions to the deep fake challenge.
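
To make the "false positive / false negative" framing above concrete, here is a small illustrative computation; the labels and predictions are made-up examples, not Deep Media data.

```python
import numpy as np

# Labels: 1 = deepfake, 0 = real. Predictions come from a hypothetical detector.
y_true = np.array([1, 0, 0, 1, 1, 0, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 0])

false_positives = np.sum((y_pred == 1) & (y_true == 0))  # real content flagged as fake
false_negatives = np.sum((y_pred == 0) & (y_true == 1))  # fakes that slipped through

fpr = false_positives / np.sum(y_true == 0)  # false positive rate
fnr = false_negatives / np.sum(y_true == 1)  # false negative rate
print(f"FPR={fpr:.2f}  FNR={fnr:.2f}")
```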

Keywords

💡Deepfake

A deepfake refers to synthetically manipulated AI-generated images, audio, or video that can be used to deceive or mislead. In the context of the video, deepfakes pose a significant threat to society, particularly in the realm of politics and elections, as they can be used for political assassination or to create false narratives. The video emphasizes the rapid advancement and decreasing cost of deepfake technology, which makes it increasingly accessible and potentially disruptive to trust in media.

💡Generative AI

Generative AI is a branch of artificial intelligence that involves the creation of new content, such as images, audio, or video, that did not exist before. It is the underlying technology behind deepfakes. The video discusses generative AI in the context of its potential to cause harm if not properly regulated or managed, highlighting the need for solutions to detect and mitigate the spread of deepfakes.

💡Transformer

A Transformer is a type of AI architecture that is integral to the functioning of generative AI, including deepfakes. It is a model that processes sequential data and is fundamental to the creation and detection of deepfakes. The video mentions Transformers as one of the three key technologies that make up generative AI.
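
For reference (not part of the testimony), a minimal sketch of the scaled dot-product attention operation at the core of the Transformer architecture, with made-up tensor shapes:

```python
import torch

def scaled_dot_product_attention(q, k, v):
    """Minimal attention: weight each value by how well its key matches the query."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # (batch, seq, seq) similarity scores
    weights = torch.softmax(scores, dim=-1)       # normalize into attention weights
    return weights @ v                            # weighted sum of the values

# Illustrative shapes: batch of 2, sequence of 5 tokens, 16-dimensional embeddings.
q = k = v = torch.randn(2, 5, 16)
out = scaled_dot_product_attention(q, k, v)       # -> shape (2, 5, 16)
```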

💡Generative Adversarial Network (GAN)

A GAN is a type of AI system consisting of two parts: a generator that creates content and a discriminator that evaluates it. This technology is used to create high-quality deepfakes. The video underscores the importance of understanding GANs when discussing the creation and detection of deepfakes.
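
A minimal sketch of the generator-versus-discriminator training loop described above, written in PyTorch; the network sizes and learning rates are illustrative placeholders, not any production system.

```python
import torch
import torch.nn as nn

# Generator maps 64-dim noise to a flat 784-dim "image"; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    """One adversarial update; real_batch has shape (batch, 784)."""
    b = real_batch.size(0)
    fake = G(torch.randn(b, 64))

    # Discriminator: push real scores toward 1 and fake scores toward 0.
    opt_d.zero_grad()
    loss_d = bce(D(real_batch), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    loss_d.backward()
    opt_d.step()

    # Generator: try to make the discriminator score its fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(b, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```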

💡Diffusion Model

A diffusion model is another fundamental technology in generative AI, which is used to generate data that is similar to the training data. In the context of the video, diffusion models are part of the suite of technologies that enable the creation of increasingly convincing deepfakes.
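
A small illustrative sketch of the forward "noising" process that diffusion models are trained to reverse; the linear noise schedule below is a common textbook choice, not something taken from the video.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # per-step noise levels
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal retention

def q_sample(x0, t, noise=None):
    """Sample x_t from the forward process q(x_t | x_0) in closed form."""
    if noise is None:
        noise = torch.randn_like(x0)
    shape = (-1, *([1] * (x0.dim() - 1)))           # broadcast per-sample coefficients
    a_bar = alphas_cumprod[t].sqrt().view(*shape)
    one_minus = (1.0 - alphas_cumprod[t]).sqrt().view(*shape)
    return a_bar * x0 + one_minus * noise

# Made-up batch of "clean" images, each noised at a random timestep.
x0 = torch.randn(8, 3, 32, 32)
t = torch.randint(0, T, (8,))
xt = q_sample(x0, t)  # the noisy inputs a denoising network would learn to reverse
```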

💡Compute Resources

Compute resources refer to the hardware and software capabilities required to perform complex calculations, such as those needed for AI and deepfake generation. The video highlights that creating deepfakes requires significant compute resources, which has implications for the scale and potential impact of deepfake technology.

💡Political Assassination

In the context of the video, political assassination refers to the use of deepfakes to discredit or harm political figures by creating false scenarios that can sway public opinion. Examples given include deepfakes of President Biden announcing a draft or President Trump being arrested, which are used to illustrate the potential misuse of deepfake technology in politics.

💡Plausible Deniability

Plausible deniability is the condition where someone can claim that they are not responsible for an action or information because they can provide a believable reason for disclaiming responsibility. The video discusses how deepfakes can lead to a situation where politicians or other individuals can falsely claim that real content is a deepfake to avoid accountability.

💡Free Market

The free market is an economic system where prices are determined by supply and demand with little to no government intervention. The video speaker is a believer in the free market and suggests that with proper legislation, AI, including deepfakes, can be used for good and contribute positively to the ecosystem.

💡Negative Externality

A negative externality is an unintended negative consequence that affects a third party who is not directly involved in an economic transaction. In the video, deepfakes are described as a market failure and a tragedy of the commons, representing a negative externality that needs to be addressed through proper legislation.

💡Content Authenticity Initiative

The Content Authenticity Initiative is a collaborative effort involving companies like Adobe, aimed at distinguishing real from fake content through labeling. The video discusses the initiative as part of the solution to the deepfake problem, emphasizing the importance of industry collaboration to establish trust in media content.
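
As a loose conceptual sketch only: the real Content Authenticity Initiative / C2PA approach embeds certificate-signed provenance metadata in the media file itself, but the toy "manifest" below conveys the basic label-and-verify idea using nothing beyond the Python standard library.

```python
import hashlib, hmac, json
from pathlib import Path

SECRET_KEY = b"demo-signing-key"  # placeholder only, not a real signing credential

def label_content(path: str, claims: dict) -> dict:
    """Attach a toy provenance manifest (content hash + signed claims) to a file."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    manifest = {"content_sha256": digest, "claims": claims}  # claims must be JSON-serializable
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_content(path: str, manifest: dict) -> bool:
    """Check that the file is unchanged and the manifest was signed with our key."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return digest == manifest["content_sha256"] and hmac.compare_digest(expected, manifest["signature"])
```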

Highlights

Riddhiman Gupta, founder of Deep Media, testified before a hearing on AI's impact on elections, showcasing the potential dangers of deepfakes.

Gupta started building machine learning applications at 15 and founded Deep Media in 2017 to address the deepfake problem.

Deepfakes are synthetically manipulated AI images, audio, or video that can mislead or harm society.

The human mind is particularly susceptible to manipulation by image, audio, and video content.

Three key technologies behind generative AI are Transformer, Generative Adversarial Networks (GANs), and diffusion models.

Deepfakes are becoming high-quality, cheap to produce, and are increasingly prevalent on online platforms.

Deepfakes have already impacted elections, with manipulated videos of political figures causing public confusion.

The real threat of deepfakes lies in their potential to erode trust in genuine content.

Gupta warns of a future resembling George Orwell's 1984, where misinformation and plausible deniability are rampant.

Solutions to the deepfake problem require collaboration between government, AI companies, platforms, journalists, and detection companies.

Deep Media has assisted major news outlets like CNN and the Washington Post in detecting and reporting on deepfakes.

The company is part of the DARPA SemaFor and AI FORCE programs, aiming to solve the deepfake issue.

Deep Media is also involved in the Content Authenticity Initiative, working with companies like Adobe to label real and fake content.

Gupta emphasizes the importance of a free market approach and the potential for AI to be a force for good.

Deepfakes represent a market failure and a tragedy of the commons, which can be mitigated through proper legislation.

Deep Media uses its own generative AI technology to train detectors and set the gold standard for deepfake detection.

Gupta demonstrated how AI sees and processes media, focusing on the detection of both real and fake audio and video content.

The presentation included a high-quality deepfake video of Kari Lake, illustrating the sophistication of current deepfake technology.

Deep Media aims to stay ahead in the cat-and-mouse game of deepfake detection, ensuring public safety and trust in media.