Tech CEO Shows Shocking Deepfake Of Kari Lake At Hearing On AI Impact On Elections
TLDR
In a hearing on the impact of AI on elections, tech CEO Riddhiman Gupta, founder of Deep Media, discusses the alarming rise of deepfakes and their potential to disrupt society. He explains that deepfakes are AI-manipulated media that can mislead or harm, and emphasizes the rapid improvement and decreasing cost of producing such content. Gupta highlights the need for collaboration among government, AI companies, platforms, journalists, and deepfake detection companies to address the issue. He showcases his company's efforts in developing solutions to detect deepfakes, including working with major news outlets and participating in initiatives to label and authenticate content. The presentation includes an example of a high-quality deepfake video of Kari Lake, demonstrating the sophistication of current technology and the importance of staying ahead in the fight against misinformation.
Takeaways
- 🧑‍💼 The speaker, Riddhiman Gupta, is a tech entrepreneur focused on addressing the deepfake problem, having founded Deep Media in 2017.
- 📚 Deepfakes are AI-manipulated images, audio, or video that can mislead or harm society; the threat is not limited to text.
- 🧠 The human mind is particularly susceptible to manipulation by image, audio, and video, which makes deepfakes a potent threat to society.
- 💡 Key technologies behind generative AI include the Transformer architecture, Generative Adversarial Networks (GANs), and diffusion models.
- 💻 These technologies require significant computational resources and large datasets, both of which are becoming more accessible and affordable.
- ⏳ The quality of deepfakes is improving rapidly, while production costs are dropping from 10 cents to potentially 1 cent per minute of video.
- 📈 It is projected that by 2030, up to 90% of online content could be deepfakes, which poses a significant challenge for society.
- 🗳️ Deepfakes have already impacted elections, with manipulated videos of political figures used for political assassination or to sway public opinion.
- 🚨 The greater threat may be the erosion of trust in real content, as the prevalence of fake content leads to plausible deniability and a crisis of authenticity.
- 🤝 Gupta emphasizes the need for a collaborative approach involving government, generative AI companies, platforms, journalists, and deepfake detection companies.
- 🛡️ Deep Media is involved in various initiatives to combat deepfakes, including partnerships with media outlets, participation in research programs, and development of detection technologies.
- 📈 The company's platform aims to deliver scalable deepfake detection across various media types while maintaining low false-positive and false-negative rates.
Q & A
What is the primary concern expressed by the speaker, Riddhiman Gupta, about deepfakes?
-Riddhiman Gupta's primary concern is that deepfakes have the potential to harm or mislead, and can completely dismantle society by manipulating image, audio, or video content. He emphasizes the rapid improvement and decreasing cost of producing deepfakes, which poses a significant threat to the authenticity of digital content and public trust.
What are the three fundamental technologies that Riddhiman Gupta asks legislators to keep in mind when discussing generative AI?
-The three fundamental technologies mentioned by Riddhiman Gupta are the Transformer, which is a type of architecture; the Generative Adversarial Network (GAN); and the Diffusion model. These technologies cover about 90% of generative AI and require massive amounts of compute resources and data.
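As a rough illustration of one of these three technologies, the forward (noising) step of a diffusion model can be sketched in a few lines of NumPy. A diffusion model is trained to reverse this corruption process; the function name, noise schedule, and array sizes below are illustrative assumptions, not taken from any production system:

```python
import numpy as np

# Forward diffusion: gradually corrupt clean data x0 with Gaussian noise.
# A generative diffusion model learns to reverse this process step by step.
def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]            # product of (1 - beta) up to step t
    eps = rng.standard_normal(x0.shape)          # the noise the model learns to predict
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))                 # stand-in for a tiny "image"
betas = np.linspace(1e-4, 0.02, 1000)            # a simple linear noise schedule

early = forward_diffuse(x0, 10, betas, rng)      # still close to x0
late = forward_diffuse(x0, 999, betas, rng)      # nearly pure noise
print(np.corrcoef(x0.ravel(), early.ravel())[0, 1])  # high correlation with x0
print(np.corrcoef(x0.ravel(), late.ravel())[0, 1])   # near zero
```

The point of the sketch is why Gupta stresses compute and data: the generator must learn to undo thousands of such noising steps, which is what demands massive training resources.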
How does the speaker describe the current state and future projection of deepfakes on online platforms?
-The speaker describes that the quality of deepfakes is nearly perfect now and they are becoming increasingly affordable to produce, with costs dropping from 10 cents per minute to potentially 1 cent. He also projects that by 2030, up to 90% of the content on online platforms could be deepfakes.
What are the two ways in which deepfakes have been used in the political context according to the transcript?
-Deepfakes have been used for political assassination, such as fake videos of President Biden announcing the draft or President Trump getting arrested. They have also been used to create support, like the deepfakes of President Trump with black voters to make politicians seem more relatable.
What is the concept of 'plausible deniability' as it relates to deepfakes?
-Plausible deniability refers to the situation where anyone, including politicians or business figures, could claim that an image, audio clip, or video is a deepfake, thereby casting doubt on the authenticity of real content. This is fundamentally dangerous because it erodes trust in genuine content.
What is the solution approach proposed by Riddhiman Gupta to combat the deepfake problem?
-Riddhiman Gupta proposes a collaborative solution involving five groups: government stakeholders, generative AI companies, platforms, investigative journalists, and deepfake detection companies. These groups need to work together to solve the problem, supported by technologies and initiatives such as DARPA's SemaFor (Semantic Forensics) program, the AI FORCE challenge, and the Content Authenticity Initiative.
How does Riddhiman Gupta's company, Deep Media, contribute to the detection and reporting of deepfakes?
-Deep Media has helped journalists like Donie O'Sullivan at CNN, Geoffrey Fowler at the Washington Post, and Amanda Florian at Forbes to detect and report on deepfakes. They are part of WITNESS, an organization that aids reporters in deepfake detection, and are involved in the DARPA SemaFor program and the AI FORCE challenge, as well as the Content Authenticity Initiative to label real and fake content.
What is the significance of the deepfake of Kari Lake shown in the hearing?
-The deepfake of Kari Lake is significant as it demonstrates the high quality of current deepfake technology. It was produced using advanced generative models that were not publicly released, showcasing how convincing deepfakes have become and the importance of robust detection methods.
What is the role of generative AI technology in Deep Media's strategy to combat deepfakes?
-Deep Media uses generative AI technology internally to improve their deepfake detection capabilities. They do not release this technology to the public but instead use it to train their detectors, setting a high standard in the field of deepfake detection.
How does Riddhiman Gupta view the potential societal impact of deepfakes if not properly addressed?
-Riddhiman Gupta suggests that if not addressed properly, deepfakes could lead to a society resembling George Orwell's '1984', where trust in information is eroded, and manipulation becomes rampant. He believes that AI has the potential to create a negative externality in the form of fraud and misinformation, which can be mitigated through proper legislation.
What does Riddhiman Gupta believe about the role of the free market and AI in solving the deepfake problem?
-Riddhiman Gupta is a believer in the free market and fundamentally thinks that AI can be used for good. He sees deepfakes as a market failure and a tragedy of the commons. He believes that through proper legislation, the negative externalities associated with deepfakes can be internalized, leading to a flourishing AI ecosystem.
What are the key points that an AI tracks when analyzing a person's face in a video?
-When analyzing a person's face, the AI tracks a set of key points (facial landmarks), such as those around the eyes, nose, and mouth, across the frames of a video. Inconsistencies in how these points appear and move from frame to frame help the system distinguish real footage from deepfake content.
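The transcript does not describe Deep Media's actual algorithm, but the general idea of key-point tracking can be sketched as follows. The landmark arrays, the jitter metric, and the threshold are hypothetical illustrations of one way frame-to-frame consistency could be checked, not the company's real detector:

```python
import numpy as np

def landmark_jitter(landmarks):
    """Mean frame-to-frame displacement of tracked facial key points.

    landmarks: array of shape (frames, points, 2) with (x, y) positions,
    e.g. eye corners, nose tip, and mouth contour tracked across a video.
    """
    deltas = np.diff(landmarks, axis=0)                  # motion between frames
    return float(np.linalg.norm(deltas, axis=2).mean())  # average point movement

def looks_suspicious(landmarks, threshold=3.0):
    """Flag clips whose key points move erratically between frames.

    Real faces tend to move smoothly; frame-by-frame synthesis can leave
    high-frequency jitter in landmark positions. Threshold is illustrative.
    """
    return landmark_jitter(landmarks) > threshold

rng = np.random.default_rng(1)
base = rng.uniform(100, 200, size=(1, 68, 2))            # 68-point landmark layout
smooth = base + np.linspace(0, 1, 30)[:, None, None]     # gentle drift over 30 frames
jittery = smooth + rng.normal(0, 5, size=(30, 68, 2))    # erratic per-frame noise

print(looks_suspicious(smooth))   # False: smooth, natural-looking motion
print(looks_suspicious(jittery))  # True: erratic, synthesis-like motion
```

A production detector would combine many such signals with learned models rather than a single hand-set threshold; this only illustrates why per-frame key-point consistency is a useful cue.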
Outlines
💡 Introduction to Deep Fakes and Their Impact
Riddhiman Gupta, the founder of Deep Media, introduces himself as a tech entrepreneur with a background in machine learning. He explains the concept of deepfakes: AI-manipulated images, audio, or video created to deceive or harm. Gupta emphasizes the rapid advancement and decreasing cost of creating deepfakes, which poses a significant threat to society. He outlines three key technologies behind generative AI: Transformers, Generative Adversarial Networks (GANs), and diffusion models. Gupta also discusses the societal harms caused by deepfakes, including political manipulation and the potential for plausible deniability, which could undermine trust in genuine content. He concludes by stressing the need for a collaborative approach involving government, AI companies, platforms, journalists, and deepfake detection companies to address the issue.
🛠️ Solutions to the Deep Fake Problem
Gupta presents a solution-oriented approach to the deepfake problem. He believes in the potential of AI to be a force for good and views deepfakes as a market failure that requires legislative action to correct. He demonstrates how his platform can provide scalable solutions across various media types, emphasizing the importance of minimizing false positives and false negatives in deepfake detection. Gupta illustrates the AI's perspective on media, showing how the system processes and detects both real and fake audiovisual content. He showcases the platform's capability with an example of a high-quality deepfake video that was correctly identified by the system. Gupta highlights Deep Media's role in setting the gold standard for deepfake detection and its commitment to keeping its generative AI technology internal, used to train detectors rather than for public release. He concludes by offering to answer questions and provide further information on technological solutions to the deepfake challenge.
Keywords
💡Deepfake
💡Generative AI
💡Transformer
💡Generative Adversarial Network (GAN)
💡Diffusion Model
💡Compute Resources
💡Political Assassination
💡Plausible Deniability
💡Free Market
💡Negative Externality
💡Content Authenticity Initiative
Highlights
Riddhiman Gupta, founder of Deep Media, testified before a hearing on AI's impact on elections, showcasing the potential dangers of deepfakes.
Gupta started building machine learning applications at 15 and founded Deep Media in 2017 to address the deepfake problem.
Deepfakes are AI-manipulated images, audio, or video that can mislead or harm society.
The human mind is particularly susceptible to manipulation by image, audio, and video content.
Three key technologies behind generative AI are Transformer, Generative Adversarial Networks (GANs), and diffusion models.
Deepfakes are becoming high-quality, cheap to produce, and are increasingly prevalent on online platforms.
Deepfakes have already impacted elections, with manipulated videos of political figures causing public confusion.
The real threat of deepfakes lies in their potential to erode trust in genuine content.
Gupta warns of a future resembling George Orwell's 1984, where misinformation and plausible deniability are rampant.
Solutions to the deepfake problem require collaboration between government, AI companies, platforms, journalists, and detection companies.
Deep Media has assisted major news outlets like CNN and the Washington Post in detecting and reporting on deepfakes.
The company is part of the DARPA SemaFor program and the AI FORCE challenge, aiming to help solve the deepfake issue.
Deep Media is also involved in the Content Authenticity Initiative, working with companies like Adobe to label real and fake content.
Gupta emphasizes the importance of a free market approach and the potential for AI to be a force for good.
Deepfakes represent a market failure and a tragedy of the commons, which can be mitigated through proper legislation.
Deep Media uses its own generative AI technology to train detectors and set the gold standard for deepfake detection.
Gupta demonstrated how AI sees and processes media, focusing on the detection of both real and fake audio and video content.
The presentation included a high-quality deepfake video of Kari Lake, illustrating the sophistication of current deepfake technology.
Deep Media aims to stay ahead in the cat-and-mouse game of deepfake detection, ensuring public safety and trust in media.