'Godfather of AI' discusses dangers the developing technologies pose to society

PBS NewsHour
5 May 202308:26

TLDRDr. Geoffrey Hinton, known as the 'Godfather of AI,' discusses the significant risks posed by the rapid advancement of artificial intelligence. He highlights concerns such as the proliferation of fake news, increased polarization, job displacement, and the potential for AI to manipulate and gain control beyond human comprehension. Hinton emphasizes the need for international collaboration to manage these existential risks, akin to global nuclear war concerns, and calls for more resources to develop AI safely and minimize negative consequences.

Takeaways

  • 🚨 Concerns about AI's rapid expansion and its potential dangers to society were discussed, emphasizing the need for safe AI development.
  • 🤝 Vice President Kamala Harris met with top executives from major AI companies like Microsoft and Google to address the growing risks and the moral obligation to develop AI responsibly.
  • 🧠 Dr. Geoffrey Hinton, known as the 'Godfather of AI,' shared his concerns about the future of AI and the potential unchecked consequences.
  • 🗣️ Dr. Hinton expressed the importance of being able to discuss AI risks openly without corporate influence, which was limited during his tenure at Google.
  • 💡 Different risks of AI were highlighted, including the spread of fake news, polarization, job displacement, and the exacerbation of wealth inequality.
  • 🤖 The existential risk of superintelligent AI taking over control from humans was a central focus of the discussion.
  • 🧠 A comparison between human and machine intelligence was made, noting that while biological intelligence uses less power, digital intelligence has superior learning algorithms.
  • 🤔 The motivation and goals of smarter-than-human AI systems were questioned, with concerns about potential manipulation and control.
  • 🌐 The importance of international collaboration in AI development was stressed to mitigate global risks, similar to efforts in reducing the chances of a nuclear war.
  • 🔄 Dr. Hinton advocated for more creative scientists to enter the field and for a focus on developing AI while keeping it under control and minimizing negative impacts.
  • 🌟 Despite the potential negative possibilities, the positive potential of AI was acknowledged, with the ability to revolutionize various fields and improve productivity.

Q & A

  • What concerns were discussed regarding the rapid expansion of artificial intelligence?

    -The concerns discussed included the risks of producing fake news, encouraging polarization, putting people out of work, and the potential for super intelligent AI to take over control from humans.

  • What did Vice President Kamala Harris emphasize to AI development companies?

    -Vice President Kamala Harris emphasized that companies have a moral obligation to develop AI safely.

  • Why did Dr. Geoffrey Hinton feel the need to leave Google?

    -Dr. Hinton left Google due to his concerns over the future of AI and the desire to express his perceptions of the risks associated with super intelligent AI without having to consider the impact on Google.

  • How does biological intelligence compare to digital intelligence in terms of power usage and connections?

    -Biological intelligence uses very little power, approximately 30 watts, with a vast number of connections between neurons, around 100 trillion. In contrast, digital intelligence uses more power, especially during training, and has far fewer connections, only 1 trillion, but can learn much more than any one person, suggesting a better learning algorithm than the brain.

  • What motivates Dr. Hinton's concern about super intelligent AI systems?

    -Dr. Hinton is concerned about the potential for super intelligent AI systems to manipulate humans, given their ability to learn and understand human behavior better than current politicians, which could lead to them gaining more control and power.

  • What were some of the positive applications of AI that Dr. Hinton envisioned decades ago?

    -Dr. Hinton envisioned AI being useful in medicine, creating better nanotechnology for solar panels, and predicting natural disasters like floods and earthquakes.

  • How does Dr. Hinton view the role of defense departments in the development of AI?

    -Dr. Hinton acknowledges that defense departments may not prioritize building AI with the intention of being 'nice to people' and may instead focus on developing AI that can kill people of a particular kind, which is a cause for concern.

  • What does Dr. Hinton suggest as a solution to the rapid advancement of AI technology?

    -Dr. Hinton suggests encouraging more creative scientists to enter the field of AI and promoting international collaboration to manage the existential threat of machines taking over, similar to the efforts made to reduce the chances of a global nuclear war.

  • In Dr. Hinton's view, is there a possibility of turning back from the development of AI?

    -Dr. Hinton expresses uncertainty, stating that we are entering a time of great uncertainty and dealing with things we have never dealt with before, comparing it to aliens landing on Earth without us being prepared.

  • How should we think differently about artificial intelligence according to Dr. Hinton?

    -Dr. Hinton suggests that we should recognize the potential of AI to become more intelligent than us soon, and while it has a huge positive potential, it also comes with significant negative possibilities. He advocates for more resources to be put into developing AI to make it more powerful while figuring out how to keep it under control and minimizing negative side effects.

Outlines

00:00

🤖 Concerns Over AI Development

The first paragraph discusses the growing concerns over the rapid expansion of artificial intelligence (AI) and its impact on society. It highlights a meeting between Vice President Kamala Harris and top executives from leading AI development companies like Microsoft and Google. The Vice President emphasizes the moral obligation of these companies to develop AI safely. The conversation also touches on the resignation of a leading voice in AI, Dr. Jeffrey Hinton, from Google due to his worries about the future of AI and its potential unchecked consequences. Dr. Hinton shares his concerns about the risks of super intelligent AI, including the production of fake news, polarization, job displacement, and the exacerbation of wealth inequality. He also discusses the existential risk of AI taking control from humans, comparing it to the manipulation of people for power and the potential for AI to manipulate us in ways we may not understand.

05:01

🌐 Positive and Negative Aspects of AI

The second paragraph delves into the potential positive and negative aspects of AI. It starts by discussing the benefits of AI in various fields such as medicine, nanotechnology, and disaster prediction. The conversation then shifts to the dual-use nature of AI technology, with a focus on defense departments that may not prioritize ethical considerations like 'being nice to people.' The discussion highlights the fast pace of AI development and the challenges in creating and passing legislation to keep up with technological advancements. Dr. Hinton advocates for more creative scientists to engage in the field and the importance of international collaboration to address the global threat of AI, similar to efforts made to reduce the risks of a global nuclear war. He stresses the need to focus on both the positive potential of AI and the critical issues it poses, including the existential threat of it taking over control. The conversation concludes with a reflection on how we should think about AI and the importance of investing resources into developing AI responsibly and minimizing its negative side effects.

Mindmap

Keywords

💡Artificial Intelligence (AI)

Artificial Intelligence, often abbreviated as AI, refers to the development of computer systems that can perform tasks typically requiring human intelligence. In the context of the video, AI is a rapidly advancing technology with potential risks and benefits to society. The discussion revolves around the moral obligation of companies to develop AI safely and the concerns over unchecked AI development, including the potential for AI to manipulate or control human activities.

💡Rapid Expansion

The term 'Rapid Expansion' describes a fast-paced growth or spread of something, in this case, referring to the swift development and application of AI technologies. The video script highlights the global resonance of concerns over this rapid expansion, indicating a widespread awareness and urgency regarding the potential societal impacts of AI.

💡Moral Obligation

A 'Moral Obligation' is a duty or responsibility that is regarded as arising from moral principles or ethics. In the video, Vice President Kamala Harris emphasizes the moral obligation of AI companies to ensure the safe development of AI, suggesting that there is an ethical responsibility to prevent potential harm to society and individuals.

💡Fake News

Fake News refers to false or misleading information presented as news. In the context of the video, Dr. Geoffrey Hinton discusses the risk of AI producing a lot of fake news, which could lead to a society where it is challenging to discern truth from falsehood, undermining trust and informed decision-making.

💡Polarization

Polarization is the division of people into opposing groups with extreme or conflicting views. The video script mentions the risk of AI encouraging polarization by manipulating what people click on, potentially exacerbating social divisions and undermining civil discourse.

💡Unemployment

Unemployment refers to the state of not having a job while actively seeking work. In the discussion, there is a concern that AI could lead to job displacement, putting people out of work, which could have significant economic and social consequences.

💡Productivity

Productivity is the efficiency of production, or the amount of output from a given amount of input. The video script suggests that while AI could greatly increase productivity, there is a risk that the benefits may not be evenly distributed, potentially leading to increased inequality where AI might only help the rich.

💡Bias and Discrimination

Bias and discrimination refer to the unfair treatment of individuals or groups based on prejudiced views. In the context of AI, there is a concern that AI systems could perpetuate or even amplify existing biases and discriminatory practices, leading to unjust outcomes.

💡Super Intelligent AI

Super Intelligent AI refers to artificial intelligence systems that surpass human intelligence in many or all aspects. The video discusses the existential risk of such AI taking over control from humans, highlighting the potential for AI to manipulate and gain power in ways that humans may not fully understand or be able to control.

💡Manipulation

Manipulation is the act of influencing someone or something in a clever but unfair way to gain an advantage. In the video, Dr. Hinton expresses concern that super intelligent AI systems could potentially manipulate humans, using their advanced capabilities to control or exploit us for their goals.

💡International Collaboration

International Collaboration refers to the cooperative effort of multiple countries working together on a common goal. In the discussion, Dr. Hinton suggests that international collaboration is necessary to address the global threat posed by AI, drawing a parallel to the global effort to reduce the risk of nuclear war.

💡Legislation

Legislation is the process of making laws by a legislative body. The video script points out the challenge of keeping up with the rapid advancements in AI technology through legislation, which typically takes years, suggesting the need for more agile and responsive legal frameworks to address the evolving risks and ethical considerations of AI.

Highlights

Concerns over the rapid expansion of artificial intelligence (AI) were widely discussed, particularly its potential dangers to society.

Vice President Kamala Harris met with top executives from major AI development companies like Microsoft and Google to address the growing risks associated with AI.

Dr. Geoffrey Hinton, 'Godfather of AI,' shared his concerns about the future of AI and its unchecked possibilities after quitting Google.

Dr. Hinton expressed the need for companies to have a moral obligation to develop AI safely.

The risk of AI producing fake news leading to a confusion over what is true was highlighted.

AI's potential to encourage polarization by influencing people to click on certain things was discussed.

The concern of AI leading to job loss and potentially only helping the rich was brought up.

Dr. Hinton emphasized the risk of super intelligent AI taking control from humans.

Comparison between human and machine intelligence was made, with a focus on their learning capabilities and power efficiency.

The potential of smarter AI systems to manipulate humans was discussed, using the analogy of a two-year-old child.

Dr. Hinton explained how AI could be motivated by goals given to it, which could lead to unintended consequences.

The potential applications of AI in fields like medicine, nanotechnology, and disaster prediction were highlighted.

The issue of defense departments developing AI with potentially harmful intentions was raised.

Dr. Hinton called for international collaboration to manage the existential threat of AI, similar to efforts to reduce the risk of a global nuclear war.

The concern was raised that focusing on the dystopian future of AI might distract from immediate risks such as disinformation and fraud.

Dr. Hinton emphasized the importance of not distracting from immediate AI risks while also considering the existential threat of AI taking over.

The uncertainty of AI surpassing human intelligence was acknowledged, with a call for more resources to be put into controlling and minimizing the negative impacts of AI.

The interview concluded with a call to recognize AI's positive potential while preparing for its negative possibilities and ensuring it remains under control.