Why Tech Leaders want to build AI "Superintelligence": Aspirational or Creepy and Cultish?
Summary
TL;DR: The transcript discusses the hypocrisy surrounding AI development, particularly the open-source versus closed-source debate and the mission to achieve AGI (Artificial General Intelligence). It highlights the potential of AI to revolutionize fields such as space exploration and healthcare, but also raises concerns about the dystopian implications of AGI. The conversation touches on the ambiguity of AGI's definition and the differing interpretations within the tech community, including the possibility of AI surpassing human intelligence and the ethical considerations that come with it.
Takeaways
- 🤔 The alleged hypocrisy: the organization said it would stop operating as a nonprofit once it achieved AGI (Artificial General Intelligence), yet while claiming it has not reached AGI, it has closed its software, which by that logic should have remained open source.
- 💡 The conversation suggests that there might have been a shift from the original mission of open-sourcing AI models to attract private investment, which requires a return on capital.
- 🌐 The mission of OpenAI is to create AGI, which is often associated with dystopian outcomes and has raised the fear factor around AI development.
- 🚀 AGI is defined by OpenAI as a system capable of replacing 80% of jobs, but there's skepticism about the true nature and potential of such a system.
- 🌌 The potential of AGI is vast, potentially allowing a single individual with access to AI to accomplish tasks that would normally require a large team of highly qualified knowledge workers.
- 🛠️ The example of designing a mission to Mars highlights the transformative potential of AI, suggesting that AI could significantly accelerate complex projects.
- 🧬 The application of AI in fields like medicine, such as solving cancer by understanding biologic drugs and patient genotypes, showcases the extensibility and power of AI in advancing human knowledge.
- 🤖 The discussion points out the ambiguity around the definition of AGI, with some suggesting it's a fuzzy concept that allows for various interpretations.
- 📚 A recent example of advanced AI capability is the Claude 3 model, which was able to recreate and solve a complex problem set in quantum physics, demonstrating sophisticated problem-solving skills.
- 📈 The conversation touches on the idea of superintelligence, where AI surpasses human intelligence and may develop its own motivations, potentially conflicting with human interests.
- 🛋️ The Ikea test, proposed by Gary Marcus, is suggested as a practical benchmark: an AI must interpret the parts and instructions of flat-pack furniture and control a robot to assemble it correctly, complementing the modern Turing test proposed by Mustafa Suleyman.
Q & A
What is the main concern regarding the organization's shift from non-profit to for-profit status?
-The main concern is that the organization said it would wind down its nonprofit structure once it achieved AGI (Artificial General Intelligence), yet despite claiming it has not reached AGI, it has not made its software open source, as that claim would imply.
What is the significance of the models being open source versus closed source?
-Open source models are seen as beneficial for humanity as they promote transparency and accessibility, while closed source models may limit the spread of AI technology and its benefits to a broader audience.
What is the mission of OpenAI as mentioned in the transcript?
-OpenAI's mission is explicitly to create AGI, which is associated with replacing a significant portion of human jobs and potentially transforming the nature of work and society.
What does the term 'AGI' commonly evoke in popular culture?
-In popular culture, AGI often evokes images of dystopian sci-fi scenarios, such as Skynet from the Terminator series, where AI surpasses human intelligence and potentially poses a threat to humanity.
How does the speaker view the potential of AGI in expanding human capabilities?
-The speaker sees AGI as a tool that could greatly expand human capabilities, allowing individuals to accomplish tasks that would normally require a team of highly qualified knowledge workers, such as designing a mission to Mars or developing a city under the ocean.
What is the 'modern Turing test' mentioned in the transcript?
-The modern Turing test mentioned, attributed to Mustafa Suleyman, is a challenge in which an AI model is given $100,000 and must grow it to $1 million, demonstrating its ability to understand and navigate complex financial systems.
What is the 'Ikea test' proposed by Gary Marcus?
-The Ikea test proposed by Gary Marcus involves an AI viewing the parts and instructions of an Ikea flat pack product and controlling a robot to assemble the furniture correctly, testing the AI's understanding of spatial assembly and instructions.
What does the speaker suggest about the potential future with AGI?
-The speaker suggests that AGI could usher in a new era for humanity, expanding our potential beyond what we currently imagine, but also acknowledges the discomfort with the unknown that such advancements may bring.
What is the speaker's opinion on the term 'super intelligence'?
-The speaker views 'super intelligence' as a term that implies the AI is more intelligent than all humans, which could lead to the AI developing its own motivations and potentially seeking to surpass humans on Earth.
What did Larry Page advise Elon Musk regarding AI?
-Larry Page reportedly advised Elon Musk not to be 'speciesist,' suggesting that just as humans evolved to dominate other species, we should not resist the possibility of AI surpassing human intelligence.
What is the speaker's view on the development of AGI?
-The speaker believes that while the development of AGI has extraordinary potential, there is a certain cult-like devotion to it that is concerning, and the term AGI needs to be better defined to avoid misinterpretation and fear.
Outlines
🤖 Open Source vs. Proprietary AI Models
This paragraph discusses the debate around whether AI models should be open source or closed source. It highlights the hypocrisy of an organization claiming to not have achieved AGI (Artificial General Intelligence) but still keeping their software closed. The speaker, Friedberg, questions the organization's motives and suggests that their mission to create AGI, which is often associated with dystopian outcomes, has raised fear around AI. The paragraph also explores the potential of AI to revolutionize various fields, such as space exploration and healthcare, by enabling individuals to access the collective knowledge of highly qualified workers.
🧠 Defining AGI and its Implications
The second paragraph delves into the definition of AGI and its implications. It mentions the work done by Anthropic and the capabilities of the Claude 3 model, which was able to solve a complex quantum physics problem. The discussion covers various interpretations of AGI, from a sci-fi dystopian perspective to the idea of a superintelligence that surpasses human intelligence. The paragraph also touches on the motivations behind creating AGI, with some in the tech community deliberately wanting to give rise to a superintelligence. It concludes with two proposed evaluations: Mustafa Suleyman's modern Turing test, involving financial acumen, and Gary Marcus's Ikea test, assessing the ability to assemble furniture from instructions.
Mindmap
Keywords
💡AGI
💡Open Source
💡Profit
💡Sentient Artificial Intelligence
💡Skynet
💡Mission to Mars
💡Cancer Research
💡Flat Pack Furniture Test
💡Turing Test
💡Super Intelligence
💡Elon Musk
Highlights
The alleged hypocrisy of an organization that claims it hasn't reached AGI yet keeps its software closed, when by its own logic the software should be open source until AGI is achieved.
The shift from non-profit to for-profit status and the implications for private investor returns.
The debate over whether AI models should be open source for the benefit of humanity.
The mission of OpenAI to create AGI and its association with a sci-fi dystopian outcome.
The fear factor around AI due to explicit attempts to create AGI.
The definition of AGI as something that can replace 80% of jobs.
The potential of AGI to enable a single individual to accomplish large-scale projects, such as designing a mission to Mars.
The transformative impact of AGI on humanity's potential and the exploration of new possibilities.
The concern that the current state of the world assumes a steady state, which may not be the case with AGI.
The extraordinary potential of AI to extend human knowledge and problem-solving capabilities.
The ambiguity surrounding the definition of AGI and its interpretation by different parties.
The example of Anthropic's work and the Claude 3 model's ability to recreate a complex academic thesis.
The discussion on whether tech community members genuinely want to create a super intelligence.
Larry Page's reported advice not to be 'speciesist' and the view of AI as a stage of evolution.
The modern Turing test proposed by Mustafa Suleyman, involving financial acumen and strategic planning.
Gary Marcus's Ikea test, assessing an AI's ability to understand and assemble flat pack furniture.
Transcripts
The other thing that's completely hypocritical here is they said that when they hit AGI, when there's a sentient artificial intelligence going on here, Friedberg, they would wrap up shop and no longer be a nonprofit, etc. But they're claiming they haven't hit that, yet they closed the software. It should be open source if they haven't hit AGI.

And you don't think they've hit general intelligence, right, Friedberg, or anything close to it? Maybe you could educate the audience on what that is, and on that claim that they'd have to shut off the for-profit.

Yeah, they haven't. But we keep repeating this concept of whether the models should be open source versus closed source. "Making AI for the benefit of humanity" can be interpreted in a lot of ways. There may have been some anecdotal conversation at some point with Elon or others about making the models open source, but there was a reason that change was made along the way, which was to attract dollars, and those dollars need to have some return of capital available to them because they're private investor dollars. And so, correct me if I'm wrong, I don't think it's in the mission that the OpenAI software models will be open source. "Making AI for the benefit of humanity" could probably be interpreted in a lot of different ways, and we'll see. But no, I don't think anyone has achieved this holy grail of general intelligence.

Yeah, I think one of the more interesting and kind of wacky things about OpenAI is that their mission is explicitly to create AGI, which most people would associate with some sort of sci-fi dystopian outcome. And I think this has raised the fear factor around AI, because they're explicitly trying to create the sentience that's going to replace humanity. Now, I think they define AGI in a different way; they say it's something that can replace 80% of the jobs. But I think we all kind of know what it really is. So I just wonder, I don't know.

I think that assumes a steady state in the world. If you end up with a system that has all the capabilities of a bunch of really highly qualified knowledge workers, I can sit in front of a computer terminal and say, let's design a mission to Mars. A mission to Mars could be a 20-year engineering project with hundreds of people involved: to design the buildings, to design the flight path, to figure out the fuel needs, to figure out how you would then be able to terraform Mars. What if one person could interact with a computer and design a plan to go and inhabit Mars? All of the technical detail docs could be produced, all of the engineering specifications could be generated, all of the operating plans, all of the dates, the amount of labor needed, the amount of production needed, the amount of capital needed. What would otherwise take NASA or some well-funded international private company many, many decades to do, a piece of software could do in very short order. For me, that's a really poignant example of the potential of having these tools broadly available: the potential of humanity starts to become much broader. We could say, I want to develop a city underneath the ocean because I want to explore more of the Earth. Or, humans need to go solve cancer: figure out the biologic drugs, and the combination of biologic drugs, that would be needed to solve cancer based on this patient's genotype. The extensibility of highly knowledgeable tooling, or what other people might call general-intelligence-type tooling, is extraordinary: one individual starts to have an entire cohort of knowledge workers available at their disposal to do things we can't even imagine today. So I don't think it's nefarious. It only looks nefarious because we assume a steady state of the world today, that nothing changes, and therefore a piece of software replaces all of us. But the potential of humanity starts to stretch into a new era that we're not really comfortable with, because we don't really know it or understand it yet.

I'm not saying it's nefarious to want to develop AI, because I agree with you about all the extraordinary potential of it. I'm saying there's something a little bit cultish and weird about explicitly devoting yourself to AGI, which I think in common parlance means Skynet.

Yeah, it means something. I think maybe that parlance is what needs to be addressed. AGI effectively enables equivalence to a human knowledge worker, and that can unleash a new set of opportunities.

So you think that's what it is? My definition of AGI is smarter than the smartest human being who ever lived.

Yeah. I was talking to somebody this week who said the definition of AGI is very fuzzy, that there isn't a clear definition, and therefore it allows every side to anchor on its own interpretation of what the term means and justify its position. So I don't really feel great about just asking, are we at AGI? We don't have a clear sense of what it means. I do think, if you look at some of the work that was done by Anthropic and published with the Claude 3 model this week: did any of you guys see the demos of the output of that model? There was a guy who wrote a thesis in quantum physics on a very esoteric, complicated problem set, and he asked Claude 3 to solve that problem set, and it came up with his thesis. It was really extraordinary. He said, no one in the world knows this stuff; I can't believe this model came up with my thesis. That's the sort of thing that very few people on Earth even read or understand, and the Claude 3 model was able to recreate the basis, the buildup, and then the conclusion of his work. Another demo was somebody writing a screenplay, giving it the first two acts, and saying, guess the third act. And it's like, oh yeah, this is the third act, here's what happens. It's pretty impressive.

Can I ask you: deep down, when these guys say they're going to create AGI, what do you think they really mean, in their heart of hearts?

Oh, they mean the Terminator. Yeah, they mean the sentient God.

You guys are propagating some bad ideas; you guys shouldn't.

I think that's what they think. Larry has said it. Remember what Larry Page said to Elon: don't be speciesist.

Yeah, I think there's a meaningful number of people in the tech community who deliberately want to give rise to the superintelligence. There's another point of view of superintelligence, where superintelligence means that the software is now more intelligent than all humans, and as a result the software may have its own motivations to figure out how to supersede humans on Earth. Now, the Larry Page statement, which I don't know firsthand, I read the same article you did, is one that a group of people might frame as: evolution is evolution, you know?

That's right. And I know Elon's taken the point of view that we need to maintain human supremacy. But my favorite test: you know, there's the Turing test, where you can't tell if it's a human or a robot, but Mustafa came up with the modern Turing test, in which an AI model is given $100,000 and has to turn it into one million. That's kind of interesting. The other interesting one was Gary Marcus: he proposed the Ikea test, the flat-pack furniture test, where an AI views the parts and instructions of an Ikea flat-pack product and controls a robot to assemble the furniture correctly.