DARPA's STUNNING AGI BOMBSHELL | AGI Timeline, Gemini plus search, OpenAI's GPT-5 & AI Cyber Attacks
Summary
TLDR: The video script delves into a comprehensive discussion on the pace of AI development, highlighting DARPA's contributions and perspectives on the field. It contrasts the advancement rates of reinforcement learning and transformer models, mentioning DARPA's collaboration with major AI players like Google, Microsoft, and OpenAI. The discussion touches on DARPA's initiatives in AI for cybersecurity, cryptographic modernization for quantum safety, and the challenges in merging AI technologies for enhanced capabilities. It underscores the speculative nature of achieving Artificial General Intelligence (AGI) soon, citing technological, ethical, and practical hurdles still to be overcome. The script also explores DARPA's strategic focus on solving unique problems not addressed by industry, emphasizing the organization's pivotal role in pioneering technologies like the internet and GPS, and its ongoing efforts to safeguard national security through advanced AI applications.
Takeaways
- Jimmy Apples, a notable account in the AI space, surfaced DARPA's Q&A on AI advancements and challenges, highlighting differing progression rates between AI technologies.
- Reinforcement learning is not advancing as rapidly as Transformer models, indicating a disparity in development speed across AI fields.
- DARPA is watching the integration of planning capabilities into large language models (LLMs), such as in Gemini, but says it lacks full transparency into that work.
- DARPA's Q&A reveals ongoing efforts at the intersection of AI and traditional hardware, showcasing a blend of technological advancement.
- In cybersecurity, DARPA acknowledges the rise of quantum-safe security but isn't directly working on it, instead focusing on quantum networking and other areas.
- DARPA's collaboration with AI companies like Anthropic, Google, Microsoft, and OpenAI signifies a strategic alliance for advancing AI technologies.
- Speculation suggests combining the power of LLMs with reinforcement learning could lead to major AI breakthroughs, merging reasoning and planning capabilities.
- Despite rapid AI advancements, DARPA indicates that achieving Artificial General Intelligence (AGI) still involves solving numerous complex problems.
- DARPA's insights suggest that the delay in TSMC chip production impacts AI development timelines, including the training of GPT-5.
- DARPA's mission focuses on tackling unique, challenging problems that may not immediately benefit from industry-driven solutions.
Q & A
What is DARPA's stance on the advancement pace of AI technologies like reinforcement learning compared to transformer models?
-DARPA acknowledges that not all AI technologies are advancing at the same pace, noting that reinforcement learning is not progressing as quickly as transformer models. This disparity in advancement rates highlights the varied focus and success across different AI research areas.
What are DARPA's initiatives in quantum computing and cybersecurity?
-DARPA is engaged in quantum computing through the QUET program, which aims to enhance networking security using quantum technologies. However, DARPA has explicitly stated that they are not currently working on quantum-safe security, despite acknowledging advancements in quantum computers and the emerging field of quantum-safe cybersecurity.
How does DARPA view the integration of planning capabilities into large language models (LLMs)?
-DARPA expresses interest in the integration of planning capabilities into LLMs, specifically referencing the Gemini model's efforts in this area. However, they also indicate a lack of full transparency and certainty about the progress and outcomes of such integrations, pointing to ongoing research challenges.
Why hasn't OpenAI started training GPT-5, according to the DARPA Q&A?
-The delay in starting the training of GPT-5 by OpenAI is attributed to slowdowns in the production of Nvidia's H100 chips at the Taiwan Semiconductor Manufacturing Company (TSMC). This has provided a temporary breathing space for other projects, according to the DARPA Q&A.
What is DARPA's approach to AI and its collaboration with private companies?
-DARPA collaborates closely with private companies like Anthropic, Google, Microsoft, and OpenAI on AI projects, including competitions like the AI Cyber Challenge. This partnership aims to leverage advancements in AI technologies for various applications, while keeping a close watch on the developments within these companies.
What challenges does DARPA acknowledge in achieving Artificial General Intelligence (AGI)?
-DARPA highlights several challenges in reaching AGI, including the halting problem and the need for more resources and breakthroughs beyond simply scaling existing models. This suggests that DARPA views the path to AGI as complex and fraught with significant scientific and technical hurdles.
How is DARPA addressing the risks and opportunities of AI in cybersecurity?
-DARPA is actively exploring the use of AI to improve cybersecurity, notably through initiatives like the AI Cyber Challenge, which focuses on leveraging AI tools to automatically find and suggest repairs for vulnerabilities in open source software. This reflects DARPA's broader commitment to enhancing national security through advanced technology.
What does DARPA believe about the automation of coding by AI?
-DARPA maintains a cautious stance on the automation of coding by AI, suggesting that AI will serve as a tool to assist in coding rather than fully automate the process. They believe that AI will accelerate the development of boilerplate software but will not replace the need for skilled software developers.
How does DARPA plan to maintain relevance amid rapid advancements in AI technology?
-To maintain relevance, DARPA focuses on structuring programs such as the AI Cyber Challenge to leverage advancements in AI technology. They aim to address research areas and challenges not immediately pursued by the private sector, thus ensuring their contributions remain pivotal to national security and technological progress.
What is DARPA's perspective on the use of AI in generating and verifying software code?
-DARPA is interested in the potential of AI to generate high-quality software code and believes that large language models (LLMs) could play a significant role in this area. However, they recognize the complexity of generating verified and secure code, indicating ongoing exploration and research in this domain.
Outlines
Overview of DARPA's AI Initiatives and Industry Observations
The discussion opens with Jimmy Apples highlighting DARPA's recent Q&A from November 2023, pointing out the varying pace of advancements in AI, notably between reinforcement learning and Transformer models. DARPA, known for its defense and advanced research projects, is scrutinized for its transparency and strategic plans in integrating AI with military strategies. Gemini's reported integration of a planning piece into its LLM, on which DARPA lacks full transparency, raises questions. The dialogue also touches on DARPA's stance on quantum computing, cybersecurity, and the potential impacts of AI on creating bioweapons. Concerns about the speed of AI advancements and the participation of major AI entities like Google, Microsoft, and OpenAI in DARPA's projects suggest a collaborative yet competitive landscape. The narrative underscores the complexity and challenges of AI development, pointing to DARPA's critical role in shaping the future of AI and national security.
Impact of Semiconductor Delays on AI Progress and DARPA's Strategic Response
This segment delves into the implications of semiconductor production delays, particularly by TSMC, on the advancement of AI models like GPT-5. The slowdown in the release of Nvidia's H100 chips is highlighted as a bottleneck for AI progress, granting DARPA and other AI research entities a 'breathing space' to catch up. The conversation further explores DARPA's insights on the integration of planning and reinforcement learning with LLMs, suggesting a potential breakthrough in AI development. However, the challenge of achieving Artificial General Intelligence (AGI) is acknowledged, alongside the halting problem and other computational and resource constraints. DARPA's focus remains on bridging the gap between current capabilities and the next generation of AI advancements.
DARPA's Unique Position in AI Research and Collaboration
The narrative shifts to DARPA's strategic role in tackling AI challenges that the private sector may overlook, emphasizing government-led initiatives in areas such as multimodal LLMs and encryption for defense. Through examples like creating a national data library and focusing on multi-level security, DARPA's endeavors illustrate its commitment to solving complex problems beyond the scope of industry efforts. The segment reveals DARPA's approach to fostering innovation while addressing critical security and ethical issues, suggesting a proactive stance in advancing AI technologies while safeguarding national interests.
Cybersecurity and AI's Role in National Defense Infrastructure
This part discusses DARPA's focus on protecting critical infrastructure from cyber attacks, with a particular emphasis on electrical power systems and industrial control systems. The potential of AI to both pose and prevent threats is examined, referencing DARPA's AI cyber challenge and collaborations with the tech industry and government agencies to enhance cybersecurity. The narrative underscores the importance of AI in identifying and repairing vulnerabilities in open-source software, highlighting DARPA's leadership in leveraging AI for national security purposes.
The Future of Coding and AI's Influence on Software Development
Exploring DARPA's perspective on the future of coding and AI's impact, this segment presents a balanced view on automation and human roles in software development. DARPA suggests that while AI will streamline and accelerate the creation of boilerplate software, it won't replace the need for skilled programmers. The dialogue touches on the potential for AI to assist rather than usurp human developers, reflecting on the broader implications of AI integration into software engineering and the enduring value of human expertise.
DARPA, AI, and the Ongoing Quest for Innovation
Concluding with a broad overview of DARPA's engagement with AI, this section encapsulates DARPA's efforts to remain at the forefront of technological innovation. It highlights the agency's significant contributions to AI, the internet, GPS, and more, while acknowledging the challenges and opportunities ahead. DARPA's cautious yet optimistic stance on the potential of AI reflects a deep understanding of its complexities and the critical balance between advancement and ethical considerations. The discussion underscores DARPA's pivotal role in shaping the future of AI and technology at large, emphasizing collaboration, transparency, and strategic foresight.
Keywords
DARPA
Transformer model
Reinforcement learning
Quantum computing
Large Language Models (LLMs)
Artificial General Intelligence (AGI)
Halting problem
Cybersecurity
AI Cyber Challenge
Code generation and verification
Highlights
DARPA's Q&A session revealed disparities in the advancement pace of AI technologies, with reinforcement learning lagging behind transformer models.
DARPA's interaction with major tech companies like Google, Microsoft, and OpenAI in advancing AI and cybersecurity through initiatives like the AI Cyber Challenge.
The focus on integrating planning capabilities into large language models (LLMs) to potentially address limitations in AI's reasoning and decision-making processes.
Concerns over the stagnation in AI model advancements and the delayed start of GPT-5 training due to hardware production issues.
Speculations on DARPA's involvement in futuristic and critical AI projects, outside the commercial tech industry's focus, aimed at solving complex, high-impact problems.
Discussion on the role of AI in cybersecurity, particularly in creating quantum-safe security measures and enhancing network security through quantum computing.
Exploration of the potential combination of reinforcement learning and LLMs for breakthroughs in AI, suggesting a merging of two distinct AI research areas.
The necessity for ongoing innovation in AI to maintain DARPA's relevance amidst rapid technological progress, highlighting program structures and collaborations.
The critical role of DARPA in developing foundational technologies like the internet and GPS, emphasizing its historical impact and future potential in AI.
Acknowledgment of the challenges in achieving Artificial General Intelligence (AGI), with current AI advancements not yet close to realizing AGI.
The importance of AI in addressing cybersecurity threats, with DARPA leveraging AI to identify and rectify vulnerabilities in software and hardware systems.
The potential impact of AI on software development, with AI tools accelerating the creation of code but not replacing human developers' need for quality and innovation.
DARPA's strategic focus on areas not immediately pursued by the industry, such as multi-level security and advanced cryptographic techniques for national defense.
The exploration of AI's role in enhancing electrical power systems' resilience against cyber attacks, demonstrating DARPA's broader focus on infrastructure security.
Speculation about the future integration of planning mechanisms within LLMs, hinting at significant advancements in AI's capability to reason and strategize.
DARPA's emphasis on addressing the ethical and practical challenges of AI data usage, suggesting a nuanced approach to leveraging AI for societal benefit.
Transcripts
so Jimmy Apples a somewhat notorious
account in the AI space just posted this
DARPA Q&A November 2023 another thing is
that not all the Frontiers are advancing
at the same Pace reinforcement learning
is not going as fast as the Transformer
model and he links this attachment here
from darpa.mil which that's darpa.mil that is the Defense Advanced Research Projects Agency so that's coming from them directly so let's
take a look at that here he continues
the Gemini model getting the planning
piece integrated into the llm we are not
sure we lack full transparency what is
happening let's take a look so this is
DARPA's Information Innovation Office the Q&A what is DARPA's interface between
traditional hardware and artificial
intelligence so they're saying some of
the programs are in fact already at the
interface between software and Hardware
now I'll link this down below if you
want to read it yourself we're not going
to go too deep into some of the stuff
and just highlight the most important
things so another question is quantum
computers are making progress cyber
security is getting into a new area of
quantum safe security is there any new
plans or programs from the I2O the Information Innovation Office on
cryptographic engineering modernization
of cryptography for Quantum safe
security they're saying they're not
doing anything on Quantum safe security
we do have the quet program which is
using Quantum on making networking more
secure DSO has a number of efforts on
Quantum DSO looks like is another part of DARPA as far as you can tell the Defense Sciences Office so there's some
push back in terms of they're talking
about NIST which is the National
Institute of Standards and Technology
the Department of Defense the NSA and
who to talk to about that there's a
question about the president's executive order on safe secure and trustworthy AI
looks like there are restrictions on
what it takes to work on a Frontier
Model in the new document the concern is
people can use various Frontier models
to generate bioweapons but they're still
working on figuring out what how that's
going to affect them they're asking how
does DARPA maintain relevance when there's such fast-paced progress in AI and so
here they're answering one area is by
program structure the AI cyber Challenge
and so this is where we're getting a
little bit more into the interesting
bits we've covered this briefly a number
of months ago I believe it's a
competition where we partner with large
language models the companies that
produce them you know such as Anthropic Google Microsoft and OpenAI whoops okay
so where's where's meta how come uh
Zuckerberg is not on this is it because
they are an open source model yeah but
this is interesting so Anthropic is of course Claude 3 that's the model released by Anthropic you have Google
and Gemini right and you have kind of
Microsoft and OpenAI that have sort of
a union some sort of a cooperation
agreement they're not one and the same
but definitely there's a big overlap
they tend to work on projects together
including that whole Stargate project
with the supercomputers and some
potential uh Fusion Energy projects as
well and so they're saying as the
capability advances so too will the
performers using them be able to
leverage the advanced capability at the
same time that is one model another
piece is that we will be keeping an eye
on what is happening if the capability
we are working on and the program
becomes outmatched we will stop the
program and regenerate or do something
else so I'm reading this as it sounds
like I need to learn a lot more about
DARPA and how it plays with all the
other big companies what kind of AI
projects it has but I'm reading this as
they're saying if one of these companies
like completely blows us out of the
water then we're going to either try
again or just try something else I mean
they're saying we're keeping these guys
close you know we're keeping an eye on
them these are the people we're
interested in they're close by so we
know what they're doing and here's where
we get to the other part so another
thing is that not all Frontiers are
advancing at the same Pace reinforcement
learning is not going as fast as the
Transformer model so we've talked about
this for example Andrej Karpathy who tried to build autonomous agents way back in the days before OpenAI before he
worked at Tesla by using reinforcement
learning he basically says that's kind
of a dead end or at least for some
things it's like I think the example he
gave if you're trying to get a computer
to go and book a flight for you online
you can't really use reinforcement
learning for that right because that
would require it to like randomly click
on all the buttons and see which ones
work you need something that has a little bit more reasoning skills or whatever you want to call that.
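To make that exploration-blowup point concrete, here is a minimal, purely illustrative sketch (my construction, not Karpathy's or DARPA's); env_reset, env_step, n_buttons, and horizon are hypothetical stand-ins for a web UI treated as an RL environment.

```python
import random

def random_clicker(env_reset, env_step, n_buttons, horizon, episodes):
    """Toy pure-random-exploration agent for a 'book a flight' UI.

    env_reset() -> None and env_step(action) -> (state, reward, done) are
    hypothetical environment hooks where only one exact click sequence
    yields reward 1.
    """
    for ep in range(episodes):
        env_reset()
        reward = 0
        for _ in range(horizon):
            _, reward, done = env_step(random.randrange(n_buttons))
            if done:
                break
        if reward > 0:
            return ep      # stumbled onto the correct click sequence
    return None            # expected tries scale like n_buttons ** horizon

# With 20 clickable elements and a 6-step booking flow, blind exploration
# needs on the order of 20 ** 6 = 64,000,000 episodes to succeed by chance.
```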
And so it seems like they're saying that the
Transformer model kind of the thing
that's behind these neural Nets that's
behind GPT-4 it sounds like that's behind Sora behind I mean pretty much everything here Gemini all the open source models both LLMs and other models that's kind of the Transformer
model so it sounds like if I'm reading
this correctly they're saying well that
model that Frontier is advancing much
more rapidly than reinforcement learning
they're also saying the pace of the
frontier models is slowing down a little
bit which that's interesting because you
know we're kind of expecting these
amazing things but they're saying well
the progress is slowing down a bit a lot
of the results that we are seeing right
now include understanding what they are
doing and what they are not doing
they're saying they haven't released a
GPT-5 so they're referring to OpenAI here and they're saying they haven't even started training GPT-5 due to the slowdown in the release of the H100s due
to the production problems at the Taiwan
semiconductor Manufacturing Company the
tsmc so this is the biggest company
producing the chip so Taiwan is the
biggest producer of chips tsmc is the
biggest company in Taiwan producing the
chips so this is like the the Lynch pin
behind of a lot of this AR hardware for
for NVIDIA for a lot of other people if
this thing just poof and disappears it's
not like we would go back to the dark
ages but boy would a lot of our Tech
take a big big hit cuz we need computers
or rather chips for everything we need
them in in our cars and our phones in
our in our dishwashers and the drones
and like everything so they're saying so
we have a little bit of breathing space
so that's interesting so they're almost
saying because of these alleged
slowdowns that open AI they have some
time to catch up the Gemini model
getting the planning piece integrated in
the llm we are not sure we lack full
transparency so there's a lot of uh
speculation on what that whole Q* leak out of um OpenAI was and we still
don't know exactly what it is we kind of
know that yes the leak was real it was a
real project a real research project it
was leaked Sam Alman and others have
confirmed it but no one's talking about
it and so we don't know what it is a lot
of smart AI researchers that kind of
know what they're talking about have
suggested that this is a combination of
two big ideas one is the LLMs right the Transformer models the GPT-4s and the other piece is kind of the
reinforcement learning piece so this is
what a lot of the DeepMind technologies do the superhuman chess-playing AI the superhuman AI that beats everybody at Go there seems to be a lot of
speculation that maybe sort of the next
Frontier the next big breakthroughs that
come in AI will come from a combination
of those two things the power of llms
that are really good at like reasoning
but they can't really like think through
a lot of different steps and then kind
of review their plans they kind of have
that weakness whereas the chess-playing AI the Go-playing AI can think through a million different combinations kind of figure out what has the best sort of possible reward right figure out kind of like what the best steps are then trace its thoughts back and kind of plan
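Purely as a thought experiment of what 'LLM plus planning' might look like, here is a minimal best-first-search sketch; llm_propose and score are hypothetical stand-ins, not any real Gemini, DeepMind, or OpenAI API.

```python
import heapq

def plan_with_llm(goal, llm_propose, score, max_depth=4, beam=3):
    """Toy best-first search over LLM-proposed steps.

    llm_propose(plan) -> list[str]: hypothetical call asking an LLM for
        candidate next steps given the partial plan so far.
    score(plan) -> float: hypothetical value estimate (higher is better),
        standing in for the reward signal in RL-style search.
    """
    frontier = [(-score([goal]), [goal])]   # min-heap on negated score
    best = frontier[0]
    for _ in range(max_depth):
        candidates = []
        for _neg, plan in frontier:
            for step in llm_propose(plan):              # LLM proposes moves
                extended = plan + [step]
                heapq.heappush(candidates, (-score(extended), extended))
        frontier = heapq.nsmallest(beam, candidates)    # keep best partials
        if not frontier:
            break
        if frontier[0][0] < best[0]:                    # found a better plan
            best = frontier[0]
    return best[1]
```

The search half plays the role the video attributes to the game-playing systems: explore candidate step sequences, score them, and keep only the most promising ones.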
so this is what and again this is total
speculation but the planning piece that
they're referring to here in the Gemini
getting integrated in the llm to me I
mean based on everything we've seen I
that's what that sounds like I'd be
pretty surprised if it wasn't because
the Gemini model is of course Google DeepMind Google DeepMind they're the people behind all the Alphas right AlphaFold AlphaGo and they have many others like AlphaCode like a lot of that is stemming from what can be referred to as the planning piece right so combining that with the LLM I mean in November of 2023 when the Q* news leaked we went deep into this the LLM plus kind of the Gemini AlphaGo technology so this sounds
like it but they're saying well we know
we lack full transparency on if they did or
not but and then they're saying but
there are large research problems that
still need to be solved hearing people
say we're just a little bit away from
Full artificial general intelligence AGI
is a bit more optimistic than reality
yikes so that's uh that's interesting
he's saying there's still things like
the halting problem so halting problem
seems like it's a computer science
conundrum that goes back to Alan Turing
in 1936 it refers to the impossibility
of creating a universal algorithm that
can determine whether any given program
when run with a specific input will
eventually stop as in halt or continue running indefinitely in a loop and the
halting problem has important
implications in computer science as it
helps us understand the limitations of
algorithms and highlights the existence
of problems that cannot be completely
automated thus emphasizing the need for heuristics and approximations in complex problem-solving scenarios.
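As an aside, the classic argument for why the halting problem is undecidable fits in a few lines; the halts function below is a hypothetical oracle, and the whole point of the sketch is that it cannot actually be implemented.

```python
# Sketch of Turing's diagonalization argument.
def halts(program, data):
    """Pretend universal oracle: True if program(data) eventually halts."""
    raise NotImplementedError("no such universal oracle can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts for `program`
    # when it is run on its own source.
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    return            # predicted to loop, so halt immediately

# Now ask: does paradox(paradox) halt? If halts(paradox, paradox) says True,
# paradox loops forever; if it says False, paradox halts. Either answer
# contradicts the oracle, so no universal halts() can be written.
```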
Then they continue listing some of the other
problems that we have we still have
exponential things we still need
resources right I'm assuming hardware and
other things I think there are still
going to be super hard problems that are
not going to be fixed by scaling and the
followup question is my question is when
you have you might not have AGI so let's
say we don't have AGI it's not human
level general intelligence but you might
have a system that helps humans and
everyone in this room to advance so
quickly that before AGI comes this apex where not an apex let... there's some commas missing here that makes this a little bit difficult to parse so he's saying
this asymptotic growth where we are dealing with that constantly so asymptotic growth describes a function that approaches some limit ever more closely or in other uses one that blows up toward infinity near a vertical asymptote.
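A quick math gloss (my wording, not DARPA's or the questioner's) on the two common senses of 'asymptotic':

```latex
% Approaching a finite limit L (horizontal asymptote):
\lim_{t \to \infty} f(t) = L,
\qquad \text{e.g. } f(t) = L - (L - f_0)\, e^{-kt}
% Blowing up near a finite point t^* (vertical asymptote):
\lim_{t \to t^{*-}} g(t) = \infty,
\qquad \text{e.g. } g(t) = \frac{1}{t^* - t}
```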
So I'm having a little bit of a hard time parsing this question I think
this guy wanted to uh sound smart and it
lacked some clarity my understanding is he's
saying okay so we don't have AGI isn't this still going to mean that we have this breakneck speed of progress isn't
this still going to create this fast
change that still is very difficult to
predict and the follow-up answer so from
DARPA he's saying we try very hard not to get in the way of what industry is going
to do we're trying to work to solve
problems that industry isn't going to do
tomorrow so you know using their kind of
government resources they're going after
important hard problems that the tech
industry maybe is unwilling to solve
maybe it doesn't have enough
profit in it or just is too complex so
they're working kind of outside of the
things that places like Google and OpenAI and Microsoft so outside of what they're
shooting for they're kind of working on
problems that are important that are
outside of that so he's saying we aren't
planning to work on multimodal large
language models because they
meaning you know the industry The Profit
seeking entities they are going to do
that sometime we're not trying to work
on incorporating new information into an
llm because they are going to do that as
soon as they can we are trying to work
on things they won't they won't work on
right away so like one example that I
think is interesting that I've heard
kind of in this scenario is you know a lot of these AI models need data and right now there's a big debate on where they're getting this data right are they
just kind of taking everybody's data
without permission right is that okay is
that not okay and actually the founder of uh Stability AI kind of suggested
that each nation has their own sort of
data library that all of the AIs in that country or that culture they can just go and train on that it has all the cultural works and all the data
that you need to train a model that
anyone if you wanted to create a model
for that culture like let's say you're building something in the US the US
would provide this database of all books
and images and whatnot that you would
need to train up that model a high
quality data set right obviously
Microsoft or Google they're probably not
going to do that but having the
government fund something like that
might be beneficial to progress as a
whole now they're probably now DARPA is probably not doing something like that
they're probably doing something a
little more like weird and crazy and
futuristic but I think that's an example
so they continue we haven't done this
yet because we might do multi-level
security because we think that is
something the Department of Defense
might care more about than industry
would right right because it sounds like
uh OpenAI did have some research into you know encryption and security and
stuff like that but you know that's not
their main focus they're saying maybe
that is on the industry's road map but
any further future time frame the point
here is basically without encryption if
there's some way to break encryption I
mean as I understand it all information
would be visible all our bank accounts
all our chats and texts and messaging
like I don't think you can have online
banking or online shopping or most of
the things online that that have to be
even semi-secret just would not work so
they continue I don't know what the
right answers are but the question of
what are they going to do and in that
time frame what should we do is
something we talk about all the time do
we have perfect answers no but do we ask
that question constantly yes next
question is you have been pointing out
here today code generated by AI systems
is just going to increase in scope and
scale in ways we can hardly imagine how
important is it to DARPA that the code
gets verified for correct functionality
and security properties that's the thing that we've been noodling he says they're noodling it over been noodling it over quite a bit clearly companies are going
to be working a lot on generating a lot
of code in one of my next videos we're
probably going to talk about some of the
startups that I think it it was why
combinator that has announced uh a 100
or so AI startups that are coming out of
stealth and a lot of them a lot of them
are working on code on coding and
programming both generating code as well
as testing and and tons more stuff like
that so he continues we are not so sure
they are going to generate code that is
high quality or care about generating
code that is of high quality clearly
generating proofs about code and generating specifications, specifications code and proofs are all languages those are all in the wheelhouse of LLMs so large language models certainly seem likely would be able to do it tying them together could be hard definitely noodling over trying to generate specifications code and proofs that are checked.
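One toy way to see how 'specifications, code, and proofs are all languages' could tie together: make the specification executable and check generated code against it automatically. Below is a minimal sketch of my own, with property-based testing standing in for real proof generation; spec_sorted_permutation and candidate_sort are invented examples, not anything DARPA described.

```python
import random

def spec_sorted_permutation(inp, out):
    """Executable specification: output is a sorted permutation of input."""
    return out == sorted(out) and sorted(inp) == sorted(out)

def candidate_sort(xs):
    """Imagine this implementation came from an LLM."""
    return sorted(xs)

def check(spec, impl, trials=1000):
    """Property-based check: far weaker than a proof, but automatic."""
    for _ in range(trials):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        if not spec(xs, impl(list(xs))):
            return False, xs   # counterexample found
    return True, None

ok, counterexample = check(spec_sorted_permutation, candidate_sort)
print("passed" if ok else f"failed on {counterexample}")
```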
Then they give some uh people to talk to for more information there are
tons of code on the web a lot of it is
not good code I believe a study from
five years ago from Stack Overflow uh
found that there's usually a good
security answer to the question but it's
usually number 10 that means there are
nine Bad answers before the good answer
by the way I think a lot of this does go
back to this idea of having LLMs and a planning piece cuz yeah LLMs can spit
out millions of lines of code and if you
test it and it throws an error you can
even say hey this is wrong and they'll
try again right so it's kind of like
this thing that just spits out a lot of
likely answers but to really kind of
supercharge that ability to make it
really useful there's got to be some sort of like a reflection or a planning piece and getting that right is going to be the next big breakthrough.
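That 'spit out code, run the tests, feed the error back' loop is easy to sketch. A minimal illustration, assuming hypothetical llm and run_tests callables rather than any real API:

```python
def generate_and_repair(task, llm, run_tests, max_rounds=5):
    """Toy generate-test-repair loop around a code-writing model.

    llm(prompt) -> str: hypothetical call returning candidate source code.
    run_tests(code) -> (bool, str): hypothetical harness returning
        (passed, error_output) for that candidate.
    """
    prompt = f"Write code for: {task}"
    for _ in range(max_rounds):
        code = llm(prompt)
        passed, errors = run_tests(code)       # execute the candidate
        if passed:
            return code                        # tests green, accept it
        # Feed the failure back so the model can revise its attempt
        prompt = (f"Write code for: {task}\n"
                  f"Previous attempt:\n{code}\n"
                  f"It failed with:\n{errors}\nPlease fix it.")
    raise RuntimeError("no passing candidate within the retry budget")
```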
So question what is DARPA specifically interested in
related to protecting Electrical Power
Systems and their industrial Control
Systems so looks like they did look into
this a while ago and the initial
response from the power industry was
like yeah we totally know how to cold
start a power plant this is in our
wheelhouse we do this all the time from
hurricanes and natural disasters the way
he's phrasing this I'm guessing that's
not the case let's see the part was not
so much in their wheelhouse the part
that wasn't in their wheelhouse was how
do you do that when your sensors are
lying to you which of course is
completely in the Wheelhouse of
attackers who take over the output of
sensors there was this sci-fi book
called Daemon by Daniel Suarez about this
kind of Rogue AI I mean it was designed
to do the damage that it was doing but
really interesting book I believe it was
published in 2006 it's fascinating how
many things it got correctly about what
something like this could do right this
idea of what if a attacker whether it's
an AI system or just some sort of a
hacker takes over the sensors I mean how
do you do any of the stuff that you want
to do when your sensors are lying to you
I thought that was kind of interesting
great book by the way for those that are
interested in such things and so DARPA continues I think the program was a success cuz we opened the eyes of the power industry of what a cyber attack
would look like he talks a little bit
more about that particular effort but
then he goes back to this AIxCC so they called it the AI Cyber Challenge so again this is where they took these companies then they brought them to the White House they're working with
the White House with the government and
a lot of initiatives with large language
models and code and hacking and stuff
like that cyber attacks Etc so you're
saying that this approach in this whole
power plant cyber potential cyber
security attacks is part of a larger
effort of cyber infrastructure or
infrastructure in general which is that AI Cyber Challenge effort which got launched at Black Hat in Las Vegas which
is can we use AI based tools to help
automatically find and suggest repairs
to open source software he mentioned a
paper that came out a few months ago
saying that ChatGPT just out of the box
was roughly as good as some tools that
were made specifically for that for
finding and suggesting fixes to software
but in these tools you know a very
common response was I need more
information and with ChatGPT you can ask
what information do you want and then
you can have a conversation with it and
so that ability to converse back and
forth with it well it was able to find and fix substantially more problems.
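A minimal sketch of that back-and-forth advantage (my own construction, assuming hypothetical llm_chat and fetch helpers, and a made-up NEED_FILE convention): unlike a fixed-pipeline scanner, the loop can answer the model's 'I need more information'.

```python
def repair_with_dialogue(bug_report, llm_chat, fetch, max_turns=8):
    """Toy conversational find-and-fix loop.

    llm_chat(messages) -> str: hypothetical chat call; may answer with a
        patch, or with a request like "NEED_FILE: path/to/file".
    fetch(path) -> str: hypothetical helper returning file contents so the
        model's requests for more information can be satisfied.
    """
    messages = [{"role": "user",
                 "content": f"Find and fix this vulnerability:\n{bug_report}"}]
    for _ in range(max_turns):
        reply = llm_chat(messages)
        messages.append({"role": "assistant", "content": reply})
        if reply.startswith("NEED_FILE:"):
            # The model asked a question; answer it and keep talking
            path = reply.split(":", 1)[1].strip()
            messages.append({"role": "user", "content": fetch(path)})
        else:
            return reply        # treat anything else as a proposed patch
    raise RuntimeError("no patch produced within the turn budget")
```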
And so kind of based on that little insight that's why they launched the AI Cyber Challenge it focused on open
source software they partnered with the Open Source Security Foundation and so this
is interesting I have to look deeper
into this I guess there's Avril Haines there was a testimony that suggested that they had to find and fix bugs at scale really really fast like uh-oh what was that testimony who is that person so it looks like Avril Haines is former United States deputy director of the CIA and Avril Haines uh I haven't watched a
testimony it sounds like she's going hey
guys like for real we need to make sure
our cyber security is like super tight
and like really fast like for real and I
think this is the testimony that
they're referring to so it's on C-Span
you can see it I quickly kind of tried
to go over it a little bit this was
March 11th 2024 about global threats so
kind of like my understanding is this so
she's saying it seems like we do have a lot of weaknesses um for a number of
reasons one more and more of our data
like us as individuals as as companies
as communities cities Etc we're putting
more and more stuff out there more data
and as that's growing and also you know
the world uh I think it's fair to say
maybe is getting a little bit more
hostile there's a little bit more of a
divide between the various Nations
there's just a little bit more potential
threats from all of that right I mean if
you kind of know what's surrounding the
whole Taiwan China us I mean there's a
lot of people that are legitimately
scared about how that whole thing is
going to come to pass right cuz China
wants Taiwan they produce all the chips
there's tons of stuff happening there
right the US has an interest in Taiwan obviously right the US is slowly trying to
build chips away from the Taiwanese
Shores and there are better people than
me that can explain stuff like this
what's happening but my point I think
from listening to people that know what
they're talking about is like there's a
lot of risk there there's a lot of
potential conflict there and so one of
the things that she's saying in her testimony is that the rise of AI is also
playing a role in a sense that all this
data that before or yeah it could be
sensitive or maybe not so much like
right if you post a couple harmless
things here and there each individual
piece of data wasn't sensitive right but
with AI with this ability to gather this
data and then make certain predictions
certain inference from it all of a
sudden that's a whole different playing
field right you know if genomics for
example right if somebody posts their you know whatever 23andMe results or
whatever it is online well maybe that's
not a big deal but all that data in the
aggregate if you're able to run it
through AI potentially could reveal
certain I don't know certain patterns
that could be exploited I mean that could lead to some pretty scary
stuff so a lot of this stuff uh is
seemingly coming from you know that
testimony and idea that we have to find
fix bugs at scale like everywhere in our
software in our code and do so really
really fast next question is how
seriously does DARPA consider the
possibility of software being developed
by AI so this is a very interesting
question and this is in a lot of
people's minds in fact a lot of videos
recently have been covering will
software developers have a job in a
couple years or five years 10 years
whatever is that even a good thing to go
into and Sam Altman during his interview with Lex Fridman kind of said that yeah he believes that it's going to write really good software Jensen Huang of Nvidia is saying that yeah I mean he
had kind of a strong stance of like yeah you're not going to need to learn
how to code and here the answer is that
yes DARPA has a position on this topic
he's saying my opinion is that will be a
tool that will help people write
software faster which is true this is
certainly what we're seeing a lot of
people are saying that it's really
helping develop how fast they're able to
do stuff and particularly boring
boilerplate software faster but it will
not automate the process all right so
this is interesting I mean this is uh
you know DARPA pretty smart folks over
there saying that no they're not seeing
you know code automation anytime soon or
perhaps even at all and he's saying I
don't think that people who write good
code will be out of a job anytime in the
foreseeable future he's saying maybe I'm
overly optimistic but that seems
inconceivable inconceivable he continues
I think a lot of the boilerplate
software like coming in Frameworks or
something like that the code everyone
hates to write I think AI will write it
anyway in the near future that is an
interesting interesting take next
question can you give the office view of
a minimum viable product MVP and how you
think it's going to affect program size
complexity and funding so I'll skip this
one but if you're running Tech startups
this is interesting because they kind of
talk about how startups approach
thinking about when to launch a product
the minimal viable product how to test
hypotheses Etc again I'll link this
below if you want to read it then they
continue with questions like which PMs would be interested in ideas on computer vision
he's saying well I don't know if there's
any specific people but obviously it's a
big part of certain problems if you need
to have autonomous systems operating in
the real world they need to be able to perceive there are small business networking opportunities uh as it
relates to DARPA and they're you know
working on it mentioning an event that's
AI Forward it seems like that worked
well looks like there's various programs
like the embedded entrepreneurship
initiative there's a focus on helping
create companies for people that maybe
are not from the states or just not
aware like there's a lot of these
government initiatives in the US that
they kind of blur the line between
government and in this case you know the
tech sector there's a lot of stuff where
it's like I mean there's just a lot of
like overlap and then some of the stuff
is not too visible to Outsiders which
certainly makes sense I mean if you have
various spy agencies they can't just be
publishing all their secrets all the big
tech companies don't want to be
publishing all their secrets so I mean
there's people that raise questions
about if this is you know a good thing
or not and and uh well those people are
never heard from again I'm totally
kidding but no but legitimately I love
DARPA I do have um tons of respect for
what they do I they've developed some
really cool things that we all use and
enjoy that might have not come about if
they weren't there or at least maybe
would have been kind of corrupted and
not as good as it was I mean they're
behind the internet right so DARPA's ARPANET project in the 1960s laid the foundation for the modern internet
they're behind GPS the global
positioning system so kind of that
network of satellites that allows us to
know where we are in the world and and
you know the whole world is is using
that we're all benefiting from that
technology from the internet they've
pushed tons of things with stealth
technology and autonomous vehicles
robotics Quantum Computing and the fact
that they're now uh looking into what
needs to be done on the AI front is
certainly exciting I'm I'm very excited
about that I made the joke about people
disappearing and I got kind of scared so
I mean I I love DARPA for real and
there's tons of more things that I think
some of you would find highly highly
interesting so again I'll leave the link
below but to me I think in terms of AI I
think we covered the most interesting
little bits and pieces here overall I
mean my big takeaways is one I think the
speculation about the combination of
planning plus llms Transformers kind of
like the merging of those two things
because they were kind of separate sort
of fields of study of AI progress right
reinforcement learning did a lot of cool
stuff it took us a long way right then
llms come out and it's kind of like this
brand new thing right and now we're
trying to take the strength of each and
kind of combine them but he's also
saying that this idea that we're just a
little bit away from Full artificial
general intelligence well maybe not so
much right so there's tons of problems
that we still have to solve and them
saying this like they haven't even
really started training GPT-5 that's a
weird thing to hear right that's a
that's very different than sort of the word on the street but I mean this is
DARPA this is the government the
military they're working very closely
with these companies I mean they do say
here that they don't have you know full
visibility to what's happening but so as
far as I can tell this was released on
November 13th 2023 so I mean if so if
I'm reading everything correctly they're
saying as of November they haven't
started training GPT-5 which uh is
surprising but uh with that said let's
uh let's noodle this over a bit you and I
let me know in the comments what you
think if you think I'm wrong about
something if I missed something obvious
definitely let me know obviously a lot
of things here take it with I mean
usually say take it with a grain of salt
this seems like a pretty obviously a
legit resource now maybe some of those
things that they're saying that's
opinion you know he's saying oh yeah
like coding will not be automated you
know maybe that's an opinion but
certainly I think most people would
agree that it's probably a very very
informed very educated opinion right
certainly there's a lot of weight behind
it but yeah certainly this person is
saying that all of the hype behind Ai
and a lot of the stuff that we think is
going to happen well we're not quite
there yet the GPT here the GPT-5 is
not really released it's not being
trained as of you know like what 5
months ago that the pace of how the
development of these Frontier models
it's it's slowing down that automated
coding is uh not quite as realistic as
one would think but the potential for
cyber security attacks could be and and
it is very real so wow when I sum it up
like that it's actually quite quite
depressing also how did Alan Turing just know everything from like the 1930s how did he just like know everything there is about AI in 1936 of course his
story if you're not aware of it is
covered pretty well I I enjoyed this
movie so this is called The Imitation Game with Alan Turing of course played by uh by a great actor whose name is um I want to say Benro Cabbage Patch Cover Bund I think that's it nailed it but yeah great movie he plays Alan Turing and is very good at it with that said
let me know what you thought of this my
name is Wes Roth and thank you for
watching