DARPA's STUNNING AGI BOMBSHELL | AGI Timeline, Gemini plus search, OpenAI's GPT-5 & AI Cyber Attacks

AI Unleashed - The Coming Artificial Intelligence Revolution and Race to AGI
31 Mar 2024 · 26:25

Summary

TLDR: The video delves into a comprehensive discussion on the pace of AI development, highlighting DARPA's contributions and perspectives on the field. It contrasts the advancement rates of reinforcement learning and transformer models, mentioning DARPA's collaboration with major AI players like Google, Microsoft, and OpenAI. The discussion touches on DARPA's initiatives in AI for cybersecurity, cryptographic modernization for quantum safety, and the challenges in merging AI technologies for enhanced capabilities. It underscores the speculative nature of achieving Artificial General Intelligence (AGI) soon, citing technological, ethical, and practical hurdles still to be overcome. The video also explores DARPA's strategic focus on solving unique problems not addressed by industry, emphasizing the organization's pivotal role in pioneering technologies like the internet and GPS, and its ongoing efforts to safeguard national security through advanced AI applications.

Takeaways

  • 🔍 Jimmy Apples, a notable figure in AI, discusses DARPA's advancements and challenges in AI, highlighting differing progression rates between AI technologies.
  • 🚀 Reinforcement learning is not advancing as rapidly as Transformer models, indicating a disparity in development speed across AI fields.
  • 🤖 DARPA is focusing on integrating planning aspects into large language models (LLMs), but faces challenges in achieving full transparency.
  • 🌐 DARPA's Q&A reveals ongoing efforts at the intersection of AI and traditional hardware, showcasing a blend of technological advancement.
  • 🛡️ In cybersecurity, DARPA acknowledges the rise of quantum-safe security but isn't directly working on it, instead focusing on quantum networking and other areas.
  • 📈 DARPA's collaboration with AI companies like Anthropic, Google, Microsoft, and OpenAI signifies a strategic alliance for advancing AI technologies.
  • 🧠 Speculation suggests combining the power of LLMs with reinforcement learning could lead to major AI breakthroughs, merging reasoning and planning capabilities.
  • 🏗️ Despite rapid AI advancements, DARPA indicates that achieving Artificial General Intelligence (AGI) still involves solving numerous complex problems.
  • 🔒 DARPA's insights suggest that the delay in TSMC chip production impacts AI development timelines, including the training of GPT-5.
  • 💡 DARPA's mission focuses on tackling unique, challenging problems that may not immediately benefit from industry-driven solutions.
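The takeaway about merging LLMs with reinforcement-learning-style planning can be sketched as a toy search loop: a proposer suggests candidate next steps and a value function scores them, loosely in the spirit of AlphaGo-style lookahead. Both `propose` and `score` below are hypothetical stand-ins, not any real model's API.

```python
import heapq

def propose(state):
    """Stand-in for an LLM sampling candidate next steps."""
    return [state + c for c in "ab"]

def score(state):
    """Stand-in for a learned value function over partial plans."""
    return state.count("a") - 0.1 * len(state)

def plan(start, width=2, depth=3):
    """Beam search: keep only the `width` highest-value partial plans."""
    beam = [start]
    for _ in range(depth):
        candidates = [nxt for s in beam for nxt in propose(s)]
        beam = heapq.nlargest(width, candidates, key=score)
    return max(beam, key=score)

print(plan(""))   # "aaa": the search strings together high-value steps
```

The same shape (propose, score, keep the best) underlies richer planners such as Monte Carlo tree search; only the proposer and scorer change.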

Q & A

  • What is DARPA's stance on the advancement pace of AI technologies like reinforcement learning compared to transformer models?

    - DARPA acknowledges that not all AI technologies are advancing at the same pace, noting that reinforcement learning is not progressing as quickly as transformer models. This disparity in advancement rates highlights the varied focus and success across different AI research areas.

  • What are DARPA's initiatives in quantum computing and cybersecurity?

    - DARPA is engaged in quantum computing through the QUET program, which aims to enhance networking security using quantum technologies. However, DARPA has explicitly stated that they are not currently working on quantum-safe security, despite acknowledging advancements in quantum computers and the emerging field of quantum-safe cybersecurity.

  • How does DARPA view the integration of planning capabilities into large language models (LLMs)?

    - DARPA expresses interest in the integration of planning capabilities into LLMs, specifically referencing the Gemini model's efforts in this area. However, they also indicate a lack of full transparency and certainty about the progress and outcomes of such integrations, pointing to ongoing research challenges.

  • Why hasn't OpenAI started training GPT-5, according to the DARPA Q&A?

    - The delay in starting GPT-5 training is attributed to slowdowns in the release of Nvidia's H100 chips, caused by production problems at Taiwan Semiconductor Manufacturing Company (TSMC), which fabricates them. According to the DARPA Q&A, this has provided a temporary breathing space for other projects.

  • What is DARPA's approach to AI and its collaboration with private companies?

    - DARPA collaborates closely with private companies like Anthropic, Google, Microsoft, and OpenAI on AI projects, including competitions like the AI Cyber Challenge. This partnership aims to leverage advancements in AI technologies for various applications, while keeping a close watch on the developments within these companies.

  • What challenges does DARPA acknowledge in achieving Artificial General Intelligence (AGI)?

    - DARPA highlights several challenges in reaching AGI, including the halting problem and the need for more resources and breakthroughs beyond simply scaling existing models. This suggests that DARPA views the path to AGI as complex and fraught with significant scientific and technical hurdles.

  • How is DARPA addressing the risks and opportunities of AI in cybersecurity?

    - DARPA is actively exploring the use of AI to improve cybersecurity, notably through initiatives like the AI Cyber Challenge, which focuses on leveraging AI tools to automatically find and suggest repairs for vulnerabilities in open source software. This reflects DARPA's broader commitment to enhancing national security through advanced technology.
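As a toy illustration of the find-and-suggest-repairs idea behind the AI Cyber Challenge: real entrants combine LLMs with fuzzing and program analysis, so this single-pattern scanner is only a stand-in, with a made-up table of classically unsafe C calls.

```python
import re

# Illustrative map of unsafe C calls to safer, bounded variants.
FIXES = {"gets": "fgets", "strcpy": "strncpy", "sprintf": "snprintf"}

def scan(source):
    """Return (line number, unsafe call, suggested repair) findings."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for bad, good in FIXES.items():
            if re.search(rf"\b{bad}\s*\(", line):
                findings.append((lineno, bad, f"replace with bounded {good}()"))
    return findings

c_source = "char buf[8];\ngets(buf);\nstrcpy(buf, argv[1]);\n"
for lineno, call, fix in scan(c_source):
    print(f"line {lineno}: {call}() -- {fix}")
```

A pattern match can only flag known idioms; the "automatically find and repair" goal in the challenge requires reasoning about program behavior, which is where the AI tooling comes in.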

  • What does DARPA believe about the automation of coding by AI?

    - DARPA maintains a cautious stance on the automation of coding by AI, suggesting that AI will serve as a tool to assist in coding rather than fully automate the process. They believe that AI will accelerate the development of boilerplate software but will not replace the need for skilled software developers.

  • How does DARPA plan to maintain relevance amid rapid advancements in AI technology?

    - To maintain relevance, DARPA focuses on structuring programs such as the AI Cyber Challenge to leverage advancements in AI technology. They aim to address research areas and challenges not immediately pursued by the private sector, thus ensuring their contributions remain pivotal to national security and technological progress.

  • What is DARPA's perspective on the use of AI in generating and verifying software code?

    - DARPA is interested in the potential of AI to generate high-quality software code and believes that large language models (LLMs) could play a significant role in this area. However, they recognize the complexity of generating verified and secure code, indicating ongoing exploration and research in this domain.

Outlines

00:00

🔍 Overview of DARPA's AI Initiatives and Industry Observations

The discussion opens with Jimmy Apples highlighting DARPA's recent Q&A from November 2023, pointing out the varying pace of advancements in AI, notably between reinforcement learning and Transformer models. DARPA, known for its defense and advanced research projects, is scrutinized for its transparency and strategic plans in integrating AI with military strategies. Whether Gemini has integrated a planning component into its LLM remains unclear, as the Q&A notes a lack of full transparency. The dialogue also touches on DARPA's stance on quantum computing, cybersecurity, and the potential impacts of AI on creating bioweapons. Concerns about the speed of AI advancements and the participation of major AI entities like Google, Microsoft, and OpenAI in DARPA's projects suggest a collaborative yet competitive landscape. The narrative underscores the complexity and challenges of AI development, pointing to DARPA's critical role in shaping the future of AI and national security.

05:01

🔌 Impact of Semiconductor Delays on AI Progress and DARPA's Strategic Response

This segment delves into the implications of semiconductor production delays, particularly by TSMC, on the advancement of AI models like GPT-5. The slowdown in the release of Nvidia's H100 chips is highlighted as a bottleneck for AI progress, granting DARPA and other AI research entities a 'breathing space' to catch up. The conversation further explores DARPA's insights on the integration of planning and reinforcement learning with LLMs, suggesting a potential breakthrough in AI development. However, the challenge of achieving Artificial General Intelligence (AGI) is acknowledged, alongside the halting problem and other computational and resource constraints. DARPA's focus remains on bridging the gap between current capabilities and the next generation of AI advancements.

10:03

🚀 DARPA's Unique Position in AI Research and Collaboration

The narrative shifts to DARPA's strategic role in tackling AI challenges that the private sector may overlook, emphasizing government-led initiatives in areas such as multimodal LLMs and encryption for defense. Through examples like creating a national data library and focusing on multi-level security, DARPA's endeavors illustrate its commitment to solving complex problems beyond the scope of industry efforts. The segment reveals DARPA's approach to fostering innovation while addressing critical security and ethical issues, suggesting a proactive stance in advancing AI technologies while safeguarding national interests.

15:03

🛡️ Cybersecurity and AI's Role in National Defense Infrastructure

This part discusses DARPA's focus on protecting critical infrastructure from cyber attacks, with a particular emphasis on electrical power systems and industrial control systems. The potential of AI to both pose and prevent threats is examined, referencing DARPA's AI cyber challenge and collaborations with the tech industry and government agencies to enhance cybersecurity. The narrative underscores the importance of AI in identifying and repairing vulnerabilities in open-source software, highlighting DARPA's leadership in leveraging AI for national security purposes.

20:04

👨‍💻 The Future of Coding and AI's Influence on Software Development

Exploring DARPA's perspective on the future of coding and AI's impact, this segment presents a balanced view on automation and human roles in software development. DARPA suggests that while AI will streamline and accelerate the creation of boilerplate software, it won't replace the need for skilled programmers. The dialogue touches on the potential for AI to assist rather than usurp human developers, reflecting on the broader implications of AI integration into software engineering and the enduring value of human expertise.

25:05

🤖 DARPA, AI, and the Ongoing Quest for Innovation

Concluding with a broad overview of DARPA's engagement with AI, this section encapsulates DARPA's efforts to remain at the forefront of technological innovation. It highlights the agency's significant contributions to AI, the internet, GPS, and more, while acknowledging the challenges and opportunities ahead. DARPA's cautious yet optimistic stance on the potential of AI reflects a deep understanding of its complexities and the critical balance between advancement and ethical considerations. The discussion underscores DARPA's pivotal role in shaping the future of AI and technology at large, emphasizing collaboration, transparency, and strategic foresight.

Keywords

💡DARPA

The Defense Advanced Research Projects Agency (DARPA) is a research and development agency of the United States Department of Defense responsible for the development of emerging technologies for use by the military. In the video, DARPA is discussed in the context of its involvement in artificial intelligence (AI) advancements and collaborations with major tech companies. DARPA's role illustrates the intersection of government research initiatives and private sector innovation in AI, highlighting its efforts to maintain a cutting-edge position in technology development.

💡Transformer model

The Transformer model is a type of deep learning model that has revolutionized the field of natural language processing (NLP). It is known for its efficiency and accuracy in handling sequence-to-sequence tasks. The video mentions that the Transformer model is advancing more rapidly than reinforcement learning in AI development. This advancement is significant as it underscores the impact of the Transformer architecture on the current AI landscape, particularly in the development of large language models (LLMs) like GPT.
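The core sequence-processing mechanism of the Transformer, scaled dot-product attention, can be sketched in a few lines; this is a minimal illustration that leaves out the multi-head projections, masking, and positional encodings real models use.

```python
import numpy as np

def attention(Q, K, V):
    """Each output row is a softmax-weighted average of V's rows,
    weighted by how similar that query is to each key."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V

x = np.random.default_rng(0).standard_normal((3, 4))  # 3 tokens, dim 4
out = attention(x, x, x)                              # self-attention
print(out.shape)                                      # (3, 4)
```

Because every token attends to every other token in one matrix multiply, the operation parallelizes well on modern hardware, which is part of why this frontier has advanced so quickly.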

💡Reinforcement learning

Reinforcement learning is an area of machine learning concerned with how agents ought to take actions in an environment to maximize some notion of cumulative reward. The video contrasts its pace of advancement with that of the Transformer model, suggesting that reinforcement learning is not advancing as quickly. This comparison sheds light on the diverse paths of progress within AI research and the varying speeds at which different AI methodologies are evolving.
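The reward-maximizing update at the heart of reinforcement learning can be shown with a tiny tabular Q-learning sketch; the 5-state corridor environment here is a made-up toy, not something from the video.

```python
import random

N, ACTIONS = 5, (0, 1)                       # actions: 0 = left, 1 = right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma = 0.5, 0.9

def step(s, a):
    """Toy environment: reward 1.0 whenever the agent lands on state N-1."""
    s2 = max(0, min(N - 1, s + (1 if a else -1)))
    return s2, (1.0 if s2 == N - 1 else 0.0)

random.seed(0)
for _ in range(500):                         # episodes
    s = 0
    for _ in range(20):                      # steps per episode
        a = random.choice(ACTIONS)           # random behavior policy (off-policy)
        s2, r = step(s, a)
        # Core Q-learning update: nudge toward reward + discounted best next value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

print(Q[(3, 1)] > Q[(3, 0)])                 # True: moving right is learned to be better
```

The video's flight-booking example shows why this trial-and-error loop scales poorly to open-ended tasks: the agent only learns from rewards it stumbles into.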

💡Quantum computing

Quantum computing is a type of computation that utilizes quantum-mechanical phenomena, such as superposition and entanglement. In the video, DARPA's engagement with quantum computing is discussed, especially in the context of cybersecurity and the development of quantum-safe security measures. This reflects the broader trend of exploring quantum computing's potential to revolutionize fields ranging from cryptography to material science.

💡Large Language Models (LLMs)

LLMs are AI models capable of understanding and generating human language. The video discusses DARPA's interest in integrating planning capabilities into LLMs, highlighting the ongoing efforts to enhance AI's problem-solving and reasoning abilities. LLMs' role in the narrative underscores their significance in AI's evolution and their potential to drive future innovations.

💡Artificial General Intelligence (AGI)

AGI refers to the hypothetical intelligence of a machine that could understand, learn, and apply its intelligence to solve any problem, much like a human being. The video touches on the challenges and speculative nature of achieving AGI, indicating that while significant progress is made in AI, the path to AGI remains fraught with unresolved problems and complexities.

💡Halting problem

The halting problem is a concept in computer science that concerns determining whether a computer program will eventually halt or continue to run indefinitely. The video mentions this to illustrate the limitations and challenges in AI development, highlighting some of the fundamental problems in computer science that still pose challenges to AI's progress toward AGI.
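The impossibility result can be sketched with Turing's diagonal argument; `halts` below is hypothetical, since the whole point is that no such function can be written.

```python
def halts(program, data):
    """Hypothetical universal oracle: True iff program(data) eventually stops."""
    raise NotImplementedError("provably impossible in general")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about us.
    if halts(program, program):
        while True:          # oracle said we halt -> loop forever
            pass
    return "halted"          # oracle said we loop -> halt immediately

# If halts(paradox, paradox) returned True, paradox(paradox) would loop
# forever; if it returned False, paradox(paradox) would halt. Either
# answer is wrong, so a universal halting checker cannot exist.
```

This is why the video points to heuristics and approximations: general program analysis can be useful without ever being complete.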

💡Cybersecurity

Cybersecurity is the practice of protecting systems, networks, and programs from digital attacks. The video discusses DARPA's involvement in cybersecurity, especially in the context of quantum computing and AI. This underscores the growing importance of cybersecurity in an increasingly digital and AI-driven world, as well as the continuous efforts to safeguard information in the face of evolving threats.

💡AI Cyber Challenge

The AI Cyber Challenge is mentioned as a competition where DARPA partners with companies to leverage AI in addressing cybersecurity challenges. This initiative reflects the collaborative efforts between government agencies and the private sector to utilize AI for national security purposes, showcasing the practical applications of AI in enhancing cyber defense mechanisms.

💡Code generation and verification

The video addresses the role of AI in generating and verifying code, indicating a shift towards automation in software development processes. DARPA's interest in this area highlights the potential for AI to transform how software is created and maintained, emphasizing the need for reliable verification methods to ensure the correctness and security of AI-generated code.
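The generate-then-verify idea can be sketched as filtering candidate code through an automatic check; the hard-coded candidates below stand in for LLM outputs, and a real verified-code pipeline would use formal methods rather than a handful of example tests.

```python
# Hard-coded stand-ins for code an LLM might generate.
candidates = [
    "def add(a, b):\n    return a - b",   # buggy candidate
    "def add(a, b):\n    return a + b",   # correct candidate
]

def verify(src):
    """Accept a candidate only if it passes property-style checks."""
    namespace = {}
    exec(src, namespace)                  # load the candidate definition
    f = namespace["add"]
    return all(f(a, b) == a + b for a in range(-3, 4) for b in range(-3, 4))

accepted = next(src for src in candidates if verify(src))
print("return a + b" in accepted)         # True: only the correct candidate survives
```

The verifier, not the generator, carries the security guarantee here, which matches DARPA's emphasis on the difficulty of producing verified code rather than merely plausible code.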

Highlights

DARPA's Q&A session revealed disparities in the advancement pace of AI technologies, with reinforcement learning lagging behind transformer models.

DARPA's interaction with major tech companies like Google, Microsoft, and OpenAI in advancing AI and cybersecurity through initiatives like the AI Cyber Challenge.

The focus on integrating planning capabilities into large language models (LLMs) to potentially address limitations in AI's reasoning and decision-making processes.

Concerns over the stagnation in AI model advancements and the delayed start of GPT-5 training due to hardware production issues.

Speculations on DARPA's involvement in futuristic and critical AI projects, outside the commercial tech industry's focus, aimed at solving complex, high-impact problems.

Discussion of cybersecurity's move toward quantum-safe security, which DARPA says it is not directly pursuing, focusing instead on quantum networking and related efforts.

Exploration of the potential combination of reinforcement learning and LLMs for breakthroughs in AI, suggesting a merging of two distinct AI research areas.

The necessity for ongoing innovation in AI to maintain DARPA's relevance amidst rapid technological progress, highlighting program structures and collaborations.

The critical role of DARPA in developing foundational technologies like the internet and GPS, emphasizing its historical impact and future potential in AI.

Acknowledgment of the challenges in achieving Artificial General Intelligence (AGI), with current AI advancements not yet close to realizing AGI.

The importance of AI in addressing cybersecurity threats, with DARPA leveraging AI to identify and rectify vulnerabilities in software and hardware systems.

The potential impact of AI on software development, with AI tools accelerating the creation of code but not replacing human developers' need for quality and innovation.

DARPA's strategic focus on areas not immediately pursued by the industry, such as multi-level security and advanced cryptographic techniques for national defense.

The exploration of AI's role in enhancing electrical power systems' resilience against cyber attacks, demonstrating DARPA's broader focus on infrastructure security.

Speculation about the future integration of planning mechanisms within LLMs, hinting at significant advancements in AI's capability to reason and strategize.

DARPA's emphasis on addressing the ethical and practical challenges of AI data usage, suggesting a nuanced approach to leveraging AI for societal benefit.

Transcripts

play00:00

so Jimmy apples a somewhat notorious

play00:02

account in the AI space just posted this

play00:04

DARPA Q&A November 2023 another thing is

play00:07

that not all the Frontiers are advancing

play00:10

at the same Pace reinforcement learning

play00:12

is not going as fast as the Transformer

play00:14

model and he links this attachment here

play00:16

from DARPA military. M which that's

play00:20

DARPA Mill that is the defense Advanced

play00:23

research projects agency so that's

play00:25

coming from them directly let's so let's

play00:27

take a look at that here he continues

play00:29

the Gemini model getting the planning

play00:31

piece integrated into the llm we are not

play00:34

sure we lack full transparency what is

play00:37

happening let's take a look so this is

play00:39

DARPA information Innovation office the

play00:41

Q&A what is darpa's interface between

play00:43

traditional hardware and artificial

play00:45

intelligence so they're saying some of

play00:46

the programs are in fact already at the

play00:48

interface between software and Hardware

play00:49

now I'll link this down below if you

play00:51

want to read it yourself we're not going

play00:52

to go too deep into some of the stuff

play00:54

and just highlight the most important

play00:56

things so another question is quantum

play00:58

computers are making progress cyber

play01:00

security is getting into a new area of

play01:01

quantum safe security is there any new

play01:04

plans or programs from the i2o the

play01:07

information Innovation office on

play01:09

cryptographic engineering modernization

play01:11

of cryptography for Quantum safe

play01:13

security they're saying they're not

play01:14

doing anything on Quantum safe security

play01:16

we do have the quet program which is

play01:18

using Quantum on making networking more

play01:20

secure DSO has a number of efforts on

play01:22

Quantum DSO looks like is another part

play01:25

of DARPA as far as you can tell the

play01:27

defense Sciences office so there's some

play01:29

push back in terms of they're talking

play01:31

about nist which is the National

play01:32

Institute of Standards and Technology

play01:34

the Department of Defense the NSA and

play01:37

who to talk to about that there's a

play01:39

question about the president's effective

play01:40

order and safe secure and trustworthy AI

play01:43

looks like there are restrictions on

play01:44

what it takes to work on a Frontier

play01:46

Model in the new document the concern is

play01:49

people can use various Frontier models

play01:51

to generate bioweapons but they're still

play01:53

working on figuring out what how that's

play01:54

going to affect them they're asking how

play01:56

does DARPA maintain relevance when it's

play01:58

such a fast- based progress in Ai and so

play02:00

here they're answering one area is by

play02:02

program structure the AI cyber Challenge

play02:05

and so this is where we're getting a

play02:06

little bit more into the interesting

play02:08

bits we've covered this briefly a number

play02:10

of months ago I believe it's a

play02:12

competition where we partner with large

play02:13

language models the companies that

play02:15

produce them you know such as anthropic

play02:17

Google Microsoft and open AI whoops okay

play02:20

so where's where's meta how come uh

play02:23

Zuckerberg is not on this is it because

play02:25

they are a open source model yeah but

play02:27

this is interesting so anthropic is of

play02:28

course clawed three that's the model

play02:30

released by anthropic you have Google

play02:33

and Gemini right and you have kind of

play02:34

Microsoft and open AI that have sort of

play02:37

a union some sort of a cooperation

play02:39

agreement they're not one and the same

play02:42

but definitely there's a big overlap

play02:43

they tend to work on projects together

play02:45

including that whole Stargate project

play02:47

with the supercomputers and some

play02:48

potential uh Fusion Energy projects as

play02:51

well and so they're saying as the

play02:52

capability advances so two will the

play02:54

performers using them be able to

play02:56

leverage the advanced capability at the

play02:58

same time that is one model another

play03:00

piece is that we will be keeping an eye

play03:01

on what is happening if the capability

play03:03

we are working on and the program

play03:05

becomes outmatched we will stop the

play03:07

program and regenerate or do something

play03:09

else so I'm reading this as it sounds

play03:11

like I need to learn a lot more about

play03:13

DARPA and how it plays with all the

play03:15

other big companies what kind of AI

play03:16

projects it has but I'm reading this as

play03:18

they're saying if one of these companies

play03:20

like completely blows us out of the

play03:21

water then we're going to either try

play03:24

again or just try something else I mean

play03:26

they're saying we're keeping these guys

play03:28

close you know we're keeping an eye on

play03:29

them these are the people we're

play03:30

interested in they're close by so we

play03:32

know what they're doing and here's where

play03:34

we get to the other part so another

play03:36

thing is that not all Frontiers are

play03:38

advancing at the same Pace reinforcement

play03:40

learning is not going as fast as the

play03:43

Transformer model so we've talked about

play03:45

this for example if Andre karpathy that

play03:47

tried to build autonomous agents way

play03:49

back in the days before openai before he

play03:51

worked at Tesla by using reinforcement

play03:53

learning he basically says that's kind

play03:56

of a dead end or at least for some

play03:58

things it's like I think he example he

play04:00

gave if you're trying to get a computer

play04:01

to go and book a flight for you online

play04:03

you can't really use reinforcement

play04:05

learning for that right because that

play04:06

would require it to like randomly click

play04:09

on all the buttons and see which ones

play04:10

work you need something that has a

play04:11

little bit more reasoning skills or

play04:14

whatever you want to call that and so it

play04:15

seems like they're saying that the

play04:16

Transformer model kind of the the thing

play04:18

that's behind these neural Nets that's

play04:21

behind GPT 4 it sounds like that's

play04:23

behind Sora behind I mean pretty much

play04:25

everything here Gemini all the open

play04:27

source models both LMS and other mod

play04:29

models that's kind of the Transformer

play04:31

model so it sounds like if I'm reading

play04:33

this correctly they're saying well that

play04:34

model that Frontier is advancing much

play04:37

more rapidly than reinforcement learning

play04:40

they're also saying the pace of the

play04:41

frontier models is slowing down a little

play04:44

bit which that's interesting because you

play04:46

know we're kind of expecting these

play04:47

amazing things but they're saying well

play04:48

the progress is slowing down a bit a lot

play04:50

of the results that we are seeing right

play04:52

now include understanding what they are

play04:53

doing and what they are not doing

play04:56

they're saying they haven't released a

play04:57

GPT 5 so they're referring to open AI

play05:01

here and they're saying they haven't

play05:02

even started training GPT 5 due to the

play05:05

slowdown in the release of the h100s due

play05:08

to the production problems at the Taiwan

play05:09

semiconductor Manufacturing Company the

play05:11

tsmc so this is the biggest company

play05:13

producing the chip so Taiwan is the

play05:15

biggest producer of chips tsmc is the

play05:17

biggest company in Taiwan producing the

play05:19

chips so this is like the the Lynch pin

play05:21

behind of a lot of this AR hardware for

play05:23

for NVIDIA for a lot of other people if

play05:26

this thing just poof and disappears it's

play05:28

not like we would go back to the dark

play05:29

ages but boy would a lot of our Tech

play05:32

take a big big hit cuz we need computers

play05:35

or rather chips for everything we need

play05:37

them in in our cars and our phones in

play05:39

our in our dishwashers and the drones

play05:41

and like everything so they're saying so

play05:42

we have a little bit of breathing space

play05:44

so that's interesting so they're almost

play05:45

saying because of these alleged

play05:47

slowdowns that open AI they have some

play05:50

time to catch up the Gemini model

play05:52

getting the planning piece integrated in

play05:54

the llm we are not sure we lack full

play05:56

transparency so there's a lot of uh

play05:59

speculation on what that whole qar leak

play06:01

out of um openingi was and we still

play06:03

don't know exactly what it is we kind of

play06:05

know that yes the leak was real it was a

play06:08

real project a real research project it

play06:10

was leaked Sam Alman and others have

play06:13

confirmed it but no one's talking about

play06:15

it and so we don't know what it is a lot

play06:17

of smart AI researchers that kind of

play06:19

know what they're talking about have

play06:21

suggested that this is a combination of

play06:23

two big Ideas one is the LMS right the

play06:27

Transformer models the gbt 4S and the

play06:29

the other piece is kind of the

play06:31

reinforcement learning piece so this is

play06:33

what a lot of the Deep Mind Technologies

play06:36

do the Superhuman chess playing AI the

play06:38

Superhuman AI that that beats everybody

play06:40

at go there seems to be a lot of

play06:42

speculation that maybe sort of the next

play06:44

Frontier the next big breakthroughs that

play06:47

come in AI will come from a combination

play06:50

of those two things the power of llms

play06:52

that are really good at like reasoning

play06:55

but they can't really like think through

play06:57

a lot of different steps and then kind

play06:58

of review their plans they kind of have

play07:00

that weakness whereas the chess playing

play07:02

eye the go playing eye can think through

play07:04

a million different combinations kind of

play07:06

figure out what has the best sort of

play07:07

possible reward right figure out kind of

play07:09

like what the best steps are then Trace

play07:11

its thoughts back and and kind of plan

play07:14

so this is what and again this is total

play07:15

speculation but the planning piece that

play07:18

they're referring to here in the Gemini

play07:20

getting integrated in the llm to me I

play07:24

mean based on everything we've seen I

play07:26

that's what that sounds like I'd be

play07:28

pretty surprised if it wasn't because

play07:29

the Gemini model is of course Google

play07:31

Deep Mind Google deep mind they're the

play07:33

people behind all the alpha right Alpha

play07:36

fold Alpha go and they have many others

play07:39

like Alpha coder like a lot of that is

play07:41

stemming from what can be referred to as

play07:43

the planning piece right so combining

play07:45

that with the llm I mean in November of

play07:48

2023 when the co qar leaked we we went

play07:51

deep into this the llm plus kind of the

play07:54

Gemini alphao technology so this sounds

play07:58

like it but they're saying well we know

play07:59

we LA full transparency if they did or

play08:01

not but and then they're saying but

play08:02

there are large research problems that

play08:04

still need to be solved hearing people

play08:06

say we're just a little bit away from

play08:08

Full artificial general intelligence AGI

play08:11

is a bit more optimistic than reality

play08:14

yikes so that's uh that's interesting

play08:17

he's saying there's still things like

play08:18

the halting problem so halting problem

play08:20

seems like it's a computer science

play08:22

conundrum that goes back to Alan Turing

play08:24

in 1936 it refers to the impossibility

play08:27

of creating a universal algorithm that

play08:29

can determine whether any given program

play08:32

when run with a specific input will

play08:33

eventually stop as an halt or continue

play08:36

running indefinitely Loop and the

play08:37

halting problem has important

play08:39

implications in computer science as it

play08:41

helps us understand the limitations of

play08:43

algorithms and highlights the existence

play08:45

of problems that cannot be completely

play08:47

automated this emphasizing the need for

play08:49

heris and approximations in complex

play08:51

problem solving scenarios then they

play08:53

continue listing some of the other

play08:55

problems that we have we still have

play08:56

exponential things we still need

play08:58

resource right I'm assuming hardware and

play09:01

other things I think there are still

play09:02

going to be super hard problems that are

play09:05

not going to be fixed by scaling and the

play09:07

The follow-up question: "My question is, say we don't have AGI, it's not human-level general intelligence, but you might have a system that helps humans, and everyone in this room, to advance so quickly that before AGI comes this apex..." Not an apex; there are some commas missing here that make this a little difficult to parse. He's saying "this asymptotic growth, where we are dealing with that constantly." Asymptotic growth is, roughly, a function approaching some limit (or, as he seems to use it, growth heading toward infinity). I'm having a bit of a hard time parsing this question; I think this guy wanted to sound smart, and it lacked some clarity. My understanding is he's asking: okay, so even if we don't get AGI, isn't this still going to mean breakneck progress? Isn't this still going to create fast change that is very difficult to predict?
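For what it's worth, here is a quick toy illustration (my own, not from the Q&A) of a quantity growing toward an asymptote: fast early gains that flatten as they approach a ceiling.

```python
def capability(t):
    """Toy curve 1 - 2**(-t): rises quickly, then flattens as it
    approaches (but never reaches) its asymptote at 1."""
    return 1 - 2 ** (-t)

values = [round(capability(t), 4) for t in range(6)]
# t = 0..5 gives 0.0, 0.5, 0.75, 0.875, 0.9375, 0.9688:
# each step closes half the remaining gap to the limit.
```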

play09:57

The follow-up answer from DARPA: "We try very hard not to get in the way of what industry is going to do. We're trying to work to solve problems that industry isn't going to solve tomorrow." So, using their government resources, they're going after important, hard problems that the tech industry is maybe unwilling to solve, maybe because there isn't enough profit in it or it's just too complex. They're working outside of what places like Google, OpenAI, and Microsoft are shooting for, on important problems that fall outside of that. So he's saying, "We aren't

play10:26

planning to work on multimodal large language models, because they" (meaning the industry, the profit-seeking entities) "are going to do that sometime. We're not trying to work on incorporating new information into an LLM, because they are going to do that as soon as they can. We are trying to work on things they won't work on

play10:46

right away." So, one example I've heard in this vein that I think is interesting: a lot of these AI models need data, and right now there's a big debate over where they're getting it. Are they just taking everybody's data without permission? Is that okay or not? The founder of Stability AI actually suggested that each nation have its own data library, so that all the AIs in that country or culture could just go train on it. It would hold all the cultural works and all the data you need to train a model for that culture. Say you're building something in the US: the US would provide this database of all the books and images and whatnot you would need to train up that model, a high-quality dataset. Obviously Microsoft or Google are probably not going to do that, but having the government fund something like that might be beneficial to progress as a whole. Now, DARPA is probably not doing something quite like that; they're probably doing something a little more weird and crazy and futuristic. But I think that's an example.

play11:45

So they continue: "We haven't done this yet, but we might do multi-level security, because we think that is something the Department of Defense might care more about than industry would." Right, because it sounds like OpenAI did have some research into encryption and security and the like, but it's not their main focus. They're saying maybe that is on the industry's roadmap, but at some further future time frame. The point here is basically that without encryption, if there's some way to break encryption, then as I understand it all information would be visible: all our bank accounts, all our chats and texts and messaging. I don't think you can have online banking, or online shopping, or most of the things online that have to be even semi-secret; they just would not work. So

play12:27

they continue: "I don't know what the right answers are, but the question of what they are going to do, and in that time frame what we should do, is something we talk about all the time. Do we have perfect answers? No. But do we ask that question constantly? Yes." Next

play12:39

question: "You have been pointing out here today that code generated by AI systems is just going to increase in scope and scale in ways we can hardly imagine. How important is it to DARPA that the code gets verified for correct functionality and security properties?" "That's the thing that we've been noodling over." He says they've been noodling it over

play12:57

quite a bit. Clearly, companies are going to be working a lot on generating code. In one of my next videos we'll probably talk about some of the startups; I think it was Y Combinator that announced a hundred or so AI startups coming out of stealth, and a lot of them are working on code, on coding and programming, both generating code as well as testing it, and tons more stuff like

play13:20

that. So he continues: "We are not so sure they are going to generate code that is high quality, or care about generating code that is high quality. Clearly, generating proofs about code and generating specifications... specifications, code, and proofs are all languages; those are all in the wheelhouse of LLMs." So large language models certainly seem likely to be able to do it, though tying them together could be hard. "We are definitely noodling over trying to generate specifications, code, and proofs that are checked."
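The idea that specifications, code, and checks are all just text can be made concrete with a toy sketch (my own illustration, not a DARPA artifact): here the "spec" is an executable property, and the "check" brute-forces it over a small domain, where a real toolchain would discharge it with a prover.

```python
def spec_abs(x, result):
    """Specification: result is non-negative and equals x or -x."""
    return result >= 0 and (result == x or result == -x)

def my_abs(x):
    """Candidate implementation (it could have been LLM-generated)."""
    return x if x >= 0 else -x

def check(spec, impl, domain):
    """Return every input where the implementation violates the spec."""
    return [x for x in domain if not spec(x, impl(x))]

violations = check(spec_abs, my_abs, range(-100, 101))
# An empty list means the code meets the spec on this domain; a
# broken candidate such as `lambda x: x` would be caught on -100..-1.
```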

play13:49

They then point to some people to talk to for more information. "There is a ton of code on the web, and a lot of it is not good code. I believe a study from five years ago on Stack Overflow found that there's usually a good security answer to the question, but it's usually number 10. That means there are nine bad answers before the good answer." By the way, I think a lot of this goes

play14:07

back to this idea of giving LLMs a planning piece, because yes, LLMs can spit out millions of lines of code, and if you test it and it throws an error, you can even say "hey, this is wrong" and they'll try again. So it's this thing that spits out a lot of likely answers, but to really supercharge that ability, to make it truly useful, there has got to be some sort of reflection or planning piece. Getting that right is going to be the next big breakthrough.
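That generate, test, and retry pattern can be sketched as a small loop. `ask_llm` below is a stand-in stub, not a real model API; the point is the shape of the feedback loop.

```python
def run_tests(code):
    """Execute candidate code plus a fixed test; return the error
    message on failure, or None if the test passes."""
    try:
        namespace = {}
        exec(code, namespace)                  # define the function
        assert namespace["add"](2, 3) == 5     # the acceptance test
        return None
    except Exception as exc:
        return f"{type(exc).__name__}: {exc}"

def repair_loop(ask_llm, prompt, max_attempts=3):
    """Feed each failure back to the generator until tests pass."""
    feedback = ""
    for _ in range(max_attempts):
        code = ask_llm(prompt + feedback)
        error = run_tests(code)
        if error is None:
            return code                        # tests pass, done
        feedback = f"\nPrevious attempt failed with: {error}"
    raise RuntimeError("no passing code within the attempt budget")

# Stub generator: the first canned attempt is buggy, the second fixed.
attempts = iter(["def add(a, b): return a - b",
                 "def add(a, b): return a + b"])
fixed = repair_loop(lambda _prompt: next(attempts), "write add(a, b)")
```

Swapping the stub for a real model call, and `run_tests` for a real test suite, gives exactly the "it throws an error, tell it, it tries again" loop described above; the reflection or planning piece is whatever makes that feedback productive.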

play14:32

Next question: "What is DARPA specifically interested in related to protecting electrical power systems and their industrial control systems?" It looks like they looked into this a while ago, and the initial response from the power industry was: yeah, we totally know how to cold-start a power plant, this is in our wheelhouse, we do this all the time after hurricanes and natural disasters. The way he's phrasing this, I'm guessing that's not the whole story. Let's see: that part was not

play14:55

so much the problem. "The part that wasn't in their wheelhouse was: how do you do that when your sensors are lying to you?" Which, of course, is completely in the wheelhouse of attackers who take over the output of sensors. There was a sci-fi book called Daemon, by Daniel Suarez, about this kind of rogue AI (well, it was designed to do the damage it was doing), a really interesting book, published in 2006 I believe. It's fascinating how many things it got right about what something like this could do: this idea of, what if an attacker, whether an AI system or just some sort of hacker, takes over the sensors? How do you do any of the stuff you want to do when your sensors are lying to you? I thought that was kind of interesting. Great book, by the way, for those interested in such things.
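One classic defense against lying sensors is redundancy plus median voting: as long as fewer than half the readings are spoofed, the agreed value stays near the truth. A minimal sketch (my own illustration, not anything DARPA described):

```python
from statistics import median

def voted_reading(readings):
    """Agreed value from redundant sensors measuring one quantity."""
    return median(readings)

def suspicious(readings, tolerance):
    """Indices of sensors that disagree with the consensus by more
    than `tolerance`: candidates for having been taken over."""
    consensus = voted_reading(readings)
    return [i for i, r in enumerate(readings)
            if abs(r - consensus) > tolerance]

# Three honest turbine-temperature readings plus one spoofed low
# (say, an attacker hiding an overheat): the vote still reads high,
# and the outlier sensor is flagged.
readings = [402.0, 398.5, 401.2, 25.0]
# voted_reading(readings) is ~399.85; suspicious(readings, 10) is [3]
```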

play15:41

And so DARPA continues: "I think the program was a success, because we opened the eyes of the power industry to what a cyber attack would look like." He talks a little more about that particular effort, but then he goes back to AIxCC, what they called the AI Cyber Challenge. Again, this is where they took these companies and brought them to the White House; they're working with the White House, with the government, on a lot of initiatives around large language models and code and hacking and the like: cyber attacks, etc. So you're

play16:07

saying this whole effort around potential cyber attacks on power plants is part of a larger push on cyber infrastructure, or infrastructure in general, which is that AI Cyber Challenge effort, launched at Black Hat in Las Vegas: can we use AI-based tools to help automatically find and suggest repairs to open-source software? He mentioned a

play16:30

paper that came out a few months ago saying that ChatGPT, just out of the box, was roughly as good as some tools made specifically for finding and suggesting fixes to software. But with those tools, a very common response was "I need more information," while with ChatGPT you can ask what information it wants and then have a conversation with it. With that ability to converse back and forth, it was able to find and fix substantially more problems. Based on that little insight, that's why they launched the AI Cyber Challenge. It focused on open-source software, and they partnered with an open-source software foundation. And this is interesting; I'll have to look deeper

play17:06

into this. I gather from Avril Haines's testimony a suggestion that they have to find and fix bugs at scale, really, really fast. Uh-oh. What was that testimony, and who is that person? It looks like Avril Haines is a former United States Deputy Director of the CIA. I haven't watched the testimony, but it sounds like she's saying: hey guys, for real, we need to make sure our cybersecurity is super tight, and fast. I think this is the testimony they're referring to; it's on C-SPAN, you can see it, and I quickly tried to go over it a bit. It was from March 11th, 2024, about global threats. My understanding is roughly this:

play17:51

she's saying it seems like we do have a lot of weaknesses, for a number of reasons. One: more and more of our data, as individuals, companies, communities, cities, etc., we're putting more and more of it out there, and that's growing. And also, the world, I think it's fair to say, is maybe getting a little more hostile. There's a bit more of a divide between the various nations, and a bit more potential for threats from all of that. I mean, if

play18:18

you know anything about the whole Taiwan, China, and US situation, there are a lot of people legitimately scared about how that whole thing is going to come to pass. China wants Taiwan; Taiwan produces most of the chips; there's tons of stuff happening there; the US obviously has an interest in Taiwan and is slowly trying to build chip capacity away from Taiwanese shores. There are better people than me to explain what's happening, but my point, from listening to people who know what they're talking about, is that there's a lot of risk there, a lot of potential conflict. And so one of

play18:53

testimony is that the rise of AI is also

play18:55

playing a role in a sense that all this

play18:57

data that before or yeah it could be

play18:59

sensitive or maybe not so much like

play19:01

right if you post a couple harmless

play19:03

things here and there each individual

play19:05

piece of data wasn't sensitive right but

play19:07

with AI with this ability to gather this

play19:09

data and then make certain predictions

play19:11

certain inference from it all of a

play19:12

sudden that's a whole different playing

play19:14

field right you know if genomics for

play19:16

example right if somebody post their you

play19:18

know whatever 23 and me results or

play19:19

whatever it is online well maybe that's

play19:21

not a big deal but all that data in the

play19:23

aggregate if you're able to run it

play19:26

through AI potentially could reveal

play19:28

certain I don't know certain patterns

play19:29

that could be exploited I mean I mean

play19:31

that could leave to some pretty scary

play19:33

stuff so a lot of this stuff uh is

play19:35

seemingly coming from you know that

play19:37

testimony and idea that we have to find

play19:39

fix bugs at scale like everywhere in our

play19:41

software in our code and do so really

play19:43

really fast. Next question: "How seriously does DARPA consider the possibility of software being developed by AI?" This is a very interesting question, and it's on a lot of people's minds; a lot of videos recently have covered whether software developers will have a job in a couple of years, or five, or ten, and whether it's even a good field to go into. Sam Altman, during his interview with Lex Fridman, said that yes, he believes AI is going to write really good software. Jensen Huang of Nvidia took an even stronger stance: that you're not going to need to learn how to code. And here, the answer is that

play20:17

yes, DARPA has a position on this topic. He's saying: "My opinion is that it will be a tool that will help people write software faster," which is true; this is certainly what we're seeing, with a lot of people saying it's really speeding up what they can do, "and particularly boring boilerplate software faster, but it will not automate the process." So this is interesting: DARPA, pretty smart folks over there, saying no, they're not seeing code automation any time soon, or perhaps even at all. And he's saying: "I don't think that people who write good code will be out of a job any time in the foreseeable future. Maybe I'm overly optimistic, but that seems inconceivable." Inconceivable! He continues: "I think a lot of the boilerplate software, like coding within frameworks or something like that, the code everyone hates to write, I think AI will write it anyway in the near future." That is an interesting take. Next

play21:10

question: "Can you give the office's view of a minimum viable program (MVP) and how you think it's going to affect program size, complexity, and funding?" I'll skip this one, but if you're running a tech startup it's interesting, because they talk about how startups approach deciding when to launch a product, the minimum viable product, how to test hypotheses, etc. Again, I'll link it below if you want to read it. Then they continue with questions: which PMs would be

play21:32

continue of questions which PMS would be

play21:33

interested in ideas on computer vision

play21:35

he's saying well I don't know if there's

play21:37

any specific people but obviously it's a

play21:39

big part of certain problems if you need

play21:40

to have autonomous systems operating in

play21:42

the real world you they need to be able

play21:44

to perceive there small business

play21:45

networking opportunities uh as it

play21:47

relates to DARPA and they're you know

play21:49

working on it mentioning an event that's

play21:51

AI forward it seems like that worked

play21:53

well looks like there's various programs

play21:54

like the embedded entrepreneurship

play21:56

initiative there's a focus on helping

play21:57

create companies for people who maybe aren't from the States or just aren't aware. There are a lot of these government initiatives in the US that blur the line between government and, in this case, the tech sector. There's a lot of overlap, and some of it is not very visible to outsiders, which certainly makes sense: if you have various spy agencies, they can't just publish all their secrets, and the big tech companies don't want to publish all their secrets either. There are people who raise questions about whether this is a good thing or not, and, well, those people are never heard from again. I'm totally kidding. But legitimately, I love

play22:33

DARPA. I have tons of respect for what they do. They've developed some really cool things that we all use and enjoy, things that might never have come about, or at least might have been corrupted and not turned out as well, if they weren't there. They're behind the internet: DARPA's ARPANET project in the 1960s laid the foundation for the modern internet. They're behind GPS, the Global Positioning System, that network of satellites that lets us know where we are in the world; the whole world is using and benefiting from that technology, as with the internet. They've pushed tons of things in stealth technology, autonomous vehicles, robotics, and quantum computing, and the fact that they're now looking into what needs to be done on the AI front is certainly exciting. I'm very excited about that. I made the joke about people

play23:20

disappearing and got kind of scared, so let me say: I love DARPA, for real. And there are tons more things there that I think some of you would find highly interesting, so again, I'll leave the link below. But in terms of AI, I think we covered the most interesting bits and pieces here. Overall, I

play23:37

mean, my big takeaways are these. One: the speculation about combining planning with LLMs/Transformers, merging those two things, because they were somewhat separate fields of AI research. Reinforcement learning did a lot of cool stuff and took us a long way; then LLMs came out as this brand-new thing; and now we're trying to take the strengths of each and combine them. Two: he's also saying that this idea that we're just a little bit away from full artificial general intelligence? Well, maybe not so much. There are tons of problems we still have to solve. And then there's them saying that they haven't even really started training GPT-5. That's a

play24:19

weird thing to hear, right? That's very different from the word on the street. But this is DARPA, the government, the military; they're working very closely with these companies, though they do say here that they don't have full visibility into what's happening. As far as I can tell, this was released on November 13th, 2023. So if I'm reading everything correctly, they're saying that as of November they hadn't started training GPT-5, which is surprising. But with that said, let's

play24:47

noodle this over a bit, you and I. Let me know in the comments what you think; if you think I'm wrong about something, or if I missed something obvious, definitely let me know. With a lot of things I usually say take it with a grain of salt, but this seems like a pretty obviously legit source. Now, maybe some of what they're saying is opinion. He's saying coding will not be automated; maybe that's an opinion, but I think most people would agree it's a very, very informed, very educated opinion, with a lot of weight behind it. Still, this person is saying that for all the hype behind AI and a lot of the stuff we think is going to happen, we're not quite there yet: GPT-5 is not released and was not even being trained as of, what, five months ago; the pace of development of these frontier models is slowing down; automated coding is not quite as close as one would think; but the potential for cybersecurity attacks could be, and is, very real. Wow, when I sum it up

play25:48

like that, it's actually quite depressing. Also: how did Alan Turing just know everything back in the 1930s? How did he just know everything there is about AI in 1936? His story, if you're not aware of it, is covered pretty well in a movie I enjoyed, The Imitation Game, with Alan Turing played by a great actor whose name is, um, I want to say Benedict Cabbage Patch? Cumber Bund? I think that's it. Nailed it. But yeah, great movie; he plays Alan Turing and is very good at it. With that said, let me know what you thought of this. My name is Wes Roth, and thank you for watching.


Related Tags
DARPA, AI technology, Reinforcement learning, Transformer models, Cybersecurity, Quantum computing, GPT models, Software development, Autonomous systems, Future of coding