5 AI Trends You Must Be Prepared for by 2025
Summary
TL;DR: The video outlines five pivotal AI trends to watch by 2025, emphasizing a quiet yet significant transformation in AI technology. It discusses specialized AI agents, both custom and general, that will streamline complex tasks. The rise of natural language APIs heralds a future where interactions with technology are as seamless as conversation. Emerging architectures like thermodynamic computing address the energy efficiency of AI, crucial for scaling intelligence. The advent of affordable, advanced humanoid robots signals a shift in labor and industry. Lastly, looming government regulations could either guide or stifle AI innovation, impacting the global tech landscape.
Takeaways
- 😲 2025 will see a quiet transformation in AI, not marked by AGI or humanoid robots, but by the merging of existing AI technologies into something greater.
- 🤖 Specialized AI agents will become more prevalent, offering both custom and general solutions for specific roles, with faster deployment times.
- 💼 The integration of AI with natural language APIs will allow for more intuitive and efficient interactions with technology, reducing the need for traditional UI/UX.
- 🌐 Government regulations, such as the EU AI Act and the US Responsible AI Act, will impact AI development by imposing restrictions on models based on computational power and application areas.
- 🏭 The emergence of new computing architectures, like thermodynamic computing by companies like Extropic, could revolutionize AI efficiency and scalability.
- 🤝 Partnerships between AI companies and robotics firms, like Figure's collaboration with OpenAI, will lead to advancements in humanoid robots capable of complex tasks.
- 🚀 The development of specialized AI agents will democratize AI usage, making it more accessible to small and medium-sized businesses.
- 📈 Investments in AI startups are indicating a significant shift towards AI automation in various industries, including software development, legal, and scientific research.
- 🔐 The introduction of regulations like California's SB 1047 could stifle innovation and development in AI, potentially giving an edge to countries with less restrictive policies.
- 📚 For individuals and businesses, preparing for these trends involves staying informed, leveraging AI advancements, and considering the implications of emerging regulations.
Q & A
What is the main theme of the video regarding AI in 2025?
-The main theme of the video is that AI technologies will merge into something greater without reaching AGI or having robots walking the streets, and it will bring opportunities for those well-prepared.
What are the five key AI trends that the video suggests we should prepare for by 2025?
-The five key AI trends are specialized AI agents, natural language APIs, emerging architectures, humanoid robots, and government regulation.
What is the difference between custom AI agents and specialized AI agents as discussed in the video?
-Custom AI agents are fine-tuned on specific business processes, while specialized AI agents are pre-made and trained on general processes for specific roles, making them more general but deployable much faster.
How does the video describe the impact of specialized AI agents on businesses?
-Specialized AI agents can significantly lower the barriers to entry in the AI space, making AI more accessible for small and medium-sized businesses, and potentially allowing businesses to operate without employees.
What is the significance of natural language APIs in the context of the video?
-Natural language APIs are designed for large language models and will transform how we interact with technology, allowing for more natural and efficient interactions without the need for clicking, scrolling, and typing.
How does the video suggest that emerging architectures could impact AI development?
-Emerging architectures, such as thermodynamic computing, could make AI models more energy-efficient and faster, potentially unlocking the path to AGI and reducing the cost of intelligence.
What role do humanoid robots play in the future of AI as per the video?
-Humanoid robots are expected to become more prevalent and affordable, transforming the way we work and potentially eliminating and creating jobs, depending on their integration and training.
How might government regulation affect the AI industry according to the video?
-Government regulation, such as the EU AI Act and the Responsible AI Act in the US, could slow AI development, favor major players, and potentially allow other countries like China to catch up, posing new challenges.
What is the video's stance on the potential of AI to assist in the development of harmful technologies?
-The video acknowledges the potential of AI to assist in the development of harmful technologies but emphasizes the need for proper regulation and safety measures to mitigate such risks.
What advice does the video give to those looking to prepare for the AI trends discussed?
-The video advises using free resources to develop specialized AI agents, learning about natural language APIs, keeping an eye on emerging architectures, understanding the training of humanoid robots, and monitoring the impact of government regulations.
Outlines
🤖 AI Trends to Watch by 2025
The paragraph introduces the concept that by 2025, AI technologies will merge into a greater whole without reaching AGI or having robots walking the streets. It emphasizes the importance of being well-positioned to take advantage of the opportunities arising from AI. The speaker shares their practical experience and expertise in implementing AI models and running an AI agency. The paragraph outlines five key AI trends to prepare for by 2025, including specialized AI agents, which are models that can take actions on our behalf. It differentiates between custom AI agents, which are fine-tuned for specific business processes, and specialized AI agents, which are pre-made and trained on general processes for specific roles. The speaker provides examples of specialized AI agents and discusses their benefits, such as faster deployment and utility for businesses needing quick solutions.
💡 Preparing for Specialized AI Agents
This paragraph discusses how to prepare for the trend of specialized AI agents. It suggests utilizing free training tokens offered by OpenAI to develop agents for roles like marketing or sales. The speaker advises using a framework for faster development and seeking funding if aiming to create an agent reusable across businesses. The paragraph also highlights the potential of combining multiple specialized agents for increased leverage, possibly allowing for businesses without employees. It emphasizes testing new agents as they become available and considering the impact of specialized AI agents on accessibility and the AI agent market, drawing a parallel to how Netflix popularized movie streaming.
🗣️ Natural Language APIs and Their Impact
The paragraph explores the role of natural language APIs designed for large language models. It explains how these APIs can transform cloud products and user interactions, moving away from traditional UI/UX towards more natural, conversational interfaces. The speaker discusses the evolution of voice assistants like Siri, highlighting improvements in natural speech, contextual relevance, and language understanding. The paragraph also covers how natural language APIs can enable real-time voice and video interactions, performing actions on behalf of users without the need for clicking or typing. It provides an overview of how companies like Apple and Microsoft are leveraging these technologies, and the potential for developers to create new types of API-based services that integrate with large language models.
🌐 Emerging AI Architectures for Energy Efficiency
This paragraph focuses on the challenges and innovations in AI architecture, particularly regarding energy efficiency. It points out the high energy consumption of AI models and the potential limitations this poses for scaling intelligence. The speaker introduces companies like Extropic, which is pioneering thermodynamic computing to embed AI algorithms into thermal processes, aligning with the probabilistic nature of AI. The paragraph discusses the potential of these new architectures to significantly reduce the energy consumption and increase the speed of AI computations, comparing them to traditional GPU computing. It suggests keeping an eye on such developments as they could revolutionize the AI industry and unlock new possibilities for AI applications.
🤖 Humanoid Robots and Their Impending Impact
The paragraph discusses the rise of humanoid robots and their potential to transform the workforce. It mentions companies like Figure, which is valued at $2.6 billion and has partnerships with major tech companies. The speaker highlights Figure's integration of large language models for real-time voice and audio processing, which is crucial for humanoid robots. The paragraph also touches on other companies like Tesla and Boston Dynamics that are working on similar technologies. It suggests that while humanoid robots will eliminate many jobs, they will also create new job opportunities, particularly in training robots for specific tasks and environments. The speaker advises preparing for this trend by understanding how to train digital agents, as this skill may become highly valuable in the context of humanoid robots.
⚖️ Government Regulation and Its Effect on AI Development
This paragraph addresses the potential impact of government regulation on AI development. It discusses recent regulatory developments in the United States and Europe, which aim to regulate AI models based on certain computational metrics. The speaker outlines the requirements for models that exceed these metrics, including risk assessments and transparency requirements. The paragraph also covers the potential negative effects of such regulations, such as favoring major players, stifling the open-source community, and potentially slowing AI development. It suggests monitoring the regulatory landscape and emphasizes the importance of developing useful AI applications for positive purposes, regardless of regulatory changes.
Keywords
💡AI Transformation
💡Specialized AI Agents
💡Custom AI Agents
💡Natural Language APIs
💡Emerging Architectures
💡Humanoid Robots
💡Government Regulation
💡Model Customization
💡Energy Efficiency
💡OpenAI
💡Quantum Computing
Highlights
2025 will bring a quiet transformation in AI technologies without reaching AGI or widespread robot presence.
AI technologies will merge into something greater, creating opportunities for those well-prepared.
Five key AI trends to prepare for by 2025 include specialized AI agents, natural language APIs, emerging architectures, humanoid robots, and government regulation.
Specialized AI agents can be custom or pre-made for specific roles, offering faster deployment than custom AI.
Cosine's Genie AI agent outperforms Devin by Cognition in software development tasks.
Harvey AI is trained on tax and legal data, performing various legal processes more efficiently than base models.
Sakana AI is fine-tuned for general research processes, including idea generation and scientific review.
OpenAI's model customization program offers 1 million free training tokens for GPT-4o per day until September 23.
Natural language APIs will change how we interact with cloud products, moving towards voice and video-based interactions.
Apple's Siri is evolving with richer language understanding and contextual awareness, thanks to Apple Intelligence.
Emerging architectures like thermodynamic computing aim to solve energy efficiency and scalability issues in AI.
Extropic is pioneering thermodynamic computing, embedding AI algorithms into thermal processes for efficiency.
Humanoid robots like Figure's models are becoming more affordable and capable, with production lines starting next year.
Government regulation, such as the EU AI Act and the US Responsible AI Act, may impose restrictions on AI development based on compute power.
California's new AI safety bill could have significant implications for AI development, including holding developers responsible for misuse.
The impact of regulation may favor major AI companies, slow down innovation, and potentially allow other countries like China to catch up.
Transcripts
2025 will change our lives as we know it, but it will be a quiet transformation.
We won't reach the AGI and robots will not be walking on the streets.
What will happen, however, is that every AI technology that has
been developed so far will slowly merge into something much greater.
It will bring a ton of opportunities, but only for those who are well
positioned before it begins.
So in this video, I'll dive deep into the five key AI trends that
you need to prepare for by 2025.
I'll explain what those trends are, how they will impact the industry,
and how you can prepare to take full advantage of the opportunities Okay,
before we get started, am I the right person to talk about all this?
Well, I'm certainly not a researcher and I don't have a PhD.
But I do have something different.
I have four years of practical experience implementing AI models in various industries. I also run an AI agency and my own AI agent framework on GitHub with more than 2,000 stars. Now, let's get to the trends.
Trend number one, specialized AI agents. AI agents are models that can take actions on our behalf, and there are two types of AI agents that you can make. You can make either custom AI agents or specialized AI agents.
Custom AI agents are agents that are fine tuned on a specific business
process because they know your or your client's procedures inside and out.
They can automate incredibly complex tasks, tasks where there's some
decision making involved in the process or where things can go wrong
and where the AI needs to adapt based on circumstances rather than just
follow a set of predefined steps.
Think about roles like research, marketing, or management.
None of these functions could be fully automated before
with simple AI automations.
However, sometimes what we found in our agency is that some businesses
simply don't have the standard operating procedures to automate.
They just want to automate a very general process for a given role.
That's where our second approach comes in.
Specialized AI agents.
These are pre made agents that have already been trained on the
general process for a specific role.
So although they're called specialized agents, they're not
They're actually specialized for a given role, not for a business.
So in fact, they are more general than custom agents.
And the primary benefit of having these pre made AI agents fine
tuned on a general process is that you can deploy them much faster.
You can have them up and running in a matter of hours.
So while these agents might not provide as much value as completely custom AI agents, they are incredibly useful for businesses that need quick solutions or don't have the necessary resources.
Here are some examples.
Cosine's Genie AI agent is a software development agent that outperforms the famous Devin by Cognition almost three times compared to the score that they initially claimed, and 10 times compared to the actual score.
We've designed new techniques to derive human reasoning from real examples of
software engineers doing their jobs.
Our data represents perfect information lineage, incremental knowledge
discovery, and step by step decision making, representing everything
a human engineer does logically.
It is backed by OpenAI, and it is using their fine-tuned GPT-4o model that was trained on the reasoning steps typical of software developers.
By actually training Genie on this unique data set, rather than simply
prompting base models, which is what everyone else is doing, we've seen
that we're no longer simply generating random code until some works.
It's tackling problems like a human.
It also has a user interface for streamlined onboarding and setup.
You'll notice you can prompt Genie with a natural language prompt,
ticket, or in our case, a GitHub issue.
So I'll go ahead and start.
There we go, all the tests are now passing. Genie has successfully solved this problem, and it solved it in just 84 seconds.
Next, we got Harvey AI.
This specialized agent has been trained on a vast amount of tax and legal data.
It can perform various legal processes like filings, due
diligence, terms of service, and more fun stuff that we all enjoy.
It significantly outperforms the base models on legal data.
And finally, we have Sakana AI, which is a specialized AI scientist agent.
It was fine tuned on a general research process that includes generating ideas,
conducting experiments, reviewing scientific papers, and peer reviews.
To support this trend, OpenAI's model customization program is even offering you 1 million training tokens for GPT-4o per day. Yes, it means that until September 23, you can now train GPT-4o on up to a million tokens per day completely for free.
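To make the training data format concrete, here is a minimal Python sketch of preparing a chat-formatted JSONL file for OpenAI's fine-tuning API. The role, file name, and example content are invented for illustration; the actual upload and job-creation calls require an API key, so they appear only in comments.

```python
# Sketch: build a JSONL training file for fine-tuning a role-specific
# agent (e.g., sales). The example conversation and file name are
# placeholders, not data from the video.
import json

# Each line of the JSONL file is one complete training conversation.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a sales outreach agent."},
        {"role": "user", "content": "Draft a cold email to a SaaS founder."},
        {"role": "assistant", "content": "Subject: Quick question about your onboarding flow..."},
    ]},
]

with open("sales_agent_examples.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# With the OpenAI Python SDK you would then, roughly:
#   client = OpenAI()
#   file = client.files.create(file=open("sales_agent_examples.jsonl", "rb"),
#                              purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=file.id, model="gpt-4o-...")
print(f"wrote {len(examples)} training example(s)")
```

The more varied, realistic conversations you include for the role, the less the deployed agent has to rely on prompting alone.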
So how do you prepare for this trend?
Well, first, obviously use the free tokens if you can and try to develop
a specialized AI agent for another role like marketing or sales yourself.
However, keep in mind that all of the companies I mentioned before
are funded multi million startups because creating an agent that can
be reused across multiple businesses typically requires more resources.
So feel free to use my framework to get started faster and find
additional help in our discord.
But keep in mind that you might want to seek funding later.
The second way is to utilize these systems yourself.
In my opinion, by combining multiple specialized agents together, you could
achieve significantly more leverage than just by using one of them.
You could potentially even start a business without any employees at all.
So test them frequently as they come out and see how you can use them in your work.
In summary, specialized AI agents will significantly lower the barriers
to entry in the AI agent space.
Much like how Netflix popularized movie streaming in the 90s.
With its subscription based model, specialized AI agent platforms will
make AI agents more accessible.
While they are less customizable, these agents will serve as an
excellent starting point for most small and medium sized businesses.
Trend number two, natural language APIs.
Natural language APIs are APIs that are designed specifically
for large language models.
If you think about it, any product that exists in the cloud, like a
SaaS platform, is literally just a bunch of API endpoints connected
together through a user interface.
User interface itself is not the product.
The product is the backend, because this is where the main functionality is.
User interface just allows us to use that functionality.
And not that user interfaces aren't important, you know, I firmly believe that the user interface was the reason GPT-3.5 blew up.
It wasn't the RLHF technique, it was the fact that the RLHF technique allowed
for chat based interactions that were a lot more natural for a common user.
And now that we have something even more natural, specifically this.
Hey, I want you to count from 1 to 10 really, really fast.
As fast as you can.
1, 2, 3, 4, 5, 6, 7, 8, 9, 10.
Yes, I mean omni models that allow for literally real time voice
and video based interactions.
Yes.
There is no longer a need for any clicking, scrolling, and typing.
These models can now perform any actions on your behalf.
And companies like Microsoft with their own Copilot and Apple with Siri are recognizing the potential here.
First, Siri can now sound more natural as it speaks to you.
Second, Siri is now more contextually relevant and more personal to you.
Apple Intelligence will provide Siri with on-screen awareness, so it'll be able to understand what you're looking at and take action on it.
And third, thanks to richer language understanding, you can
now speak to Siri more naturally.
Even if you stumble over your words, Siri will understand what you're getting at.
Soon, if you want to order Uber Eats, you won't even have to open an app.
You will just chat with the model on your device about what you want to get, confirm
the payment, and it will order it for you.
Or, maybe you won't even have to do anything at all, because it knows
everything about your habits, and it might even order it for you just in time.
And the way it will all work is through natural language APIs.
The name might be different depending on the operating system, but yeah, on Apple platforms you can already do this with app intent domains, or with what they call assistant schemas.
Apple Intelligence is powered by foundation models that give
These models are trained to expect an intent with a particular shape.
The shape is what we call a schema and assistant schemas is what we call the API.
If you build an app intent with the right shape.
You'll benefit from our training and don't need to worry about the
complexities of natural language.
All you need to do is write a perform method and let the
platform take care of the rest.
It's funny because in my framework it works the exact same way except
the perform method is called run.
This year we've built schemas for over 100 kinds of intents, like
creating a photo or sending an email.
They each define a set of inputs and outputs that are common for
all adopters of that intent.
This is what I mean by shape.
In the middle of all this geometry sits your perform method, with
full creative freedom to define an experience that is right for your app.
Now, let me walk you through the lifecycle of a Siri request with Apple Intelligence.
to demonstrate assistant schemas in action.
Everything starts with a user request.
This request is routed to Apple intelligence for
processing through our models.
Our models are specifically trained to reason over schemas, allowing Apple Intelligence to predict one based on the user request.
Once an appropriate schema is selected, the request is routed to a toolbox.
This toolbox contains a collection of app intents from all the apps on
your device, grouped by their schema.
By conforming your intent to a schema, you give the model
the ability to reason over it.
Finally, the action is performed by invoking your app intent.
The result is presented and the output is returned.
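The lifecycle just described can be sketched as a toy Python program. Every name below is an illustrative stand-in, not Apple's actual API: a plain function plays the model that predicts a schema, a dictionary plays the toolbox, and each intent exposes a perform method.

```python
# Toy re-creation of the request lifecycle: request -> schema
# prediction -> toolbox lookup -> perform(). Illustrative only.

def predict_schema(request: str) -> str:
    """Stand-in for the model that reasons over schemas."""
    return "mail.send" if "email" in request.lower() else "photos.create"

class SendMailIntent:
    schema = "mail.send"
    def perform(self, request: str) -> str:
        return f"email sent for: {request!r}"

class CreatePhotoIntent:
    schema = "photos.create"
    def perform(self, request: str) -> str:
        return f"photo created for: {request!r}"

# The "toolbox": app intents from all apps, grouped by schema.
toolbox = {i.schema: i for i in (SendMailIntent(), CreatePhotoIntent())}

def handle(request: str) -> str:
    schema = predict_schema(request)   # 1. model predicts a schema
    intent = toolbox[schema]           # 2. toolbox lookup by schema
    return intent.perform(request)     # 3. invoke the app intent

print(handle("Send an email to Alice"))
```

The point of the shape/schema idea is visible here: because every intent under a schema shares inputs and outputs, the "model" only has to pick a schema, never reason about each app's internals.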
But it's not just Apple.
AI providers also want you to connect your apps to their LLMs, which is evident from new features like Structured Outputs by OpenAI.
So if you want to learn more about how these models can now reason
reliably over those schemas, make sure to check out my previous
video on structured outputs later.
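For a concrete picture, here is a hedged sketch of the kind of JSON Schema you could hand to OpenAI's Structured Outputs feature so replies always match your app's "shape". The food-order schema and the sample reply are invented for illustration; the real API call needs a key, so it appears only in comments.

```python
# Sketch: a strict JSON Schema for a hypothetical food-ordering
# action, plus parsing a conforming model reply. The schema and
# reply are made up for illustration.
import json

order_schema = {
    "name": "food_order",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "restaurant": {"type": "string"},
            "items": {"type": "array", "items": {"type": "string"}},
            "confirm_payment": {"type": "boolean"},
        },
        "required": ["restaurant", "items", "confirm_payment"],
        "additionalProperties": False,
    },
}

# With the OpenAI SDK you would pass it roughly like this:
#   client.chat.completions.create(
#       model="gpt-4o",
#       messages=[{"role": "user", "content": "Order me a margherita"}],
#       response_format={"type": "json_schema", "json_schema": order_schema},
#   )

# A conforming reply can be parsed and acted on directly, no UI needed:
reply = '{"restaurant": "Luigi\'s", "items": ["margherita"], "confirm_payment": true}'
order = json.loads(reply)
print(order["restaurant"], order["items"])
```

Because the reply is guaranteed to match the schema, the downstream code that places the order can stay dumb and deterministic.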
However, with this new trend, there are even more opportunities on the horizon.
Some of you might be familiar with API as a service products.
These products provide API endpoints without a user interface at all.
For example, there are URL Shortener APIs, Weather Information APIs,
Text to Speech APIs, Currency Conversion APIs, and others.
Some people are secretly making big money on this.
Now, with the release of Omni Models, imagine a product that users
interact with solely through an LLM.
You can create an API based service that users subscribe to and then simply plug
into their LLMs on their own devices.
Zapier was one of the first movers in this space with AI Actions API.
This is exactly what I'm talking about here, a natural language API designed
specifically for large language models.
And although it's not quite there yet, I'm confident that in 2025, we'll
see even more products like this.
So if you're a developer, start learning about App Intents and how to integrate apps with large language models, depending on your operating system. If you want to build a SaaS, think about developing a natural language API, whether for a new product or an existing one. This trend will be huge for all developers because it will completely transform how we create apps and how we use them on our devices.
Trend number three, emerging architectures.
And I am not talking about new model architectures.
This is probably also coming soon, but here I am talking about something much bigger. So far, the research suggests that intelligence scales proportionally with the number of parameters.
However, while we can likely accommodate the rising demand in compute with $100 billion data centers like this one by OpenAI and Microsoft, there's another problem that money alone can't solve. One ChatGPT query takes nearly 10 times as much energy as a typical Google search, and as much energy as keeping a 5-watt LED bulb on for an hour.
Generating an AI image can use as much power as charging your smartphone.
So, as you might guess, the global electricity demand
also skyrocketed in 2022.
And unless there is a breakthrough in AI architectures, we might hit a
plateau, not because of the technological limitations, but simply because
we don't have enough electricity.
You see, the fact that we're running our current AI models on existing hardware
is essentially just a coincidence.
CPUs and GPUs are designed for a different type of computing, where the outcome of
an operation is always either 0 or 1.
AI models, on the other hand, work in a probabilistic fashion,
meaning that AI models assign probabilities for certain outputs
and then select the most likely one.
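That probabilistic selection step can be shown in a few lines of generic Python, independent of any vendor's hardware or models: raw scores (logits) become a probability distribution via softmax, and an output is then either picked greedily or sampled.

```python
# Generic softmax sampling -- the probabilistic computation that
# thermodynamic hardware aims to do natively. Token names and
# logits are arbitrary illustrative values.
import math
import random

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["cat", "dog", "car"]
logits = [2.0, 1.0, 0.1]
probs = softmax(logits)

# Greedy decoding: pick the single most likely output ...
greedy = tokens[probs.index(max(probs))]
# ... or sample from the distribution, which is exactly the kind of
# noisy operation a thermal process could perform for free.
sampled = random.choices(tokens, weights=probs, k=1)[0]
print(greedy, [round(p, 3) for p in probs])
```

On a CPU or GPU every step here is computed deterministically; the thermodynamic pitch is that physical noise supplies the randomness of the sampling step directly.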
This is why companies like IBM and Google have their own quantum AI divisions.
However, it's now evident that quantum computing is still not ideal for AI,
which is why we're starting to see the rise of new computing paradigms
like thermodynamic computing.
And the company leading this innovation is called Extropic, founded by Guillaume Verdon.
What we're building at Extropic, our mission essentially, is to build the ultimate substrate for AI compute, right? Hit the ultimate limits of physics in terms of density, especially in terms of energy efficiency and speed for AI.
So how do we do that?
We embed AI algorithms into the physics of the world as tightly as possible.
Extropic takes a completely different approach by embedding
AI algorithms directly into the physics of thermal processes.
They use natural noise to perform computations, which is perfect
for generative AI tasks because it aligns with the probabilistic
nature of how AI models work.
Take a look at this video, right?
So tokens per second divided by Watts is a hundred million times
more energy efficient, which again, ballpark of the brain.
Again, this is not a machine we've built.
It's in simulations.
So it's a projection.
I'll show you in a second the machine we've actually built, but this is extrapolated from the data we got and is as accurate as we could make the simulations.
In terms of speed, it's about a thousand to 10,000 times faster than a GPU for inference of deterministic neural networks. For Monte Carlo sampling, it's about a million to 10 million times faster.
Of course, right now this is just a simulation, but I mean, 1,000 to 10,000 times faster inference. Compare this to Groq, which is only 12 to 18 times faster than a GPU, and it's also 100 million times more energy efficient.
So, how do you prepare for this trend?
Well, definitely keep an eye on companies like Extropic, their testing
begins in early 2025, and they should have a white paper coming out soon.
If their vision becomes a reality, you might want to
scale up your dreams as well.
For example, if right now running a 400B model is too expensive for your
use case, imagine what it would be like if you could run a thousand times
bigger model a hundred times cheaper.
In summary, emerging architectures might just be the key to unlocking AGI.
If companies like Extropic succeed, this will ultimately bring the cost
of intelligence to zero, leading us to abundance and prosperity.
Trend number four, humanoid robots.
Remember that movie with Will Smith where by 2035 there was
one robot for every three humans?
Well, what if I told you that it could be more like one robot for every human?
Yes, we're talking billions by 2040.
And this is already here.
Figure, the leading robotics company in this field, valued at $2.6 billion with investors like OpenAI, Microsoft, NVIDIA, and Jeff Bezos, recently released Figure 02 with a ton of new features. And they are starting their production line next year.
And they already have their first customer signed: BMW Manufacturing.
And by the way, these robots will also be quite affordable, somewhere around only $20,000, which is equivalent to a low-end car, except it will be able to work for you 24 hours a day for years.
At the moment, it seems like Figure is the most advanced company, however,
they are far from being the only one.
In total, there are around 20 other companies that are trying to accomplish
the exact same thing around the world.
For example, there are Tesla, of course, and Boston Dynamics in the United States, and there are AgiBot and Unitree in China.
Now, the difference that makes Figure unique is how they are
utilizing large language models.
At the top, they've partnered with OpenAI to integrate their reasoning model.
This is ideal for humanoid robots because of the real
time voice and audio processing capabilities we discussed before.
This model then integrates with neural network policies.
I assume this is similar to how agents use tools through natural language APIs.
These policies process requests from the main agent, translating
them into specific behaviors.
The system then repeats this process in a feedback loop
until the task is accomplished.
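The feedback loop just described can be sketched as a toy Python program. Everything below is illustrative, since Figure has not published its actual control stack: a stand-in reasoning model plans behaviors, a stand-in policy "executes" them, and the loop repeats until the plan reports done.

```python
# Toy feedback loop: reasoning model -> behavior -> policy -> result,
# repeated until the task is accomplished. All names are invented
# stand-ins for the architecture described in the video.

def reasoning_model(task: str, history: list) -> str:
    """Stand-in planner: walk through a fixed behavior sequence."""
    plan = ["locate_object", "grasp_object", "place_object"]
    return plan[len(history)] if len(history) < len(plan) else "done"

def policy(behavior: str) -> str:
    """Stand-in neural-network policy: pretend the behavior succeeded."""
    return f"{behavior}: ok"

def run_task(task: str, max_steps: int = 10) -> list:
    history = []
    for _ in range(max_steps):            # the feedback loop
        behavior = reasoning_model(task, history)
        if behavior == "done":            # task accomplished
            break
        history.append(policy(behavior))  # result feeds the next decision
    return history

print(run_task("put the cup on the shelf"))
```

In a real robot the policy outputs would be noisy and could fail, which is exactly why the result is fed back to the planner instead of executing the plan open-loop.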
While this is just my interpretation, it's clear that Figure's partnership with OpenAI will be instrumental to their success in this market.
So, how do you prepare for this?
Well, although these robots will be general, I assume they'll
still need some specific training, much like digital AI agents.
For instance, if you want to employ one in your cafe or at a manufacturing plant,
you'll most likely at least have to tell this agent about your business procedures,
your company policies, and specific tasks.
So, knowing how to train digital agents could evolve into one of the
most unique job opportunities yet.
Training humanoid robots to operate effectively in
different physical environments.
In summary, humanoid robots are no longer a distant future concept.
They are here and they will start transforming the way
we work as soon as next year.
Although they will of course eliminate a ton of jobs, I'm sure about that, they
will also create a whole new job market.
For those who are prepared.
Trend number five, government regulation.
So, so far all the trends I mentioned are positive, and I truly believe that they
will bring immense benefits to humanity.
However, there is one trend that could significantly delay everything
we've discussed far beyond 2025.
So, recently there have been new developments both in the United States and in Europe. California recently issued a new, controversial AI safety bill, and both OpenAI and Anthropic executives responded with formal letters expressing their opinions.
We are going to go over that bill of course, in a bit, but first let's
take a look at broader AI acts.
Specifically, the EU AI Act and the Responsible Advanced AI Act in the United States. The general consensus is that both the EU and the US are going to regulate models above a certain number of FLOPs, which is the computing power that goes into training such a model.
This is exactly what Sam Altman proposed a few months ago in May, and the Responsible Advanced AI Act even quotes him directly in its summary. In the United States this number equates to 10^26 FLOPs, while in Europe it's 10^25.
If more than this amount of compute goes into training your model, you will have to go through extensive risk assessments, model evaluations, transparency requirements, and reporting.
For reference, to train a 405B model you need around three times more compute
than the European threshold and three times less than the
top threshold in the United States.
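To see where that comparison comes from, here is a rough sketch using the common training-compute approximation (FLOPs ≈ 6 × parameters × tokens). The 15.6 trillion token count is an assumption based on Meta's published Llama 3.1 figures, not something taken from either act, so treat the ratios as ballpark numbers.

```python
# Rough sketch: compare a 405B model's estimated training compute
# against the regulatory thresholds mentioned above.
params = 405e9    # 405B parameters
tokens = 15.6e12  # assumed training tokens (Meta's reported Llama 3.1 figure)

# Common approximation: ~6 FLOPs per parameter per training token
train_flops = 6 * params * tokens

EU_THRESHOLD = 1e25  # EU AI Act compute threshold
US_THRESHOLD = 1e26  # proposed US compute threshold

print(f"Estimated training compute: {train_flops:.2e} FLOPs")
print(f"~{train_flops / EU_THRESHOLD:.1f}x above the EU threshold")
print(f"~{US_THRESHOLD / train_flops:.1f}x below the US threshold")
```

Running this gives roughly 3.8 × 10^25 FLOPs, which lands between the two thresholds, matching the "three times more / three times less" framing above.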
This is the only objective metric that I found in both of these acts.
Other criteria include whether the AI can be used in the development of
chemical or nuclear weapons, or in high-impact areas like law
enforcement, education, and employment.
But, here is the thing.
It has already been demonstrated many times that ChatGPT can greatly assist
college students in developing bioweapons.
Additionally, scientists use ChatGPT all the time to assist them with
research, which is evident from the increasing frequency of LLM-related
keywords in scientific publications.
So, no one really knows yet which category ChatGPT falls under.
And the legislators probably don't know themselves, because both
acts also include clauses stating that the administrator has the freedom
to make rules stricter without any evidence.
This last clause is especially problematic, because I want to emphasize
that there is absolutely no real evidence right now that something
nefarious is living inside ChatGPT.
I also want to be clear that in the United States this act has not
been passed yet, unlike in Europe, but it has already been used
as a guide for other legislation.
For example, the new SB 1047 bill in California, which is on
track to start taking effect next month, uses cost as the metric
instead of FLOPs.
Now all models that cost over $100 million to train in
California must be regulated.
They must provide extensive proof of safety, implement mandatory safeguards,
like a kill switch, undergo annual audits by third parties, maintain safety
protocols, and submit regular reports.
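Putting the two regimes side by side, a toy eligibility check might look like the sketch below. The threshold values are the ones mentioned above; the function name, structure, and regime labels are purely illustrative, not anything defined in the actual legislation.

```python
# Illustrative sketch only: decide which regulatory regime(s) a training run
# would trigger, using the thresholds discussed above. Names are hypothetical,
# not taken from the actual bills.
def regulated_under(train_flops: float, train_cost_usd: float) -> list[str]:
    regimes = []
    if train_flops > 1e25:      # EU AI Act compute threshold
        regimes.append("EU AI Act")
    if train_flops > 1e26:      # proposed US compute threshold
        regimes.append("US Responsible Advanced AI Act")
    if train_cost_usd > 100e6:  # California SB 1047 cost threshold
        regimes.append("California SB 1047")
    return regimes

# A hypothetical 405B-scale run: ~3.8e25 FLOPs, ~$150M training cost
print(regulated_under(3.8e25, 150e6))  # ['EU AI Act', 'California SB 1047']
```

Note how the same run can fall under one regime and not another, which is exactly why developers are watching these thresholds so closely.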
However, the most concerning aspect of this bill is that it holds developers
responsible for any misuse of their models, even if the model was later fine
tuned or modified after its creation.
This is completely absurd, and I have no idea how this could even be discussed,
but OpenAI's chief strategy officer has already expressed concerns, stating
that this bill could halt all model releases and AI development in California.
Anthropic, on the other hand, surprisingly said that the bill's
benefits likely outweigh its costs.
So how will this affect the AI industry?
Obviously, it benefits major players like Anthropic and OpenAI the most,
because very few companies will have the resources to meet those safety checks.
The open-source community in California will most likely be completely
destroyed, because it's much easier to take an open-source model,
create a derivative, and misuse it than to do the same with a closed one.
Secondly, it will slow AI development in the United States
and in Europe, potentially allowing countries like China to catch up,
which could be far more dangerous than someone misusing ChatGPT.
And third, this might actually decrease the amount of money that
goes into safety research.
Because as you might guess, all of those model evaluations, kill
switches, and audits will not be financed from the companies' profits.
They will come out of the same safety budget that OpenAI and other companies
devoted to superalignment research.
So to prepare for this, we have to monitor how this whole story unfolds.
The bill should start taking effect gradually over the
next year in California.
However, keep in mind that even if we completely stopped AI development
today, it would still take us years to realize the full potential of
everything that has been created so far.
So the best thing that you can do right now is learn how to develop
useful AI applications for good.
To get started, I recommend watching this video next.
Thank you and don't forget to subscribe.