AI News: The AI Arms Race is Getting Insane!
Summary
TL;DR: This week in AI saw major announcements from Google and OpenAI, with Google's Gemini 1.5 becoming available in 180 countries and OpenAI's GPT-4 Turbo model improving at coding and math. New large language models were released, including Stability AI's Stable LM 2 (free for non-commercial use) and Mistral's Mixtral 8x22B. Meta is also on the verge of releasing Llama 3, an open-source model. Additionally, companies are developing AI chips to reduce reliance on Nvidia GPUs, and there's a push for AI companies to disclose their training data. In the world of AI music, Udio is gaining support from musicians, and Spotify is testing AI-generated playlists.
Takeaways
- Google's Cloud Next event in Las Vegas featured numerous AI-related announcements, with a focus on enterprise and developer tools.
- Google made Gemini 1.5 available in over 180 countries, offering a 1 million token context window for advanced language understanding and audio processing.
- An example of Gemini 1.5's capabilities includes analyzing an hour-long audio file to provide key takeaways and generate YouTube video titles and thumbnails.
- OpenAI announced a significantly improved GPT-4 Turbo model, which is now available through the API and has shown better performance in coding and math tasks.
- Stability AI released Stable LM 2, a 12 billion parameter model that can be used both non-commercially and commercially with a membership.
- Mistral released a new large language model using a mixture-of-experts architecture, featuring 176 billion parameters and a 65,000 token context window.
- Google introduced new versions of their open-source large language model, Gemma, tailored for coding and efficient research purposes.
- Meta is close to releasing Llama 3, an open-source model expected to be as capable as GPT-4, with multiple versions for different use cases.
- Tech companies like Google, Intel, and Meta are developing their own AI chips to reduce reliance on Nvidia's GPUs, which currently dominate the AI training market.
- AI music generators like Udio are gaining popularity and support from musicians, offering a platform for creating music with AI assistance.
Q & A
What is the main focus of this week's AI news?
-Much like last spring, AI news is ramping up: the main focus this week is the wave of new large language models becoming available, as well as developments in AI technology by major companies like Google and OpenAI.
What significant announcement did Google make at their Google Cloud Next event in Las Vegas?
-At the Google Cloud Next event, Google announced the availability of Gemini 1.5 in over 180 countries with native audio understanding, system instructions, and JSON mode, among other features.
What is the context window of Gemini 1.5 and how does it compare to other models?
-Gemini 1.5 has a context window of 1 million tokens, which is significant because it allows a much larger amount of combined input and output for the model to work with, roughly 750,000 words in total.
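The arithmetic behind that figure can be sketched in a couple of lines (the 0.75 words-per-token ratio is the rule of thumb the video itself cites; real tokenizer counts vary by text):

```python
# Rough token-to-word arithmetic using the rule of thumb from the video:
# one token is about 75% of an English word. Actual tokenizer counts vary.
TOKENS_PER_WORD = 0.75

def approx_words(tokens: int) -> int:
    """Approximate how many words fit in a given token budget."""
    return int(tokens * TOKENS_PER_WORD)

# Gemini 1.5's 1 million token window, shared between input and output:
print(approx_words(1_000_000))  # 750000
```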
How did Bilawal use Gemini 1.5 to enhance his YouTube content creation?
-Bilawal used Gemini 1.5 to analyze an hour-long audio file from a video interview, generate key takeaways, suggest high click-through rate YouTube titles based on the principles of Derral Eves and top YouTube creators, and even provide feedback on which thumbnail to use for the video.
What is the difference between GPT-4 Turbo and the previous model in terms of capabilities?
-GPT-4 Turbo is an improvement over the previous model with enhanced capabilities in coding and math, and its knowledge is updated through December 2023. It is also currently ranked the strongest and most powerful model on the Chatbot Arena leaderboard.
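For developers wanting to try the new snapshot, a minimal sketch might look like the following; the helper function and prompt are illustrative, and `gpt-4-turbo-2024-04-09` is assumed to be the snapshot name matching the April 9th edition discussed here:

```python
# Illustrative sketch (not the video's code) of targeting the April 9
# GPT-4 Turbo snapshot via the OpenAI Chat Completions API. The helper
# only builds the request body, so it runs without an API key.
def build_chat_request(prompt: str,
                       model: str = "gpt-4-turbo-2024-04-09") -> dict:
    """Assemble a Chat Completions request body for a single user prompt."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_chat_request("Write a function that tests if a number is prime.")
print(request["model"])  # gpt-4-turbo-2024-04-09

# With the openai package installed and OPENAI_API_KEY set, the call would be:
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**request)
# print(response.choices[0].message.content)
```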
What is the open-source model released by Stability AI and how does it compare to the Mixtral 8x7B model?
-Stability AI released Stable LM 2, a 12 billion parameter model that slightly underperforms the Mixtral 8x7B model. It can be used both non-commercially and commercially, but commercial use requires a Stability AI membership.
How did Mistral release their new large language model and what are its specifications?
-Mistral released their new model, Mixtral 8x22B, through a torrent link. The model features a 65,000 token context window and a total of 176 billion parameters, with eight experts each having 22 billion parameters.
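The parameter math behind this mixture-of-experts setup can be sketched as follows; note that top-2 routing (two experts active per token) is an assumption borrowed from the earlier Mixtral 8x7B design, not something the source confirms for 8x22B:

```python
# Back-of-the-envelope mixture-of-experts math for Mixtral 8x22B using the
# figures above: 8 experts of 22 billion parameters each. Top-2 routing
# (2 experts active per token) is an assumption carried over from Mixtral 8x7B.
def moe_params(num_experts: int, params_per_expert: float,
               active_per_token: int = 2) -> tuple:
    """Return (total params, params active per token), ignoring shared layers."""
    total = num_experts * params_per_expert
    active = active_per_token * params_per_expert
    return total, active

total, active = moe_params(8, 22e9)
print(f"total: {total / 1e9:.0f}B, active per token: {active / 1e9:.0f}B")
# total: 176B, active per token: 44B
```

Only the routed experts run for each token, which is why a 176B-parameter model can be much cheaper to serve than a dense model of the same size.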
What is the significance of the new large language models released by Google and Meta?
-Google released new versions of Gemma, their open-source large language models, focused on coding and efficient research. Meta is close to releasing Llama 3, an open-source model expected to be as good as GPT-4, available for anyone to use and build upon.
What are the implications of the new AI chips introduced by Google, Intel, and Meta in relation to Nvidia's dominance in the GPU market?
-Google, Intel, and Meta are all developing their own AI chips to reduce reliance on Nvidia's GPUs. While Nvidia currently leads the market, these companies are attempting to catch up and provide alternative options for AI model training and development.
What is the controversy surrounding AI companies and their use of copyrighted material for training?
-There is a debate over the use of copyrighted material for training AI models. A new bill has been introduced to Congress that would force AI companies to reveal the copyrighted material used in training their generative AI models. This is due to concerns over data privacy and the ethical use of content.
How is Adobe approaching the acquisition of training data for their AI models?
-Adobe is taking a different approach by offering to purchase video content from creators for their AI training data. They are willing to pay between $3 and $7 per minute for everyday video footage, which is a shift from the traditional method of data acquisition.
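The quoted rates make a creator's potential earnings easy to estimate; this is just an illustration of the $3 to $7 per minute range above, not an Adobe tool:

```python
# Quick estimate of a creator's payout range at Adobe's quoted rates of
# $3 to $7 per minute of everyday video footage (rates from the report above).
def payout_range(minutes: float, low: float = 3.0, high: float = 7.0) -> tuple:
    """Return the (minimum, maximum) payout in dollars for the footage length."""
    return minutes * low, minutes * high

print(payout_range(60))  # an hour of footage: (180.0, 420.0)
```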
Outlines
Google Cloud Next and AI Announcements
The paragraph discusses the recent Google Cloud Next event in Las Vegas, highlighting new AI-related announcements. Google introduced Gemini 1.5, which is now available in over 180 countries with advanced features like a 1 million token context window. The video also mentions the use of Gemini 1.5 by a content creator for packaging a long video interview for YouTube, showcasing its capabilities in generating key takeaways and suggesting high click-through rate titles. Additionally, OpenAI's GPT-4 Turbo model received a mention, which is an improvement over the previous models and is now considered the most powerful by the Chatbot Arena community. The paragraph emphasizes the importance of these AI models and their potential applications in various fields.
New Large Language Models and AI Developments
This paragraph delves into the release of new large language models, including Stability AI's Stable LM 2, a 12 billion parameter model, and Mistral's new mixture-of-experts model. The open-source community is also making strides with the release of a new model via a torrent link, Mixtral 8x22B, which features a 65,000 token context window and a total of 176 billion parameters. Furthermore, Google has released new versions of their open-source large language model, Gemma, including one tailored for coding and another for efficient research. The paragraph also mentions Meta's upcoming release of Llama 3, an open-source model expected to rival GPT-4 in capabilities.
AI Chip Innovations and Video Generation Models
The focus of this paragraph is on the efforts of major tech companies to reduce their reliance on Nvidia GPUs for AI training. Google, Intel, and Meta have all introduced their own AI chips, such as Google's Axion processors, Intel's Gaudi 3 AI chip, and Meta's MTIA accelerator. These developments aim to improve performance and efficiency while reducing costs. The paragraph also touches on Google's new image generation model, Imagen 2, which can create short animations and GIFs, and other AI-generated video innovations like Magic Time, which specializes in timelapse videos. The advancements in AI chip technology and video generation models underscore the rapid progress in the AI field and the increasing competition among tech giants.
AI in Music and the Future of Content Creation
This paragraph covers the emergence of AI in music generation, with platforms like Udio gaining popularity. Udio allows users to create music by providing prompts, style suggestions, and even AI-generated lyrics. The paragraph also mentions Spotify's new AI-driven playlist feature. In the realm of content creation, Adobe's initiative to purchase video content for training their AI models is highlighted, as well as Meta's efforts to identify AI-generated images using their own AI detection system. The discussion emphasizes the growing role of AI in creative industries and the potential for AI to transform content creation and consumption.
AGI Predictions, AI Ethics, and AI-assisted Art
The paragraph begins with Elon Musk's prediction of achieving AGI within the next year or two, contrasting with Yann LeCun's view that current large language models will not reach human-level intelligence. It then shifts focus to the potential of Meta's self-supervised learning architecture, V-JEPA, to achieve AGI in the future. The Humane Pin, a device designed to replace smartphones, receives critical reviews for its impracticality and high cost. The paragraph also discusses the use of AI in art, specifically an AI-assisted artist who was paid $90,000 for generating card art, and the importance of human intervention in refining AI-generated concepts. The discussion concludes with the launch of the Next Wave podcast, which aims to delve deeper into AI topics and features various guests to provide insights into the AI world.
Launch of the Next Wave Podcast and AI Tools
The final paragraph announces the launch of the Next Wave podcast, a platform for deeper discussions on AI topics, ethics, and implications. The podcast, produced by HubSpot, offers a more in-depth conversational format compared to the video content. The paragraph also promotes the podcast's competition, which includes giveaways such as Apple Vision Pros, and encourages viewers to subscribe, like, and review the podcast. Additionally, the speaker highlights his own AI-focused newsletter and website, Future Tools, which curate the latest AI tools and news, and provides an AI income database for subscribers. The paragraph concludes with a call to action for viewers to engage with the content and stay updated with AI developments.
Mindmap
Keywords
AI News
Google Cloud Next Event
Large Language Models
Gemini 1.5
OpenAI
Stable Diffusion Model
Meera Model
AI Music Generator
Humane Pin
AGI (Artificial General Intelligence)
AI Image Generation
Highlights
Google's Cloud Next event in Las Vegas featured numerous AI-related announcements, emphasizing the growing importance of AI in enterprise and development sectors.
Gemini 1.5 was launched in over 180 countries with enhanced capabilities such as native audio understanding, system instructions, and JSON mode.
The Gemini 1.5 model boasts a 1 million token context window, allowing for extensive input and output interactions, equating to 750,000 words in total.
An example of Gemini 1.5's practical application includes analyzing an hour-long audio file to generate key takeaways and suggest YouTube titles based on content analysis.
OpenAI's announcement of the significantly improved GPT-4 Turbo model, available through the API, has sparked interest despite being somewhat overshadowed by Google's announcements.
Stability AI's release of the 12 billion parameter model, Stable LM 2, demonstrates the ongoing growth in the open-source AI realm.
Mistral's release of a new large language model using a mixture of experts architecture, Mixtral 8x22B, features a 65,000 token context window and a total of 176 billion parameters.
Google's release of new versions of their open-source large language models, Gemma, includes one tailored for coding and another for more efficient research purposes.
Meta is reportedly close to releasing Llama 3, an open-source model expected to rival GPT-4 in capability and be publicly available for use and development.
The competition between companies to develop AI chips is heating up, with Google, Intel, and Meta all seeking to reduce reliance on Nvidia's market-leading GPUs.
Google's Imagen 2 model represents their foray into AI image generation, capable of producing animations and GIFs from text prompts.
Adobe's unique approach to AI involves purchasing video content from creators to train their video generation models, offering a potential revenue stream for content creators.
The introduction of a bill to Congress aiming to force AI companies to reveal the copyrighted material used in training their generative AI models could have significant implications for the industry.
Udio, an AI music generator, is gaining support from musicians and investors alike, showcasing the potential for AI in creative fields.
Spotify's new AI-driven feature allows users to generate playlists based on prompts, further demonstrating AI's infiltration into everyday applications.
Elon Musk's prediction of achieving AGI within the next year and a half contrasts with Yann LeCun's skepticism about large language models reaching human-level intelligence.
The Humane pin, a device designed to replace smartphones, has received unfavorable reviews, highlighting the challenges in creating practical and user-friendly AI technology.
A card game developer's use of an AI artist to generate card art for $90,000 illustrates the growing potential and commercial viability of AI-assisted creativity.
The launch of the Next Wave podcast offers a platform for in-depth discussions on AI, providing valuable insights and perspectives on the latest developments in the field.
Transcripts
so just like spring of last year AI news
is really ramping up there has been a
ton of announcements this week I'm
really having to figure out what stuff
to filter down that I think you'll find
important because at the end of the day
nobody really cares that like Walmart
got a new AI chat bot or something so
I'm going to break down the stuff that I
found important interesting or just
downright fun that I think you're going
to enjoy so let's get right into
[Music]
it this week Google had their Google
Cloud next event out in Las Vegas where
they made a ton of new announcements
plenty of them related to AI more of the
announcements were more relevant to like
Enterprise and developers that are
building with AI models but there were
some pretty interesting and fun
announcements that I think you'll enjoy
as well so I'll be kind of sprinkling
them throughout this video the real
story of this week is all of the news
about new large language models becoming
available or soon becoming available during
Google's event this week they announced
that Gemini 1.5 is now available in 180
plus countries with Native audio
understanding system instructions JSON
mode and more now we have talked about
Gemini 1.5 in the past but most people
haven't had access to it until now of
course the biggest Factor about Gemini
that people are most impressed by is the
fact that it's got a 1 million token
context window now one token just to
refresh your memory is about 75% of a
word so 1 million tokens means that
between the input that you can give the
model and the output it'll give back you
have a combined
750,000 words to work with Gemini 1.5 is
now available via the API so if you're a
developer and you want to build with
this model it's now available for you my
buddy Bilawal here has probably
one of the best examples that I've seen
of somebody actually using Gemini 1.5 he
shows an example here where he says I
just dropped in an audio file of an
hour-long video interview and now it's
helping me package it for YouTube we can
see in his screenshots here he actually
uploaded the MP3 file told it to analyze
this audio recording for his interview
he then asked it to give the key
takeaways and come up with 10 high
click-through rate YouTube titles based
on the principles of Derral Eves and top
YouTube creators keep each title to 50
characters or less so from this audio
file it then gave them the key takeaways
he also offered it two thumbnails and
said which of these thumbnails is better
suited for this YouTube video it
analyzed the two thumbnails and gave
feedback on which thumbnail to use and
then once he picked the thumbnail he had
it suggest the 10 titles and it gave
some pretty decent titles now that's all
pretty cool we can kind of do that with
Claude right now the difference is he
uploaded an audio file and it did this
from the audio file with Claude you
would actually have to get the text
transcript upload it and you would
pretty much get the same end result most
likely where I thought this was the most
impressive was when he asked it to
generate timestamps and you can see it
actually recommended these timestamps
with an explanation of each of the time
stamps and then there is a shorter
version of the timestamps that it also
generated this to me is really
impressive because I've tried to use
Claude and I've tried to use ChatGPT to
generate timestamps for these videos
that you're watching right now while it
gets the various sections right and it
knows what I'm talking about in the
video
it just completely gets all the timings
wrong and struggles to give an accurate
timestamp to the actual chapter this
example that Bilawal shared is really
really useful in my opinion but as open
AI does every single time Google makes
an announcement they come out with their
own announcement in most of the past
scenarios open ai's announcement really
overshadowed Google's announcement but
this time we just got a vague
announcement of a majorly improved GPT-4
turbo model is now available in the API
and rolling out inside of chat GPT we
don't have a ton more details than that
but if we take a look here at the openai
documentation we can see we've got the
newest model here GPT-4 Turbo and this is
the April 9th edition of it Vision
requests can now use JSON mode and
function calling it's got the same
128,000 tokens we've been working with
and it's updated through December 2023
which is also what the previous model
was updated through supposedly this new
model is a lot better at coding and also
a lot better at math and for a while Claude 3
Opus was the cream of the crop the best
model out there but it seems now that
according to the chatbot Arena here the
newest version of GPT 4 Turbo the April
9th Edition now took over Claude 3 Opus
again as the strongest most powerful
model as voted on by the people that
rank this system here now they do show
Gemini Pro down here as being below
Claude and GPT-4 Turbo but I don't
believe I'm not 100% sure but I don't
believe that this is taking into account
the newest 1.5 model but that's not the
only news we've gotten in the world of
new large language models the open
source world is continuing to heat up as
well in fact stability AI released
Stable LM 2 which is a 12 billion
parameter model and according to most
benchmarks it just underperforms the
Mixtral 8x7B model and although they kind
of make it out to be like an open-source
product it does say it can be used
non-commercially as well as commercially
but only if you have a stability AI
membership so if you do want to use it
commercially you got to pay which
doesn't feel very open source to me well
Mistral said hold my beer and watch this
releasing a new large language model
using the mixture of experts
architecture but they released it in an
interesting way they released it as a
torrent link directly on X with almost
no context in order to actually download
the weights for this model you would
need a torrent downloader something like
Vuze and then if you paste this URL
into your address bar it will start the
download over inside of your torrent
downloader however be aware it is a 281
gigabyte file now I don't have a ton of
information about this new model from
Mistral however in the rundown newsletter
they gave us a little bit more details
this week according to Rowan over at the
rundown the new model is Mixtral 8x22B so
the previous model was Mixtral 8x7B so it was
eight separate models that the router
called upon to get the prompt answered
and each of the models that it was
calling upon was a 7 billion parameter
model each this new one has eight
experts but each expert instead of being
a 7 billion parameter model is now a 22
billion parameter model so it was just
trained on a lot more data according to
this breakdown the new model features a
65,000 token context window and a
combined total of 176 billion parameters
I haven't used this model yet myself
channels like Matthew Berman's channel
does a really really good job of testing
these large language models I have a
feeling that this one is going to be the
strongest open-source model once some
more of the tests come out around it but
we have even more large language Model
news even more coming from Google Google
released Gemini 1.5 their closed Source
model but they also rolled out new
versions of Gemma which is Google's
open-source large language models these
two new models are code Gemma a model
fine-tuned for using with coding and
recurrent Gemma which is designed for
more efficient research purposes now all
of the Articles and resources I make in
this video I will share in the
description below so if you do want to
dive deeper into how these Gemma models
compare on the benchmarks against other
models check out the links below but
overall Gemma appears to be pretty on
par with the other open-source coding
specific large language models and in
the final bit of large language Model
news we also learned this week that meta
is really close to releasing llama 3
llama 3 is expected to be roughly as
good as GPT 4 but open source and made
publicly available for anybody to use
and fine-tune and build on top of now
they did say with llama 3 they were
going to release several different
versions of the model it kind of sounds
like what Claude did with Haiku Sonnet
and Opus there will be different models
sort of more fine-tuned for different
reasons that's my takeaway from this as
well according to TechCrunch they
announced that this is coming out
sometime within the next month hopefully
sooner so really looking forward to that
because I truly am rooting for both the
open source and the closed source side
to keep on pushing the boundaries and
each side is just making things better
and better for us the consumers now I
know a lot of people that watch this
channel have small to medium businesses
and they're looking to use AI to improve
their marketing and their business
that's why for this video I partnered
with HubSpot so that I can get your
eyeballs on their completely free report
all about how AI is completely
redefining startup go to market strategy
I'm going to put the link in the
description so you can download this
report but this is a must read for any
small to medium business that plans on
using AI as part of their startup
strategy you're going to learn about the
various strategies that startups are
using to bring their products to Market
you'll also learn about the most popular AI
tools and best practices for scaling and
if you know me I love me some AI tools
the free report also covers how AI is
driving startup scalability and drawing
the attention of investors as well as
the future of AI within go-to-market
strategies of course my favorite section
as a guy who made a business around
curating amazing tools is this section
all about the tools and best practices
that HubSpot recommends for your
go-to-market strategy again this free report
was provided by HubSpot who is
sponsoring this video so thank you so
much again to HubSpot for sponsoring
this and once again the link should be
right at the top of the description to
make it easy for you to find now if the
main story of the week was all of the
large language models that have come out
this week the sort of B story of the
week the Side Story is it seems that all
of these companies that are building
these large language models are all
trying to reduce their reliance on Nvidia
gpus at the moment Nvidia owns the
market on gpus being trained for AI but
Google Intel and even meta are all
trying to bring that chip generation
inhouse and stop giving so much money to
Nvidia at the Google Cloud next event
this week Google introduced their Axion
processors Intel introduced their Gaudi
3 AI chip which is apparently a 40%
better power efficiency than nvidia's
h100 gpus and meta announced a chip as
well this new chip is called an
MTIA or meta training and inference
accelerator this is the second
generation of the Chip and according to
meta's article here it is three times
improved performance over the first gen
chip now again I will link to all these
articles that talk about all of these
chips that were announced this week
because a lot of this sort of technical
stuff here is a little bit over my head
I don't personally understand how these
chips work but if you're somebody that
wants to Deep dive and truly understand
what makes these chips better than
what's available feel free to read the
articles in the description however back
at Nvidia GTC earlier this year Nvidia
announced their next iteration the
Nvidia Blackwell which is supposedly
already four times more powerful than
the h100s again the chip that is sort of
the industry standard for training AI
models right now so while all these
companies are making their own chips to
reduce reliance on Nvidia Nvidia is
still way out ahead with their latest
generation of chips making it extremely
hard for these other companies to catch
up with the compute power that Nvidia is
producing right now also during the
Google next event Google revealed Imagen
2 this is sort of like Google's
answer to OpenAI's DALL·E or Adobe's
Firefly it's their own internal AI image
generation model however what makes
Imagen 2 different than tools like
DALL·E and Firefly is that it can
actually generate animations it can
generate GIF files or GIF files however
you like to say it here's some examples
of the types of animations that it will
make you know similar to what we get out
of like Pika or Runway but these are
very short to 3 second clips and they
seem to be designed to just make like
short Loops or like little GIF GIF files
they described it as text to live image
so if you use an iPhone you've got that
live photo feature where when you take a
photo it sort of saves like two seconds
of video as well so that you can find
that right spot in the photo it
seemingly is designed to generate that
kind of little teeny tiny short clip that
wasn't the only video announcement that
they made at this event they also
revealed Google vids now this is
something that we don't have a lot of
information about they put out a little
like teaser video it's a minute and 27
seconds here and it appears to make
videos that look almost like PowerPoint
Style videos using AI says let's choose
a style and then you can see the styles
of the videos Almost look like you know
something you'd get out of canva or
PowerPoint or keynote or something like
that you pick your style you can give it
a script or let AI generate a script and
then it creates like a slide
presentation Style video very
reminiscent of like a PowerPoint or
Google slides video again not available
yet but it does say coming soon to
Gemini for Google workspace but here's
some research that came out this week
about a new video generator it's called
Magic time and it makes timelapse videos
this one is very specific to time lapses
so you can see some differences here
where a prompt given to a normal
animated video like bean sprouts grow
and mature from seeds generate something
like this where this magic time style
generates this like timelapse version or
Construction in a Minecraft virtual
environment this one shows almost like a
drone rotating view around it where
magic time is making this time lapse and
you can see a couple other examples here
again it makes a very specific type of
video which is amazing for people like
me who love to make videos cuz I can use
stuff like this for b-roll their GitHub
page which will be linked below has a
lot of examples of the type of
animations it can do but the coolest
part is the code is available on GitHub
so if you know what you're doing you
could run it locally or on a cloud but
they also have a hugging face demo that
you can play around with right now but
for now let's take a peek at one of
their like cached models here so I'll
click on this and this was a prompt of
cherry blossoms transitioning from
tightly closed buds Etc and if I play it
that's what that looks like and since
we're talking about video if you
remember last week I talked about how
Neal Mohan the CEO of YouTube said that
if open AI trained on their videos that
would be a clear violation of their
policies their terms well according to a
report from The New York Times open AI
transcribed over a million hours of
YouTube videos to actually train GPT-4 now
there hasn't been a ton of confirmation
if you read the article it's a lot of
hearsay and Google claims to have looked
at the robots.txt file on YouTube
and saw that open AI probably was
scraping data but there was no real
actual confirmation but pretty soon we
could have a law that forces the AI
companies to reveal what the models were
actually trained on so there was a new
bill introduced to the Congress on
Tuesday that intends to force artificial
intelligence companies to reveal the
copyrighted material they use to make
their generative AI models the bill
would actually Force companies to file a
report about what copyrighted material
they used at least 30 days before
actually releasing that AI model no clue
whether or not something like this will
get passed but seeing as how some of the
biggest companies in the world namely
Google Microsoft and meta may not want
to reveal what data they actually
trained on and being as powerful as they
are I would imagine there could be some
lobbying going on behind the scenes to
keep bills like this from getting passed
but I'm just speculating here Adobe on
the other hand is taking a completely
different approach they're actually
willing to buy data off of creators to
train on for their AI models
Adobe wants to create their own version
of Sora and in order to do that they
need a large amount of video training
data well in order to get that video
training data Adobe is offering to pay
between $3 per minute and $7 per minute
to purchase video content off of
creators Adobe is looking for everyday
things like people riding a bike or
walking down the street or the types of
things you'd see in normal stock video
so if you like to go out into the world
and film stock video footage you might
be able to make some extra money by
submitting it to Adobe and letting them
pay you a few bucks per minute of video
I think if a bill like we just looked at
does end up getting passed this is
probably the way of the future for how
these large language models will get
trained meta also announced this week
that they're going to take stronger
measures to identify AI generated photos
basically when people upload photos to
places like Facebook Instagram or
threads meta is going to use their own
AI detector to look for things that are
commonly present in AI photos and try to
identify when the photo or image was
made with AI previously meta was able to
label photos that were made with their
own emu AI image generator as well as
identify AI photos that the uploader
specifically marked as AI but now it
appears that they're actually going to
use AI to try to spot AI another big
story of the week was udio udio is a
really really good AI music generator
you give it a prompt you can suggest
styles you can decide whether you want
to write your own lyrics have AI
generate lyrics or omit lyrics altogether
and the outputs are really good like a
lot of songs I've heard I would not have
known that they were AI unless somebody
told me that they were AI they are that
good now I'm not going to go too deep
into udio in this video because the
video that I released the day before
this video went live was all about AI
music generators with a very heavy focus
on udio so if you really want to learn
about udio check out this video here but
one really interesting thing about udio
before I move on is that it's actually
being supported by other musicians
musicians like will.i.am and Common and it
has backing from people like the
co-founder and CTO of Instagram and the
head of Gemini at Google and a16z is
backing it so lots of money getting
pumped into this and musicians actually
seem to be supporting this platform so
something really interesting to watch
here and since we're on the topic of AI
music Spotify is rolling out a new
feature where you can have it generate a
playlist for you using AI you just give
it a prompt like I need pump up music
for the gym and it'll make a playlist of
songs it thinks are pump up music for the
gym or I need sad music because I'm
painting a sad picture and I want to
hear sad music and it will generate a
list of sad music for you nothing super
mind-blowing or groundbreaking but I
thought I'd share since we're on the
topic of Music anyway now let's talk
about AGI for a second Elon Musk thinks
it's coming within the next year in an
interview on X Spaces Elon Musk said that
artificial intelligence will be smarter
than the smartest human probably by next
year or by 2026 so sometime within the
next year
and a half he believes we're going to
essentially have AGI Yann LeCun one of
the godfathers of AI and Meta's chief AI
scientist thinks otherwise he believes
that large language models will never
actually reach human level intelligence
in his article he's not saying that AI
can never reach human level intelligence
he just doesn't believe that large
language models the current standard
that we're using for AI right now are
going to reach human level intelligence
back in
February we talked about how meta is
working on V-JEPA this is meta's new
architecture for self-supervised
learning this is the technology that Yann
LeCun actually believes will at some
point hit human level intelligence this
week the much anticipated Humane pin
started getting into the hands of
consumers if you don't remember it's a
little pin device that goes on your
shirt it has a little projector that can
project things onto your hand it has a
camera it has a microphone so it listens
to voice commands and it's designed
to be sort of a replacement for a
smartphone and so far the reviews
haven't been super favorable here's a
quick recap as of right now the Humane
pin is an incredibly poor proposition so
no I can't recommend the AI pin in the
form in which I received it I can't
imagine a world or a use case where
someone would prefer this over what
already exists the pin is not worth the
money not yet and probably not anytime
soon you definitely should not buy it
planning to replace your phone now the
biggest complaints about the Humane pin
were that it really doesn't do anything
beneficial over your smartphone people
complained about holding up your hand
like this to get through menus and stuff
actually gets fairly tiring on your
hands they also said that in bright
light you can barely see the projection
so it's hard to use people complained
that the gestures were confusing and
complicated there's no privacy if you
are actually trying to prompt it you
have to speak out your prompts so
in public it just feels awkward to be
talking to a little computer on your
chest but the biggest complaint that
everybody made about it was that it is a
$700 product with a $24 per month fee
and then if you ever cancel that $24 per
month fee the product just stops working
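just to put that pricing complaint in numbers, here's a quick sketch; the $700 price and $24 per month fee are from the reviews, but the ownership periods are my own examples:

```python
# Total cost of owning the Humane AI Pin over time: $700 up front
# plus a $24/month subscription, and reviewers say the device stops
# working if the subscription is cancelled. Time horizons below are
# hypothetical examples.

DEVICE_PRICE = 700  # one-time hardware cost in dollars
MONTHLY_FEE = 24    # required subscription, dollars per month

def total_cost(months):
    """Cumulative dollars spent after a given number of months."""
    return DEVICE_PRICE + MONTHLY_FEE * months

for months in (12, 24, 36):
    print(f"after {months} months: ${total_cost(months):,}")
```

so by the two-year mark you're closing in on $1,300 for a device that, per the reviewers, still does less than the phone already in your pocket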
it's a paperweight. That said, every
single person that talked about the
Humane AI Pin said that they thought
that the technology was really really
cool just not very practical or usable
yet and that it still might get to
something that people actually want that
could replace a phone it's just nowhere
close yet finally I thought this one was
a kind of fun article about somebody
making a lot of money using AI a card
game developer paid an AI artist
$90,000 to generate card art because no
one was able to come close to the
quality that he was delivering now I
have a little bit of an issue with them
saying they're an AI artist cuz I think
the better term is AI assisted artist I
don't actually know if like an AI artist
is a thing because if you're just
pressing a button and letting AI
generate the output I don't really feel
like you are much of the artist however
I do really appreciate AI assisted art
where these cards here somebody
generated the images for every single
card using AI but then they went and
touched it up using Photoshop or
whatever image editing tool they use to
make sure that the colors the
consistency of the characters all the
styles matched there weren't extra
fingers all the weird stuff that we get
with AI they actually had to manually go
in and fix so AI generated the initial
sort of rough draft of the image but
then the artist actually then went and
made it into what the card designer
wanted AI helped them make a lot more
images at scale but the artist still had
to do work on all of the images so it's
more AI assisted art not really AI
generated art I don't know that's a
little bit of a soap box I'll step off
it now I just thought it was really cool
that somebody was using an AI art
generator to make the initial concept
then they cleaned it up with Photoshop
they did this at scale and earned
$90,000 from the company that hired them
to do this and finally the last
announcement this is more of a selfish
announcement but our podcast the next
wave is officially launched you can find
the first episode that we did with
Aravind from Perplexity it is available on
YouTube Spotify Apple Podcasts anywhere
you listen to podcasts I highly recommend
the YouTube version because the editors
did an amazing job of putting some cool
graphics and overlays on it it's a
really fun video to watch but if you're
driving going for a workout doing
whatever you do and you like to listen
to audio podcasts it is available in
pure audio form as well HubSpot who yes
is the sponsor of today's video but is
also the producer of this podcast is
doing a really cool competition they're
giving away Apple Vision pros and all
sorts of cool stuff for subscribing
liking reviewing doing all of that kind
of stuff with the podcast I don't have
all the details on the competition yet but
make sure you subscribe to the podcast
and like it and maybe leave a review and
the good people at HubSpot could
potentially hook you up not to
mention I might be a little biased but I
also think the podcast is really good
with videos like this I spend a very
short amount of time per topic with
something like the next wave podcast me
and my co-host Nathan Lands we get to
deep dive have longer form conversations
talk more about the ethics and the
implications and the long-term timelines
that we see with this technology so it
really gives us that platform to go much
longer much deeper and bring on really
amazing guests who could help us and you
better understand the stuff that we're
talking about in this crazy fast-paced
AI world that we're in that's my pitch
for the podcast check it out I'll make
sure again it's linked up in the
description it's called the next wave
podcast I really think you're going to
enjoy it and that's all I got for you
today if you haven't already check out
FutureTools.io where I curate all the
latest and coolest AI tools all of the
most interesting AI news that I come
across and I have a free newsletter
which will keep you in the loop with
just the coolest tools and the most
important AI news that I come across and
if you sign up you get free access to
the AI income database a cool database
of interesting ways to make money with
all of these various AI tools I'm going
to have to add recording video for AI
training for Adobe to the list pretty
soon but check it all out it's over at
FutureTools.io if you like videos like
this you want to stay in the loop with
the news the latest tutorials the latest tools
all that good stuff make sure you like
this video subscribe to this channel and
I'll make sure it keeps on showing up in
your YouTube feed thank you so much for
tuning in thanks again to HubSpot for
sponsoring this video you all rock I
appreciate you letting me nerd out over
on YouTube and actually enjoying
watching it for some reason I don't get
it but I'm having fun I hope you're
having fun let's do it again I'll see
you in the next one
bye-bye