[ML News] Groq, Gemma, Sora, Gemini, and Air Canada's chatbot troubles

Yannic Kilcher
1 Mar 2024 · 42:34

Summary

TLDR: This episode of ML News covers a range of significant developments in AI and machine learning over the past two weeks. Highlights include Google's release of Gemma, a set of open models named after Gemini with impressive performance metrics, and the controversy surrounding Gemini's image generation biases. Additionally, Groq's new hardware for serving language models rapidly, Nvidia's supercomputer EOS, and the emergence of AI-generated images in peer-reviewed journals are discussed. Other topics include the EU's AI Act, OpenAI's video generation model Sora, and various innovations in AI applications from image generation to legal implications of chatbot errors. The episode also touches on recent AI research, including a new technique called ring attention, and the potential impact of AI on various industries.

Takeaways

  • πŸ˜€ Google released Gemma models - smaller, more efficient language models that outperform comparable models
  • πŸ€– Groq built a fast specialized card for serving language models, enabling new use cases
  • πŸ“ˆ Nvidia unveiled a supercomputer with 18.4 exaflops of performance to power AI
  • πŸŽ₯ Sora by OpenAI can generate convincing 1-minute video clips
  • πŸ“š Gemini 1.5 Pro shows strong performance across its 1 million token context
  • πŸ‘€ A peer-reviewed paper included AI-generated nonsensical images
  • πŸ“œ The EU's AI Act categorizes applications into risk levels with regulations
  • 🌍 Cohere released Aya, an open multilingual model covering 101 languages globally
  • πŸ–Ό Stability AI announced Stable Diffusion 3 for improved image generation
  • 😊 Eyelash application robot uses CV and robotics to precisely apply lashes

Q & A

  • What new large language models did Google recently release?

    -Google released Gemma, which are open models with 2 billion and 7 billion parameters. They outperform comparable LLMs like LLaMA 2 and are available for some commercial use.

  • What hardware development allows for faster language model inference?

    -A company called Groq built a new card optimized for language models that allows over 500 tokens per second on a 7 billion parameter model, enabling new use cases.

  • What does Demis Hassabis say is needed in addition to scale to reach AGI?

    -Demis believes you need several more innovations in addition to maximum scale to reach AGI, as scaling alone will not lead to new capabilities like planning, tool use, and agent-like behavior.

  • What does the EU's new AI law regulate?

    -The EU AI Act categorizes applications into risk levels and ties requirements to those risk levels. The highest risk category, called unacceptable risk, bans certain uses of AI like inferring sensitive characteristics from biometric data.

  • What multilingual language model did Cohere release?

    -Cohere launched Aya, an open source 7 billion parameter model covering 101 languages, along with a large accompanying multilingual dataset.

  • What new advancement in text-to-image models did Stability AI announce?

    -Stability AI announced Stable Diffusion 3, which uses a diffusion transformer architecture for improved performance in areas like multi-prompt image generation and spelling ability.

  • What data licensing deal did Reddit make?

    -Ahead of its IPO, Reddit signed a $60 million annual content licensing deal with an unnamed large AI company to make use of data from Reddit posts.

  • How could AI help visually impaired people?

    -Robot guide dogs built with computer vision and other sensors to help with navigation and safely getting from point A to B could help address shortages in availability of service animals.

  • What new way to interact with computers does OS-Copilot explore?

    -The OS-Copilot paper looks at an AI agent that can interact with a computer OS via natural language to open apps, fill out forms, etc., behaving more like an assistant.

  • What product is Apple developing to rival Github Copilot?

    -Apple is reportedly developing AI auto-complete features inside Xcode, its iOS/Mac development environment, to compete with Github's Copilot coding assistant.

Outlines

00:00

πŸ€– Google releases smaller AI models Gemma

Google has released Gemma, smaller 2 billion and 7 billion parameter AI models that outperform equivalent LLaMA 2 models. They are openly accessible under limited commercial-use terms. Google likely released these models as a marketing ploy to regain industry leadership.

05:01

πŸ’» Groq AI hardware processes language models lightning fast

Startup Groq built a new hardware card optimized for language model inference that achieves extremely high throughput on large models, but has limited onboard memory, requiring hundreds of cards to serve a single large model.

10:03

πŸ€” Demis Hassabis says scale alone won't lead to AGI

DeepMind CEO Demis Hassabis believes scale is important but other innovations will be needed to achieve artificial general intelligence capabilities like planning and tool use.

15:05

😨 AI safety experts warn AI could destroy humanity soon

AI safety experts like Eliezer Yudkowsky continue making dramatic warnings about AI, but provide little concrete evidence, instead using rhetorical devices.

20:05

πŸŽ₯ OpenAI's video generator Sora creates high-quality clips

OpenAI demonstrates Sora, their video generation model that can create high-quality 1-minute clips and manipulate video based on text prompts, but access remains extremely limited.

25:07

πŸ“„ Research article uses AI-generated images without disclosure

A peer-reviewed research article included AI-generated images without proper disclosure. Reviewers raised concerns but editors failed to enforce fixes before publication.

30:09

πŸ‘ Cohere releases massive 101 language AI model and dataset

AI startup Cohere publicly released Aya, a large multilingual language model trained on a new dataset spanning 101 languages, to advance global AI research.

35:10

πŸ”’ Reddit signs big money AI content licensing deal

Ahead of its IPO, Reddit reportedly signed a ~$60 million annual deal to license content to an unnamed AI company, after restricting open data access.

40:14

πŸ• AI-powered robot dogs to aid visually impaired people

Four-legged robots equipped with computer vision and other sensors could serve as lower-cost guidance aids for blind people given shortages of service animals.

Keywords

πŸ’‘Gemma

Gemma refers to a set of open models released by Google, which are smaller in size compared to the largest language models, with 2 billion and 7 billion parameters. These models are noted for outperforming LLaMA 2 models of similar sizes, showcasing their efficiency and capability. In the context of the video, Gemma represents a significant development in the machine learning and AI landscape, highlighting Google's ongoing contributions to making advanced AI technologies more accessible and performant.

πŸ’‘Bias correction

Bias correction in the context of AI and machine learning refers to the methods and techniques used to address and mitigate biases in AI models, particularly those related to the representation of people. The video discusses an issue with Gemini's image generation, specifically its refusal to generate images of white people, illustrating a case where bias correction mechanisms might be behaving unexpectedly or excessively, sparking discussions about the balance between bias mitigation and model behavior.

πŸ’‘LPU

LPU stands for Language Processing Unit, a term introduced by Groq, a company mentioned in the video. An LPU is specialized hardware designed to serve language models with exceptional speed and efficiency. The video highlights Groq's achievement in creating an LPU that significantly outperforms traditional GPUs in terms of latency and throughput, emphasizing the importance of hardware innovation in the advancement of AI and machine learning technologies.

πŸ’‘EOS

EOS in the video refers to a supercomputer unveiled by Nvidia, composed of multiple DGX H100 systems. This system is noted for its remarkable computational power, ranking in the top 500 supercomputers of the world. The video uses EOS to illustrate the scale and power of modern computing infrastructure in supporting advanced AI and machine learning applications, highlighting the ongoing race in computational capabilities.

πŸ’‘Sora

Sora is a video generation model developed by OpenAI, capable of creating clips up to one minute long. The video discusses Sora's ability to produce highly realistic and detailed video content, emphasizing the model's innovation in the field of AI-generated media. Sora's development and capabilities represent a significant milestone in the evolution of content generation technologies, pushing the boundaries of what AI can achieve in creative domains.

πŸ’‘AGI

AGI, or Artificial General Intelligence, is a level of AI that can understand, learn, and apply knowledge across a wide range of tasks, similar to human intelligence. The video references a discussion on AGI, noting that while scaling up existing AI technologies might improve performance, it may not lead to AGI. This highlights the ongoing debate and research focus on understanding what advancements are necessary to transition from specialized AI to AGI.

πŸ’‘Peer review

Peer review is the process of evaluating the quality and credibility of research by experts in the same field. The video mentions an incident where a peer-reviewed journal article included AI-generated, nonsensical images, leading to questions about the effectiveness of the peer review process. This incident underscores the challenges and complexities of maintaining quality and integrity in academic publishing in the age of AI.

πŸ’‘AI Act

The AI Act refers to the European Union's legislative framework for regulating artificial intelligence, emphasizing risk categorization and compliance requirements for AI systems. The video discusses the impact of the AI Act on AI research and deployment, highlighting the act's role in shaping the development and use of AI technologies in compliance with legal and ethical standards.

πŸ’‘AI-generated content

AI-generated content refers to any media or text created by artificial intelligence systems. The video discusses Reddit's move to license its user-generated content for AI training, emphasizing the growing interest and value in AI-generated content across various platforms and industries. This trend illustrates the increasing integration of AI in content creation and the potential implications for copyright, creativity, and data ownership.

πŸ’‘AI ethics

AI ethics encompasses the moral principles and practices guiding the development and use of artificial intelligence. The video touches on various ethical considerations, such as bias correction, the responsibilities of AI developers, and the societal impact of AI technologies. These discussions underscore the importance of ethical considerations in AI research and application, ensuring that AI technologies benefit society while minimizing harm.

Highlights

Google released Gemma, smaller and more efficient open language models

Groq built a card that can serve language models extremely fast, over 500 tokens/sec

Nvidia unveiled a supercomputer with over 4,600 GPUs, ranked #9 in the world TOP500

Sora by OpenAI can create impressive 1-minute video clips and manipulate existing videos

Gemini 1.5 Pro shows strong performance taking an entire movie into context and summarizing

EU AI Act categorizes AI risk, bans certain uses like inferring sexual orientation

Cohere released Aya, an open multilingual LM and dataset for 101 languages

Stability AI announced Stable Diffusion 3 for better image quality and spelling

Reddit signed $60M AI content licensing deal, restricting API access

Seeing eye robot dogs are shaping up to help visually impaired people

New OS-Copilot paper shows an AI agent interacting with a computer OS via natural language

Report suggests Apple plans AI coding features to rival GitHub Copilot

Anthropic introduced prompt filtering for election integrity

Robot uses CV and robotics to precisely apply fake eyelashes

Critics raise concerns about proximity to eyes and allergic reactions

Transcripts

00:00

Hello everyone, welcome to ML News. We're going to take a quick, brief, tiny look at what happened in the last two weeks in the world of machine learning, AI, and, I guess, pretty much everything nowadays. First, I've already mentioned this a little bit in a previous video, but Google released Gemma. Gemma is, I guess, a variant of the name Gemini. These are open models that come in smaller sizes than the largest language models: 2 billion and 7 billion parameters. They outperform the respective Llama 2 models at the same sizes, and even at slightly bigger sizes, so they are quite performant. They are available, they're openly accessible, you can use them under some limited circumstances for commercial activity, and they released a technical report on how they built them. Now, these are not the same as Gemini 1.5 with the 1 million token context length; I believe their context size is about 8,000 tokens, so in architecture they are quite similar to Llama models. As I said previously, this is essentially, I believe, a marketing ploy from Google, releasing these models. They're already topping the leaderboards, so all in all a very good development, I think.

01:25

Although all of this has been overshadowed last week, and if you've seen my last video you will know, by people discovering that Gemini image generation is a bit wacky when it comes to bias correction and representation of people: straight up refusing to generate any images of white people, and things like that. One interesting development, again, watch that video if you haven't seen it, is that the product lead from Google, who essentially came out and said "oh, we're sorry, we were made aware of some historical inaccuracies, we will fix those," has apparently made their X (or Twitter, whatever) account private. Now, it's totally conceivable that douchebags have started harassing the person, or just spamming and so on, so that's totally fine, but it was not a good look. It was gaslighting in the highest degree: let's just say the problem is "historical inaccuracies" and not the obvious problem with the thing. So I just thought that was an interesting development; again, if you want to know more, watch that video.

02:41

I also found the development around this a little bit funny. A day later, apparently, they refused to generate images of people at all, Gemini just saying "I cannot generate images of people." That was their hot fix, their patch: no images of people at all until we fix this problem. If you then said "I've seen you produce images of people," it answers: "It is important to clarify that I have never been able to generate images directly." So I'm not sure; it would be interesting to know what the exact prompting behind this was and the changes being made here. It's also not a given that this is true, right, this is just an LLM doing its LLM thing. But I do find it interesting, this new world where software patches are essentially prompt changes, and then the interactions with those just make for hilarious content.

03:39

All right, Groq is all the rage now. Groq is a company that is, as far as I know, spun off from Google's TPU group, if I'm informed correctly, and they have built a card that can serve language models really, really, really quickly. So, "write a long novel": this is Mixtral 8x7B, and you see it runs at like 532 tokens a second. This is insane speed, and it makes new use cases accessible with these language models. Very, very cool. This is really special hardware; I'm sure there are some software tricks and algorithmic tricks, but this is special hardware. People are talking about the insane, insane speed of Groq. Groq says they have this LPU, this Language Processing Unit. So it's not a GPU, it's an LPU, and the difference to something like an Nvidia GPU is that they have a different kind of memory. Here you can see latency and throughput; this is a benchmark from a third party, and they had to extend their axes just to accommodate how fast Groq is and how much throughput it has. It's pretty insane.

05:00

However, there is a trade-off. As I said, they use different memory than a regular GPU would, and that makes it such that each card only has a very small amount of memory. So you need a lot of these cards in parallel to even serve one of these big models. Now, you can achieve massive throughput, obviously, economies of scale kicking in, but you can't just get one of those cards and then serve a large model. And that's where people quickly realized: hey, okay, it might not be the wonder weapon here. It is very cool, but each chip only has about 200 megabytes of SRAM, and therefore you would need, I don't know, hundreds of cards just to serve this Mixtral model that we've seen before. Again, with the higher throughput it might be totally worth it if you're a data center owner; throughput over time, you see the graph on the top right here, that's Groq, that's pretty insane. People calculate that you need about 320 of these cards, or two full racks, to serve a single Llama 70B, and if you calculate the cost of these cards, that'd be about $10 million US. So it's not the end-all that's going to solve everything, but it is definitely a very cool development to push language model inference forward.
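The card-count claim above can be sanity-checked with a back-of-envelope calculation. This is a rough sketch, assuming 8-bit weights and the ~200 MB of SRAM per chip mentioned in the video (both assumptions; a real deployment also needs memory for activations and KV cache):

```python
# Back-of-envelope: how many Groq cards to hold a 70B-parameter model in SRAM?
# Assumptions (not official numbers): 8-bit weights, ~200 MB usable SRAM per card.
params = 70e9                    # Llama 70B parameter count
bytes_per_param = 1              # assume 8-bit quantized weights
sram_per_card_gb = 0.2           # ~200 MB SRAM per chip, as mentioned in the video

weights_gb = params * bytes_per_param / 1e9   # total weight storage in GB
cards_needed = weights_gb / sram_per_card_gb  # cards just to hold the weights

print(f"{weights_gb:.0f} GB of weights -> ~{cards_needed:.0f} cards")
```

That lands at roughly 350 cards, the same order of magnitude as the ~320 cards (two full racks) people estimated.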

06:27

At the same time, Nvidia unveils EOS. TechPowerUp writes that this is essentially pulling together a bunch of their DGX systems to create a super duper computer: 576 DGX H100 systems wired together into one computer. Each of these DGX systems has 8 H100s, making for a whopping 4,608 H100 GPUs. Note, each of these puppies will cost you, I don't know what they cost right now, like 20K or something like this, or north of that. This is massive: it's ranked number nine in the TOP500 supercomputers of the world, with a staggering, staggering 18.4 exaflops of FP8 performance.

07:13

This website here I found pretty cool: gpulist.ai. It's by Andromeda, and it's essentially Craigslist, but for GPUs. People rent out their GPU capacity. It's also as shady as Craigslist, right: it's just a listing, and it just says, well, you'll get bare-metal access, and sometimes it says okay, you get SSH access, or something like this. But essentially it just allows you to contact these people and then make out a deal on how you're going to use these GPUs. This one seems fairly large; actually, there are a lot of H100s going around here. I'm not sure where people get these from, but sometimes, oh, there's 849 of them. Okay, this may be more common: Canada, Ethernet, one H100 available, it's on an Ubuntu VM, and you get minimum one week. So if you want some GPUs and you don't have super confidential data, because you are going to use other people's hardware, this might be a good option to find some good deals.
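The EOS figures above are internally consistent; here's a quick check, assuming (my assumption, not stated in the coverage) that the 18.4 exaflops number refers to FP8 throughput:

```python
# Sanity-check the EOS numbers: 576 DGX H100 systems x 8 GPUs each.
dgx_systems = 576
gpus_per_dgx = 8
total_gpus = dgx_systems * gpus_per_dgx      # should be 4,608 H100s

total_fp8_flops = 18.4e18                    # 18.4 exaflops (assumed FP8)
per_gpu_pflops = total_fp8_flops / total_gpus / 1e15

print(total_gpus, f"{per_gpu_pflops:.2f} PFLOPS per GPU")
```

That works out to roughly 4 PFLOPS of FP8 per GPU, which is in line with the per-H100 numbers Nvidia quotes.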

08:19

Wired has an interview with Demis Hassabis on how far you get with scale, and, I guess, just on the future of AI. If you read the interview, it's kind of a mix between "yeah, scale is great, we can do scale, scale is awesome, Gemini is awesome, these models are awesome," but also "scale only gets you so far, there needs to be something else." Demis says: "My belief is, to get to AGI, you're going to need probably several more innovations as well as the maximum scale. There's no letup in the scaling, we're not seeing an asymptote," yada yada, "there's still gains to be made. So my view is you've got to push the existing techniques to see how far they go, but you're not going to get new capabilities like planning or tool use or agent-like behavior just by scaling techniques. It's not magically going to happen."

09:06

It's very interesting, because I think that is a current contention. It's very easy to say "to get to AGI you need something else," because first of all, AGI isn't a defined term, and "something else" isn't a defined term, so you can redefine these two terms as you wish, and then you can always find something that's still wrong, or a way in which you're still correct. If you do that for long enough, your name will become Gary Marcus. But other than that, these are fairly concise predictions, saying okay, you're not going to get planning or tool use or agent-like behavior. They're not super defined, but we're already seeing tool use, for example, being built into these large language models and getting better with scaling. So it will be very interesting to see whether Demis turns out to be ultimately correct in his predictions, or whether one or the other of these things will be available just by scaling language models and training them on tool-use data and so on.
10:07

Tom's Hardware writes: legendary chip architect Jim Keller responds to Sam Altman's plan to raise $7 trillion to make AI chips, saying "I can do it for less than 1 trillion." We've gone off the rails. So Jim Keller, apparently legendary CPU developer, now working at a company that makes chips themselves, claims that he could do it for a lot less. Yes, I guess, I don't know: as soon as you go into money that's way beyond the current total market value of chips, I feel many claims can be made. It will be interesting going forward to see who takes the lead in chip development and how that's going to play out. In any case, I'm not sure if bickering about 1 trillion, 2 trillion, 7 trillion is going to make that big of a difference, from one legendary person to another legendary person. And "legendary" here spelled with a capital L.

11:05

"AI may destroy humankind in just 2 years, experts say." Of course, Eliezer Yudkowsky, saying "if you put me to a wall and force me to put probabilities on things, I have a sense that our current timeline looks more like 5 years than 50 years. Could be 2 years, could be 10." Well, could be anything. Like with the trillions, it is absolutely useless to make these speculations, and then, I don't know, saying things about a Terminator-like apocalypse and a Matrix hellscape. "The difficulty is, people do not realize we have a shred of a chance that humanity survives." Oh yes, of course, of course. Yudkowsky has, I think, retracted statements on bombing data centers, like that would be useful. In any case, read this as you would read a comic book, for entertainment. I feel like it at least makes you giggle; otherwise this serves no purpose at all.

12:09

Sora continues to dominate headlines: a video generation model by OpenAI. We've talked a little bit about this in the last news episode, but this can create single-shot clips up to 1 minute, I believe, and they look pretty, pretty awesome, I have to say. More and more examples come out of Sora creating pictures, creating clips, OpenAI marketing department in full gear. No, you don't have access to this model yet; a select few have access to this model, not you. You are just a pleb, you're not the chosen person, so marvel at other people using the cool thing, with the OpenAI marketing department having tight control over exactly which things go out to the public and which things don't.

12:55

What is interesting: here's an example of Sora scaling with compute, essentially saying the more compute they throw into one of these generations, the "better," the more realistic, I guess, it gets. So: base compute, 4x compute, 16x compute. However, they've also completely stopped giving us any sense of the absolute scale of things, so for now it's just "base compute," however much that is. In any case, what we can infer from that is that there is very probably an iterative process to generate one of these samples. So it's not a single forward pass of anything, but an iterative process, like you would be used to from diffusion: doing many, many steps across the span of time to refine and refine and refine the output.

13:49

What's also pretty cool are demonstrations of changing things: this being a base video, and it can not only generate things but also generate things according to some input, like an input video. In this case, people changing the surroundings of the car, or the car itself, like the vibe of the video and so on, while keeping the general motion, I guess, and the general concept clear. So I think that's pretty cool. Make it go underwater: yeah, why not, look at that. Or that nice rainbow road. Keep the video the same but make it winter. Animation style. Charcoal drawing: yeah, that should be black and white; not exactly, but close. Maybe it's one of those things where it's actually black and white but your eyes trick you, but I think I'm seeing color. Wait, yeah, no, this is definitely color. Actually, not so sure now. Okay, no, this is definitely red, the backlight, yeah, it's not fully black and white. Cyberpunk. Medieval, very nice: they had drones following cars in medieval times. Also the horse legs, they look... yeah, why not. Dinosaurs. Pixel art. So many cool things about Sora keep coming out.

15:11

Many cool things about Gemini 1.5 Pro keep coming out as well, especially, obviously, around the insanely large context size of Gemini 1.5 Pro. People are feeding very long things into it to see whether it can handle the long context: an entire code base, and then instructing it to code something on top of that. I think this is probably going to be one of the best applications for something like this: if you have something very long, like a code base or reference documentation, where the important parts would fit into a million tokens, being able to cross-reference things inside of that and then generate based on that is probably a very good use case. I know they can retrieve well across the 1 million tokens, kind of point to individual things if they need to retrieve them, but it will still be interesting to research how performant it actually is when you put more and more and more stuff into that context. My personal estimate would still be that putting fewer things into the context is more beneficial and will make stuff more accurate. What I can also imagine is that they trained it in such a way that they could have achieved better performance on small contexts compared to large contexts, but they traded that off to have equal, but worse, performance across the entire context length. Not yet clear, but it will be interesting to see.

16:35

This is pretty cool: feeding an entire short movie into this. What Gemini will do is take the movie, split it into frames, and then essentially tokenize the frames and use them as tokens. You can fit pretty long videos, you can see here a 44-minute-and-7-second video, into the context size of Gemini 1.5 Pro, because it can also consume images. And Matt Schumer here says it went straight from full movie to a summary in seconds: no transcription, no intermediate steps, just visual tokens to summary. Now, I've seen other people point out that the summaries it makes aren't always super duper accurate or well done, but it's still pretty impressive, and it speaks to what I said before: the main question is going to be, what are the dynamics and the characteristics of performance across this entire context window?

And concurrently, you see Berkeley coming out with a paper titled "World Model on Million-Length Video and Language with Ring Attention". This is an actual research paper that is, as I said, very concurrent to Gemini 1.5 Pro, doing retrieval experiments across very long contexts with what's called ring attention. If you're interested, an entire video on ring attention is in the making, so keep looking out for that. It's a cool new technique — as far as I can tell it is not an approximation to attention, but a way of actually scaling classic transformer attention across this huge 1-million-token context size by computing it block by block and passing key/value blocks around a ring of devices. People have come up with many, many different tricks for doing long attention, and this one seems quite promising. Phil Wang, also known as lucidrains, already has an implementation of ring attention up, even though the paper is super duper new. What's interesting to see in the readme of this repository is Phil saying: "I will be running out of sponsorship early next month. If you'd like to see this project get completed, sponsor me, or I will be leaving the open source scene for employment." So I just wanted to bring this to people's attention — and by that I mainly mean companies.
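The core of ring attention — computing exact attention block by block with running softmax statistics, so the full attention matrix is never materialized and key/value blocks can be streamed between devices — can be sketched in a single process. This simulates only the blockwise math, not the multi-device ring communication; shapes and block size are arbitrary:

```python
import numpy as np

def ring_attention_sim(q, k, v, block=4):
    """Single-process simulation of ring attention's core idea: each query
    block consumes key/value blocks one at a time (as if arriving from ring
    neighbours), keeping running softmax statistics instead of the full matrix."""
    n, d = q.shape
    out = np.zeros_like(q)
    for qs in range(0, n, block):
        qb = q[qs:qs + block]
        m = np.full(qb.shape[0], -np.inf)   # running row-wise max (for stability)
        l = np.zeros(qb.shape[0])           # running softmax denominator
        o = np.zeros_like(qb)               # running weighted sum of values
        for ks in range(0, n, block):       # the "ring" pass over kv blocks
            s = qb @ k[ks:ks + block].T / np.sqrt(d)
            m_new = np.maximum(m, s.max(axis=1))
            scale = np.exp(m - m_new)       # rescale old stats to the new max
            p = np.exp(s - m_new[:, None])
            l = l * scale + p.sum(axis=1)
            o = o * scale[:, None] + p @ v[ks:ks + block]
            m = m_new
        out[qs:qs + block] = o / l[:, None]
    return out

def full_attention(q, k, v):
    s = q @ k.T / np.sqrt(q.shape[1])
    p = np.exp(s - s.max(axis=1, keepdims=True))
    return (p / p.sum(axis=1, keepdims=True)) @ v

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(16, 8)) for _ in range(3))
```

The blockwise result matches full softmax attention exactly (up to float error), which is why this counts as a scaling trick rather than an approximation.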

CTV News Vancouver writes: Air Canada's chatbot gave a B.C. man the wrong information; now the airline has to pay for the mistake. Apparently a person went on the Air Canada chatbot, which is kind of powered by a large language model, with very specific questions about what are called bereavement fares — these are reduced fares provided in the event someone needs to travel due to the death of an immediate family member. So this person had lost a family member, wanted to travel because of that, and inquired about cheaper flights — specific fares, specific prices for this situation — and the chatbot gave them wrong information, saying that he could claim those fares even after the fact. And when he wanted to do that, the customer support people said: nope, that's not possible, you're not getting your money. Now the question is who is responsible, and the courts say that, in fact, Air Canada is responsible for the things their chatbot said and actually has to comply with what the chatbot promised. Air Canada tried to push off the responsibility: "Air Canada suggests that the chatbot is a separate legal entity that is responsible for its own actions." What? Oh no — the piece of software that we actively deployed on our website is a separate legal entity? Yeah, no, I get that as a lawyer you have to argue something, and in this case it was probably the last remaining thing you could even conceivably argue, but it's so ridiculous.

No way. If you deploy a piece of software, you are responsible for what that software does. Not if you program it, not if you make the open source library that's then part of it — if you deploy it, and it interacts with your customers, and it promises stuff to your customers, then you are responsible. And that's the same with every other piece of software as well; there's absolutely no difference between an LLM chatbot and anything else that executes code. It's always the same, and it's pretty funny to see Air Canada trying to weasel their way out of this. So no: Air Canada must honor the refund policy invented by the airline's chatbot. Obviously companies are trying to cut costs in their customer success operations; in this case they may want to calculate whether they don't incur more costs due to stuff that's invented by the chatbot. The damages here to be paid were about $600, plus — it says somewhere — the airline was ordered to pay $36.14 in prejudgment interest (interest, not tax) and $125 in fees, and probably the lawyers grabbed 10, 20, 50k out of the people involved. So, all in all, who came out ahead? The lawyers.

Lawyers should be fans of LLMs: the amount of litigation and the amount of contracting they'll have to do, just because people want to use, have used, or mistakenly used LLMs, is going to be staggering. In any case, this was more about the principle, I guess, than about the money, but it could be a thought: just let an LLM do your customer success, and if it promises something that doesn't exist, you just pay it out — it'll be like 600 bucks, as in this case. Maybe that's worth it; maybe the savings from not having to hire more people are totally worth it for these companies, I don't know. It'll be interesting to think about that future. Right now everyone's trying to guardrail everything, because they feel their customer success operation should continue as is: there are completely defined things — here are the things we pay, here are the things we don't pay, and so on — and the bot must not promise anything else. What if the mentality around that changes? It would just be: okay, here's a set of guidelines, we know the thing is going to hallucinate every now and then, and when it does, we'll just take it into account. I feel there are still laws against customers abusing that: if I were to go to the Air Canada chatbot and prompt-hack it into giving me stuff, I'm pretty sure a court would side with Air Canada — that's essentially me emotionally abusing a customer support rep until they promise to give me what I want. But other than that, it could be a totally viable future, and a fun one, if these things are not so strict.

Kareem Carr tweeting out — or X-ing out, I'm not sure what it's called: it finally happened, a peer-reviewed journal article with what appear to be nonsensical AI-generated images. This has become known as "giant rat balls". The pictures look from afar like they could be in a biology journal, but they make no sense, and the labels are mostly garbled — "rat", just... "rat". Yeah, we see that. So this article has pictures that were generated by Midjourney. Now, it's a bit more interesting than that. "Scientists aghast at bizarre AI rat with huge genitals in peer-reviewed article": first, this is a fairly reputable journal where this was published — not just a pay-5,000-bucks-and-you-will-get-published journal. Here is another picture; you see it rarely makes sense if you actually look closely. Second, the images were created by Midjourney and the authors acknowledge this in the paper — the authors themselves say the images were generated by Midjourney. Third, there were two reviewers, and one of the reviewers actually brought this up; apparently another reviewer — I'm not sure if it's the same reviewer or a different one — said they were only looking at the scientific content of the work and reviewed it on that basis. We also have a statement from the reviewer saying that they did raise concerns about the images. The journal says an investigation is currently being conducted, and this article in Vice details it: "Our investigation revealed that one of the reviewers raised valid concerns about the figures and requested author revisions. The authors failed to respond to these requests. We are investigating how our processes failed to act on the lack of author compliance with the reviewer's requirements." So even a reviewer said the figures need to be revised. It's a bit more tricky than just "a bunch of researchers tried to get a fake paper through and the reviewers didn't notice". It seems that ultimately — and I guess multiple people always contribute to these things — it was the editors who didn't make sure the authors actually changed the things the reviewers asked them to change, which let it go through and be printed as a paper. I guess mistakes like this happen; it's also probably very common to just assume the authors will concur with what the reviewers request. I don't know if they even said "yeah, okay, we'll change it" or something like that. But it is an interesting story, and the meme of the giant rat balls will forever live on in our hearts.

Andrej Karpathy said that he left OpenAI, assuring that it's not the result of drama or anything like that, just a change of scenery — saying that the last year at OpenAI was really great, the team is really strong, the people are wonderful, the roadmap is exciting, and we all have a lot to look forward to. He says his immediate plan is to work on personal projects and see what happens. And he immediately followed this up with a video explanation: two hours on tokenizers, illuminating the bizarre world of why reversing strings with LLMs is really, really difficult, why different languages give you different results, and so on. So if you want to explore a so-far, I believe, underexplored aspect of large language models, definitely look into Andrej's tutorial on tokenizers. Very cool — and as Andrej's explanations are very, very clear, you will know a lot more after this.
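The string-reversal weirdness falls out of tokenization directly: the model never sees characters, only merged subword IDs, so reversing its input units is not the same as reversing the string. A toy greedy longest-match tokenizer makes this visible (the vocabulary here is made up; real BPE merges are learned, but the effect is the same):

```python
# Hypothetical four-piece vocabulary for illustration only.
vocab = {"hel": 0, "lo": 1, " wor": 2, "ld": 3}

def toy_tokenize(text, vocab):
    """Greedy longest-match tokenizer (a crude stand-in for real BPE)."""
    tokens, i = [], 0
    while i < len(text):
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            raise ValueError(f"no token covers {text[i:]!r}")
    return tokens

tokens = toy_tokenize("hello world", vocab)   # ['hel', 'lo', ' wor', 'ld']
token_reversal = "".join(reversed(tokens))    # 'ld worlohel'
char_reversal = "hello world"[::-1]           # 'dlrow olleh'
```

Reversing the token sequence gives garbage relative to the character-level answer, which is roughly the failure mode Karpathy's tutorial walks through.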

Nature writes: what the EU's tough AI law means for research and ChatGPT. The EU AI Act is the world's first major legislation on artificial intelligence and strictly regulates general-purpose models. As you know, the EU AI Act has been in development for a number of years and is now finally coming into effect, rolling out, and so on. It's been changed quite a bit over the years — even I am not entirely sure what's in the current version and how much it's still going to change — but the approach is to broadly categorize applications into risk categories and then tie what you have to do to the risk category. The most risky things are called "unacceptable risk", and those are just banned — not allowed — for example, systems that use biometric data to infer sensitive characteristics such as people's sexual orientation. Also, what's hilarious: there's a threshold for when you have to do something, and it is 10^25 FLOPs — a completely arbitrary number that is probably going to be meaningless even before the AI Act has really rolled out. I could not make up worse advice for these policymakers if I wanted to. It's like: okay, let's pick a completely arbitrary number and say here, here is where we draw the line. You can almost transparently see the lobbyists going: what can we do that our competitors can't, and let's draw a nice line between the two as things stand for the next three years — and we don't care about anything after that. And regarding "unacceptable risk": do you realize that a basic linear regression would fall under this? The EU effectively now bans drawing a straight line across a few data points, if those data points happen to coincide with the data categories collected here. That is the level of dumbness these kinds of laws come to. Yes, I know, I'm pulling it to the extreme; I know this is meant for super duper transformers sitting in automated systems that make decisions about people's lives, and I see what the fear is. But I doubt that what they're trying to do, what they're intending to do, matches what the effect of this is going to be. I still believe the effect is just going to be more monopolization by bigger companies, making it harder for newcomers to enter this market, and giving governments more control over things — which they will probably not do good things with. Just an opinion.
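To give the 10^25 FLOPs threshold some sense of scale, a common back-of-the-envelope is the "training compute ≈ 6·N·D" rule of thumb (N parameters, D training tokens). The model and token counts below are illustrative choices, not figures from the Act:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training compute estimate via the 6*N*D rule of thumb."""
    return 6.0 * n_params * n_tokens

small = training_flops(70e9, 2e12)    # a 70B model on 2T tokens -> 8.4e23 FLOPs
large = training_flops(400e9, 15e12)  # a 400B model on 15T tokens -> 3.6e25 FLOPs

THRESHOLD = 1e25  # the AI Act's general-purpose-model compute trigger
```

Under this estimate, today's strong open models land well below the line while next-generation frontier runs cross it — which is exactly why a fixed number ages so fast.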

play30:10

AI launches Aya Aya is an open-source

play30:14

massively multilingual large language

play30:17

model and a data set built over 101

play30:20

different languages all across the world

play30:23

and this is one of the largest data set

play30:26

of instruction data that's around as I

play30:29

said it's a data set and a large

play30:31

language model all at once the data set

play30:33

is available the model is open access

play30:37

whatever that means right now I guess

play30:39

you can download the model because

play30:40

there's a button that says download the

play30:42

model hope your press found this on

I found this on Reddit and found it really interesting: regional prompting. This is a UI for the technique called GLIGEN, and I've linked to the repository. Very cool to use, and very exciting, the new things that are possible.

Aria Everyday Activities is a dataset, again released by Meta, that depicts, as you can see, everyday activities. It has first-person-view data, location data, and so on; Meta is actually pushing the metaverse, augmented reality, and datasets around that, so they're collecting a lot of data. As you can see right here, they have a rolling-shutter RGB camera with a 110° field of view, a 150° field-of-view camera for SLAM and hand tracking, infrared illumination, a barometer, a magnetometer, environmental sensors, spatial microphones, and so on — and then annotated data: per-frame eye tracking, 3D trajectories. They collect these datasets to be quite universal, so maybe you don't want to use all of it at the same time, but they enable a lot of different applications, which is very, very cool.

Stability AI announces Stable Diffusion 3, a text-to-image model using a diffusion transformer architecture for greatly improved performance on multi-subject prompts, image quality, and spelling abilities. They're not releasing anything yet; there is a waitlist for an early preview, which they say is for gathering insights to improve its performance and safety ahead of the open release. We've also come to learn from Stability that "open release" is going to mean you can use the model for research, but if you want to use it for anything commercial, you have to give them a bit of money. You can see a few examples here — a nice apple, "go big or go home"; the astronaut riding on things has become a bit of a meme. I mean, the quality is getting absolutely insane with these text-to-image models.

Aleksa Gordić is releasing Yugo LLM, a 7-billion-parameter large language model for Balkan languages — Serbian, Bosnian, and Croatian — and you can find it on Hugging Face right now. Very, very cool.

OpenMathInstruct by Nvidia is a math instruction dataset that you can freely use. Actually, "freely use" might be an overstatement: there is an Nvidia-specific license on it. I'm not a lawyer, I'm not going to tell you what it means; I personally think — no legal advice — you can use it freely. Again, not legal advice.

This article from Interesting Engineering I found very cool: it's a system that identifies drug-combination problems — interactions between different drugs, specifically as they transit the barrier in your gut. The problem is that researching any drug and what it does is already super expensive, but every drug you add to the regimen of available drugs could have interactions with all the other drugs that exist. This system uses a combination of machine learning and actual models of transmission — models of receptor behavior in the gut — to predict interactions between different drugs in terms of their uptake in the gut. I think that's very cool, and this direction — we already saw it with various DeepMind models — of having an actual expert model, domain-informed in a scientific domain, and combining it with machine learning to draw conclusions, is probably, I want to say, the next frontier. I feel the low-hanging fruit of "just throw a lot of data at stuff and it will give us results" has probably been taken already, and now it's really the combination of domain expertise and machine learning that is going to push ahead. So very cool, very excellent developments.

Bloomberg writes: Reddit signs AI content licensing deal ahead of IPO. That being said, this is all "a person close to the matter said": a large, unnamed AI company, a lot of dollars involved — about $60 million on an annualized basis — and yada yada yada. So this is all, I guess you'd call it, hearsay, but it's the chatter right now. Reddit recently made headlines by locking down their API access, no longer being really open to outside developers, in a clear move to protect their IP — which is users posting on Reddit — so that you can't just go via the API and grab all that data. And now the second move is that they themselves are going to make use of that data by licensing it out to other companies. Again, this is all just "someone familiar with the matter said", but still: Reddit realizes they sit on a treasure trove of information. That's already evident from people Googling "how to XYZ" and then just adding "reddit" to their search query, because they know they usually get okay answers on Reddit — which has had the counter-effect that marketing representatives now try to poison Reddit threads with nice-looking answers that ultimately link back to their product. Interesting dynamics. In any case, Reddit data may become a staple of one of the big AI companies, so we'll soon have all kinds of AI redditors around. Isn't that a great future?

New Atlas writes: the seeing-eye dog v2.0 is shaping up as a game-changer. This goes into the details of strapping assistive technologies on top of one of these four-legged robots in order to help blind and visually impaired people move around — safe passage from A to B, and so on. The article discusses that the main limitation here is actually the availability of service dogs in general: there are way too few guide dogs around for all the visually impaired people; they are expensive, they are rare, they need to be trained, and so on. These robots obviously don't. So yes, you can say they take away the jobs of good, hardworking guide dogs, I guess, but from all I can see here, actual guide dogs are still preferred over robot dogs; it's just that there aren't nearly enough of them. So these robot dogs are shaping up to become very capable and can help with a lot of things. Very cool developments.

This paper I found really cool: OS-Copilot, towards generalist computer agents with self-improvement. It uses agent-like behavior but interacts with your operating system, so it can do things on your computer just by you prompting it: opening applications, interacting with applications, even doing multi-step things. I think this is one of the ways we're going to interact more with computers in the future. I don't think the keyboard and programming — using text and so on — will ever go away, but things like web browsing or other simple tasks could be automated like this. I found this to be a lot more understandable than pure voice prompts, like just saying "Alexa, book a flight to XYZ"; I find voice and sound to be a kind of wonky interface for that. But if at the same time someone shows me "look, I'm now going to this website, I'm going to do this and that", I feel that is a much more viable interface — though then again, you could just click it yourself. I find GitHub Copilot to be an extremely good mode of interacting with an LLM, so if we transport that to this world, it would mean that I largely operate the computer, but I can tab-complete a lot of things: if there's a form to be filled out — yes, I know browsers already support me there — maybe I could tab-complete a lot more, or if there are standard interactions on a website, I just tab-complete them away. That mode of interacting with computers I'm looking forward to a lot. I'm not looking forward to a single prompt that then magically goes and does something for me; I don't think that's going to be a thing of the near future, and I don't think you'd be comfortable with a system like that.

play39:11

that Business Insider writes new report

play39:14

sheds light on Apple's upcoming AI

play39:16

features that will rival Microsoft's

play39:18

co-pilot now further down they say

play39:20

Microsoft's GitHub co-pilot writing code

play39:23

so it's not like the Microsoft Windows

play39:26

Co Co pilot or the Microsoft 365

play39:28

co-pilot there are too many co-pilots

play39:30

nowadays it is the apparently the GitHub

play39:34

copilot that Apple targets inside of its

play39:36

xcode environment uh so if you write

play39:39

Swift apps if you write iPod and iPhone

play39:43

apps and maybe even what's called Mac OS

play39:46

apps that might be really cool to have

play39:48

that available I do feel GitHub co-pilot

play39:50

does its job quite well for what it does

play39:52

for everything else I've never

play39:54

programmed Swift so I can't say that

TechCrunch writes: Anthropic takes steps to prevent election misinformation. Yeah, sure. They're making a bit of PR, I feel; they're using the opportunity of the election to say: oh, we have guardrails, we have Prompt Shield. Which — Prompt Shield, cool — is a technology that "relies on a combination of AI detection models and rules". Sure: you have a regex, and you have some prompt that says if the user asks for voting information, send them to this site. I guess it's good; if I were a company, I would try to use that as well.

play40:26

would try to use that as well and lastly

play40:28

AI comes to the world of beauty as

play40:30

eyelash robot uses artificial

play40:32

intelligence to place fake lashes this

play40:35

details how this robot can place um fake

play40:39

eyelashes in a more precise way than

play40:42

humans could and as far as I can see

play40:45

it's also a bit faster or cheaper or

play40:47

something like this this is a purely

play40:49

mechanical task that so far humans did

play40:52

by hand and now a robot can do

play40:55

artificial intelligence is a bit of an

play40:57

overstatement like they use computer

play40:58

vision to detect where the eyelids and

play41:00

the corners of the eyes and so on are

play41:03

which is really cool um but then there

play41:05

is oh there is the for and then there's

play41:06

the against and the against is oh no we

play41:09

have to be very careful um about this so

play41:12

there's potential risks uh the device's

play41:15

proximity to sensitive area could raise

play41:18

concerns about the risk of eye

play41:19

infections or allergic reactions to the

play41:22

materials used in the Lash extensions I

play41:24

guess they just they have someone

play41:27

someone just say well say anything bad

play41:29

about this like anything generic that's

play41:32

bad about this and this person was like

play41:34

I guess you could be allergic to the

play41:37

materials and they're like oh yes there

play41:39

is someone there's also potential

play41:42

risks I'm not I'm not sure I'm not sure

play41:44

I'm buying that you decide for yourself

play41:47

I feel having a machine do a purely

play41:49

mechanical task is fine it's not going

play41:52

to steal a lot of jobs I guess all good

play41:55

I just found it funny that the news

play41:57

article must have this structure but

play41:59

here is something new that technology

play42:01

can do but there is also risks with that

play42:04

being said there's also a risk that this

play42:05

video gets too long and with that I'll

play42:08

finish it thank you for watching

[Music]

bye-bye

[Music]