Rising Titan – Mistral's Roadmap in Generative AI | Slush 2023
Summary
TLDR In this video, Paul Murphy, a London-based partner at Lightspeed, talks about his firm's experience investing in AI and how he met Arthur, founder of the startup Mistral. Mistral is building foundational AI models with a different approach, aiming to let developers customize models and adapt them to specific tasks. The company has already released a 7B model that has drawn strong interest from the community. Arthur also discusses the importance of open-source models, AI safety, approaches to regulation, and the positive changes AI can bring to society, and closes by emphasizing the importance of European leadership in AI.
Takeaways
- 🌐 Mistral is developing foundational AI models differently from other companies, aiming for more open and accessible models.
- 🚀 Just six months after founding, Mistral has rapidly developed state-of-the-art models through efficient teamwork.
- 💻 Mistral's strategy is to focus on developers and offer smaller models suited to specific tasks.
- 🔍 Mistral's 7B model has drawn strong community interest, with a wide variety of applications.
- 🤖 To address AI safety and bias, Mistral publishes its model weights and enables customization.
- 📈 Mistral plans to announce model hosting services, new techniques, and a platform.
- 🌍 Talent acquisition in Europe is fiercely competitive, and Mistral focuses on hiring the best engineers and scientists.
- ⚖️ On AI regulation and safety, Mistral advocates an approach focused on the application layer.
- 🌱 AI can improve people's lives in areas such as healthcare, education, and creative thinking.
- 🇪🇺 European leadership in AI is essential for developing the technology in line with the region's values.
Q & A
When did Lightspeed start investing in Europe?
-Since 2007.
How much has Lightspeed invested in AI?
-Over roughly a decade, Lightspeed has invested in about 50 AI companies, totaling more than a billion dollars.
Who are Mistral's founders?
-Arthur, Guillaume, and Timothée.
What is Mistral's vision?
-To build foundational models differently from other companies, providing more open access so developers can specialize the models and make them their own.
What was Mistral's first goal?
-Building the 7B model.
How was the Mistral 7B model developed so quickly?
-By assembling an excellent team and dedicating 80% of it to datasets, the model was completed within three months.
How did the community respond to the Mistral 7B model?
-Very positively; many developers customized Mistral 7B to create new capabilities and applications.
What does Mistral plan for the future?
-Announcing new models and techniques, and launching a platform offering model hosting capabilities.
What hiring challenges does Mistral face?
-Recruiting the best engineers and scientists is a major challenge, as the landscape is highly competitive.
How does Mistral define the relationship between open source and AI models?
-Since only the model weights are published, Mistral describes its models as "open weight" rather than fully open source. This enables modification of the models, though not necessarily full understanding of them.
Outlines
🚀 Lightspeed's AI Investments and Mistral's Vision
Paul Murphy, a London-based partner at Lightspeed, introduces the firm's investment activity in Europe since 2007, highlighting nearly a decade of AI investment across some 50 companies. He expresses his high regard for the startup Mistral and its co-founder Arthur, and introduces their vision of building foundational AI models with a different approach. Arthur explains that, through the team's rapid execution and commitment to open-source models, Mistral is building a platform that lets developers specialize models and make them their own.
🌍 Community Engagement and Next Steps
Arthur describes how the community has adopted and built on Mistral's 7B model, citing examples of developers fine-tuning it on their own tasks and datasets and exploring new capabilities and topics. He also hints at upcoming models, techniques, and a platform through which Mistral plans to offer model hosting and fast inference.
🤖 Open Source and the Challenges of AI
On how the concept of open source applies to AI models, Arthur uses the term "open weight": publishing the model weights enables modification, but does not necessarily mean full transparency or understanding. He stresses that modification and customization are key to differentiation, and that an open approach is important for accelerating progress in science and technology. He also touches on bias control and improved interpretability.
⚖️ AI Regulation and Where Responsibility Lies
On AI regulation, Arthur distinguishes three separate concerns: product safety, national security, and existential risk. On roles and responsibilities, he argues that regulatory pressure should be applied at the application layer, which indirectly creates market pressure on model providers and encourages the development of more controllable, safer models.
🌟 AI's Potential for the Future
Arthur focuses on AI's positive potential, highlighting transformative change in healthcare, education, and the way we work. He expresses hope that AI will foster creative thinking and shape a new society, and notes how AI can contribute to global challenges such as climate change by accelerating scientific discovery and technological progress.
🇪🇺 European Leadership in AI and Why It Matters
Arthur discusses Europe's position in the advancement of AI and the importance of establishing technological leadership within the region. He emphasizes that Europe must drive technical progress and make proposals on both policy and technology in order to shape the technology around European values of democracy and society.
Highlights
Paul Murphy introduces Lightspeed and its investment history in Europe, emphasizing the firm's focus on AI with a portfolio of 50 companies and over a billion dollars invested.
Arthur shares the vision behind Mistral, aiming to make foundational AI models more accessible and customizable for developers, with a focus on open-source models.
Mistral's rapid development of a 7B model within three months of funding, highlighting the efficiency and dedication of the team.
The community's enthusiastic reception of Mistral's 7B model, leading to thousands of derivative works and integrations in various open-source projects.
Mistral's plans for releasing new models, techniques, and a platform for hosting models with fast inference capabilities.
Challenges in hiring top talent and building a community around Mistral's AI models, emphasizing the importance of community engagement.
Arthur's perspective on AI regulation, advocating for clear distinctions between product safety, national security, and existential risks.
The potential of AI in transforming healthcare and education through personalized and empathetic interactions.
AI's role in addressing climate change by enabling scientific breakthroughs in fields like chemistry and material science.
The significance of having a European champion in AI to shape the technology according to European values and democracy.
Mistral's commitment to open source as a core part of its philosophy, facilitating customization and addressing biases.
The distinction between open-source software and open weights in AI, and how Mistral balances competition and collaboration.
The need for independent regulatory bodies to ensure AI safety and prevent regulatory capture by large tech companies.
Arthur's view on existential risks of AI as speculative and the importance of empirical evidence in regulatory discussions.
The future of AI in enabling more creative thinking by automating routine tasks and structuring knowledge.
Transcripts
[Music]
Okay, welcome everyone, really nice to see you. Very happy to be back at Slush, especially this time with Arthur. My name is Paul Murphy, I'm a partner at Lightspeed, based in London. A real quick bit about Lightspeed for those that don't know: we're actually a Silicon Valley-based fund, but we've been investing in Europe since 2007, and we have over 30 companies now in Europe. We're investing in pretty much every sector and at every stage. We're talking about AI today, and I think it's important to put some context around that: from our perspective, we have actually been investing in AI for nearly a decade, we have about 50 companies, and we have invested over a billion dollars into the category. That context is relevant because when we met Arthur and his co-founders, we immediately fell in love with the vision of Mistral. So I thought the best place to start would be to ask you, Arthur, to tell us a little bit about what you're building.
Sure. Thank you very much, Slush, for the invitation, and thank you Paul as well. So yeah, we started Mistral six months ago with Guillaume and Timothée, and our vision was that we wanted to make foundational models a bit differently from the other companies. We've been in the field for almost a decade now, and we've seen it go from a cat-and-dog detector to something that is very close to being human-like intelligent, or at least looks like it. We knew that with a very dedicated team we could develop state-of-the-art models very, very quickly, and we could actually take the field toward something more open, where we would give more access to developers so that they could specialize the models, make them their own, make them as small as possible to solve their task. For us, the good way of doing that, and the good way of starting, was to ship the best open-source models, to create models that would be very easy for individual developers to use, and from there to build an enterprise play: to sell a platform that allows developers to take large language models and make them their own, to create some differentiation in the applications they're making. That differentiation is currently hard to achieve when you only access the APIs of a couple of providers, but if you have deep access to the models you can create things that are much more interesting, and this is what we want to enable.
So when we led your seed round, not that long ago, you told us that the first thing you were going to do was build your 7B model. Then, I think it was about three months from when we signed the docs on that round, we got a message saying, hey, it's ready. It was faster than we had expected, and it was already incredibly ambitious. I think everyone's probably wondering how you did that so quickly.

Well, I think the secret is to have a good team. We were joined by our first employees, a dozen of them, at the beginning of June, and nobody took holidays. We recreated what we call the machine learning ops system. That's actually fairly simple: you need to create a very good training code base; you need to create a very good inference code base to deploy the models; you need to be able to evaluate the models. And the one thing you need the most, where we actually dedicated 80% of the team for three months, is to have some very good datasets. So we went to the open web, took public-domain knowledge, curated it so that we could get the best of it, filtered it, did everything to get something very good, did some work on how to better optimize the models, combined all of this, and then trained the model to get the 7B. And we continue doing it with the new models we'll soon be announcing.

When you say it's fairly easy, I think maybe some people would disagree with you on that, but you definitely made it look easy. So I'm curious: the community has been very engaged with the 7B model since you released it; I think it was trending on Hugging Face for multiple days among the top models. What kinds of interesting things have you seen so far from the community?
We've seen, I think, thousands of derivative works: developers who took Mistral 7B and fine-tuned it on their tasks or on their datasets to make it special. We've seen new capabilities, like longer context and better instruction-following capacities. We've seen new topics: specialized models able to talk about niche subjects much better than what the base model was able to do before. So many different kinds of applications, some of them useful, some of them just funny. We've seen integration in a lot of LLM open-source projects; the open-source world around generative AI is pretty involved already, so you have retrieval-augmentation systems, you have projects that let you deploy the models on your laptop, you have all of these things, and they adopted Mistral 7B very quickly. I think the field was really missing an actor that would produce the best open-source models and actively engage with the community, and that's what we are enabling.

Okay, and so now 7B is out there. What comes next?

So we have nothing announced yet, but we do have things in house that we'll be announcing before the end of the year: new models, new techniques, and obviously the beginning of a platform. We're actively working on the product; we'll soon be offering hosting capacities for our models, with very fast inference capabilities. That's for very soon.

Okay, I'll watch this space. So, while you're doing all of this, what I think most people would consider quite challenging technical work, you're also building a company, and I know that's not easy, having done it myself before. What's keeping you up at night right now? What's your biggest headache?
So, hiring is obviously a very big challenge. I think the only reason we got there so fast is that we hired the best engineers and the best scientists in the world. It's a very competitive landscape; Europe is full of talent, especially junior talent, and so this is a very big preoccupation for us. I'm constantly working on it, so that's one thing. The other thing is creating the community and engaging with it. We started with Mistral 7B, but we really need to facilitate the life of our users, have them engage, facilitate upstream contributions, facilitate the emergence of ideas that we could help enable. So that's another thing. We also have a lot of policy matters that we did not expect, but obviously this is an agenda you don't select. There are different tracks: you have one in the US, you have one in the EU. We've been vocal about the fact that we wanted hard regulation on the product side, because it's very important, and we see ourselves as a provider of tools and a big enabler of compliance for the application makers. We've been saying that constantly, and we've seen the debate progress on these topics. So this is something we're very keen on trying to enable from a technical perspective, because it's important to have technical founders participating in that discussion, and that has kept me up at night for a while.

And I think, you know, the ambition was certainly to build something that could rival other large companies like OpenAI. I'm just curious: what do you view as your differentiating philosophy or approach compared to companies like OpenAI?
I think a differentiating philosophy is that we really target the developer space. We really think that when you're making an application you want to put into production, you want to have several specialized models, like chips: you should see them as chips that you assemble into an application. And it's actually not easy to make a very good chip for the use case you want. You can start with a very big model, with hundreds of billions of parameters; it's going to solve your task, maybe. But you could actually have something which is 100 times smaller. When you make a production application that goes to scale and targets a lot of users, you want to make the choices that lower the latency, lower the costs, and leverage the actual proprietary data that you may have. And this, I think, is not the focus of our competitors: they're really targeting multi-usage, very large models, AGI. We're taking a much more pragmatic approach, enabling super useful applications today that are cost-efficient, very low latency, and that enable strong differentiation through proprietary data.

Okay.
I think another key difference is that you've talked a lot about open source as being a core part of your DNA. And a question I sort of wanted to ask, by the way, Arthur didn't look at these questions beforehand, so he wasn't expecting this one. I understand the concept of open-source software; I think we all do: we see the code, we can take it, modify it, and use it. But in the world of AI and models, the concept of open source feels like it's maybe a bit different, because actually some things you do keep for yourself, or you have to. What does open source mean in the context of LLMs and AI?
So we don't really call them open source. The models we provide are open weight. I think it's important to keep a clear distinction between the terminology we were using for software and the terminology we are using for models. If you provide the weights of a model, you're enabling modification; you're not necessarily enabling full understanding of what's going on. Even if you do provide full transparency on the datasets and training, you don't know what's going on, because it's a bit opaque by design. It's an empirical science: when you create a model, the only way to verify that the model is doing what you expect is to measure it with evaluation, and this is something we will be enabling; and then to modify it with some signal coming from either humans or maybe machines. So really, the modification part is super important for differentiation, and we're taking this approach. There's also a full open-source approach, which I think is very valid for science, in which you disclose your dataset, you disclose everything. That's something we would strive toward at some point, but obviously it's super competitive, the dataset part is very hard to obtain, and it's also very capital-intensive, as you need a lot of GPUs. So right now we're taking a balanced approach between the open weights we provide and the things we keep for ourselves to maintain a competitive edge. This is going to be a dynamic play, and we expect it to evolve with time and with technology.
Okay. And does the open-weight approach help with other challenges, like biases and control?

Yeah, so it helps with basically two things. The first is that you can modify the biases: you can have strong, fine-grained modification capabilities over the editorial tone, the orientation, and the alignment of the model. So we allow alignment of your own models to your own values, and those can slightly differ; fine control of biases goes through deep access to models. That's the first thing. The second thing it allows, and we've seen it with the active engagement of the AI safety community in particular around open-source models, is better interpretability: you can see the inner activations of the models, and that tells you things about what's happening, about why the model is taking one decision and not another, why it is outputting one word and not another. So in the interpretability world it's also super useful. And I guess the last thing is that it's very useful for red-teaming, because you have deep access to the model, so you can try to verify the parts of it that are failing a bit or behaving unexpectedly, and these are things you can then correct, very similarly to what we've been doing in open-source software for security, cybersecurity, and the like.
Okay, and then what do you view as being at stake here? In other words, is this a business advantage for Mistral, or is it something more fundamental that you see as almost a responsibility?
So it's both. It's a business advantage, because we allow further customization and differentiation, and in a maturing market we expect that in the application space, the actors that are going to survive and create value are the ones that can strongly differentiate themselves, and for that they need deep access to models. So that's a business differentiator. Then there's a bit of an ideological differentiator, in the sense that I've been contributing to open source for 10 years, Guillaume as well. We really think that AI has been accelerated by open science, by the circulation of knowledge; that's how we went, in 10 years, from something very interesting but that would just detect cats and dogs, to something that actually speaks the human language. This was possible because you had big tech labs, and you had academia as well, all of them communicating at conferences every year; information would circulate, and that accelerated things. Then suddenly, in 2020, OpenAI decided to stop publishing, and it was followed by its competitors very closely after. So ever since 2022 we haven't seen major advances in LLMs publicly announced, and there are currently new architectures that are used internally by our competitors and that are not available out there. This is something we will correct very soon.
Okay, great. So I want to shift focus now and talk about something you mentioned earlier, which is regulation. It's a topic you can't really avoid, I think, if you're thinking about AI; there's a lot of focus within Europe and in the UK, and I think you were at the AI Safety Summit last month. There are a lot of ideas out there, and I'm curious to hear your view: what should be the priority? How should regulation be prioritized and instrumented?
Yeah, so it's a very interesting topic for me, and we've been contributing ideas. The one thing I would start with is that we've been talking about regulation and safety while mixing concepts very heavily. There's the matter of product safety, which answers the question: if you deploy a diagnosis assistant in a hospital, you want it to be safe; you want to be able to measure whether the decisions it's making are actually sound, actually correct. That's what we call product safety. You have it when you buy a car, and it should be very much similar for applications. That's one thing, and AI to some extent creates new problems, because you have models that are not deterministic, so they behave in a potentially unexpected way, and so it's useful to refine the hard laws we have around product safety regulation.

Now, there's another topic that came up, which is national security: the question of whether the LLMs that everyone is training are spreading too much knowledge. When you have access to an LLM, you're effectively able to educate yourself on many topics, and this is a concern for different actors, because you could have small groups that are deemed bad and that could use this knowledge to do bad things. This has been a central topic, especially in the US. But we're still lacking a lot here: there's absolutely no public evidence that LLMs are facilitating anything, so we've been advocating for some empirical grounding of the discussion, which is currently very much lacking.

And then there's a third thing, which gets mixed up with the first two, which is existential risk: knowing whether the technology we're making is effectively on an unbounded exponential that will end up destroying us, because, like every exponential, it breaks the limits at some point, or rather becomes ill-defined, as we say in mathematics. For us this is very much science fiction; there is no empirical evidence. So what we've been saying is that we should really focus on the first topic, which is imminent: we do need product safety for AI, because otherwise it's going to break trust in the technology we're making, and we want to enable that. On the second part, we're lacking empirical evidence, but I think this is something we should monitor closely; historically, the spreading of knowledge has always had more benefits than drawbacks, and AI is not different in that respect, but it's still something that could do with monitoring, because it's a really new technology. On the third aspect, AGI and the like, and the idea that you could have an autonomous system that goes out of control: this is something we're not at ease discussing, because we really think that, as scientists, we're lacking evidence of any existential risk, and we think it pollutes the discussion on the first aspect, which is super important.
Yeah, and so, if I just make sure I understand this right: the view is that the application layer is probably the one that has the most responsibility in terms of safety, at least toward consumers or end users or businesses, whoever that is; and that perhaps the models could provide that as a feature or functionality, but it's not the responsibility of the model to ensure that the ultimate data transmitted is itself safe?
Exactly. So we think that the correct way of putting some pressure on the model providers, like us, is to effectively say that any application which is deployed, and that includes the applications we deploy, should meet a certain number of safety standards: they should do what they're expected to do. If you do that, it means the application providers will be looking for model providers that are controllable enough, that can give some form of guarantees, that can give some evaluation tools around the fact that they're controllable and do what they're expected to do. So you have a form of second-order pressure: you put pressure on the application layer, and that puts market pressure on the foundational model developers. That's the correct way of creating healthy competition in making the most controllable models, the best evaluation tools, and the best guard-railing tools. We think it's a much better way of doing it than directly applying pressure on the foundational model layer, because if you do that, you're in ill-defined territory: you're trying to control something which is, by design, super multi-purpose, very akin to a programming language, and you can't really regulate a programming language, because you can do anything with it. So there's a problem of definition, and then there's an operational problem: if you put heavy pressure on that layer, you're effectively favoring the big actors that have a lot of compliance capabilities, and you're making it harder for startups with innovative ideas to come up and compete. So the foundational model layer is a bad target; regulating it is a proxy for market capture. We believe that applying the regulatory pressure on the application layer is the thing to do, because that's going to foster competition and provide a safer world.

Do you think there's a role for an IAEA-like organization to exist that helps to enforce or provide this guidance and regulation?
of Regulation if we need
to monitor I think we we do need to have
empirical evidence of what's happening
in the space and we need to monitor the
product side safety and one way of doing
it is to enforce that we have very very
independent uh organisms that actually
monitor these things and when I say
independent I mean that we should be
very cautious of of preventing pressure
and Regulatory capture of this things so
setting standards but ensuring that no
big actor is basically writing the
standard themselves so what that means
that if we are if we if we need to have
this this form of organisms they need to
be very well funded probably state
funded
and being completely screen from
pressure from the industry okay so now I
Okay, so now I want to shift again. I think the regulation debate, and many of the debates in AI, tend to skew somewhat negative, so let's dream for a second: how can AI make our lives better? What do you see as the utopian future with AI?
So I think there are many verticals in which AI, interacting with machines in natural language, carries a lot of value. Healthcare is going to be completely changed by AI, because you will be able to interact with empathic agents that are actually super well grounded in statistics, and that's really what you expect from medicine. So we expect AI to empower physicians to be much better at what they're doing and to make better decisions. Education is also a super interesting topic: personalization of education. We know it's super important for drawing out the full potential of human beings, and having your own individual teacher as an assistant is going to change a lot of things, especially in the Global South. So those are two things. Generally speaking, this is going to change the way we work: the fact that it can interact with structured knowledge, and that it can take over the boring tasks of your daily life, is going to create more space for creative thinking. We will be able to think more creatively, and that's going to unleash, I think, a new society very soon.

And if you think about some of the more existential risks we face in the world, like climate change, do you think that something like that can be addressed, or at least improved?
Yes. So I think this is a frontier which hasn't been completely addressed yet, but this is really a promise of having better models: if you have some way of reasoning over a pool of science, you can enable scientists to come up with new ideas, and you can potentially unlock very precise things, like, in chemistry, accelerating chemical reactions so that you emit less CO2, for instance. Material science, chemistry, nuclear fusion as well: all of these are locks that we basically need to break in order to address climate change. Well, that's one way you can address climate change; obviously another way is to reduce consumption. But we think AI is going to be an enabler for breaking these locks. It's not going to be an easy task, and there are still many things to invent, but we think that going through the open-science path, and continuing to foster the AI community that drove the field forward for 10 years, is super important to break these locks.
Okay, that's great. So I want to come back to Europe for our last question, as we're out of time. The fact that the company is being built in Europe is very important to you; it was obvious to you and your co-founders when we invested. How important do you think it is for the industry that we have a European champion emerge in the field of AI?
So generative AI is really a wave: it's going to change society quite significantly. And in Europe we have a choice of either being on top of the wave and driving the technology forward, or just watching it happen in the US and in China. We think that in order to shape the technology to our values, and to the way we think about democracy and society, we need very strong technological actors that are able to drive the field forward and make proposals, both in terms of policy and in terms of technology. And that's why we believe it's super important that we have strong actors in Europe.

Great, thank you so much, Arthur. Really appreciate it.

Amazing.