7. Frank Block, Head of IT Data Science at Roche, Switzerland
Summary
TLDR: Frank's talk covers his personal journey in the AI industry, beginning with his training as a physicist and his time at CERN. He discusses the importance of data quality, which remains a challenge even after decades, and stresses the need to focus on the productivity and value of AI applications. He reflects on the evolution of AI from expert systems via neural networks to today's automated models, and underlines the importance of interdisciplinarity, soft skills, quality, and customer-centricity in AI development.
Takeaways
- 🔬 Data quality is a perennial issue that hurts the productivity of AI teams and the scaling of AI projects.
- 🧠 Industrial use of AI requires a deep understanding of the business logic and of the value created for stakeholders.
- 💡 On the technical side, continuous learning and solid skills in data science, cloud architectures, and advanced analytics tools are required.
- 🛠️ Technology is an enabler, but the central goal is to solve problems, not to work on technology for its own sake.
- 🤝 Interdisciplinarity is crucial for successful AI projects because it exploits the complementarity of different perspectives and skills.
- 💼 Soft skills such as the ability to network and to tell stories in the stakeholders' language are important for AI specialists.
- 👨🔬 The role of the data scientist has evolved from pure research toward a production and operations orientation.
- 🔍 The AI industry has shifted from replacing humans with AI toward a stronger emphasis on augmenting human capabilities.
- 🏢 Organizational barriers and a middle management not yet ready to adopt AI can slow the scaling of AI projects.
- 📈 The data-as-a-product movement and the adoption of DevOps-like principles in AI development are important trends.
Q & A
How did the speaker's experience at CERN influence his career?
-The speaker is a physicist by training and wrote his master's thesis at CERN, home to the largest particle accelerator in the world. This experience strongly shaped him and sparked his interest in working with and analyzing large amounts of data.
What is the Large Electron-Positron Collider and how important was it for research?
-The Large Electron-Positron Collider (LEP) was a device with a 27-kilometer circumference located about 100 meters below the surface. It was a gigantic instrument of great importance for research into the conditions of the Big Bang.
How many proton-proton collisions per second were measured in the Large Hadron Collider?
-The Large Hadron Collider produced about two billion proton-proton collisions per second, generating enormous amounts of data.
How did the speaker's engagement with artificial intelligence (AI) evolve?
-The speaker started out working on expert systems and then switched to neural networks after it was proven that non-linear neurons can model almost anything, which led to a renaissance of neural network research.
What is the significance of data quality for AI applications?
-Data quality is a constant factor that hampers the scaling of AI. Without high-quality data it is difficult to keep AI teams productive and to exploit the full potential of AI.
How important are soft skills in the field of artificial intelligence?
-Soft skills are very important because they ease understanding and communication between different stakeholders and teams. They include building networks, telling success stories, and adapting the pace of innovation to the needs of others.
What are the challenges in moving from prototypes to production in AI development?
-A major challenge is shortening the time from building a prototype to production. It is important to use platforms that enable a seamless transition in order to reduce time to production.
How can the gap between available data scientists and market demand be bridged?
-By promoting self-service analytics and citizen data scientists, the gap can gradually be bridged. This, however, requires high data quality to ensure that analysis results are reliable.
How important is interdisciplinarity in AI projects?
-Interdisciplinarity is very important because it enables collaboration between domains such as business, data science, and engineering to solve complex problems and build comprehensive AI solutions.
What current developments has the speaker observed in the AI industry?
-The speaker observes a shift from AI as a replacement for humans toward AI that augments our capabilities and makes our work more interesting. He also stresses the need to improve data quality and governance in order to scale AI successfully.
Outlines
🔬 Introduction to the world of physics and AI
The speaker, a physicist, recounts his career and how he was shaped by his education and his time at CERN, home of the world's largest particle accelerator. He describes his experience with the huge amounts of data generated by the Large Hadron Collider and how he became convinced that AI systems are essential for analyzing such data. He also talks about his early work on expert systems and the move to neural networks, which he later applied in particle physics.
🔧 Automated model management and AI products
The speaker describes his early work on automated model management and the development of AI products. He stresses the importance of data quality and of continuously improving models to maximize the value of AI applications. He also discusses the challenges of scaling AI in companies, which is often limited by organizational barriers and the need to prove the value of AI. He highlights how important a focus on quality and the customer is for building successful AI products.
🤝 Interdisciplinary collaboration and soft skills
The speaker emphasizes the importance of collaboration across disciplines such as business, data science, and engineering for delivering successful AI projects. He talks about the need to adapt the pace of innovation so that everyone involved can keep up. He mentions soft skills such as the ability to build a network, the ability to tell stories, and the importance of leadership in moving AI projects forward. He also discusses the challenges of finding and retaining talent and of building effective teams.
🌐 Current developments in the AI industry
The speaker reflects on current trends in the AI industry, such as the use of AI to augment people's capabilities and improve their working conditions. He talks about the importance of data quality and governance, which can still hinder the scaling of AI solutions. He also discusses the need to standardize and industrialize AI technologies to make them more efficient and scalable, and mentions the importance of agile methods and DevOps in AI development.
📊 Data quality and AI implementation
The speaker answers questions about the challenges of implementing AI, especially regarding data quality and governance. He discusses the need to strike a balance between cleaning up data and delivering AI solutions quickly. He stresses that AI projects should not wait for perfect data but should work with the data available and keep improving it. He also talks about his experience moving from prototypes to production and the importance of an efficient platform for that process.
⏱️ Time to production and AI learnings
The speaker shares his learnings about moving from prototypes to production in AI development. He emphasizes the importance of shortening the time from prototype to production and of using platforms to accelerate this process. He also discusses the challenges that come with scaling AI solutions and how to tackle them effectively.
Keywords
💡Data quality
💡Industrialization
💡Expert systems
💡Neural networks
💡Data analysis
💡CERN
💡Data science
💡Artificial intelligence (AI)
💡Automation
💡Interdisciplinarity
Highlights
Constant importance of data quality over time despite advancements in AI.
Observation of recent developments and trends in AI and data analysis.
Background as a physicist influenced the speaker's approach to data and AI.
Experience at CERN working with large-scale data from particle accelerators.
The evolution from expert systems to neural networks in AI applications.
Importance of understanding business models and value generation in AI projects.
The necessity for continuous learning and staying updated with AI advancements.
The shift from automating model production to proving the value of AI applications.
Emphasis on the product focus and customer-centric approach in AI development.
The challenge of data quality as a barrier for scaling AI and its impact on productivity.
The need for interdisciplinary teams in AI projects for a comprehensive approach.
The importance of soft skills in AI, such as the ability to network and communicate effectively.
The shift in AI focus from replacing human work to augmenting human capabilities.
The role of organizational barriers in the scaling of AI and the need for data-driven decision-making.
The potential of self-service analytics to address the shortage of data scientists.
Balancing data quality and governance with the need for timely insights in data-driven businesses.
Learnings from moving AI projects from prototyping to production and the importance of time to production.
Transcripts
...what stayed constant over time, which may be interesting to mention, as well as a few observations on more recent developments and trends that I think I'm seeing on the horizon.

So, where do I come from? I'm a physicist by training, and that influenced me a lot. Let me tell you a little bit why I like data so much, and the analysis of big amounts of data.
When I started studying, toward the part where you do your master's thesis, so the last two years of my studies, I was lucky and could go to CERN. CERN is in Geneva; it is the biggest accelerator you will find on Earth. As a matter of fact, it was just about the time when this huge machine was being launched into operation, back then under the name of the Large Electron-Positron Collider: a huge device, 27 kilometers in circumference, about 100 meters below the surface.

Everything there was gigantic for me as a young student, starting with this huge machine. Here I'm showing the upgrade it received about 10 years after it started working as LEP: it was called the Large Hadron Collider. Huge experiments are placed under the surface, and you can see here, perhaps, the dimensions of these machines compared to us humans. Think of them as microscopes; they are there to collect tons of data, and that data is what gets analyzed later on.

What is of course also gigantic is the amount of data generated in such an environment. You have something like two billion of these head-on proton-proton collisions per second, and that just generates crazy amounts of data.
Of course, all the computing you then need is huge too. A few of you may remember the name of the Cray supercomputer; today you could probably use an iPhone and it would do the same job, but that was a big computer back then, a particular approach and architecture. CERN was the first place in Europe where you would find one, and it was also the first place where it was decommissioned, because they moved on to other architectures: farms of Unix machines, then servers, and so on. And of course, overwhelmingly, the physics, which is why I was there: understanding more about the Big Bang and what the conditions were; we had a model, the Standard Model, and so on. That's the whole story, and I really just wanted to give you that background to show what motivated me to work very early on in this AI space.

I started off working on expert systems. That was a paradigm from symbolic AI that was in vogue back then, after Marvin Minsky had earlier shown that simple so-called perceptrons, not yet the neural networks as we know them but simpler architectures, could not even solve simple problems such as the XOR problem. That sent neural network research into a longer period of darkness, or standby. So I was happy working on expert systems at CERN, doing diagnostics of certain faults of complex machines; that was an interesting approach to learn. Then, very quickly after, I switched to neural networks, which came into vogue again after it was proven that with non-linear neurons you can model almost anything; you just have to have enough neurons in your deep learning model.

There were many applications I could then start playing with, be it particle track reconstruction using different setups or identifying certain fundamental particles. Back then you had to write your own code; we didn't have the open-source libraries, which came a bit later, so it took a bit of time until you got something deployed. As I moved out of academia when I finished my PhD,
it was already pretty much about automating the whole operation of model production, operation, and replacement of models by new models: massive numbers of models, hundreds of different models that would be live and automated. This was something we started very early on, in the 2000s, and I think it is still a topic today under AutoML and similar tags, but in fact that work started very early on my side.

Then it moved more and more into measuring the value, really proving the value of an AI application. You can prove it through A/B testing, although depending on where you come from that may be more difficult, in a B2B context for instance. And today, I think this product focus is very important: everything is a product, an AI product, and that automatically brings the customer, the user, into focus, and quality with it. So I think we get more and more of that very good focus on quality.
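The A/B testing mentioned here is typically backed by a significance test on the two groups. Here is a minimal sketch of a two-proportion z-test; the function name and the conversion counts are illustrative assumptions, not figures from the talk:

```python
# Illustrative sketch: assessing the value of an AI feature via a
# two-proportion z-test on A/B test conversions (made-up numbers).
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control group (A) vs. group served by the AI model (B)
z = two_proportion_z(conv_a=120, n_a=2000, conv_b=165, n_b=2000)
print(round(z, 2))  # a z above ~1.96 suggests a real lift at the 5% level
```

In a B2B context, as the talk notes, such randomized comparisons are often harder to set up, so value may have to be argued through before/after baselines instead.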
Now, summarizing and looking back, from my experience at least, what are the things that I think stayed constant over time? These may be interesting for you to look into here and there. One of the unfortunate evergreens and constants I found is data quality, which is a pity, because 20 or 30 years ago I would have told you that I waste 90% of my time resolving data issues, and today the answer is more or less the same. So I think we still haven't removed that big barrier to scaling up AI. This of course limits the productivity of our AI, data science, and analytics teams; industrialization is slowed down or even made impossible; and the value we can finally get out of AI is greatly limited by it. I'm happy to hear from others here in the audience what your experience is.
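As an illustration of the kind of automated checks that can cut down the time lost to data issues, here is a minimal sketch; the field names, sample records, and the two checks chosen are hypothetical, not from the talk:

```python
# Illustrative sketch: simple automated data-quality checks
# (hypothetical field names and sample records, not from the talk).

def quality_report(records, required_fields):
    """Count missing values and exact duplicate rows in a list of dicts."""
    issues = {"missing": 0, "duplicates": 0, "rows": len(records)}
    seen = set()
    for row in records:
        for field in required_fields:
            if row.get(field) in (None, ""):
                issues["missing"] += 1
        key = tuple(sorted(row.items()))   # canonical form for dedup check
        if key in seen:
            issues["duplicates"] += 1
        seen.add(key)
    return issues

sample = [
    {"id": 1, "value": 3.2},
    {"id": 2, "value": None},   # missing value
    {"id": 1, "value": 3.2},    # exact duplicate
]
report = quality_report(sample, required_fields=["id", "value"])
print(report)  # {'missing': 1, 'duplicates': 1, 'rows': 3}
```

Running such checks continuously, rather than cleaning by hand per project, is one way teams try to reclaim the "90% of my time" the talk describes.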
On the skill side: if we want to create advanced applications that contain AI components, we need to understand the business we are working for. How does it work, how does it generate value, what are the pain points (not always easy to identify), who are the stakeholders and how are they incentivized, and what are the value metrics we should be defining and measuring? Also not to be underestimated is the organizational change this implies. Often the barrier to adopting AI is simply the change it would imply, and adopting AI-based innovation usually means changing the current way of working. That is very much true in most cases.
On the technical side of skills, we of course expect the people who develop those solutions to have very good and deep data science skills, depending on the area you are working in. We need continuous learning; we need to stay up to date with what's going on on the AI front, which is developing very quickly. But I think you also need an experimental, scientific approach: cutting bigger problems into pieces, formulating hypotheses, and so on. Visualization is definitely another important ingredient.

More on the "IT" skills, in quotes: there are all the other technical skills we need. You need good knowledge of programming languages, today more and more of cloud architectures, and the ability to work with different advanced analytics tools. But that should not blind us; it is not all about revolving around technology. Technology is the enabler, but what we're really trying to do is solve problems, and that is at the center, not the technology, from my perspective.
One thing I've seen several times is that when we build these models, we almost fall in love with them and want them to be perfect. That most likely will never happen. So sometimes it's really better to have a reasonable model that works reasonably well: get it out there, let it start generating some value, and keep optimizing it afterwards. That would be my recommendation.
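The "ship a reasonable model and keep optimizing" advice is often operationalized as a champion/challenger comparison. A minimal sketch, with hypothetical metric values and a hypothetical promotion margin (not the speaker's actual setup):

```python
# Minimal champion/challenger sketch: replace the live model only when a
# retrained candidate beats it by a clear margin (hypothetical numbers).

def maybe_promote(champion_score, challenger_score, margin=0.01):
    """Promote only on a clear improvement over the live model."""
    return challenger_score > champion_score + margin

live_model_auc = 0.81   # the "reasonable" model already generating value
candidate_auc = 0.84    # a retrained candidate evaluated offline

if maybe_promote(live_model_auc, candidate_auc):
    print("promote challenger")   # deploy the retrained model
else:
    print("keep champion")
```

The margin guards against swapping models on noise; in practice the scores would come from a held-out evaluation or a live shadow deployment.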
The other topic I've been observing is interdisciplinarity, and here I'm just showing a few profiles. These may change according to the area you're working in; you will have different personas participating. But to give an example: the business side usually wants to minimize risk while maximizing return, and that's the balancing act they are typically working on. On the data scientist and AI side, we run all these experiments and try models; you know very well that things can go wrong at any time, but that's part of the game, and we keep generating knowledge. That knowledge is then usually put into practice and production by the more engineering-minded people who contribute to the project or activity. I think this complementarity is very important: each of these groups has a different way of working, and together that really makes a lot of sense.
Some words about soft skills. One thing I have definitely observed: you see that the innovation is working great, there are many things you can do, and you can move very far ahead, and by doing so you may forget that other people will not follow you at the same speed. It may be worth slowing the pace of innovation a bit, because otherwise you will find yourself running ahead alone with nobody behind you.

The other thing I put here is the ability to network. I think it's very important that we as data scientists and AI specialists know what's going on and make ourselves known, but also that we network with all the different areas of the company or organization, a little bit in the role of an internal consultant: always looking for opportunities, and trying to identify the right, the real, pain points by asking the right questions. That is not always easy; I often find it almost an art, because in my experience the real problems sit behind many layers of apparent problems until you get to them.

And finally, when you tell a success story about AI being employed here and there, you definitely need to do that in your stakeholders' language and not in very technical language, which is usually not much appreciated.
Then, of course, you will have to find the people who have the skills to create your AI applications and solutions. That will always be a challenge; I guess there will never be enough data scientists, and among those you still have to find the ones who are the right talents for what you're out to do. The next thing, once you have hired the talents you were looking for, is how to retain them, because that's the next challenge, the market usually being very hot. What is it that really retains them? From my perspective it's very much about the interesting challenges you can provide. If these are really challenging and high value-adding, I think you have a good chance of retaining people.

The next thing is how you make that team efficient, so that we don't just do research in any direction but really point toward generating value. Interdisciplinary teams, as I mentioned before. The last point I would make here is about leadership. Over the years, also as I learned myself, I see much more benefit in a kind of pull leadership, more of a servant-manager approach, than in pushing onto the teams what you think they should be doing. It's better that they follow you because you have a proposition to make that sounds interesting.
Coming toward the end, a few notes on recent developments that I've been observing, and I'm sure you see them as well. One thing over the years that I find interesting, and an absolutely correct development: in the beginning it was very much about AI and machine learning simply replacing people in what they do, and I think that focus is now shifting more and more, definitely in the area where I am, toward augmenting our capabilities and making our jobs more interesting with the help of AI that empowers us.

Another topic I have really stumbled across over the last years: sometimes we ask why we can't scale up AI in a bigger way. Data quality is certainly one reason, but the other, I think, is purely organizational barriers, and part of that is a middle management that is largely not yet ready to adopt AI at large scale. You may end up doing many great projects and you can show value, but in the end nothing changes; the new ways of working are not adopted. That I see as a bit of a barrier.
So we need to enable many more people to really start using data themselves and become more data-driven. I think the tools are around; the reservation I would have is still the data quality side. As long as we have those issues, I guess you get more trouble than benefit if everyone does AutoML on any kind of suspicious data. So this must go hand in hand before we can really unlock the value of data.

And regulations, I guess, will continue to increase; we're already seeing a lot coming up, and that will certainly continue growing, so I think we need to be ready and embrace it. And industrialization, standardization, and automation of AI, definitely as well.
To close: you've certainly seen some of this already. The MLOps concepts, where we augment the well-known DevOps approach to also contain continuous exploration using ML and AI methods, are part of this, and we also have architectures and frameworks being described and proposed; just one example here from Google, which I find quite useful, without going into any details. In the ways of working, agile and scaled agile are, I think, also good movements, with a product-centric and team-empowering kind of focus. All in all, I think these are very desirable developments.
And finally, on the data front, I see some hope. There is the FAIR data movement, which you may have seen already; I think it is addressing some data quality dimensions, hopefully also putting data quality more into the center of attention. Another movement I'm observing that is creating that kind of attention on data is the data-as-a-product approach that we see in the data mesh context, since 2019 more or less, and I see some adoption of those principles. I've brought here the kinds of requirements, or product properties, for these data products, and many of them are of course related to data quality. So let's hope this brings the desired effect: good data, plenty of good data, so that we can generate plenty of good applications using AI in the future. Thanks for your attention; this is all I had for you today. Thank you, happy to take some questions.
Thanks a lot, thank you so much for sharing the presentation. Now we have time for Q&A. We enabled microphone and camera for all participants before lunch, so anybody who has a question, please just switch on your camera.

We have feedback from Niraj of Ipsen Pharmaceuticals: "Excellent presentation, Frank, thanks a lot." We agree.
Hi Frank, thank you very much for these interesting insights. Based on your long-term experience, you mentioned that data scientists are in shortage; there are not enough on the market to fulfill the demand of companies. How do you see the possibilities of self-service analytics to fill this gap a little and, let's say, bring the demand of knowledge workers closer to the situation of missing data scientists?
Yeah, thanks for the question, Marcus, and this is spot on. We have those talks about the citizen data scientist, and I think the idea is great. The only dependence or restriction I would see is data quality. You have great tools, tools like yours and others, that provide easy access to data preparation, data analysis, and ML; I think the tools are there. Now we need to make sure that the data is also there, and good, because otherwise many people will start using the data, get contradictory results, and you get a lot of discussion everywhere, and in the end they will even say it's the tool, which is of course not the case. So that is my only restriction: as soon as you have areas of data where you can say, this is safe, we can guarantee its quality, then I think you can open it up for business users to start working on it widely.
Thank you very much, Frank.

Well, thank you. Thanks a lot. Now there was somebody else, Hannah; if you have a question, you can switch on your microphone as well.

Yes, thanks Frank, interesting talk. I want to ask a relevant question: what we are facing is a huge amount of data, and of course data quality and governance are important, but some people emphasize this too much, and that slows down, let's say, deriving the insights. So we are in this dilemma: with this huge amount of data, what type of insights should we focus on, what comes first? Or do we clean up all the data first? We know that cleaning up the data is a hugely tedious, very time-consuming job, while deriving insights is of course more welcomed by the business. What is the balance there?
Thanks for the question. I think that's exactly the complicated situation we're in. We as data scientists work in this environment, and we would never say let's wait until all the data is fixed before we start working; that's just not feasible. So very often our productivity goes down because we have to fight these data quality problems. At the same time, as we do this work we also gain a certain data intelligence, which can then be used to model the data nicely afterwards, into a data mesh data product, a data lake, whatever. So we also provide help to the people who make the data nice in the end, but very often we're at the forefront of this. We cannot wait, we need to move on, but wouldn't it be great if we could move on three times faster?
Thanks a lot, thank you. Jack?

Yeah, hi Frank, nice to see you again. You mentioned moving into production at the end. What are your top two learnings when you move from prototyping and MVPs to production? What do you need to take into account, and what are your learnings from that journey?
Well, great question. What we are working on very much is shortening that time. We're very good at prototyping, and we want to shorten the time from prototype to production, or to product. It can still happen that after three or four months we have a prototype, but then it takes another nine months or so to get the product out, and that's too long. So we are now also trying to work with certain platforms that would allow a more seamless path from idea to prototype to production. There are many products out there; we definitely looked into some data science workbenches, and we are getting benefits from them, so this is something we're really ramping up strongly now. I think time to production is key. Yes, I would say that's the major learning.