How Sonrai Analytics leverages ML to accelerate Precision Medicine (L300) | AWS Events

AWS Events
4 Jun 2024 · 34:29

Summary

TLDR In this presentation, Jonah Craig, a startup solutions architect in Ireland, describes how he supports Sonrai Analytics, a startup using AWS services to make cancer treatment more efficient. Sonrai Analytics works with large volumes of medical data and applies machine learning to shorten the time needed for cancer diagnosis and treatment. Though a small team, they leverage AWS managed services to work at the technical cutting edge and drive innovation in healthcare. The presentation also walks through the process of training, deploying, and monitoring machine learning models with Amazon SageMaker.

Takeaways

  • 🌟 Sonrai Analytics uses AWS technology to shorten cancer drug trial times and bring efficiencies to the healthcare system.
  • 👔 Jonah Craig works as a startup solutions architect in Ireland, supporting startups of every size.
  • 🛠️ By leveraging AWS managed services, startup teams can make effective use of limited resources while tackling hard technical problems.
  • 🔧 The machine learning loop consists of four fundamental steps: data preparation, model training, model deployment, and monitoring with orchestration.
  • 💾 Because Sonrai Analytics handles petabytes of data, it needs effective cost management and a scalable architecture.
  • 🧬 Their clients develop new drugs to treat cancer, using AI to identify the right treatment for each patient.
  • 🔬 In one computer vision use case, they are developing AI that detects cancer cells in microscope images.
  • 🚀 They use the AWS service stack to streamline the process from training to inference and to optimize model performance.
  • 🌐 Global deployment is possible: AWS data centers let them provide segmented instances per customer.
  • 🛡️ Using AWS services built around data protection and privacy helps them comply with regulations such as GDPR.
  • 🔑 The AWS Activate program let them move quickly through early business development and technical validation.

Q & A

  • What is the AWS Certification Challenge?

    -The AWS Certification Challenge is a program for learning AWS services and features; it lets you work toward a range of AWS certifications, including machine learning and solutions architecture.

  • What kind of company is Sonrai Analytics?

    -Sonrai Analytics is a startup that uses AWS technology to shorten cancer drug trial times, improving efficiency in the healthcare sector.

  • What is Jonah Craig's current role?

    -Jonah Craig is a startup solutions architect based in Ireland who supports startups such as Sonrai Analytics.

  • What kinds of data does Sonrai Analytics work with?

    -Sonrai Analytics works with healthcare and life sciences data and needs to process petabytes of it.

  • Which AWS services does Sonrai Analytics use?

    -Sonrai Analytics uses a wide range of AWS services, including SageMaker, HealthOmics, Athena, Glue, Lambda, Fargate, and ECS.

  • What matters most in the cancer treatment algorithms Sonrai Analytics is building?

    -Sonrai Analytics develops cancer treatments through precision medicine, focusing on algorithms that match each patient with the right treatment.

  • What does AWS SageMaker Studio do?

    -SageMaker Studio manages everything from data preparation to model deployment as a single process, providing an environment where machine learning engineers and data scientists can collaborate.

  • What AWS serverless compute does Sonrai Analytics use?

    -AWS serverless compute runs code without requiring server management; services such as Lambda and Fargate fall into this category.

  • What data storage challenges does Sonrai Analytics face?

    -Sonrai Analytics must manage petabytes of data efficiently and optimize cost, using S3 lifecycle management to move data between hot and cold storage tiers.
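The hot-to-cold tiering described in that answer maps directly onto an S3 lifecycle configuration. The sketch below shows the shape of such a rule as boto3 expects it; the bucket name, prefix, and day thresholds are illustrative assumptions, not Sonrai's actual settings.

```python
# Sketch of hot-to-cold data tiering via an S3 lifecycle rule.
# Bucket name, prefix, and thresholds are hypothetical examples.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-raw-slide-data",
            "Filter": {"Prefix": "raw-slides/"},
            "Status": "Enabled",
            "Transitions": [
                # Rarely-read data first moves to cheaper infrequent access...
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                # ...then to Glacier-class cold storage.
                {"Days": 180, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# Applying it would look like this (requires AWS credentials):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-pathology-data",
#     LifecycleConfiguration=lifecycle_config,
# )
```

The key idea is that the rule, not application code, moves objects between storage classes, so cost optimization happens continuously without a batch job.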

  • What is the goal of the AI algorithms Sonrai Analytics is developing?

    -Sonrai Analytics develops AI algorithms to speed up cancer treatment and assist pathologists, aiming to shorten the time from diagnosis to treatment.

Outlines

00:00

🎤 Opening the presentation and introducing AWS certification

Jonah Craig opens the presentation, explaining why he took part in the AWS Certification Challenge and what it involves. He introduces himself as a startup solutions architect in Ireland who works with customers such as Sonrai Analytics to support startup companies. Sonrai Analytics uses AWS technology to shorten cancer drug trial times. Jonah introduces the machine learning loop of data preparation, model training, deployment, and monitoring, and stresses how efficiently Sonrai Analytics runs these processes.

05:02

🔬 Sonrai Analytics' data science and AI product development

Dr. Matthew Alise, head of data science at Sonrai Analytics, describes how the company came to build its AI products. Originally a molecular biologist, he moved into bioinformatics as demand for data analysis grew, and has been building AI products since joining Sonrai. Sonrai develops algorithms for precision medicine on AWS to support the development of cancer treatments, and is building computer vision AI to assist pathologists.

10:03

💊 Data analysis for cancer treatment and the business case for AI

Matthew explains the work of digitizing microscope images of cancer and using AI to automate tasks pathologists used to perform by hand. Applying AI addresses the shortage and high cost of pathologists and supports the shift to digitized slides. Their strategy was to build a feature extractor from the minimum viable number of patients, improve model performance through automated data preparation, and finally integrate the model into a workflow intended for clinical use.

15:06

🛠️ Building a data science foundation on AWS technology

Jared leads Sonrai's engineering team, which builds the data engineering and infrastructure. Sonrai provides a trusted research environment and handles patient data in line with regulations such as GDPR. Using AWS services, they support complex and varied data types and protect patient safety throughout analysis and discovery.

20:07

🌐 Building a data architecture on AWS services

Jared explains how Sonrai builds its data architecture on AWS to support analytics and AI/ML. Services such as HealthOmics, Athena, and Glue process large volumes of raw data efficiently for analysis and visualization, while AWS serverless features provide flexible scaling and cost-efficient infrastructure.

25:07

🚀 Growing a startup on the AWS ecosystem

Jonah emphasizes that Sonrai Analytics uses AWS managed services so it can focus on product development. Through AWS support and the Activate program, Sonrai has grown quickly and stayed competitive in its market, using AWS documentation and training resources to tackle technical challenges and adopt new services rapidly.

30:09

🔮 Looking ahead and the evolution of AWS services

Jared and Matthew describe Sonrai Analytics' upcoming focus on HealthOmics to streamline raw data processing, along with plans to build foundation models and new algorithms that use large language models. As AWS services continue to evolve, Sonrai aims for further technical innovation.

Keywords

💡AWS

AWS stands for Amazon Web Services, the Amazon division that provides cloud computing services. The video discusses how AWS managed services are integrated into the AI products Sonrai Analytics builds, in particular how AWS services help shorten cancer treatment times.

💡Machine learning

Machine learning is the field of algorithms and statistical models that let computers learn from data to make decisions and predictions. The video explains how machine learning supports data analysis in healthcare and makes cancer treatment more efficient.

💡Sonrai Analytics

Sonrai Analytics is the startup featured in the video; it builds AI products on AWS technology to shorten cancer treatment times. Its work is presented as an example of applying data science in the medical industry.

💡Data scientist

A data scientist is a specialist who combines statistics, computer science, and domain knowledge to draw insight from data. The video highlights the role data scientists play in analyzing medical data and developing AI models.

💡SageMaker

SageMaker is AWS's machine learning platform, used by data scientists to build, train, and deploy models. The video explains how Sonrai Analytics uses SageMaker to develop its AI models.

💡Model training

Model training is the machine learning process of teaching an algorithm with a dataset so it can perform a specific task. The video describes model training as a key step in the AI development cycle.

💡Data preparation

Data preparation is the process of cleaning data, converting it to the right format, and shaping it for use in machine learning models. The video stresses data preparation as a major factor in model performance.

💡Model deployment

Model deployment is the process of integrating a trained machine learning model into real applications and services. The video describes how Sonrai Analytics deploys models into the medical industry to make cancer treatment more efficient.

💡MLOps

MLOps is the set of frameworks and tools for automating and streamlining the machine learning lifecycle. The video touches on MLOps as a way to manage models efficiently from development through deployment.

💡Precision Medicine

Precision medicine is an approach that tailors treatment to individual patient differences. The video highlights how Sonrai Analytics applies AI in precision medicine to shorten cancer treatment times.

Highlights

Introduction to the AWS Certification Challenge, encouraging people to pursue AWS certifications, including machine learning and solutions architecture.

Jonah Craig works as a startup solutions architect in Ireland, partnering with customers such as Sonrai Analytics.

Sonrai Analytics uses AWS technology to reduce cancer drug trial times and improve the efficiency of the healthcare system.

Jonah introduces the fundamentals of machine learning: data preparation, model training, deployment, and monitoring.

Overview of Amazon SageMaker and how it encapsulates the machine learning workflow.

The challenges Sonrai Analytics faces, including data storage, cost management, training its own foundation models, and scaling to meet business demand.

Introduction to AWS AI services such as Amazon HealthOmics for the healthcare and life sciences domain.

How SageMaker Studio supports the full process from data preparation to model deployment.

Jared and Matthew present Sonrai's work from the engineering and data science perspectives respectively.

Matthew's background and his transition from molecular biologist to data science.

How Sonrai Analytics was founded and how its work applies to precision medicine.

A use case using computer vision to help pathologists analyze cancer tissue slides.

Technical details of using AWS services for data storage, processing, and model training.

Jared discusses engineering challenges, including data engineering and infrastructure.

How to process and analyze large-scale biotech data with AWS services.

How Sonrai uses AWS services to accelerate drug development and clinical trial workflows.

Jared and Matthew highlight the potential of AWS services in healthcare and the outlook ahead.

A request for audience participation, including feedback on the talk and information about upcoming workshops.

Transcripts

00:04

Hello folks, can you hear me okay? If you can take a moment to get seated, we'll kick off our presentation today. The first thing I'll say is that wearing this jacket is not entirely a choice of mine; it's tied in with the AWS Certification Challenge. So if you're looking to educate yourself on AWS certifications, whether that's machine learning or solutions architecture, it's a really great way to learn hands-on how the AWS Cloud works.

00:37

I'm going to kick things off. My name is Jonah Craig and I work as a startup solutions architect here in Ireland, and I have the absolute privilege of working with customers like Sonrai Analytics. We support startups whether they're two people in their parents' basement or garage, all the way up to startups who have scaled to 100 or 200-plus employees. It's really good fun, and Sonrai have been on that journey; it's been great to support them through last year and this year. We like to take the most cutting edge and really bring it to you today from an AI/ML perspective.

01:13

Why do I love working with Sonrai? For two really key reasons. The first is that they're using AWS technology to reduce cancer drug trial times. I'm sure many of us in this room, including myself, have had cancer affect our lives in some shape or form, whether through family or friends, and every day when I have a meeting with Jared or Matt it's extremely motivating to get stuck in and help them grow, because they're bringing efficiencies into the healthcare system. I also love working with them because, from a technology perspective, what they're doing is truly cutting edge. As a startup team you don't have unlimited resources, and you really need to leverage things like AWS managed services. I'm going to let them talk through that and break it down.

02:01

So my job today is to set a framework, if you like, for machine learning, and to make it simple; you can use this framework as you listen to their side of the story. The machine learning loop, when we break it down to a foundational concept, starts with data preparation, arguably the most important part of any machine learning problem. It does not matter if you have the most cutting-edge algorithms; the data really is everything, so making sure you've got access to data and making it machine-learning friendly is key.

02:34

We then move on to model training. This is where we select a model and start training it on the data we have. This is the expensive part of machine learning, and cost optimization really is a key part of it. Then, when we're ready and we've picked the optimum model, we can deploy it into production and monitor it. Here we can talk about things like model drift, which may mean that over time the incoming data is slightly different from what the model was trained on. And this is where the secret sauce comes in: orchestration, MLOps. Our Amazon SageMaker service encapsulates this loop, so if you're tackling a machine learning problem you can do all of this and chain it together. Sonrai have these efficiencies in place, and what they can do with a small team is just breathtaking.
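The four-step loop Jonah describes can be sketched as plain functions, with monitoring feeding back into retraining. Everything below is a toy stand-in for illustration, not SageMaker's API: the "model" just memorizes a mean, and the "endpoint" scores new samples by distance from it.

```python
# Toy sketch of the machine learning loop: prepare -> train -> deploy
# -> monitor, with the monitor deciding whether retraining is needed.

def prepare(raw):
    """Make the data 'machine-learning friendly': here, scale to [0, 1]."""
    hi = max(raw)
    return [x / hi for x in raw]

def train(data):
    """Toy 'model': remember the mean of the training data."""
    return {"mean": sum(data) / len(data)}

def deploy(model):
    """Return a callable 'endpoint' that scores new samples."""
    return lambda x: abs(x - model["mean"])

def monitor(endpoint, live, threshold=0.5):
    """Flag drift when live samples score far from the training mean."""
    scores = [endpoint(x) for x in live]
    return sum(scores) / len(scores) > threshold

model = train(prepare([2.0, 4.0, 6.0, 8.0]))
endpoint = deploy(model)
# Live data on a different scale than training data triggers the flag:
needs_retraining = monitor(endpoint, live=[5.0, 5.2])
```

The point of the sketch is the feedback arrow: the monitor's output is the input to the next pass through data preparation and training, which is what orchestration (MLOps) automates.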

03:29

I'm also going to outline the key high-level challenges for Sonrai. The first one is data storage. Sonrai Analytics are in the healthcare and life sciences space, and they're taking this foundation-model trend and really bringing it to this vertical. They're dealing with petabytes of data, so ensuring they have an effective cost management system for that is crucial, and being able to handle that scale is key. They're also training and hosting their own foundation models, which again is extremely cutting edge; they're using services such as SageMaker to help with this, but I'm going to let the guys dive deep into how they're actually doing it. And then finally, scaling to meet business demand. Whether you're a startup or an enterprise, it's really important to be able to scale your technical architecture to meet customer demand. Sonrai can do that, whether it's a big enterprise client or a new startup joining their system, because they've architected it in the correct way.

04:34

So you'll hear a lot today about our AWS services, and today we're going to focus a bit on SageMaker, so keep that in mind through their presentation. We have a whole host of AI services. We've got Amazon HealthOmics here, which applies mostly to the HCLS space; however, no matter your vertical, whether you're in fintech or anything else, there are AI services there that can do a lot of the heavy lifting of AI in the back end. I just wanted to paint that picture in case you're not in the HCLS space, because there's a lot you can take away from our service stack.

05:10

So let's talk a little bit about SageMaker before I hand over. Again, if you keep that machine learning loop in mind, you can see that SageMaker Studio takes care of everything from data preparation all the way through to the deployment of the model. What's really helpful is that it is designed for a whole host of people using the service, so ML engineers and data scientists can all collaborate in one place, which really helps efficiency because you're not using two or three third-party services for each part of the process. Again, the results can be like Sonrai and the way they handle this from an AWS architecture standpoint.

05:51

I get asked all the time: this sounds great, but how do I actually get started with something hands-on that I can build for my own company? I always point people in the right direction: the best place to start is with a workshop. Get hands-on; this QR code will take you to the SageMaker Immersion Day, which has examples whether you just want to experiment with SageMaker for the first time or you want to build a pipeline and orchestrate a whole end-to-end solution. We then encourage you to build a POC on your side, and we have AWS Partners (I'm sure you've heard a lot about them today), so don't be afraid to engage them; they've solved these problems before, and they can help you deliver them and teach your team. It's a really fast way to do it, and then straight into production.

06:37

So, I wanted to set the scene for Sonrai. I'm going to introduce Dr Matthew Alise, who is the head of data science. It's been a pleasure working with you, Matt, and I'll leave it to you.

06:53

Good afternoon, folks. You can hear me?

play06:53

you afternoon guys uh you can hear me

play06:57

great okay um so I'm going to set the

play06:59

scene a little bit more than what

play07:01

Jonah's done thank you so much for for

play07:03

the introduction and also the invitation

play07:05

to to present uh so we're doing a double

play07:08

act today uh I'm doing the data science

play07:10

and my colleague Jared is doing the

play07:11

engineering we're both sides of this the

play07:14

same coin okay uh and I can't do my job

play07:17

without him and vice versa I build the

play07:19

ml I'm extracted away from the

play07:20

infrastructure and Jared essentially

play07:22

does the plumbing uh for us so that we

play07:25

can get access to data and train our

play07:27

models so I'm going to tell you a bit

play07:29

about what what we do and why we do it

play07:31

and how we do it but first off I'll give

play07:33

you a little bit about about myself and

play07:35

my path to data science um I I actually

play07:38

originally started as a molecular

play07:39

biologist in the lab about 10 years ago

play07:41

so I was probably looked a little bit

play07:42

more like a mad scientist not I wasn't a

play07:44

gold jacket but it was it was a long

play07:46

white one of test tubes and mixing

play07:48

Regents and I worked in a Precision

play07:50

medicine lab uh back then there was a

play07:52

lot of new technologies about sequencing

play07:54

DNA proteins and so on and what happened

play07:57

about 2015 uh which coincides with the

play08:00

Advent of you know Ai and Nvidia

play08:02

Technologies and so on is that we were

play08:04

generating so much data that the

play08:05

bottleneck wasn't the generation it was

play08:07

the analysis and the utilization of it

play08:09

so I then transitioned into what's

play08:10

called bioinformatics uh which is really

play08:13

just a subset of data science within the

play08:15

health field uh so I joined son about 5

play08:17

years ago and we' we've been building AI

play08:19

products ever since uh so now now we can

08:23

So now we can dive into Sonrai. Sonrai is a Queen's University spin-out in Belfast. It was a fairly long drive down today, up early, but it's great to be here. What we've developed is a number of different algorithms hosted on AWS platforms for our clients. Our clients are developing drugs to treat cancer, and within precision medicine the important thing is who should get the drug: you don't give everyone the same drug, because drugs can be toxic, and there's no point giving a drug to a patient who won't respond. So we identify new markers to apply the right treatment, and we're helping develop diagnostics, or digital diagnostics, using AI, ultimately to enable our clients to save patients' lives.

09:05

So that's Sonrai. Why do we do it? Precision medicine covers a lot of different disease types, but one of the main ones we focus on is cancer. I don't think I need to tell everyone in the room that cancer is bad; it's very unpleasant. One in two people will develop a cancer incident in their lifetime, and hundreds of thousands of people die across the UK and Ireland every year. But as I said on the previous slide, there are a lot of organizations developing therapies to help patients, and the amount of data they're generating in the lab is very amenable to AI. There's a lot of digital transformation happening across biotech and right through healthcare, and we're here to help.

09:44

So I'm going to spotlight a use case focusing on computer vision, a little like the previous talk, but frame it in the business context and then dive into the AWS tech stack we actually used, right from training through to inference. Then I'll pass over to my colleague Jared, who will spotlight some of the engineering challenges behind it; it also ties in nicely with the training loop Jonah highlighted.

10:11

For this particular use case I'll point to the images first. The top image is actually a microscope glass slide of a patient's cancer; this is colon cancer. For a long time this was what we refer to as analog: you looked down a microscope, and the pathologist would be able to say, OK, there's tumor; they would take a Sharpie pen, literally a Sharpie pen, draw on the glass slide, and say that's where you need to extract your DNA from, or your RNA, or whatever it is, so that you can have a test for your diagnosis. But we're now starting to scan these slides, so they're digitized. They're really large images, pyramidal TIFFs, as you can see from the image below. Because they're now digitized, they're amenable to AI, and we can essentially replace that Sharpie pen with our own AI-generated proposal, called a region of interest. This is important for a number of reasons: pathologists are scarce, with actually fewer and fewer pathologists as time goes on, and they're very expensive, about $100 per slide, and you'll see that the AI we apply is incredibly cost-effective.

11:14

So that's the business challenge. We asked ourselves: can AI help with this? Spoiler alert: it can. The technical requirement is: can we train a model entirely within AWS to detect cancer from these large images and assist pathologists? Now, we work in a regulated environment; we're one of the few companies building AI as medical devices, so "assist" is an important word: we're not replacing pathologists. The business output of this is to save millions in operational costs, if there are hundreds of thousands of those images and you're paying a pathologist $100 a go.

11:47

So what was the strategy? We started with the minimum number of patients we could really work with to generate a feature extractor and a model. We had to ingest a large dataset: 2,000 patients, which is actually quite a lot. I know that in other domains that's a very small number, but 2,000 patients are actually quite hard to get hold of. Then we had to automate the data preparation, since this data came from the lab; then we enter that train-and-evaluate loop and iterate, and once the model reaches sufficient performance we can deploy it within a user workflow. That's the important part: a model on its own is not useful; it has to be in an adoptable workflow for clinicians to use.

12:29

Ultimately, what we got out of this was a model which outperforms our competitors, which is great, with a scalable cloud-native deployment, so we can actually scale to meet the needs of healthcare. It's not one patient at a time; we can scale using serverless compute across thousands of patients. Jared is going to speak to the turnaround time, where we have some really impressive metrics, as well as the cost savings for clients. But this is an AWS conference, so now I'm going to speak to some of the technology behind our training.

12:59

Remember, these are precision medicine labs, so the training data actually comes from these pathology instruments. We set up an SFTP endpoint on AWS; the data lands in S3, and then Jared's team have an event-driven architecture in place which uses Lambdas to extract the metadata from these really large images so that they can be viewed by pathologists. Our machine learning pipelines are a little different from other domains in the sense that pathologists have to be able to look at the images before they can go to the next stage, so a human in the loop is really important; you might need ten pathologists, or a thousand, looking at different images. We're also going to talk through some of the really cool Lambda-based architecture for viewing these really big images.
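The event-driven step above, where an S3 upload triggers a Lambda that records metadata about the new slide, can be sketched as a plain handler. The event shape below follows the standard S3 notification format; the bucket name, key, and metadata fields are illustrative, not Sonrai's schema.

```python
# Sketch of a Lambda handler for S3 upload events: for each new object,
# record basic metadata about the slide image. Field names are examples.

def handler(event, context=None):
    records = []
    for record in event["Records"]:
        s3 = record["s3"]
        records.append({
            "bucket": s3["bucket"]["name"],
            "key": s3["object"]["key"],
            "size_bytes": s3["object"]["size"],
            # A real handler would also open the pyramidal TIFF header
            # here to read dimensions and zoom levels for the viewer.
        })
    return records

# Example invocation with a synthetic S3 PUT event:
event = {"Records": [{"s3": {
    "bucket": {"name": "example-slide-uploads"},
    "object": {"key": "slides/patient-001.tiff", "size": 2_000_000_000},
}}]}
meta = handler(event)
```

Because the trigger is the upload itself, no polling or scheduling is needed; each new instrument file is processed as it arrives.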

13:44

The next step on the data preparation side is tile extraction. You saw tiles within those images; you cannot pass one of these large images through a GPU in a single piece, they're just too big, so what you need to do is tile them up, or chunk them up. The pipelines we originally started with on premises were incredibly slow; with cloud-native technology you can instantiate thousands of instances, so it's done in hours instead of days or even months.
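The tiling step amounts to computing a grid of fixed-size boxes over an image far too large for a GPU. A minimal sketch, with illustrative dimensions and tile size (real pipelines read these from the pyramidal TIFF and fan each tile out to an independent task):

```python
# Sketch of tile extraction: cover a huge image with fixed-size tiles,
# clipping at the right and bottom edges. Each tile becomes one unit of
# work that can run in parallel (e.g. one task per tile).

def tile_grid(width, height, tile=512):
    """Yield (x, y, w, h) boxes covering a width x height image."""
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield (x, y, min(tile, width - x), min(tile, height - y))

# A hypothetical 100,000 x 80,000 pixel slide at 512 px tiles:
tiles = list(tile_grid(100_000, 80_000))
n_tiles = len(tiles)
```

Since every tile is independent, throughput scales with however many workers you launch, which is why the cloud version finishes in hours rather than days.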

14:13

That's the data preparation side. The data then lands on S3 (we use Fargate tasks orchestrated by ECS to do that), and then the data is ready for us to start training and the ML team can get stuck in. We have our Docker image for training parked on ECR, and we use SageMaker for our training jobs. We did have an on-premises solution which we utilized a lot, but I don't think I need to tell people why using the cloud is really important for scalability: it means we don't have queues of machine learning engineers waiting for GPU access, and at a startup that's really important. We can also scale down our use of compute as needed. We can scale our experimentation, we get our metrics, and we output our model back to S3. One of the things we're looking to do is move to the model registry that's part of SageMaker. We do have our own artifactory on S3 to handle the nuances of what we do, and because we build medical devices it's just about timing the move towards embracing some of the new features in SageMaker.
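What a model registry adds over a hand-rolled S3 artifactory is mostly versioning plus an approval status that gates deployment. A toy in-memory sketch of that idea (names, statuses, and S3 URIs are illustrative; SageMaker Model Registry provides this as a managed service):

```python
# Toy model registry: versioned artifacts with an approval gate, so
# nothing reaches production until it is explicitly signed off.

class Registry:
    def __init__(self):
        self._models = {}  # model name -> list of version records

    def register(self, name, artifact_uri):
        versions = self._models.setdefault(name, [])
        versions.append({
            "version": len(versions) + 1,
            "artifact": artifact_uri,
            "status": "PendingApproval",  # default: not deployable
        })
        return versions[-1]["version"]

    def approve(self, name, version):
        self._models[name][version - 1]["status"] = "Approved"

    def latest_approved(self, name):
        approved = [v for v in self._models.get(name, [])
                    if v["status"] == "Approved"]
        return approved[-1] if approved else None

reg = Registry()
reg.register("roi-detector", "s3://example-artifacts/roi/v1/model.tar.gz")
reg.register("roi-detector", "s3://example-artifacts/roi/v2/model.tar.gz")
reg.approve("roi-detector", 2)
best = reg.latest_approved("roi-detector")
```

For a medical-device workflow the approval field is the important part: it is where the sign-off described in the next section attaches to a specific, traceable artifact.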

15:17

So, the model performance. Once it's reached that stage, there is a lot of rigor that goes into training some of these models. We follow the FDA's GMLP, good machine learning practice: we have to involve people right from oncologists and machine learning engineers to doctors, even nurses and survivors where we can, for input on and sign-off of the performance of the model. Once that's done, we can start to get ready for production, and this is where our inference comes into play.

15:48

Our S3 model artifact, approved for production, is baked into an inference container. It sounds like common sense, but we are not allowed to have any training logic in our inference containers whatsoever. Most people wouldn't anyway, but we're actually mandated by the regulatory bodies that it cannot happen; there can be no superfluous logic in the inference containers, so we have to have a separate one, hosted on ECR. It's built, and that's when, from my perspective, our team starts to think about handing over to Jared, where his team will ship the code. CDK is used for the infrastructure; we have blue/green deployment, so we can bring one application down and wait for the other, and we use serverless compute: Lambda triggers and Fargate tasks orchestrated by ECS.

16:36

We are going to be moving to inference on SageMaker. Those tasks do not have GPU yet; we want GPU so that we can further improve the turnaround time, so SageMaker is what we'll be doing for inference, as serverless compute as well.

16:50

The final part is monitoring, and that's very important to me as well. I've greatly simplified all of the infrastructure we have in place for that; it's not just CloudWatch, but it's a really good place for us to home in on. We need to understand the usage patterns of our algorithms, and Jonah mentioned model drift metrics: if we start to see a jump from 20% cancer incidence to 40%, it might tell us there's something wrong, and we need to go to the lab and understand whether something has changed with the data being produced. All of these insights allow for the next iterations and training a more robust model.
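The incidence-drift check just described can be sketched as a comparison between the positive-prediction rate in a recent window and the baseline rate seen at validation time. The 20% baseline comes from the talk's example; the alert factor and the sample window are illustrative assumptions.

```python
# Sketch of a drift alarm on prediction incidence: flag when the recent
# rate of positive (cancer-detected) results far exceeds the baseline.

def incidence(predictions):
    """Fraction of positive predictions in a window of 0/1 results."""
    return sum(predictions) / len(predictions)

def drifted(recent, baseline=0.20, factor=1.5):
    """Alert when recent incidence exceeds baseline by the given factor."""
    return incidence(recent) > baseline * factor

recent_window = [1, 0, 1, 0, 1, 1, 0, 1, 0, 1]  # synthetic: 60% positive
alert = drifted(recent_window)  # well above the 20% baseline
```

An alert like this doesn't prove the model is wrong; as in the talk, it is the cue to go back to the lab and check whether the upstream data has changed.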

17:24

So that's a really good segue for me to hand over to Jared, my counterpart, who's going to talk about the engineering challenges around what we do.

17:38

Thank you, Matt; thank you, Jonah. I'm Jared, and I lead the engineering team: a team of cloud engineers, full-stack engineers, and quality assurance engineers. As Matt said, we basically handle the plumbing, which is the data engineering and the infrastructure that underpins the AI and machine learning analysis, and I'm going to spotlight how we use AWS at a high level. What do we provide to our customers? We build our own algorithms, but our main job is actually giving pharma and biotech the tools to develop drugs and new treatments.

18:08

The primary use case that we deploy is called the trusted research environment. That is a prepackaged cloud environment that gives them all their tools, all their data, and all their users in a regulated environment. That's really important: Sonrai's infrastructure is heavily dictated by things like GDPR and how we handle patient data, and it was a big reason why we picked AWS, so that we can use certified data centers and trust that our data will be protected.

18:33

Our customers are seeking to discover new cancer drugs and new treatments. They want to develop these models and algorithms, and they typically want to take their data assets through clinical trials so they can get to market, and we cover the full end-to-end lifecycle of that. This means handling three key areas. On the discovery side, it's about how we actually analyze the complex data. It can be really big and really unwieldy, and unlike business data it's far more varied: it tends to be messy, and you tend to have very niche modalities that require completely different tools. This forced us to look at a data mesh architecture and at a variety of different AWS services suited to each individual data type. For example, HealthOmics is very good at handling bulky data directly from lab instruments, and it can use popular frameworks such as Nextflow, which is really important for handling all the different types of data that can come out, whether it's tabular, genomic, protein, or imaging-based data. We use Athena as an extremely powerful serverless query engine that we can build on to run our analytics and visualizations, and we use Glue to handle things like ETL, converting CSV to Parquet.

19:42

As a general process, we need to take large, bulky data, which might for example be 100 gigabytes of raw data, which you then turn into 40 terabytes of working data, and then you end up with surprisingly small CSVs and Parquet files where it's resolved; that's where you can make analytics and findings. But you need to link it all together: you need traceability, and you need auditing, basically, so that we can know the full history of what happened to this data and that the proper process was carried out, because ultimately patient safety is at risk.

play20:10

risk um after we make our discoveries
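The traceability requirement described above can be illustrated with a minimal sketch: a hash-chained provenance log, where each processing step records what happened to the data and is linked to the previous entry, so the full history can be verified later. This is an illustrative stand-alone example, not Sonrai's actual implementation; the step names and fields are assumptions.

```python
import hashlib
import json

def append_step(log, step, details):
    """Append a processing step to a hash-chained provenance log."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"step": step, "details": details, "prev_hash": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return log

def verify(log):
    """Recompute every hash to confirm the history was not altered."""
    prev = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("step", "details", "prev_hash")}
        payload = json.dumps(body, sort_keys=True).encode()
        if entry["prev_hash"] != prev or entry["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append_step(log, "ingest", {"source": "lab-instrument", "size_gb": 100})
append_step(log, "etl", {"tool": "glue", "output": "parquet"})
print(verify(log))  # True for an untampered log
```

Because each entry's hash covers the previous entry's hash, editing any step invalidates everything after it, which is the property an audit trail needs.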

play20:13

After we make our discoveries and handle our data, we now want to develop it. Sonrai and our customers make a lot of use of SageMaker and SageMaker notebooks, and we can use that to hook directly into other AWS services, including running LLMs, but our primary use is spinning up applications, training and tuning models, and preparing new drugs, treatments or algorithms for eventual clinical release.

On the deployment side we have two use cases. Our customers need to deploy the things they build, so we can give them an operational window where they can use things like API Gateway, Docker and AppSync to spin up their compute and connect their data sources. But also, on a very practical level, we need to deploy across the world. We need a partner like AWS who has data centers that we can trust, and we can use AWS Organizations to create a separate account for each and every client. It's fully segregated: we do not share anything, there's no risk of databases crossing over or of people sharing compute, and every single customer gets a completely different instance. We can use Control Tower to completely automate the deployment of it, monitor it remotely and roll out updates, and we also use CDK, the Cloud Development Kit, which lets us treat the cloud infrastructure the way we treat our JavaScript code: it's fully reviewed and completely controlled, and we have absolute certainty of what version is going out and that each customer is using the correct version.

Spotlighting some interesting challenges (and I think Emily did a great job pointing to this earlier as well; there are going to be a lot of things that resonate with us): I'm going to walk through a real use case with a client who was looking to hold a petabyte, a thousand terabytes, of imaging data. These images are about 5 to 12 GB each, and when you extract them they're more like 50 GB, so they get very unwieldy very quickly. The big challenge with this type of data is that you need both high availability and low cost, and those things are typically at opposite ends of the spectrum.

The way we handled that was to use S3's native lifecycle management. Via APIs, as well as by automatic triggers after, say, three weeks of inactivity, we can very easily cycle data between what we call hot and cold. Hot is S3 Standard, for when a pathologist needs to view the image, run model training or execute algorithms, so we need it highly available. Cold is like having cold storage, for when maybe you've already processed the data but legally you need to hold it, or maybe you're not ready to analyze it and want to batch it later. A good way we like to think about this: it's very similar to the manufacturing term "just in time". When you have a physical warehouse, you don't want to hold inventory for the sake of it; you want to be very efficient about when you bring it in and when you load it out. That's how you can make a significant amount of cost savings, and in this case alone this led to $600,000 in savings just from holding the data.
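A minimal sketch of the hot/cold cycling idea, building an S3 lifecycle rule that transitions objects to Glacier after roughly three weeks. This is an assumption-laden illustration: real S3 lifecycle rules transition on object age rather than inactivity (inactivity-based movement is what S3 Intelligent-Tiering provides), and the bucket name and prefix here are made up.

```python
def archive_rule(prefix: str, days: int) -> dict:
    """Build an S3 lifecycle rule that moves objects under `prefix` to
    Glacier after `days` days (age-based, an approximation of inactivity)."""
    return {
        "ID": f"cold-after-{days}-days",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": days, "StorageClass": "GLACIER"}],
    }

rule = archive_rule("slides/", 21)  # roughly three weeks

# Applying it to a (hypothetical) imaging bucket would look like:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="example-imaging-bucket",
#       LifecycleConfiguration={"Rules": [rule]},
#   )
print(rule["Transitions"][0]["StorageClass"])  # GLACier class name: GLACIER
```

Restoring an object back to "hot" would then be an explicit `restore_object` call or a copy back to S3 Standard when a pathologist requests it.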

play22:52

One of the first surprising things when I joined Sonrai was that storage was going to be a bigger cost problem than compute, which I really did not see coming, but once you get to this scale it really shows. Interestingly, we could have made even more cost savings, because we could have got it down to 25 or 50 terabytes hot, and that again would have had a dramatic impact.

Another interesting challenge Sonrai faced, when we were building our AI, our soon-to-be CE-IVD medical devices, is that we were using an on-premise server, as Matt mentioned. In order to handle this data we need to pull the individual SVS or TIFF files, extract the metadata, pull out 500,000 to 600,000 individual tiles, store them and prepare them for training. It's just very computationally intensive, and we had a pretty expensive on-premise server, but ultimately we were bottlenecked by CPUs and GPUs, and based on the current cadence we were looking at a six-month delay just to get the data ready, just to do the data preparation stage.

To get around this we created an MVP cloud-native tile extractor. It was written in Rust, and it used Lambdas and Fargate. The end result was that we basically just dumped all of the images into one S3 bucket, we set up an automation policy to spin up an individual task per image, and six months became six hours. It just iterated through the full list and gave us everything we needed. On a practical level, for Sonrai it was the difference between meeting the deadline and failing, but it also means that we and our customers can now build these things 6 to 12 months faster, and this scales with the data, so the more data you have, the more important it is. It was also incredibly cost effective: way cheaper than the cost of the electricity would have been to run our server.

play24:31

server uh another interesting challenge

play24:34

um that Madden mentioned was that we

play24:35

have to keep the human in the loop so we

play24:37

can't just hold this data it can't just

play24:38

be behind the scenes and it would be a

play24:41

lot cheaper and efficient if it was just

play24:42

spinning up boxes in the back we

play24:44

actually need to make it available and

play24:45

Matt had mentioned about working in a

play24:47

lab we essentially provide a digital

play24:49

microscope so we need to make those

play24:50

images available on a web browser um and

play24:53

if you can imagine from the image in

play24:54

that you can you can zoom left you can

play24:55

zoom in and out you can go left up down

play24:57

right

play24:59

and when you're doing that you are

play25:01

essentially trying to view 50 gab worth

play25:03

of Imaging data there's about 5 or

play25:05

600,000 images they're about 80

play25:07

kilobytes each we're also trying to

play25:09

avoid not having to store an extra 50 CU

play25:11

then that one pyte now becomes 10 pyte

play25:14

when the time you fully extracted so

play25:16

what we did was create a cloud native

play25:18

image viewer where we spun up thousands

play25:20

of Lambda indications we linked it up

play25:23

with appsync for graph query to a

play25:24

JavaScript front end and essentially we

play25:27

are lazy loading and rendering ing as we

play25:29

spin up a Lambda it reaches in to the

play25:31

archive format pulls out the tile that

play25:33

we need we're not technically holding N3

play25:35

storage we're not even paying S3 cost in

play25:37

that because it's just a a temporary

play25:39

transfer in a live session and the

play25:42

result is is pretty much instantaneous I

play25:44

can pan around I'm lazy loading as I

play25:46

move around the screen I'm only using

play25:47

the data that I need it's entirely

play25:49

serverless it's it's it's instantaneous

play25:51

as well and uh it's also extremely cost

play25:54

effective we're talking fractions of a

play25:56

penny for things like this to run over a

play25:57

couple of hours
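To see where tile counts of that order come from, here is a small sketch of the deep-zoom tile pyramid math used by slide viewers: a gigapixel image is cut into fixed-size tiles at successively halved resolutions, and the viewer only fetches the tiles visible in the current viewport. The image dimensions and tile size below are illustrative, not Sonrai's actual parameters.

```python
import math

def pyramid_tiles(width, height, tile=256):
    """Count tiles per zoom level for a deep-zoom style pyramid.
    Each level halves the resolution until the image fits in one tile."""
    levels = []
    w, h = width, height
    while True:
        cols = math.ceil(w / tile)
        rows = math.ceil(h / tile)
        levels.append((w, h, cols * rows))
        if cols == 1 and rows == 1:
            break
        w, h = max(1, w // 2), max(1, h // 2)
    return levels

# A whole-slide scan can be on the order of 100,000 x 80,000 pixels.
levels = pyramid_tiles(100_000, 80_000)
total = sum(n for _, _, n in levels)
print(total)  # total tile count across all zoom levels
```

Most of those tiles sit at the full-resolution base level, which is why lazy loading only the visible tiles, rather than pre-extracting every tile for every slide, avoids the petabyte-scale blow-up described above.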

play25:59

A big thing for Sonrai as well: a lot of open-source solutions and competitors, to get around this big data storage problem, sacrifice image quality. They heavily compress the image, because how else do you get that amount of data onto a server? We were able to completely avoid that. We don't compress at all; we use the full original-size image, and for us that is potentially the difference between an AI correctly estimating whether you have lung cancer or not. So we sacrificed no quality, but we got instantaneous speed and we avoided serious storage costs.

This now loops back to Matt's key point about why we do this. I've worked in healthcare a long time; I used to build medical device software in hospitals as well. Healthcare basically has a scalability problem. In a nutshell, there are constantly more and more patients, there are fewer doctors, fewer nurses, fewer pathologists, and there's less money to go around per patient, and that is not going to get better any time soon. So lots of companies like ourselves are looking at how we digitize this, how we use AI and cloud to automate. A very telling example is our MSI algorithm that we built to detect colon cancer. The current gold standard is about two weeks in a lab; it goes through multiple people and there are lots of physical processes involved. We can do the equivalent in 3 minutes, and in fact, in the space of time that I've been speaking to you, we could have done the entirety of Ireland, and it would have been surprisingly cost effective. We would have just run millions at a time, tested 100,000 at a time, and AWS would have handled it with absolutely no problem.

Also, in terms of the practical stuff, we mentioned that pathologists need access to a thousand terabytes, petabytes of data at a time. The current reality is that they're passing around physical hard drives, they're losing data and they're struggling to find things. We can hold that data and make it highly available, and it means a pathologist can go through more cases, which means more patients can get diagnoses. And of course, everything we do has to be regulated, governed and all about patient safety. We are an ISO 13485 certified company, so we've had external guidance on how we do machine learning, what our practices are, and that we are prioritizing patient safety and positive cancer outcomes.

play28:09

outcomes uh and I want to talk a bit

play28:11

about uh aeds and why we chose them and

play28:13

we're a very young startup we're 5 to

play28:15

six years old so we had a fresh chance

play28:17

to look at providers and one of the big

play28:19

things that appealed to us was that AWS

play28:21

has a massive ecosystem um there's so

play28:23

many services that we can latch on to

play28:25

and I think as Jonah mentioned at the

play28:27

start of a speech we we're a small but

play28:29

ambitious team we can't boil the ocean

play28:31

we don't want to build things that we

play28:32

don't need to build we want to build the

play28:33

things we care about so for example we

play28:35

didn't need to build our own

play28:36

authentication system we can use Cognito

play28:38

we didn't need to build ways to manage

play28:40

certain infrastructure when we can use

play28:41

Athena we can use homic this allows us

play28:44

to be really precious with our time and

play28:46

it's ultimately why we compete against

play28:48

bigger companies because we are picking

play28:50

the things that matter um another key

play28:53

thing is I mentioned is that we deploy

play28:55

across the world so having a large cloud

play28:58

provider that has respected data centers

play28:59

where we can reliably ship out the same

play29:01

product it's just fundamentally required

play29:04

and the flexibility of AWS is also

play29:06

really useful there's always aund ways

play29:07

to do things it gives us the freedom to

play29:09

build the things that we want I also

play29:12

can't uh speak highly enough about um

play29:14

the support we've had from Jon and sinon

play29:16

um we torture them regularly and other

play29:18

members of the edus team we've had

play29:20

access to training for sage maker we've

play29:22

had beta access to health omix it allows

play29:24

us to get a competitive Head Start um it

play29:27

whenever we're having trouble get

play29:28

product Specialists brought in who can

play29:29

help us identify if our cyber security

play29:32

is up the spec it can make sure that

play29:33

we're doing things the right way it just

play29:35

allows us to move fast and build fast

play29:37

which we have to do we have no choice

play29:39

but to be moving quickly in the market

play29:42

and in order to H get to where we are

play29:43

today we took advantage of aded activate

play29:46

um in the early days it gave us um the

play29:48

ability to explore and create pocs and

play29:51

minimum valuable products it meant that

play29:53

we could afford to experiment and we

play29:55

could get something in from the

play29:56

customers it was really critical to our

play29:57

early early business days um and in a

play30:00

similar V we can look at Marketplace as

play30:02

a way to see do we need to build this or

play30:04

can we buy it can I take something off

play30:05

the shelf can I deal with this awkward

play30:06

customer request is this what sonra

play30:09

really wants to do if not let's just

play30:11

find it and Slot it in and that way we

play30:12

get to keep moving we get to protect our

play30:14

road map which again is just really

play30:16

important for a company of our size uh

play30:18

and lastly probably a fairly undervalued

play30:20

statement AWS does tend have really good

play30:22

documentation we're constantly trying

play30:24

new things we're Hing our heads against

play30:26

the wall so being able to access blogs

play30:28

videos training material it's the

play30:30

difference between meeting in a deadline

play30:32

and

play30:33

not um so final slide for me and I just

play30:36

want to talk about the future and

play30:37

there's some things we've already

play30:38

pointed to um health omix is probably

play30:41

the biggest Focus for us this year we've

play30:42

already got it up and running and we've

play30:44

had a lot of great support from The A

play30:46

Team in that as mentioned earlier it's

play30:47

for handling bulky raw data that tends

play30:50

to come from instruments it's a really

play30:51

big problem to our customers and in

play30:54

practice you can be dealing with um

play30:56

gigabytes of data that terabytes when

play30:58

you're using it we actually went to

play31:00

three different prototypes it's still in

play31:01

the early days but um we went with AWS

play31:04

batch originally which is the most open

play31:06

source recommended route and we had the

play31:08

exact same run cost $2,000 and health

play31:11

omix came in at

play31:12

$22 and you know that's because of

play31:15

things like there's omix compute you're

play31:17

avoiding networking and data cost with

play31:19

not gateways so there's lots of

play31:20

advantages and there's also security and

play31:22

infrastructure benefits that we don't

play31:23

have to manage infrastructure we can

play31:26

rely upon apis and it just makes our

play31:27

life easier um thankfully I don't have

play31:30

to explain Foundation models because

play31:32

Emily gave a fantastic presentation on

play31:33

it but this is something that son very

play31:35

much is wanting to double down on um

play31:37

Imaging is very complicated I mentioned

play31:39

about the expensive issue with um data

play31:42

storage and how tile extractions very

play31:44

competition intensive that gives us now

play31:46

a foundation to build a foundation model

play31:48

it means that we can potentially develop

play31:51

this as a way to rapidly build new

play31:53

cancer diagnosis algorithms and to make

play31:54

what we call multi-indicator

play31:56

multi-indicator based so it could be

play31:58

that we can more easily um identify

play32:00

other parts of cancer in the body as

play32:03

well as allow our customers to build

play32:04

these things as well and then last but

play32:06

not least everyone on the plan has been

play32:08

talking large about large language

play32:09

models sonra has used them also to try

play32:11

and and boost Discovery and we are

play32:13

working with our own clients to see

play32:15

about how we can link in apis to bedrock

play32:16

so that customers can basically get new

play32:19

drugs to Market quicker that that's our

play32:21

purpose is to constantly speed up to

play32:22

reduce the barriers overcome the

play32:24

technical hurdles uh thank you very much

play32:26

I'm going to pass back to Jonah

play32:29

[Applause]

Jared, Matthew, absolutely incredible. Just seeing what you've achieved with the team is brilliant, and every month I check in with you folks it's a new story; you've progressed something that might take, you know, a year. So really, really appreciate it. And definitely note what Sonrai Analytics do extremely well: they really lean into the AWS managed services, so SageMaker, Lambda, Fargate. They're using those, and what that means is they can spend their time developing their own product and business needs and less time managing infrastructure. You can see that in the way they're going. Again, we talked a little bit about the AI services; check them out. There are loads of general ones, there are verticalized ones, and there's a whole stack you can easily find on the AWS website.

I have one final ask for the audience. It takes about 30 seconds: if you wouldn't mind scanning the QR code, it helps speakers such as myself, Jared and Matthew talk at these events, so we'd really appreciate your feedback.

On a final note, from a startup perspective, if anyone wants to get hands-on with some generative AI workshops, we are running a generative AI workshop in Dublin on the 22nd of May, where you can bring yourself and a team and we're going to build a generative AI chatbot. If you want to get involved with that, you can find me after the talk. Jared, Matthew and myself are going to be at the startup loft over there (when you go out the door, turn right), and we'll take any questions you want to put to the two of them or myself; we'd be really happy to dive right into those conversations. We'll be there for about half an hour. Thank you so much for being a great audience.

[Applause]
