David C King, FogHorn Systems | CUBEConversation, November 2018

SiliconANGLE theCUBE
15 Nov 2018 · 24:56

Summary

TL;DR: In this CUBE Conversation, Jeff Frick interviews David King, CEO of FogHorn Systems, about the convergence of edge computing, fog computing, and cloud computing. King explains fog computing as an advanced form of edge computing that brings cloud functions like big data analytics to industrial environments. FogHorn's focus is on delivering AI capabilities on live-streaming sensor data to optimize industrial IoT processes in real-time, reducing the need to send massive data volumes to the cloud. The discussion also covers the integration of IT and OT, the challenges of cybersecurity in connected systems, and the potential of video and audio sensing in industrial applications.

Takeaways

  • 🌐 FogHorn Systems is a company focused on fog computing, which is an extension of edge computing and aims to bring cloud computing functions closer to the source of data.
  • 📈 Fog computing is designed to perform analytics and machine learning on live-streaming sensor data, reducing the need to send massive amounts of data to the cloud.
  • 🤝 The convergence of Operational Technology (OT) and Information Technology (IT) is crucial for leveraging AI and IoT in industrial settings, despite the historical separation of the two domains.
  • 🛠️ FogHorn's technology can run on a variety of hardware, from small devices like Raspberry Pi to larger systems, emphasizing the flexibility for different industrial needs.
  • 🔒 Security is a significant concern as connecting OT systems to IT networks can introduce vulnerabilities, despite the benefits of real-time data insights.
  • 💡 The industrial IoT is not just about data collection but also about applying AI and machine learning to improve operations in real-time, leading to significant economic benefits.
  • 📊 FogHorn's stack is designed to handle high-frequency data from industrial machines, enabling on-the-fly computation and decision-making.
  • 🔄 The concept of 'ML on ML' or machine learning models improving other machine learning models in an automated loop is a key aspect of FogHorn's approach to industrial AI.
  • 🚀 FogHorn's technology can be integrated into existing industrial systems, either by sending processed data back to the cloud or directly into control systems for immediate action.
  • 📹 There's a growing trend in industrial IoT towards using video, 3D imaging, and audio sensing for insights, which was traditionally underutilized.

Q & A

  • What is the main topic of discussion in the video?

    -The main topic of discussion is edge computing, fog computing, and cloud computing, with a focus on how these technologies intersect and their applications, particularly in industrial IoT.

  • Who is David King and what is his role in the discussion?

    -David King is the CEO of FogHorn Systems, a company focused on fog computing. He is in the discussion to provide insights into the company's background and the concept of fog computing.

  • What does fog computing represent according to the discussion?

    -Fog computing represents the intersection between cloud and on-premises computing, aiming to bring advanced computing capabilities like analytics, machine learning, and AI closer to the source of data, typically in industrial environments.

  • How does FogHorn Systems differentiate between edge computing and fog computing?

    -FogHorn Systems views fog computing as more than just edge computing. While edge computing has been around for decades in industrial settings, fog computing is seen as a more advanced form that applies cloud computing functions, such as big data analytics, in an industrial context or directly on a machine.

  • What is the significance of 'big data operating in the world's smallest footprint' mentioned by David King?

    -This phrase signifies the concept of performing complex data analytics and machine learning on a small scale, close to the source of data, which is essential for real-time decision making in industrial IoT without the need to send massive amounts of data to the cloud.

  • What are the challenges in merging OT (Operations Technology) and IT (Information Technology) as discussed in the video?

    -The challenges include historical separation and different priorities, such as real-time control and safety in OT versus data-driven insights in IT. There's also a need for education and understanding between the two fields, as well as addressing security concerns when connecting previously isolated systems.

  • How does FogHorn Systems address the issue of data persistence and analysis in industrial settings?

    -FogHorn Systems focuses on performing analytics and machine learning on live-streaming sensor data at the edge, reducing the need to persist large amounts of data on-premises or send it to the cloud for processing.
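    The pattern described here — computing over a live stream and forwarding only derived insights rather than raw samples — can be sketched in a few lines of Python. This is an illustrative sketch only, not FogHorn's actual stack; the window size, z-score rule, and alert format are hypothetical:

    ```python
    from collections import deque

    class EdgeAnalyzer:
        """Rolling analytics over a live sensor stream.

        Keeps only a small bounded window in memory and emits compact
        derived insights, so raw data never needs to be persisted
        on-prem or shipped to the cloud. Thresholds are hypothetical.
        """
        def __init__(self, window=50, threshold=3.0):
            self.window = deque(maxlen=window)  # bounded memory footprint
            self.threshold = threshold          # z-score alert level (illustrative)

        def ingest(self, reading):
            self.window.append(reading)
            if len(self.window) < self.window.maxlen:
                return None  # not enough history yet
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = var ** 0.5 or 1e-9  # guard against a perfectly flat window
            z = (reading - mean) / std
            if abs(z) > self.threshold:
                # Only this compact insight would leave the device.
                return {"event": "anomaly", "value": reading, "z": round(z, 2)}
            return None

    analyzer = EdgeAnalyzer()
    insights = [analyzer.ingest(1.0) for _ in range(60)]  # steady stream: silence
    alert = analyzer.ingest(9.0)                          # transient spike: one insight
    ```

    A steady stream produces no upstream traffic at all; only the spike generates a small metadata record.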

  • What is the concept of 'ML on ML' mentioned by David King?

    -'ML on ML' refers to the concept of machine learning models improving other machine learning models in an automated fashion, such as updating a global fleet-wide model based on insights gathered from edge devices, without human intervention.
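    The closed loop described above can be illustrated with a toy example: edge nodes run inference on live data and report only compact error statistics, and the cloud side refreshes a fleet-wide model parameter from that metadata alone. The one-coefficient model, learning rate, and node count are illustrative assumptions, not the TensorFlow pipeline from the actual demo:

    ```python
    # Toy "ML on ML" loop: edge inference -> compact metadata -> cloud
    # fleet-wide model update -> push back down, with no human in the loop.

    def edge_inference(model_coef, samples):
        """Run the current model at the edge; return only metadata."""
        errors = [actual - model_coef * x for x, actual in samples]
        bias = sum(errors) / len(errors)  # compact insight, not raw data
        return {"n": len(samples), "bias": bias}

    def cloud_update(model_coef, insights, lr=0.5):
        """Fleet-wide model refresh derived from edge metadata alone."""
        total = sum(i["n"] for i in insights)
        fleet_bias = sum(i["bias"] * i["n"] for i in insights) / total
        return model_coef + lr * fleet_bias

    # The true relation is y = 2x; the fleet model starts off wrong at 1.0.
    coef = 1.0
    for _ in range(20):  # fully automated loop
        node_insights = [edge_inference(coef, [(x, 2 * x) for x in (1, 2, 3)])
                         for _ in range(4)]  # four hypothetical edge nodes
        coef = cloud_update(coef, node_insights)
    ```

    The fleet model converges toward the true coefficient without any raw sensor data ever leaving the nodes.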

  • How does FogHorn Systems handle the computational challenges at the edge, especially with limited resources?

    -FogHorn Systems has developed a software stack that is lightweight and OS-independent, capable of running on small form factor devices like Raspberry Pi, making it suitable for edge environments with limited power and connectivity.

  • What are some of the practical applications of FogHorn Systems' technology in the field?

    -Practical applications include condition-based monitoring, predictive maintenance, asset performance optimization, and plant-wide optimization. The technology also enables the use of video, 3D imaging, and audio sensing for insights not traditionally derived from such data.

  • How does FogHorn Systems ensure that its solutions are non-invasive and compatible with existing industrial infrastructure?

    -FogHorn Systems ensures non-invasiveness by developing solutions that can run on existing hardware, such as PLCs, and by initially providing alerting and insights without directly interfacing with control systems, allowing for gradual integration and proof of concept.

Outlines

00:00

🌐 Introduction to Edge, Fog, and Cloud Computing

Jeff Frick from theCUBE welcomes viewers to a discussion on edge computing, fog computing, and cloud computing at the Palo Alto studios. He introduces David King, CEO of FogHorn Systems, a company focused on fog computing. David explains that fog computing is an evolution of edge computing, aiming to bring cloud computing capabilities to industrial environments. FogHorn Systems was founded to give substance to the concept of fog computing, which involves processing data close to its source using advanced analytics and AI, thus reducing the need to send massive amounts of data to the cloud.

05:00

🔄 The Convergence of OT and IT

The conversation delves into the integration of Operational Technology (OT) and Information Technology (IT), highlighting the historical separation and current convergence due to technological advancements. David discusses the challenges and opportunities of merging these two domains, emphasizing the need for IT to understand and respect the real-time and safety-critical nature of OT. FogHorn's approach is to augment OT with AI and analytics without disrupting existing systems, focusing on adding value through intelligent data processing at the edge.

10:01

🛠️ The Role of Fog Computing in Industrial IoT

David King elaborates on FogHorn's role in Industrial Internet of Things (IIoT), explaining how their technology enables real-time analytics and machine learning directly on sensor data. This approach minimizes the need to store and transfer vast amounts of data to the cloud, allowing for more efficient and immediate decision-making. The discussion touches on the importance of processing data at the edge of the network, particularly for high-frequency data generated by industrial machines, and how this can lead to significant economic benefits.

15:02

💡 Real-World Applications and the Future of Industrial Automation

The discussion moves to practical applications of FogHorn's technology, with a focus on condition-based monitoring, predictive maintenance, and asset performance optimization. David shares examples of how their technology is being used in remote and brownfield sites, emphasizing the shift from traditional monitoring methods to more advanced, data-driven approaches. The conversation also explores the potential for video and audio sensing in industrial settings, highlighting the move towards using these technologies for real-time decision-making and process optimization.

20:02

🔧 Closing Thoughts on AI in Industry and the Path Forward

In the final part of the conversation, David and Jeff discuss the future of AI in industrial settings, including the concept of self-healing machines and self-improving processes. They touch on the importance of starting with high-value business problems when implementing AI and edge computing solutions. David shares an example of how video monitoring in an oil and gas plant led to significant insights and improvements, showcasing the potential for non-invasive AI applications in traditional OT environments.

Keywords

💡Edge Computing

Edge computing refers to the practice of processing data near the source of the data, rather than in a centralized data-processing warehouse. This reduces the latency and bandwidth usage associated with transmitting data to a central location. In the video, edge computing is discussed in the context of industrial IoT, where it enables real-time analytics and machine learning on live-streaming sensor data, which is crucial for applications like jet engines that generate massive amounts of data.

💡Fog Computing

Fog computing is an extension of cloud computing, bringing computation, storage, and networking services closer to the location where it is needed, such as at the edge of the network. It is designed to improve response times and save bandwidth. In the script, fog computing is highlighted as a concept that allows for advanced computing at the edge, which is more than just edge computing, and is seen as a way to apply cloud computing functions in industrial environments.

💡Industrial IoT (IIoT)

Industrial IoT refers to the application of IoT technologies in industrial settings, enabling the interconnection of industrial systems with advanced data analytics. The video emphasizes the importance of IIoT in delivering analytic, machine learning, and AI capabilities on live-streaming sensor data, which is vital for improving operational efficiency and reducing downtime in industrial settings.

💡OT/IT Convergence

OT stands for Operational Technology, which is the hardware and software used to monitor and control industrial processes. IT refers to Information Technology, which involves the use of computers to store, retrieve, transmit, and manipulate data. The convergence of OT and IT is about integrating these two domains to leverage data and computing power for better operational efficiency and insights. The video discusses the challenges and opportunities of merging these traditionally separate systems.

💡Cybersecurity

Cybersecurity in the context of the video refers to the measures taken to protect industrial systems from digital attacks. As OT systems become more connected, they also become more vulnerable to cyber threats. The discussion highlights the importance of maintaining security while embracing the benefits of IT and IIoT.

💡Machine Learning

Machine learning is a subset of artificial intelligence that allows systems to learn and improve from experience without being explicitly programmed. In the video, machine learning is discussed as a critical component of fog computing, where it can be applied to live-streaming sensor data to provide real-time insights and improve industrial processes.

💡Deep Learning

Deep learning is a type of machine learning that uses neural networks with many layers, allowing the model to learn and make decisions based on complex patterns in large amounts of data. The video mentions deep learning as a part of the advanced analytics that can be performed at the edge, enabling more sophisticated processing of data from industrial equipment.

💡Time Series Database

A time series database is designed to handle time-stamped data, which is common in industrial settings where sensors collect data at regular intervals. In the video, it is mentioned as a type of database that can be used to store and analyze the large volumes of data generated by industrial IoT systems.
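A small illustration of why the conversation favors inference on every raw sample over uploading averaged time series: averaging a window before upload can erase exactly the transient a model needs to see. All numbers here are made up:

```python
# "Average, then send" vs. inspecting every sample at the edge.

raw = [1.0] * 100
raw[57] = 50.0  # a single-sample transient fault signature

# Down-sampled upload: one averaged value per 100 raw samples.
averaged = sum(raw) / len(raw)  # barely above the 1.0 baseline

# Edge strategy: look at every sample before deciding what to send up.
spikes = [(i, v) for i, v in enumerate(raw) if v > 10.0]
```

The averaged value is almost indistinguishable from normal, while the per-sample scan pinpoints the transient.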

💡Digital Twin

A digital twin is a virtual representation of a physical asset, process, or system. It is used to monitor, predict, and optimize the performance of the physical counterpart. In the script, the digital twin is discussed as a concept where the digital representation of an asset or a fleet of assets can be updated with insights derived from edge computing, allowing for better maintenance and performance optimization.
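The split described here, a per-asset twin at the edge feeding a fleet-wide twin in the cloud, can be sketched roughly as follows; the class and field names are hypothetical, not any vendor's API:

```python
# Per-asset twin at the edge; fleet-wide twin in the cloud,
# fed only by compact edge insights.

class AssetTwin:
    """Digital twin of one asset, living on or near the asset."""
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {"temp": None, "samples": 0}

    def update(self, temp):
        self.state["temp"] = temp
        self.state["samples"] += 1
        # Only this compact insight leaves the asset.
        return {"asset": self.asset_id, "temp": temp}

class FleetTwin:
    """Cloud-side twin of the whole fleet, built from edge insights."""
    def __init__(self):
        self.latest = {}

    def ingest(self, insight):
        self.latest[insight["asset"]] = insight["temp"]

    def fleet_avg_temp(self):
        return sum(self.latest.values()) / len(self.latest)

fleet = FleetTwin()
for asset_id, temp in [("pump-1", 70.0), ("pump-2", 90.0)]:
    twin = AssetTwin(asset_id)
    fleet.ingest(twin.update(temp))
```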

💡Condition-Based Monitoring

Condition-based monitoring is a type of predictive maintenance strategy that uses data from sensors to monitor the condition of equipment and predict when maintenance is needed. The video discusses this as a key application of edge computing in industrial settings, where real-time data can be analyzed to prevent downtime and improve efficiency.
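A condition-based-monitoring rule of the kind discussed can be as simple as projecting a degradation trend forward and flagging maintenance before a limit is crossed. The indicator, limit, and horizon below are invented for illustration:

```python
# Minimal CBM rule: watch a degradation indicator (say, bearing
# vibration) and flag maintenance when its recent linear trend
# projects past a limit within the planning horizon.

def maintenance_due(history, limit=8.0, horizon=5):
    """Project the linear trend `horizon` steps past the last reading."""
    if len(history) < 2:
        return False
    rate = (history[-1] - history[0]) / (len(history) - 1)
    projected = history[-1] + rate * horizon
    return projected >= limit

healthy = [2.0, 2.1, 2.0, 2.2]    # flat trend, far from the limit
degrading = [2.0, 3.0, 4.0, 5.0]  # rising 1.0 per step, limit in sight
```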

💡Video Analytics

Video analytics involves the use of software to analyze video content for various purposes, such as surveillance or quality control. In the script, video analytics is highlighted as an emerging application of industrial IoT, where cameras can be used to monitor and analyze processes in real-time, leading to improved safety, quality, and operational efficiency.

Highlights

Introduction to edge computing, fog computing, and cloud computing, and their significance in the current tech landscape.

David King, CEO of FogHorn Systems, discusses the company's focus on fog computing and its industrial applications.

FogHorn Systems' origin and its mission to define and add value to the term 'fog computing'.

The distinction between edge computing and fog computing, especially in the context of industrial IoT.

How fog computing aims to bring cloud computing functions to industrial environments.

The importance of processing data close to its source to improve efficiency and reduce data overload.

Challenges and opportunities in merging operations technology (OT) with information technology (IT).

The historical context of OT and IT, and their traditional separation in industrial settings.

FogHorn's strategy to integrate AI into OT environments without disrupting existing systems.

The potential economic impact of industrial IoT and the value of real-time insights in production.

Addressing security concerns as OT systems become more connected.

The concept of 'ML on ML' and its role in the foundation of AI for industrial applications.

How FogHorn's technology enables real-time analytics and machine learning directly on live-streaming sensor data.

The practicality of running complex AI models on edge devices with limited resources.

Examples of how FogHorn's technology is being used in the field, including condition-based monitoring and predictive maintenance.

The potential for video and audio sensing in industrial IoT and how it expands the possibilities for data collection and analysis.

FogHorn's approach to starting with high-value business problems when implementing AI and edge computing solutions.

The future of AI in industry, including self-healing machines and self-improving processes.

Real-world examples of how FogHorn's technology has been integrated into existing industrial processes.

Transcripts

play00:02

(uplifting orchestral music)

play00:12

>> Hey, welcome back, everybody.

play00:13

Jeff Frick here with theCUBE.

play00:14

We're at the Palo Alto studios,

play00:15

having theCUBE Conversation,

play00:16

a little break in the action

play00:17

of the conference season

play00:18

before things heat up,

play00:19

before we kind of come to the close of 2018.

play00:21

It's been quite a year.

play00:22

But it's nice to be back in the studio.

play00:24

Things are a little bit less crazy,

play00:26

and we're excited to talk about

play00:27

one of the really hot topics right now,

play00:29

which is edge computing,

play00:31

fog computing, cloud computing.

play00:32

What do all these things mean,

play00:33

how do they all intersect,

play00:34

and we've got with us today David King.

play00:36

He's the CEO of FogHorn Systems.

play00:38

David, first off, welcome.

play00:39

>> Thank you, Jeff.

play00:40

>> So, FogHorn Systems,

play00:42

I guess by the fog,

play00:43

you guys are all about the fog,

play00:44

and for those that don't know,

play00:45

fog is kind of this intersection between cloud,

play00:47

and on prem, and...

play00:49

So first off, give us a little bit of

play00:51

the background of the company

play00:52

and then let's jump into

play00:53

what this fog thing is all about.

play00:54

>> Sure, actually, it all dovetails together.

play00:57

So yeah, you're right,

play00:58

FogHorn, the name itself,

play00:59

came from Cisco's invented term,

play01:02

called fog computing,

play01:03

from almost a decade ago,

play01:04

and it connoted this idea of

play01:06

computing at the edge,

play01:08

but didn't really have

play01:08

a lot of definition early on.

play01:10

And so, FogHorn was started actually

play01:11

by a Palo Alto Incubator, just nearby here,

play01:14

that had the idea that hey,

play01:15

we got to put some real meaning

play01:16

and some real meat on the bones here,

play01:18

with fog computing.

play01:19

And what we think FogHorn has become

play01:21

over the last three and a half years,

play01:23

since we took it out of the incubator,

play01:24

since I joined,

play01:26

was to put some real purpose,

play01:27

meaning, and value in that term.

play01:29

And so, it's more than just edge computing.

play01:31

Edge computing is a related term.

play01:34

In the industrial world,

play01:35

people would say, hey,

play01:36

I've had edge computing for three, 40, 50 years

play01:38

with my production line control

play01:39

and also my distributed control systems.

play01:41

I've got hard wired compute.

play01:43

I run, they call them,

play01:44

industrial PCs in the factory.

play01:46

That's edge compute.

play01:47

The IT roles come along and said,

play01:48

no, no, no, fog compute is

play01:49

a more advanced form of it.

play01:51

Well, the real purpose of fog computing

play01:53

and edge computing,

play01:53

in our view, in the modern world,

play01:55

is to apply what has traditionally been

play01:57

thought of as cloud computing functions,

play01:59

big, big data,

play02:01

but running in an industrial environment,

play02:02

or running on a machine.

play02:04

And so, we call it as really big data

play02:06

operating in the world's smallest footprint, okay,

play02:09

and the real point of this

play02:10

for industrial customers,

play02:11

which is our primary focus, industrial IoT,

play02:14

is to deliver as much analytic machine learning,

play02:18

deep learning AI capability

play02:20

on live-streaming sensor data, okay,

play02:23

and what that means is rather than

play02:24

persisting a lot of data either on prem,

play02:26

and then sending it to the cloud,

play02:27

or trying to stream all this to the cloud

play02:29

to make sense of terabytes or petabytes a day,

play02:32

per machine sometimes, right,

play02:33

think about a jet engine,

play02:34

a petabyte every flight.

play02:35

You want to do the compute

play02:37

as close to the source as possible,

play02:39

and if possible,

play02:39

on the live streaming data,

play02:41

not after you've persisted it

play02:42

on a big storage system.

play02:44

So that's the idea. >> So you touch on

play02:46

all kinds of stuff there.

play02:47

So we'll break it down. >> Unpack it,

play02:48

yeah. >> Unpack it.

play02:49

So first off, just kind of the OT/IT thing,

play02:52

and I think that's really important,

play02:53

and we talked before turning the cameras on

play02:54

about Dr. Tom from HP,

play02:56

he loves to make a big symbolic handshake of

play02:58

the operations technology, >> One of our partners.

play03:00

>> Right, and IT,

play03:01

and the marriage of these two things,

play03:02

where before, as you said,

play03:03

the OT guys, the guys that

play03:04

have been running factories, you know,

play03:06

they've been doing this for a long time,

play03:07

and now suddenly,

play03:09

the IT folks are butting in

play03:10

and want to get access to that data

play03:12

to provide more control.

play03:13

So, you know, as you see the marriage of

play03:15

those two things coming together,

play03:17

what are the biggest points of friction,

play03:18

and really, what's the biggest opportunity?

play03:20

>> Great set of questions.

play03:21

So, quite right,

play03:22

the OT folks are inherently suspicious

play03:25

of IT, right?

play03:26

I mean, if you don't know the history,

play03:28

40 plus years ago,

play03:29

there was a fork in the road,

play03:31

where in factory operations,

play03:33

were they going to embrace things like ethernet,

play03:36

the internet,

play03:37

connected systems?

play03:39

In fact, they purposely air gapped

play03:41

an island of those systems

play03:42

'cause they was all about machine control,

play03:44

real-time, for safety,

play03:46

productivity, and uptime of the machine.

play03:47

They don't want any,

play03:49

you can't use kind of standard ethernet,

play03:50

it has to be industrial ethernet, right?

play03:52

It has to have time bound and deterministic.

play03:54

It can't be a retry kind of a system, right?

play03:56

So different MAC layer for a reason,

play03:58

for example.

play03:59

What did the physical wiring look like?

play04:01

It's also different cabling,

play04:02

because you can't have cuts,

play04:03

jumps in the cable, right?

play04:05

So it's a different environment entirely

play04:07

that OT grew up in,

play04:08

and so, FogHorn is trying to really

play04:10

bring the value of what people are

play04:12

delivering for AI, essentially,

play04:15

into that environment

play04:16

in a way that's non-threatening to,

play04:18

it's supplemental to,

play04:19

and adds value in the OT world.

play04:21

So Dr. Tom is right,

play04:22

this idea of bringing IT and OT together

play04:25

is inherently challenging,

play04:26

because these were kind of fork in the road,

play04:29

island-ed in the networks, if you will,

play04:31

different systems,

play04:33

different nomenclature,

play04:34

different protocols,

play04:35

and so, there's a real education curve

play04:38

that IT companies are going through,

play04:40

and the idea of taking all this OT data

play04:43

that's already been produced

play04:44

in tremendous volumes already

play04:46

before you add new kinds of sensing,

play04:48

and sending it across a LAN

play04:50

which it's never talked to before,

play04:51

then across a WAN to go to a cloud,

play04:54

to get some insight

play04:55

doesn't make any sense, right?

play04:56

So you want to leverage the cloud,

play04:58

you want to leverage data centers,

play04:59

you want to leverage the LAN,

play05:00

you want to leverage 5G,

play05:01

you want to leverage all the new IT technologies,

play05:03

but you have to do it in a way

play05:05

that makes sense for it and adds value

play05:06

in the OT context.

play05:08

>> I'm just curious,

play05:09

you talked about the air gapping,

play05:10

the two systems,

play05:12

which means they are not connected,

play05:14

right? >> No, they're connected

play05:15

with a duct, they're connected to themselves,

play05:17

in the industrial-- >> Right, right, but before,

play05:18

the OT system was air gapped from the IT system,

play05:21

so thinking about security

play05:23

and those types of threats,

play05:25

now, if those things are connected,

play05:28

that security measure has gone away,

play05:29

so what is the excitement,

play05:33

adoption scare when now, suddenly,

play05:35

these things that were separate,

play05:37

especially in the age of breaches

play05:39

that we know happen all the time

play05:40

as you bring those things

play05:41

together? >> Well, in fact,

play05:42

there have been cyber breaches in the OT context.

play05:45

Think about Stuxnet,

play05:46

think about things that have happened,

play05:47

think about the utilities back keys

play05:49

that were found to have malwares

play05:51

implanted in them.

play05:52

And so, this idea of industrial IoT

play05:54

is very exciting,

play05:55

the ability to get real-time

play05:58

kind of game changing insights

play05:59

about your production.

play06:02

A huge amount of economic activity in the world

play06:04

could be dramatically improved.

play06:06

You can talk about trillions of dollars of value

play06:08

which the McKenzie, and BCG,

play06:09

and Bain talk about, right,

play06:11

by bringing kind of AI,

play06:13

ML into the plant environment.

play06:15

But the inherent problem is that

play06:17

by connecting the systems,

play06:18

you introduce security problems.

play06:20

You're talking about a huge amount of cost

play06:22

to move this data around,

play06:23

persist it then add value,

play06:25

and it's not real-time, right?

play06:26

So, it's not that cloud is not relevant,

play06:29

it's not that it's not used,

play06:31

it's that you want to do the compute

play06:33

where it makes sense,

play06:34

and for industrial,

play06:35

the more industrialized the environment,

play06:37

the more high frequency,

play06:39

high volume data,

play06:40

the closer to the system

play06:42

that you can do the compute, the better,

play06:43

and again, it's multi-layer of compute.

play06:45

You probably have something on the machine,

play06:47

something in the plant,

play06:48

and something in the cloud, right?

play06:50

But rather than send raw OT data to the cloud,

play06:52

you're going to send processed

play06:53

intelligent metadata insights

play06:55

that have already been derived at the edge,

play06:57

update what they call

play06:58

the fleet-wide digital twin, right?

play07:00

The digital twin for that whole fleet of assets

play07:02

should sit in the cloud,

play07:03

but the digital twin of the specific asset

play07:05

should probably be on the asset.

play07:07

>> So let's break that down a little bit.

play07:09

There's so much good stuff here.

play07:11

So, we talked about OT/IT and that marriage.

play07:14

Next, I just want to touch on cloud,

play07:15

'cause a lot of people know cloud,

play07:16

it's very hot right now,

play07:17

and the ultimate promise of cloud, right,

play07:20

is you have infinite capacity

play07:22

>> Right, infinite compute. >> Available on demand,

play07:24

and you have infinite compute,

play07:25

and hopefully you have some big fat pipes

play07:27

to get your stuff in and out.

play07:29

But the OT challenge is,

play07:30

and as you said,

play07:31

the device challenge is very, very different.

play07:33

They've got proprietary operating systems,

play07:35

they've been running for a very, very long time.

play07:37

As you said, they put off boatloads,

play07:39

and boatloads, and boatloads of data

play07:40

that was never really designed

play07:43

to feed necessarily a machine learning algorithm,

play07:46

or an artificial intelligence algorithm

play07:48

when these things were designed.

play07:49

It wasn't really part of the equation.

play07:51

And we talk all the time about you know,

play07:53

do you move the compute to the data,

play07:55

you move the data to the compute,

play07:56

and really, what you're talking about

play07:57

in this fog computing world

play07:59

is kind of a hybrid, if you will,

play08:01

of trying to figure out which data

play08:03

you want to process locally,

play08:05

and then which data you have time,

play08:07

relevance, and other factors

play08:09

that just go ahead and pump it upstream.

play08:11

>> Right, that's a great way to describe it.

play08:12

Actually, we're trying to move

play08:14

as much of the compute as possible to the data.

play08:17

That's really the point of,

play08:19

that's why we say fog computing is

play08:21

a nebulous term about edge compute.

play08:23

It doesn't have any value

play08:24

until you actually decide

play08:25

what you're trying to do with it,

play08:26

and what we're trying to do is to take

play08:27

as much of the harder compute challenges,

play08:30

like analytics, machine learning,

play08:31

deep learning, AI,

play08:33

and bring it down to the source,

play08:34

as close to the source as you can,

play08:36

because you can essentially streamline

play08:38

or make more efficient

play08:38

every layer of the stack.

play08:39

Your models will get much better, right?

play08:42

You might have built them

play08:43

in the cloud initially,

play08:44

think about a deep learning model,

play08:45

but it may only be 60, 70% accurate.

play08:47

How do you do the improvement of the model

play08:49

to get it closer to perfect?

play08:50

I can't go send all the data up

play08:51

to keep trying to improve it.

play08:53

Well, typically, what happens is

play08:54

I down sample the data,

play08:55

I average it and I send it up,

play08:56

and I don't see any changes in the average data.

play08:59

Guess what?

play09:00

We should do is inference all the time

play09:01

and all the data,

play09:02

run it in our stack,

play09:03

and then send the metadata up,

play09:05

and then have the cloud look across

play09:06

all the assets of a similar type, and say,

play09:08

oh, the global fleet-wide model

play09:10

needs to be updated,

play09:11

and then to push it down.

play09:12

So, with Google just about a month ago,

play09:14

in Barcelona, at the IoT show,

play09:16

what we demonstrated was

play09:17

the world's first instance of AI for industrial,

play09:19

which is closed loop machine learning.

play09:21

We were taking a model,

play09:22

a TensorFlow model,

play09:23

trained in the cloud in the data center,

play09:25

brought into our stack

play09:26

and referring 100% inference-ing

play09:28

in all the live data,

play09:29

pushing the insights back up into Google Cloud,

play09:32

and then automatically updating the model

play09:34

without a human or data scientist

play09:35

having to look at it.

play09:37

Because essentially, it's ML on ML.

play09:39

And that to us,

play09:39

ML on ML is the foundation of AI for industrial.

>> I just love that; it's something that comes up all the time, right? We used to make decisions based on a sampling of historical data, after the fact.
>> That's right, that's how we've all been doing it.
>> Now, right now, the promise of streaming is you can make them based on all the data--
>> All the time.
>> All the time, in real time.
>> Permanently.
>> This is a very different thing.

So, but as you talked about, you know, running some complex models, and running ML, and retraining these things: when you think of edge, you think of some little hockey puck out on the edge of a field, with limited power and limited connectivity. So what's the reality? How much power do you have at some of these more remote edges? Or, we always talk about the field of turbines, or oil platforms: how much power do you need, and how much compute, before it actually starts to be meaningful as a platform for the software?

>> Right, there are definitely use cases like that. Think about the smart meters, right, in the home. The older generation of those meters may have had very limited compute; we're talking about a single megabyte of memory, maybe, or less, kilobytes of memory. Very hard to run a stack on that kind of footprint. The latest generation of smart meters has about 250 megabytes of memory. A Raspberry Pi today is anywhere from half a gig to a gig of memory, and we're fundamentally memory-bound, and obviously CPU-bound if it's trying to do really fast compute, like vibration analysis, or acoustic, or video. But if you're just trying to take digital sensing data, like temperature, pressure, velocity, torque, humidity, we can take all of that, believe it or not, run literally dozens and dozens of models, even train the models, in something as small as a Raspberry Pi or a low-end x86. So our stack can run on any hardware; we're completely OS-independent. It's a full software layer. But the whole stack is about 100 megabytes of memory, with all the components, including Docker containerization, right? Which compares to about 10 gigs for running a stream-processing stack like Spark in the cloud. So it's that order of magnitude of footprint reduction, and speed-of-execution improvement. So, as I said: the world's smallest, fastest compute engine.

You need that if you're going to talk about, say, a wind turbine. It's generating data every millisecond, right? So you have high-frequency data, like turbine pitch, and you have other contextual data you're trying to bring in, like wind conditions, and reference information about how the turbine is supposed to operate. You're bringing in a torrential amount of data to do this computation on the fly.
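A memory-bound edge node can still compute over every millisecond sample if it uses streaming algorithms that never buffer the raw feed. A minimal sketch of that idea, using Welford's online mean/variance; this is illustrative only, not FogHorn's engine:

```python
class OnlineStats:
    """Constant-memory running mean/variance (Welford's algorithm):
    the kind of on-the-fly computation a small device can run on a
    millisecond sensor stream without storing any raw samples."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n          # incremental mean
        self.m2 += delta * (x - self.mean)   # sum of squared deviations

    @property
    def variance(self):
        return self.m2 / self.n if self.n else 0.0

pitch = OnlineStats()
for sample in [1.0, 2.0, 3.0, 4.0]:   # stand-in turbine-pitch readings
    pitch.update(sample)
```

Whether the stream runs for four samples or four billion, the state is three numbers, which is why this style of computation fits in a Raspberry Pi-class footprint.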

And so, the challenge for a lot of the companies that have really started to move into the space, the cloud companies, like our partners Google, and Amazon, and Microsoft, is that they have great cloud capabilities for AI and ML. They're trying to move down to the edge by just transporting the whole stack there. In a plant environment, okay, that might work, if you have massive data centers that can run it. But now I still have to stream all the data from all of my assets to that central point. What we're trying to do is come at it the opposite way: by having the world's smallest, fastest engine, we can run it in very limited compute on the asset, or near the asset, or you can run it in big compute, and we can take on lots and lots of use cases and models simultaneously.

>> I'm just curious, on the small compute case, and again, you want all the data--
>> You want to inference on everything, right?
>> Does it eventually go back, or are there a lot of cases where you can get the information you need off the stream, and you don't necessarily have to save or send it upstream?

>> So fundamentally today, in the OT world, the data usually gets handled by the PLC, the programmable logic controller, which has simple KPIs: if temperature goes to X, or pressure goes to Y, do this. Beyond those simple KPIs, if nothing is executed, the data gets dumped into a local OPC server, and then about every 30, 60, 90 days it gets written over. Nobody ever looks at it, right? That's why I say 99% of the brownfield data in OT has never really been--
>> Almost like a security--
>> Has never been mined for insight. Right, it just gets--
>> It runs, and runs, and runs, and every so often--
>> Exactly. And so, if you're doing inferencing, and doing real-time decision making, real-time actuation, with our stack, what you would then persist is metadata insights, right? Here is an event, or here is an outcome. And, by the way, if you're doing deep learning or machine learning, and you're seeing deviation or drift from the model's prediction, you probably want to keep that, and some of the raw data packets from that moment in time, and send them to the cloud or data center to say: oh, our fleet-wide model may not be accurate, or may be drifting, right? And so, what you want to do, again, is different horses for different courses: use our stack to do the lion's share of the heavy-duty real-time compute, and produce metadata that you can send to either a data center or a cloud environment for further learning.
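The persistence policy he outlines (let routine data expire, but when the model's prediction drifts, keep the insight plus the raw packets from that moment for the cloud to re-learn from) might look like this in miniature. The stand-in model, tolerance, and buffer size are all assumptions for illustration:

```python
from collections import deque

def monitor(stream, predict, tolerance, buffer_size=3):
    """Persist metadata plus raw evidence only around drift events."""
    recent = deque(maxlen=buffer_size)   # raw packets, soon overwritten
    to_cloud = []                        # insights worth keeping
    for t, actual in enumerate(stream):
        recent.append((t, actual))
        drift = abs(actual - predict(t))
        if drift > tolerance:            # model disagrees with reality
            to_cloud.append({"t": t, "drift": drift,
                             "raw_window": list(recent)})
    return to_cloud

model = lambda t: 100.0                  # stand-in fleet model: steady 100
readings = [100.2, 99.9, 100.1, 107.5, 100.0]
events = monitor(readings, model, tolerance=5.0)
```

Four of the five readings produce nothing at all; only the drift event at `t=3`, with its small raw window, would ever leave the edge.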

>> Right. So your piece is really the gathering and the ML, and then if it needs to go back out for more heavy lifting, you'll send it back up. Or do you have the cloud application as well, if that connection is needed?
>> Yeah, so we build connectors to, you know, Google Cloud Platform and Google IoT Core, to AWS S3, to Microsoft Azure, to virtually anything: Kafka, Hadoop. We can send the data wherever you want. Either on-plant, right back into the existing control systems; or to OSIsoft PI, which is a great time-series database that a lot of process industries use; or of course to any public cloud, or a Hadoop data lake private cloud. You can send the data wherever you want. Now, we also have, as one of our components, a time-series database. You can also persist data in memory in our stack, just for buffering, or if you have high-value data, where you want to take a measurement, a value from a previous calculation, and bring it into another calculation later. So it's a very flexible system.

>> Yeah, we were at OSIsoft PI World earlier this year. Some fascinating stories came out of that--
>> A 30-year company.
>> The building maintenance, and all kinds of stuff. So I'm just curious about some of the easy-to-understand applications you've seen in the field, and maybe some of the ones that were a surprise, on the OT side. I mean, obviously, preventative maintenance is always toward the top of the list.

>> Yeah, I call it the layer cake, right? Especially when you get to remote assets that are either not monitored or lightly monitored. They call it drive-by monitoring: somebody shows up, listens to or looks at a valve or a gauge, and leaves. The first layer is condition-based monitoring, right? That alone is actually a big breakthrough for some; you know, think about fracking sites, or remote oil fields, or mining sites. The second layer is predictive maintenance, and the next generation of that is predictive, prescriptive, even preventive maintenance, right? You're making predictions, or you're helping to avoid downtime. The third layer, which is really where our stack is unique today in what it delivers, is asset performance optimization. How do I increase throughput, how do I reduce scrap, how do I improve worker safety, how do I get better processing of the data than my PLC can give me, so I can actually improve the performance of the machine?

Now, ultimately, we're finding a couple of things. One is, you can look at individual asset optimization, or process optimization, but there's another layer, so often we're deployed at two layers on premise. There's also plant-wide optimization. We talked about the wind farm before, off camera. So you've got the wind turbine. You can do a lot of things about turbine health: the blade pitch and the condition of the blades, the battery, all the systems on the turbine. But you also need a stack running, like ours, at the concentration point where 200-plus turbines come together, because in optimizing the whole farm, every turbine affects the other turbines, so a single turbine can't tell you the speed, the rotation, the things that need to change if you want to adjust the speed of one turbine versus the one next to it. So there's also a kind of plant-wide optimization.

Talking about autonomous driving, there are going to be five layers of compute, right? You're going to have what I almost call the ECU level, the individual sub-system in the car: the engine, how it's performing. You're going to have the gateway in the car, to handle things that are happening across systems in the car. You're going to have the peer-to-peer connection over 5G, to handle optimization between vehicles. You're going to have the base-station algorithms, looking at a microcell or macrocell within a geographic area. And of course, you'll have the ultimate cloud, because you want to have the data on all the assets, right? But you don't want to send all that data to the cloud; you want to send the right metadata to the cloud.

>> That's why there are big trucks full of compute now.
>> By the way, you mentioned one thing I should really touch on. We've talked a lot about what I call traditional brownfield automation-and-control-type analytics and machine learning, and that's where we started, in discrete manufacturing, a few years ago. What we found is that in that domain, and in oil and gas, and in mining, and in agriculture and transportation, in all those places, the most exciting new development this year is the movement toward video, 3D imaging, and audio sensing, because those sensors are now becoming very economical, and people have never thought about it: well, if I put a camera here and apply it to a certain application, what can I learn, what can I do that I never did before? And often they even have cameras today and haven't made use of any of the data. So there's a very large customer of ours that has video inspection data for literally every product it produces, every day, around the world, and this is in hundreds of plants. And that data never gets looked at, right, other than for training operators: hey, you missed the defects this day. The system, as you said, just writes over that data after 30 days. Well, guess what: you can apply deep learning TensorFlow algorithms to build a convolutional neural network model and essentially do the human visioning, rather than an operator staring at a camera, or trying to look at training tapes 30 days later. I'm doing inferencing of the video image on the fly.
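The video-inspection idea (score every frame as it arrives and persist only defect events, instead of letting unwatched footage be overwritten after 30 days) can be sketched with a stand-in for the model. A real deployment would run a trained TensorFlow CNN here; the toy scoring function, frames, and threshold below are invented for illustration:

```python
def defect_score(frame):
    """Stand-in for CNN inference: a real system would run a trained
    TensorFlow model. Here, the fraction of bright pixels is the 'defect'."""
    return sum(p > 200 for p in frame) / len(frame)

def inspect(frames, threshold=0.25):
    """Emit an event per suspect frame; drop everything else."""
    events = []
    for i, frame in enumerate(frames):
        score = defect_score(frame)
        if score >= threshold:               # persist metadata only
            events.append({"frame": i, "score": round(score, 2)})
    return events

# Three tiny 'frames' of pixel intensities; only the second is suspect.
frames = [[10, 20, 30, 40], [250, 240, 20, 30], [10, 15, 20, 25]]
events = inspect(frames)
```

The raw frames are discarded as fast as they are scored; what survives is a permanent, queryable record of the defects themselves.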

>> So, do your systems close the loop back to the control systems now, or is it more of a tuning mechanism, for someone to go back and act on later?

>> Great question. I just got asked that this morning by a large oil and gas supermajor that Intel just introduced us to. The short answer is, our stack can absolutely go right back into the control loop. In fact, I should mention our investors and partners: our Series A investors were GE, Bosch, Yokogawa, and Dell EMC, and our Series B, done a year ago, was Intel, Saudi Aramco, and Honeywell. So we have one foot in tech, one foot in industrial, and really, what we're trying to bring together is, as you said, IT and OT. The short answer is, you can do that, but typically in the industrial environment there's a conservatism: hey, I don't want to touch, you know, affect the machine until I've proven it out. So initially, people tend to start with alerting: we send an automatic alert back into the control system to say, hey, the machine needs to be re-tuned. Very quickly, though, certainly for things that are not so time-sensitive, they will just let us act. Now, Yokogawa, one of our investors, as I pointed out, is actually putting us in PLCs. So rather than sending the data off the PLC to another gateway running our stack, like an x86 or ARM gateway, those PLCs now have Raspberry Pi-plus capabilities. A lot of them are--
>> To what types of mechanisms?
>> Well, right now, they're doing the IO and the control of the machine, but they have enough compute now that you can run us in a separate module, like a little brain sitting right next to the control, and then do the AI on the fly, and there, you actually don't even need to send the data off the PLC. We just re-program the actuator. So that's where it's heading. Eventually, and it could take years before people get comfortable doing this automatically, what you'll see is that what AI represents in industrial is the self-healing machine, the self-improving process. And this is where it starts.
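That adoption path (the same analytic first runs in alert-only mode, and only once trusted is it allowed to write a new setpoint back toward the actuator) can be sketched as a single gated step. Everything here is a hypothetical illustration, not FogHorn's control interface:

```python
def control_step(reading, setpoint, closed_loop=False):
    """Return (alert, new_setpoint). Alert-only mode leaves the plant's
    setpoint untouched; closed-loop mode re-tunes it automatically."""
    needs_retune = abs(reading - setpoint) > 5.0
    if not needs_retune:
        return None, setpoint
    if closed_loop:
        return "re-tuned", reading       # write back toward the actuator
    return "machine needs re-tuning", setpoint   # alert the operator only

# Same deviation, two trust levels.
alert, sp = control_step(reading=58.0, setpoint=50.0, closed_loop=False)
alert2, sp2 = control_step(reading=58.0, setpoint=50.0, closed_loop=True)
```

The point of the gate is that the analytic itself never changes; only the permission to act does, which is how a conservative plant can prove the system out before closing the loop.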

>> Well, the other thing I think is so interesting is: what are you optimizing for? And there is no right answer, right? You could be optimizing for, like you said, a machine. You could be optimizing for the field. You could be optimizing for maintenance, but if there's a spike in pricing, you may say: eh, we're not optimizing for maintenance now, we're actually optimizing for output, because we have this temporary condition and it's worth the trade-off. So I mean, there are so many ways you can skin the cat when you have a lot more information and a lot more data.
>> No, that's right,

and I think what we typically like to do is start out with: what's the business value, right? We don't want to go do a science project. Oh, I can make that machine work 50% better, but if it doesn't make any difference to your business operations, so what? So we always start the investigation with: what is a high-value business problem, where you have sufficient data, and where applying this kind of AI-at-the-edge concept will actually make a difference? And that's the kind of proof of concept we like to start with.

>> So again, just to come full circle: what's the craziest thing an OT guy has said? "Oh my goodness, you IT guys actually brought some value here that I didn't expect."
>> Well, I touched on video,

right, so without going into the whole details of the story: one of our big investors is a very large oil and gas company. We said, look, you guys have done some great work with what I call software-defined SCADA. SCADA is the network environment for OT, right? The PLCs and DCSes connect over these SCADA networks; that's the control and automation realm. And this investor said: look, you can come in. You've already shown us, and that's why they invested, that you've gone into brownfield SCADA environments, done deep mining of the existing data, and shown value by reducing scrap, improving output, improving worker safety, all the great business outcomes for industrial. But if you come into our operation, our plant people are going to say: no, you're not touching my PLC. You're not touching my SCADA network. So come in and do something that's non-invasive to that world. And so that's where we actually got started with video, about 18 months ago. They said: hey, we've got all these video cameras, and we're not doing anything with them. We just have human operators writing down, oh, I had a bad event. It's a totally non-automated system.

So we went in and did a video use case around what we call flare monitoring. You know, hundreds of stacks burning off oil and gas in a production plant, and a 24-by-seven team of operators just staring at them, writing down: oh, I think I had a bad flare. I mean, it's a very interesting old-world process. So we automated that and gave them, essentially, an AI dashboard: oh, I've got a permanent record of exactly how high the flare was, how smoky it was, what the angle was. And then you can fuse that data back into plant data, to ask what caused it, and also into OSIsoft data: what was the gas composition? Was it in fact a safety violation? Was it in fact an environmental violation? So, by starting with video and doing that use case, we've now got dozens of use cases, all around video. Oh, I could put a camera on this. I could put a camera on a rig. I could put a camera down the hole. I could put a camera on the pipeline, or on a drone. There are just a million places that video can show up. Or audio sensing, right, acoustic. So, video is great if you can see the event: like, I'm flying over the pipe, I can see corrosion, right? But sometimes, like with a burner or an oven, I can't look inside the oven with a camera; there's no camera that could survive 600 degrees. So what do you do? Well, there you can do something like vibration or acoustic sensing. Inside the pipe, you've got to go with sound. Outside the pipe, you go with video. But these are the kinds of things where, traditionally, how did people inspect pipe? Drive-by.
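The fusion step he describes (lining a vision-detected flare event up against the nearest historian sample, such as gas composition from a PI-style time series, to judge whether it was a violation) reduces to a nearest-timestamp lookup. The data values and the violation rule below are invented for illustration:

```python
import bisect

def nearest_sample(ts, samples):
    """samples: sorted list of (timestamp, value); return the closest one."""
    times = [t for t, _ in samples]
    i = bisect.bisect_left(times, ts)
    candidates = samples[max(i - 1, 0):i + 1]    # neighbors on either side
    return min(candidates, key=lambda s: abs(s[0] - ts))

# A flare event detected by the video model, plus historian data.
flare_event = {"t": 104, "height_m": 12.0, "smoky": True}
gas = [(100, 0.81), (105, 0.95), (110, 0.88)]    # (t, methane fraction)

t, methane = nearest_sample(flare_event["t"], gas)
violation = flare_event["smoky"] and methane > 0.9   # toy compliance rule
```

The interesting output is not the raw video or the raw time series but the joined record: this flare, at this gas composition, was or was not a violation.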

play24:13

>> Yes, fascinating story.

play24:14

Even again, I think at the end of the day,

play24:16

it's again, you can make real decisions

play24:18

based on all the data in real time,

play24:20

versus some of the data after the fact.

play24:24

All right, well, great conversation,

play24:26

and look forward to watching

play24:27

the continued success of FogHorn.

play24:30

>> Thank you very much. >> All right.

play24:31

>> Appreciate it. >> He's David King,

play24:32

I'm Jeff Frick,

play24:32

you're watching theCUBE.

play24:33

We're having a CUBE conversation

play24:34

at our Palo Alto studio.

play24:35

Thanks for watching, we'll see you next time.


(uplifting symphonic music)
