David C King, FogHorn Systems | CUBEConversation, November 2018
Summary
TL;DR: In this CUBE Conversation, Jeff Frick interviews David King, CEO of FogHorn Systems, about the convergence of edge computing, fog computing, and cloud computing. King explains fog computing as an advanced form of edge computing that brings cloud functions like big data analytics to industrial environments. FogHorn's focus is on delivering AI capabilities on live-streaming sensor data to optimize industrial IoT processes in real-time, reducing the need to send massive data volumes to the cloud. The discussion also covers the integration of IT and OT, the challenges of cybersecurity in connected systems, and the potential of video and audio sensing in industrial applications.
Takeaways
- FogHorn Systems is a company focused on fog computing, which is an extension of edge computing and aims to bring cloud computing functions closer to the source of data.
- Fog computing is designed to perform analytics and machine learning on live-streaming sensor data, reducing the need to send massive amounts of data to the cloud.
- The convergence of Operational Technology (OT) and Information Technology (IT) is crucial for leveraging AI and IoT in industrial settings, despite the historical separation of the two domains.
- FogHorn's technology can run on a variety of hardware, from small devices like Raspberry Pi to larger systems, emphasizing the flexibility for different industrial needs.
- Security is a significant concern, as connecting OT systems to IT networks can introduce vulnerabilities, despite the benefits of real-time data insights.
- The industrial IoT is not just about data collection but also about applying AI and machine learning to improve operations in real-time, leading to significant economic benefits.
- FogHorn's stack is designed to handle high-frequency data from industrial machines, enabling on-the-fly computation and decision-making.
- The concept of 'ML on ML', or machine learning models improving other machine learning models in an automated loop, is a key aspect of FogHorn's approach to industrial AI.
- FogHorn's technology can be integrated into existing industrial systems, either by sending processed data back to the cloud or directly into control systems for immediate action.
- There's a growing trend in industrial IoT towards using video, 3D imaging, and audio sensing for insights, which were traditionally underutilized.
Q & A
What is the main topic of discussion in the video?
-The main topic of discussion is edge computing, fog computing, and cloud computing, with a focus on how these technologies intersect and their applications, particularly in industrial IoT.
Who is David King and what is his role in the discussion?
-David King is the CEO of FogHorn Systems, a company focused on fog computing. He is in the discussion to provide insights into the company's background and the concept of fog computing.
What does fog computing represent according to the discussion?
-Fog computing represents the intersection between cloud and on-premises computing, aiming to bring advanced computing capabilities like analytics, machine learning, and AI closer to the source of data, typically in industrial environments.
How does FogHorn Systems differentiate between edge computing and fog computing?
-FogHorn Systems views fog computing as more than just edge computing. While edge computing has been around for decades in industrial settings, fog computing is seen as a more advanced form that applies cloud computing functions, such as big data analytics, in an industrial context or directly on a machine.
What is the significance of 'big data operating in the world's smallest footprint' mentioned by David King?
-This phrase signifies the concept of performing complex data analytics and machine learning on a small scale, close to the source of data, which is essential for real-time decision making in industrial IoT without the need to send massive amounts of data to the cloud.
What are the challenges in merging OT (Operations Technology) and IT (Information Technology) as discussed in the video?
-The challenges include historical separation and different priorities, such as real-time control and safety in OT versus data-driven insights in IT. There's also a need for education and understanding between the two fields, as well as addressing security concerns when connecting previously isolated systems.
How does FogHorn Systems address the issue of data persistence and analysis in industrial settings?
-FogHorn Systems focuses on performing analytics and machine learning on live-streaming sensor data at the edge, reducing the need to persist large amounts of data on-premises or send it to the cloud for processing.
What is the concept of 'ML on ML' mentioned by David King?
-'ML on ML' refers to the concept of machine learning models improving other machine learning models in an automated fashion, such as updating a global fleet-wide model based on insights gathered from edge devices, without human intervention.
How does FogHorn Systems handle the computational challenges at the edge, especially with limited resources?
-FogHorn Systems has developed a software stack that is lightweight and OS-independent, capable of running on small form factor devices like Raspberry Pi, making it suitable for edge environments with limited power and connectivity.
What are some of the practical applications of FogHorn Systems' technology in the field?
-Practical applications include condition-based monitoring, predictive maintenance, asset performance optimization, and plant-wide optimization. The technology also enables the use of video, 3D imaging, and audio sensing for insights not traditionally derived from such data.
How does FogHorn Systems ensure that its solutions are non-invasive and compatible with existing industrial infrastructure?
-FogHorn Systems ensures non-invasiveness by developing solutions that can run on existing hardware, such as PLCs, and by initially providing alerting and insights without directly interfacing with control systems, allowing for gradual integration and proof of concept.
Outlines
Introduction to Edge, Fog, and Cloud Computing
Jeff Frick from theCUBE welcomes viewers to a discussion on edge computing, fog computing, and cloud computing at the Palo Alto studios. He introduces David King, CEO of FogHorn Systems, a company focused on fog computing. David explains that fog computing is an evolution of edge computing, aiming to bring cloud computing capabilities to industrial environments. FogHorn Systems was founded to give substance to the concept of fog computing, which involves processing data close to its source using advanced analytics and AI, thus reducing the need to send massive amounts of data to the cloud.
The Convergence of OT and IT
The conversation delves into the integration of Operational Technology (OT) and Information Technology (IT), highlighting the historical separation and current convergence due to technological advancements. David discusses the challenges and opportunities of merging these two domains, emphasizing the need for IT to understand and respect the real-time and safety-critical nature of OT. FogHorn's approach is to augment OT with AI and analytics without disrupting existing systems, focusing on adding value through intelligent data processing at the edge.
The Role of Fog Computing in Industrial IoT
David King elaborates on FogHorn's role in Industrial Internet of Things (IIoT), explaining how their technology enables real-time analytics and machine learning directly on sensor data. This approach minimizes the need to store and transfer vast amounts of data to the cloud, allowing for more efficient and immediate decision-making. The discussion touches on the importance of processing data at the edge of the network, particularly for high-frequency data generated by industrial machines, and how this can lead to significant economic benefits.
Real-World Applications and the Future of Industrial Automation
The discussion moves to practical applications of FogHorn's technology, with a focus on condition-based monitoring, predictive maintenance, and asset performance optimization. David shares examples of how their technology is being used in remote and brownfield sites, emphasizing the shift from traditional monitoring methods to more advanced, data-driven approaches. The conversation also explores the potential for video and audio sensing in industrial settings, highlighting the move towards using these technologies for real-time decision-making and process optimization.
Closing Thoughts on AI in Industry and the Path Forward
In the final part of the conversation, David and Jeff discuss the future of AI in industrial settings, including the concept of self-healing machines and self-improving processes. They touch on the importance of starting with high-value business problems when implementing AI and edge computing solutions. David shares an example of how video monitoring in an oil and gas plant led to significant insights and improvements, showcasing the potential for non-invasive AI applications in traditional OT environments.
Keywords
Edge Computing
Fog Computing
Industrial IoT (IIoT)
OT/IT Convergence
Cybersecurity
Machine Learning
Deep Learning
Time Series Database
Digital Twin
Condition-Based Monitoring
Video Analytics
Highlights
Introduction to edge computing, fog computing, and cloud computing, and their significance in the current tech landscape.
David King, CEO of FogHorn Systems, discusses the company's focus on fog computing and its industrial applications.
FogHorn Systems' origin and its mission to define and add value to the term 'fog computing'.
The distinction between edge computing and fog computing, especially in the context of industrial IoT.
How fog computing aims to bring cloud computing functions to industrial environments.
The importance of processing data close to its source to improve efficiency and reduce data overload.
Challenges and opportunities in merging operations technology (OT) with information technology (IT).
The historical context of OT and IT, and their traditional separation in industrial settings.
FogHorn's strategy to integrate AI into OT environments without disrupting existing systems.
The potential economic impact of industrial IoT and the value of real-time insights in production.
Addressing security concerns as OT systems become more connected.
The concept of 'ML on ML' and its role in the foundation of AI for industrial applications.
How FogHorn's technology enables real-time analytics and machine learning directly on live-streaming sensor data.
The practicality of running complex AI models on edge devices with limited resources.
Examples of how FogHorn's technology is being used in the field, including condition-based monitoring and predictive maintenance.
The potential for video and audio sensing in industrial IoT and how it expands the possibilities for data collection and analysis.
FogHorn's approach to starting with high-value business problems when implementing AI and edge computing solutions.
The future of AI in industry, including self-healing machines and self-improving processes.
Real-world examples of how FogHorn's technology has been integrated into existing industrial processes.
Transcripts
(uplifting orchestral music)
>> Hey, welcome back, everybody.
Jeff Frick here with theCUBE.
We're at the Palo Alto studios,
having theCUBE Conversation,
a little break in the action
of the conference season
before things heat up,
before we kind of come to the close of 2018.
It's been quite a year.
But it's nice to be back in the studio.
Things are a little bit less crazy,
and we're excited to talk about
one of the really hot topics right now,
which is edge computing,
fog computing, cloud computing.
What do all these things mean,
how do they all intersect,
and we've got with us today David King.
He's the CEO of FogHorn Systems.
David, first off, welcome.
>> Thank you, Jeff.
>> So, FogHorn Systems,
I guess by the fog,
you guys are all about the fog,
and for those that don't know,
fog is kind of this intersection between cloud,
and on prem, and...
So first off, give us a little bit of
the background of the company
and then let's jump into
what this fog thing is all about.
>> Sure, actually, it all dovetails together.
So yeah, you're right,
FogHorn, the name itself,
came from Cisco's invented term,
called fog computing,
from almost a decade ago,
and it connoted this idea of
computing at the edge,
but didn't really have
a lot of definition early on.
And so, FogHorn was started actually
by a Palo Alto Incubator, just nearby here,
that had the idea that hey,
we got to put some real meaning
and some real meat on the bones here,
with fog computing.
And what we think FogHorn has become
over the last three and a half years,
since we took it out of the incubator,
since I joined,
was to put some real purpose,
meaning, and value in that term.
And so, it's more than just edge computing.
Edge computing is a related term.
In the industrial world,
people would say, hey,
I've had edge computing for 30, 40, 50 years
with my production line control
and also my distributed control systems.
I've got hard wired compute.
I run, they call them,
industrial PCs in the factory.
That's edge compute.
The IT folks come along and said,
no, no, no, fog compute is
a more advanced form of it.
Well, the real purpose of fog computing
and edge computing,
in our view, in the modern world,
is to apply what has traditionally been
thought of as cloud computing functions,
big, big data,
but running in an industrial environment,
or running on a machine.
And so, we call it as really big data
operating in the world's smallest footprint, okay,
and the real point of this
for industrial customers,
which is our primary focus, industrial IoT,
is to deliver as much analytic machine learning,
deep learning AI capability
on live-streaming sensor data, okay,
and what that means is rather than
persisting a lot of data either on prem,
and then sending it to the cloud,
or trying to stream all this to the cloud
to make sense of terabytes or petabytes a day,
per machine sometimes, right,
think about a jet engine,
a petabyte every flight.
You want to do the compute
as close to the source as possible,
and if possible,
on the live streaming data,
not after you've persisted it
on a big storage system.
So that's the idea. >> So you touch on
all kinds of stuff there.
So we'll break it down. >> Unpack it,
yeah. >> Unpack it.
So first off, just kind of the OT/IT thing,
and I think that's really important,
and we talked before turning the cameras on
about Dr. Tom from HP,
he loves to make a big symbolic handshake of
the operations technology, >> One of our partners.
>> Right, and IT,
and the marriage of these two things,
where before, as you said,
the OT guys, the guys that
have been running factories, you know,
they've been doing this for a long time,
and now suddenly,
the IT folks are butting in
and want to get access to that data
to provide more control.
So, you know, as you see the marriage of
those two things coming together,
what are the biggest points of friction,
and really, what's the biggest opportunity?
>> Great set of questions.
So, quite right,
the OT folks are inherently suspicious
of IT, right?
I mean, if you don't know the history,
40 plus years ago,
there was a fork in the road,
where in factory operations,
were they going to embrace things like ethernet,
the internet,
connected systems?
In fact, they purposely air gapped
an island of those systems
'cause it was all about machine control,
real-time, for safety,
productivity, and uptime of the machine.
They don't want any,
you can't use kind of standard ethernet,
it has to be industrial ethernet, right?
It has to be time-bound and deterministic.
It can't be a retry kind of a system, right?
So different MAC layer for a reason,
for example.
What did the physical wiring look like?
It's also different cabling,
because you can't have cuts,
jumps in the cable, right?
So it's a different environment entirely
that OT grew up in,
and so, FogHorn is trying to really
bring the value of what people are
delivering for AI, essentially,
into that environment
in a way that's non-threatening to,
it's supplemental to,
and adds value in the OT world.
So Dr. Tom is right,
this idea of bringing IT and OT together
is inherently challenging,
because these were kind of fork in the road,
islanded networks, if you will,
different systems,
different nomenclature,
different protocols,
and so, there's a real education curve
that IT companies are going through,
and the idea of taking all this OT data
that's already been produced
in tremendous volumes already
before you add new kinds of sensing,
and sending it across a LAN
which it's never talked to before,
then across a WAN to go to a cloud,
to get some insight
doesn't make any sense, right?
So you want to leverage the cloud,
you want to leverage data centers,
you want to leverage the LAN,
you want to leverage 5G,
you want to leverage all the new IT technologies,
but you have to do it in a way
that makes sense for it and adds value
in the OT context.
>> I'm just curious,
you talked about the air gapping,
the two systems,
which means they are not connected,
right? >> No, they're connected
but only to themselves,
in the industrial-- >> Right, right, but before,
the OT system was air gapped from the IT system,
so thinking about security
and those types of threats,
now, if those things are connected,
that security measure has gone away,
so what is the excitement,
adoption scare when now, suddenly,
these things that were separate,
especially in the age of breaches
that we know happen all the time
as you bring those things
together? >> Well, in fact,
there have been cyber breaches in the OT context.
Think about Stuxnet,
think about things that have happened,
think about the utilities
that were found to have malware
implanted in them.
And so, this idea of industrial IoT
is very exciting,
the ability to get real-time
kind of game changing insights
about your production.
A huge amount of economic activity in the world
could be dramatically improved.
You can talk about trillions of dollars of value
which McKinsey, and BCG,
and Bain talk about, right,
by bringing kind of AI,
ML into the plant environment.
But the inherent problem is that
by connecting the systems,
you introduce security problems.
You're talking about a huge amount of cost
to move this data around,
persist it then add value,
and it's not real-time, right?
So, it's not that cloud is not relevant,
it's not that it's not used,
it's that you want to do the compute
where it makes sense,
and for industrial,
the more industrialized the environment,
the more high frequency,
high volume data,
the closer to the system
that you can do the compute, the better,
and again, it's multi-layer of compute.
You probably have something on the machine,
something in the plant,
and something in the cloud, right?
But rather than send raw OT data to the cloud,
you're going to send processed
intelligent metadata insights
that have already been derived at the edge,
update what they call
the fleet-wide digital twin, right?
The digital twin for that whole fleet of assets
should sit in the cloud,
but the digital twin of the specific asset
should probably be on the asset.
>> So let's break that down a little bit.
There's so much good stuff here.
So, we talked about OT/IT and that marriage.
Next, I just want to touch on cloud,
'cause a lot of people know cloud,
it's very hot right now,
and the ultimate promise of cloud, right,
is you have infinite capacity
>> Right, infinite compute. >> Available on demand,
and you have infinite compute,
and hopefully you have some big fat pipes
to get your stuff in and out.
But the OT challenge is,
and as you said,
the device challenge is very, very different.
They've got proprietary operating systems,
they've been running for a very, very long time.
As you said, they put off boatloads,
and boatloads, and boatloads of data
that was never really designed
to feed necessarily a machine learning algorithm,
or an artificial intelligence algorithm
when these things were designed.
It wasn't really part of the equation.
And we talk all the time about you know,
do you move the compute to the data,
you move the data to the compute,
and really, what you're talking about
in this fog computing world
is kind of a hybrid, if you will,
of trying to figure out which data
you want to process locally,
and then which data you have time,
relevance, and other factors
that just go ahead and pump it upstream.
>> Right, that's a great way to describe it.
Actually, we're trying to move
as much of the compute as possible to the data.
That's really the point of,
that's why we say fog computing is
a nebulous term about edge compute.
It doesn't have any value
until you actually decide
what you're trying to do with it,
and what we're trying to do is to take
as much of the harder compute challenges,
like analytics, machine learning,
deep learning, AI,
and bring it down to the source,
as close to the source as you can,
because you can essentially streamline
or make more efficient
every layer of the stack.
Your models will get much better, right?
You might have built them
in the cloud initially,
think about a deep learning model,
but it may only be 60, 70% accurate.
How do you do the improvement of the model
to get it closer to perfect?
I can't go send all the data up
to keep trying to improve it.
Well, typically, what happens is
I down sample the data,
I average it and I send it up,
and I don't see any changes in the average data.
Guess what we should do:
inference all the time
on all the data,
run it in our stack,
and then send the metadata up,
and then have the cloud look across
all the assets of a similar type, and say,
oh, the global fleet-wide model
needs to be updated,
and then to push it down.
So, with Google just about a month ago,
in Barcelona, at the IoT show,
what we demonstrated was
the world's first instance of AI for industrial,
which is closed loop machine learning.
We were taking a model,
a TensorFlow model,
trained in the cloud in the data center,
brought into our stack
and running 100% inferencing
on all the live data,
pushing the insights back up into Google Cloud,
and then automatically updating the model
without a human or data scientist
having to look at it.
Because essentially, it's ML on ML.
And that to us,
ML on ML is the foundation of AI for industrial.
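The closed loop King describes (inference everything at the edge, ship only metadata insights, update the fleet-wide model centrally, push it back down) can be sketched roughly as follows. All the names and the toy update rule here are hypothetical, not FogHorn's actual API; the demo he mentions used TensorFlow models and Google Cloud.

```python
# Hypothetical sketch of "ML on ML": edge nodes inference on every
# reading, send only metadata, and the fleet-wide model is updated
# centrally and pushed back down with no human in the loop.

class EdgeNode:
    def __init__(self, fleet_threshold):
        self.threshold = fleet_threshold   # local copy of the fleet model
        self.metadata = []                 # insights only, never raw data

    def inference(self, reading):
        """Run on 100% of live readings; keep only metadata insights."""
        deviation = reading - self.threshold
        if deviation > 0:
            self.metadata.append({"event": "over_limit", "deviation": deviation})
        return deviation > 0

def update_fleet_model(nodes, fleet_threshold):
    """Cloud side: learn from edge metadata, return the new fleet model."""
    deviations = [m["deviation"] for n in nodes for m in n.metadata]
    if not deviations:
        return fleet_threshold
    # Nudge the fleet-wide threshold toward the observed deviations.
    new_threshold = fleet_threshold + sum(deviations) / len(deviations) / 2
    for n in nodes:                        # push the updated model down
        n.threshold = new_threshold
    return new_threshold

nodes = [EdgeNode(10.0), EdgeNode(10.0)]
for reading in (9.0, 12.0, 14.0):
    nodes[0].inference(reading)
new_model = update_fleet_model(nodes, 10.0)
```

The point of the sketch is the data flow, not the arithmetic: raw readings never leave the edge, only the derived metadata does, and the improved model travels the other way.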
>> I just love that something comes up
all the time, right?
We used to make decisions based on
the sampling of historical data after the fact.
>> That's right, that's how
we've all been doing it. >> Now, right, right now,
the promise of streaming is
you can make it based on all the data,
>> All the time. >> All the time in real time.
>> Permanently. >> This is a very
different thing.
So, but as you talked about,
you know, running some complex models,
and running ML,
and retraining these things.
You know, when you think of edge,
you think of some little hockey puck
that's out on the edge of a field,
with limited power, limited connectivity,
so you know,
what's the reality of,
how much power do you have at
some of these more remote edges,
or we always talk about the field of turbines,
oil platforms,
and how much power do you need,
and how much compute that it actually
starts to be meaningful in terms of
the platform for the software?
>> Right, there's definitely use cases,
like you think about the smart meters,
right, in the home.
The older generation of those meters
may have had very limited compute, right,
like you know, talking about
single megabyte of memory maybe,
or less, right, kilobytes of memory.
Very hard to run a stack on
that kind of footprint.
The latest generation of smart meters
have about 250 megabytes of memory.
A Raspberry Pi today is anywhere from
a half a gig to a gig of memory,
and we're fundamentally memory-bound,
and obviously, CPU if it's trying to
really fast compute,
like vibration analysis,
or acoustic, or video.
But if you're just trying to
take digital sensing data,
like temperature, pressure,
velocity, torque,
we can take humidity,
we can take all of that,
believe it or not,
run literally dozens and dozens of models,
even train the models in something
as small as a Raspberry Pi,
or a low end x86.
So our stack can run in any hardware,
we're completely OS independent.
It's a full up software layer.
But the whole stack is about
100 megabytes of memory,
with all the components,
including Docker containerization, right,
which compares to about 10 gigs of
running a stream processing stack
like Spark in the Cloud.
So it's that order of magnitude of
footprint reduction
and speed of execution improvement.
So as I said,
world's smallest fastest compute engine.
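As an illustration of the kind of lightweight model that fits such a footprint (a sketch, not FogHorn's stack), here is a constant-memory streaming outlier detector: it scores every reading as it arrives using Welford's running statistics, so nothing needs to be persisted.

```python
# Illustrative only: a constant-memory streaming detector of the kind
# that could run dozens-at-a-time on a Raspberry Pi-class device.
# Each reading is scored on arrival; no data is stored.

class StreamingZScore:
    def __init__(self, limit=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0        # running sum of squared deviations (Welford)
        self.limit = limit

    def score(self, x):
        """Flag x as an outlier against stats seen so far, then update."""
        flag = False
        if self.n >= 2:
            std = (self.m2 / (self.n - 1)) ** 0.5
            flag = std > 0 and abs(x - self.mean) > self.limit * std
        # Welford update with the new reading
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return flag

detector = StreamingZScore(limit=3.0)
flags = [detector.score(v) for v in [10.0, 10.1, 9.9, 10.05, 9.95, 25.0]]
```

Only the last reading, far outside the pattern of the first five, is flagged; the detector itself holds three floats and a counter regardless of how long the stream runs.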
You need to do that if you're going to talk about,
like a wind turbine,
it's generating data, right,
every millisecond, right.
So you have high frequency data,
like turbine pitch,
and you have other contextual data
you're trying to bring in,
like wind conditions,
reference information about
how the turbine is supposed to operate.
You're bringing in a torrential amount of data
to do this computation on the fly.
And so, the challenge for a lot of
the companies that have really started
to move into the space,
the cloud companies, like our partners,
Google, and Amazon, and Microsoft,
is they have great cloud capabilities for AI, ML.
They're trying to move down to the edge
by just transporting the whole stack to there.
So in a plant environment,
okay, that might work if you have
massive data centers that can run it.
Now I still got to stream all my assets,
all the data from all of my assets
to that central point.
What we're trying to do is
come out the opposite way,
which is by having the world's
smallest, fastest engine,
we can run it in a small compute,
very limited compute on the asset,
or near the asset,
or you can run this in a big compute
and we can take on lots and lots of
use cases for models simultaneously.
>> I'm just curious on the small compute case,
and again, you want all the data--
>> You want to inference on everything, right?
>> Does it eventually go back,
or is there a lot of cases where
you can get the information
you need off the stream
and you don't necessarily have to save
or send that upstream?
>> So fundamentally today,
in the OT world,
the data usually gets,
if the PLC, the programmable logic controller,
that has simple KPIs,
if temperature goes to X
or pressure goes to Y, do this.
Those simple KPIs,
if nothing is executed,
it gets dumped into a local protocol server,
and then about every 30, 60, 90 days,
it gets written over.
Nobody ever looks at it, right?
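The simple KPI logic King refers to ("if temperature goes to X or pressure goes to Y, do this") amounts to a per-scan threshold check. A hypothetical sketch, with invented signal names and actions:

```python
# Hypothetical sketch of the simple KPI rules a PLC evaluates:
# each rule is a (signal, limit, action) triple checked every scan.

RULES = [
    ("temperature", 90.0, "open_cooling_valve"),
    ("pressure",    8.5,  "trigger_relief"),
]

def scan(readings):
    """One scan cycle: fire the action for every KPI over its limit."""
    return [action for signal, limit, action in RULES
            if readings.get(signal, 0.0) > limit]

actions = scan({"temperature": 93.2, "pressure": 7.9})
```

Anything the rules do not fire on is exactly the data that, in King's description, lands in a local server and gets overwritten unexamined.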
That's why I say,
99% of the brownfield data in OT
has never really been-- >> Almost like a security--
>> Has never been mined for insight.
Right, it just gets-- >> It runs, and runs, and runs,
and every so often-- >> Exactly, and so,
if you're doing inferencing,
and doing real time decision making,
real time actual with our stack,
what you would then persist is
metadata insights, right?
Here is an event,
or here is an outcome,
and oh, by the way,
if you're doing deep learning
or machine learning,
and you're seeing deviation or drift
from the model's prediction,
you probably want to keep that
and some of the raw data packets
from that moment in time,
and send that to the cloud or data center to say,
oh, our fleet-wide model may not be accurate,
or may be drifting, right?
And so, what you want to do, again,
different horses for different courses.
Use our stack to do the lion's share of
the heavy duty real time compute,
produce metadata that you can send
to either a data center or a cloud environment
for further learning.
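That division of labor (persist metadata insights, but keep the raw packets from moments of model drift for upstream retraining) might look like this in outline; the function and packet shapes are invented for illustration:

```python
# Illustrative sketch, not FogHorn's API: inference at the edge,
# persist only metadata insights, but retain raw packets from
# moments where the model's prediction drifts past a tolerance.

def process_stream(model, stream, tolerance=2.0):
    insights, drift_samples = [], []
    for packet in stream:
        predicted = model(packet["input"])
        error = abs(predicted - packet["actual"])
        insights.append({"t": packet["t"], "error": error})  # metadata only
        if error > tolerance:
            # evidence the fleet-wide model may be drifting:
            # keep the raw packet to send upstream for retraining
            drift_samples.append(packet)
    return insights, drift_samples

model = lambda x: 2.0 * x          # stand-in for a trained model
stream = [
    {"t": 0, "input": 1.0, "actual": 2.1},
    {"t": 1, "input": 2.0, "actual": 4.0},
    {"t": 2, "input": 3.0, "actual": 9.5},   # deviates from prediction
]
insights, drift_samples = process_stream(model, stream)
```

Everything produces a metadata insight, but only the packet the model failed to predict is worth shipping to the data center or cloud.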
>> Right, so your piece is really
the gathering and the ML,
and then if it needs to go back out
for more heavy lifting,
you'll send it back up,
or do you have the cloud application as well
that connects if you need? >> Yeah,
so we build connectors to you know,
Google Cloud Platform,
Google IoT Core,
to AWS S3, to Microsoft Azure,
virtually any, Kafka, Hadoop.
We can send the data wherever you want,
either on plant,
right back into the existing control systems,
we can send it to OSIsoft PI,
which is a great time series database
that a lot of process industries use.
You could of course send it to any public cloud
or a Hadoop data lake private cloud.
You can send the data wherever you want.
Now, we also have,
one of our components is a time series database.
You can also persist it
in memory in our stack,
just for buffering,
or if you have high value data that
you want to take a measurement,
a value from a previous calculation
and bring it into another calculation
during later, right,
so, it's a very flexible system.
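A minimal sketch of that buffering pattern (not FogHorn's time series component, just the idea): a bounded in-memory series whose values one calculation can record and a later calculation can reuse.

```python
# Sketch of the described buffering pattern: keep a bounded in-memory
# time series so a value from one calculation can feed a later one.
from collections import deque

class SeriesBuffer:
    def __init__(self, capacity=100):
        self.points = deque(maxlen=capacity)   # oldest values roll off

    def record(self, t, value):
        self.points.append((t, value))

    def latest(self):
        return self.points[-1][1]

    def mean(self):
        return sum(v for _, v in self.points) / len(self.points)

buf = SeriesBuffer(capacity=3)
for t, v in enumerate([5.0, 6.0, 7.0, 8.0]):   # 5.0 rolls off
    buf.record(t, v)
# a later calculation reuses the buffered measurements
ratio = buf.latest() / buf.mean()
```

The fixed `capacity` is what keeps this viable on a constrained edge device: memory use is bounded no matter how long the stream runs.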
>> Yeah, we were at OSIsoft PI World
earlier this year.
Some fascinating stories that came out of--
>> 30 year company.
>> The building maintenance,
and all kinds of stuff.
So I'm just curious,
some of the easy to understand applications
that you've seen in the field,
and maybe some of the ones
that were a surprise on the OT side.
I mean, obviously,
preventative maintenance is always
towards the top of the list.
>> Yeah, I call it the layer cake, right?
Especially when you get to remote assets
that are either not monitored
or lightly monitored.
They call it drive-by monitoring.
Somebody shows up and listens
or looks at a valve or gauge and leaves.
Condition-based monitoring, right?
That is actually a big breakthrough for some,
you know, think about fracking sites,
or remote oil fields,
or mining sites.
The second layer is predictive maintenance,
which the next generation is kind of
predictive, prescriptive,
even preventive maintenance, right?
You're making predictions
or you're helping to avoid downtime.
The third layer,
which is really where our stack
is sort of unique today in delivering
is asset performance optimization.
How do I increase throughput,
how do I reduce scrap,
how do I improve worker safety,
how do I get better processing of the data
that my PLC can't give me,
so I can actually improve
the performance of the machine?
Now, ultimately,
what we're finding is a couple of things.
One is, you can look at
individual asset optimization,
process optimization,
but there's another layer.
So often, we're deployed to
two layers on premise.
There's also the plant-wide optimization.
We talked about wind farm before, off camera.
So you've got the wind turbine.
You can do a lot of things about
turbine health,
the blade pitch and condition of the blade,
you can do things on the battery,
all the systems on the turbine,
but you also need a stack running, like ours,
at that concentration point
where there's 200 plus turbines
that come together,
'cause the optimization of the whole farm,
every turbine affects the other turbine,
so a single turbine can't tell you
speed, rotation,
things that need to change,
if you want to adjust the speed of one turbine,
versus the one next to it.
So there's also kind of
a plant-wide optimization.
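A toy version of that concentration-point layer (the IDs, metadata fields, and control rule are all hypothetical): each turbine reports summary metadata, and the farm-level stack derives a per-turbine speed adjustment, since no single turbine can see the whole farm.

```python
# Hypothetical plant-wide layer: each turbine reports metadata, and the
# concentration-point stack nudges each turbine toward the farm optimum,
# since one turbine's operation affects its neighbors.

def farm_setpoints(turbine_reports, step=0.1):
    """Return a per-turbine speed adjustment from fleet metadata."""
    mean_load = sum(r["load"] for r in turbine_reports) / len(turbine_reports)
    # Turbines above the farm-mean load slow down; those below speed up.
    return {r["id"]: round(step * (mean_load - r["load"]), 3)
            for r in turbine_reports}

reports = [
    {"id": "t1", "load": 0.9},
    {"id": "t2", "load": 0.7},
    {"id": "t3", "load": 0.8},
]
adjust = farm_setpoints(reports)
```

The real objective function would involve wake effects and wind conditions; the sketch only shows why the computation has to live where all 200-plus turbines' metadata converges.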
Talking about autonomous driving,
there's going to be five layers of compute, right?
You're going to have the,
almost what I call the ECU level,
the individual sub-system in the car that,
the engine, how it's performing.
You're going to have the gateway in the car
to talk about things that are happening
across systems in the car.
You're going to have
the peer to peer connection over 5G
to talk about optimization
right between vehicles.
You're going to have the base station algorithms
looking at a microcell or macrocell
within a geographic area,
and of course, you'll have the ultimate cloud,
'cause you want to have the data
on all the assets, right,
but you don't want to send
all that data to the cloud,
you want to send the right metadata to the cloud.
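The idea of not sending all that data to the cloud, only the right metadata, can be sketched as a simple edge summarizer. The field names and threshold here are assumptions for illustration, not a real FogHorn interface:

```python
import statistics

def edge_summarize(window, limit=80.0):
    """Reduce a window of raw, high-frequency sensor readings to the
    metadata worth sending upstream: summary stats plus threshold events."""
    return {
        "count": len(window),
        "mean": round(statistics.fmean(window), 2),
        "max": max(window),
        # Only the exceptions travel to the cloud, not every raw sample.
        "alerts": [v for v in window if v > limit],
    }
```

A window of thousands of samples collapses to a handful of numbers, which is the economic argument for doing the analytics at the edge.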
>> That's why there are big trucks full
of compute now. >> By the way,
you mentioned one thing that
I should really touch on,
which is, we've talked a lot about
what I call traditional brown field
automation and control type analytics
and machine learning,
and that's kind of where we started
in discrete manufacturing a few years ago.
What we found is that in that domain,
and in oil and gas, and in mining,
and in agriculture, transportation,
in all those places,
the most exciting new development this year
is the movement towards video,
3D imaging and audio sensing,
'cause those sensors are now
becoming very economical,
and people have never thought about,
well, if I put a camera
and apply it to a certain application,
what can I learn,
what can I do that I never did before?
And often, they even have cameras today,
they haven't made use of any of the data.
So there's a very large customer of ours
who literally has video inspection data
on every product they produce,
every day, around the world,
and this is in hundreds of plants.
And that data never gets looked at, right,
other than training operators, like,
hey, you missed the defects that day.
The system, as you said,
they just write over that data
after 30 days.
Well, guess what,
you can apply deep learning
TensorFlow algorithms
to build a convolutional neural network model
and essentially do the human vision task,
rather than an operator staring at a camera,
or trying to look at training tapes
30 days later,
I'm doing inferencing of
the video image on the fly.
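A minimal sketch of that streaming inference loop, with a trivial stub standing in for the trained TensorFlow CNN; the scoring function, threshold, and frame format are placeholders, not a real model:

```python
def stub_defect_score(frame):
    """Stand-in for a trained CNN: in production this would be a
    TensorFlow model's inference call on the video frame."""
    return sum(frame) / len(frame)  # pretend mean intensity tracks defects

def inspect_stream(frames, threshold=0.5):
    """Score each frame as it arrives and flag defects immediately,
    instead of archiving footage that gets overwritten after 30 days."""
    events = []
    for i, frame in enumerate(frames):
        if stub_defect_score(frame) > threshold:
            events.append(i)  # frame index flagged for the operator dashboard
    return events
```

The structural point is that inference happens inside the loop as frames arrive, so the raw video never has to leave the plant.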
>> So, do your systems close loop
back to the control systems now,
or is it more of a tuning mechanism
for someone to go back and do it later?
>> Great question, I just got asked that
this morning by a large oil and gas super major
that Intel just introduced us to.
The short answer is,
our stack can absolutely go right back
into the control loop.
In fact, one of our investors and partners,
I should mention,
our Series A investors were GE,
Bosch, Yokogawa, Dell EMC,
and our Series B a year ago was Intel,
Saudi Aramco, and Honeywell.
So we have one foot in tech,
one foot in industrial,
and really, what we're trying to do
is bring, as you said, IT and OT together.
The short answer is,
you can do that,
but typically in the industrial environment,
there's a conservatism about,
hey, I don't want to touch,
you know, affect the machine
until I've proven it out.
So initially, people tend to start with alerting,
so we send an automatic alert
back into the control system to say,
hey, the machine needs to be re-tuned.
Very quickly, though,
certainly for things that are
not so time-sensitive,
they will just have us,
now, Yokogawa, one of our investors,
as I pointed out,
is actually putting us in PLCs.
So rather than sending the data off the PLC
to another gateway running our stack,
like an x86 or ARM gateway,
we're actually, those PLCs now have
Raspberry Pi plus capabilities.
A lot of them are-- >> To what types of mechanism?
>> Well, right now,
they're doing the IO
and the control of the machine,
but they have enough compute now
that you can run us in a separate module,
like the little brain
sitting right next to the control room,
and then do the AI on the fly,
and there, you actually don't even need to
send the data off the PLC.
We just re-program the actuator.
So that's where it's heading.
It's eventually, and it could take years
before people get comfortable
doing this automatically,
but what you'll see is that
what AI represents in industrial
is the self-healing machine,
the self-improving process,
and this is where it starts.
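The alert-first progression toward closed-loop control described above might look like this in outline; `handle_drift`, its mode flag, and its return values are hypothetical names for illustration:

```python
def handle_drift(measured, target, tolerance, closed_loop=False):
    """Alert-first pattern: flag the deviation to the control system,
    and only write the correction back once closed-loop mode is trusted."""
    error = measured - target
    if abs(error) <= tolerance:
        return ("ok", None)
    correction = -error  # hypothetical re-tune value for the PLC/actuator
    if closed_loop:
        return ("retuned", correction)  # self-healing: write-back applied
    return ("alert", correction)  # operator sees the suggested re-tune first
```

Flipping `closed_loop` to `True` is the organizational step, not the technical one: the conservatism in industrial environments means customers start with alerting and only later let the stack re-program the actuator.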
>> Well, the other thing
I think is so interesting is
what are you optimizing for,
and there is no right answer, right?
It could be you're optimizing for,
like you said, a machine.
You could be optimizing for the field.
You could be optimizing for maintenance,
but if there is a spike in pricing,
you may say, eh,
we're not optimizing now for maintenance,
we're actually optimizing for output,
because we have this temporary condition
and it's worth the trade-off.
So I mean, there's so many ways that
you can skin the cat
when you have a lot more information
and a lot more data. >> No, that's right,
and I think what we typically like to do
is start out with
what's the business value, right?
We don't want to go do a science project.
Oh, I can make that machine work 50% better,
but if it doesn't make any difference
to your business operations, so what?
So we always start the investigation with
what is a high value business problem
where you have sufficient data
where applying this kind of AI and the edge concept
will actually make a difference?
And that's the kind of proof of concept
we like to start with.
>> So again, just to come full circle,
what's the craziest thing an OT guy said,
oh my goodness, you IT guys
actually brought some value here
that I didn't know. >> Well, I touched on video,
right, so without going into
the whole details of the story,
one of our big investors,
a very large oil and gas company,
we said, look,
you guys have done some great work with
I call it software defined SCADA,
which is a term,
SCADA is the network environment for OT, right,
and the PLCs and DCSes
connect over these SCADA networks.
That's the control automation role.
And this investor said, look,
you can come in,
you've already shown us,
that's why they invested,
that you've gone into
brown field SCADA environments,
done deep mining of the existing data
and shown value by reducing scrap
and improving output,
improving worker safety,
all the great business outcomes for industrial.
If you come into our operation,
our plant people are going to say, no,
you're not touching my PLC.
You're not touching my SCADA network.
So come in and do something
that's non-invasive to that world,
and so that's where we actually
got started with video about 18 months ago.
They said, hey,
we've got all these video cameras,
and we're not doing anything.
We just have human operators writing down,
oh, I had a bad event.
It's a totally non-automated system.
So we went in and did a video use case
around what we call flare monitoring.
You know, hundreds of stacks
burning off oil and gas in a production plant.
24 by seven team of operators
just staring at it, writing down,
oh, I think I had a bad flare.
I mean, it's a very interesting
old world process.
So we automated that
and gave them, essentially, an AI dashboard.
Oh, I've got a permanent record of
exactly how high the flare was,
how smoky was it,
what was the angle,
and then you can fuse that data
back into plant data,
what caused that,
and also OSIsoft data,
what was the gas composition?
Was it in fact a safety violation?
Was it in fact an environmental violation?
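Fusing the vision-derived flare metrics with historian data, for example gas composition from OSIsoft, could be sketched as below; all field names and limits are invented for the example:

```python
def classify_flare(event, plant):
    """Fuse vision-derived flare metrics (height, smokiness) with
    plant historian data (gas composition) to classify the event."""
    findings = []
    # Hypothetical rule: flare height beyond the permitted envelope.
    if event["height_m"] > plant["max_safe_height_m"]:
        findings.append("safety_violation")
    # Hypothetical rule: smoky flare combined with sour gas in the feed.
    if event["smoke_index"] > 0.7 and plant["h2s_ppm"] > 10.0:
        findings.append("environmental_violation")
    return findings or ["normal"]
```

Neither data source alone can answer the question; the camera sees the smoke, the historian knows the gas composition, and only the fused record says whether it was a violation.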
So, by starting with video,
and doing that use case,
we've now got dozens of use cases
all around video.
Oh, I could put a camera on this.
I could put a camera on a rig.
I could've put a camera down the hole.
I could put the camera on the pipeline,
on a drone.
There's just a million places
that video can show up,
or audio sensing, right, acoustic.
So, video is great if you can see the event,
like I'm flying over the pipe,
I can see corrosion, right,
but sometimes, like you know,
a burner or an oven,
I can't look inside the oven with a camera.
There's no camera that could survive 600 degrees.
So what do you do?
Well, that's probably,
you can do something like
either vibration or acoustic.
Like, inside the pipe,
you got to go with sound.
Outside the pipe, you go video.
But these are the kind of things that people,
traditionally, how did they inspect pipe?
Drive by.
>> Yes, fascinating story.
Again, I think at the end of the day,
you can make real decisions
based on all the data in real time,
versus some of the data after the fact.
All right, well, great conversation,
and look forward to watching
the continued success of FogHorn.
>> Thank you very much. >> All right.
>> Appreciate it. >> He's David King,
I'm Jeff Frick,
you're watching theCUBE.
We're having a CUBE conversation
at our Palo Alto studio.
Thanks for watching, we'll see you next time.
(uplifting symphonic music)