AI for Embedded Systems | Embedded systems podcast, in Pyjama
Summary
TLDR: In this engaging discussion, a group of five people explores the practical applications of AI in embedded systems. They delve into the current capabilities of AI for tasks like reading and interacting with datasheets, with mixed results. The conversation covers the challenges of relying on AI for coding assistance, the limitations of AI in understanding specific documentation, and the potential for AI to generate code and unit tests. The group also touches on the broader implications of AI in the software development process, highlighting both its benefits and the need for cautious adoption.
Takeaways
- 😀 The group discusses the use of AI in embedded systems and its current applications, focusing on large language models.
- 🔍 Wasim shares his experience using AI to interpret data sheets, noting the model's mixed success in providing relevant information.
- 📚 The conversation highlights the limitations of AI when dealing with poorly documented or proprietary hardware data sheets.
- 🤖 A member of the group explores using local AI models like Llama 3 for tasks to maintain data privacy, especially for company-specific hardware.
- 🛠️ The group acknowledges AI's utility in writing boilerplate code such as HTML and CSS, but notes it performs less well on more complex or custom code.
- 🔧 Some participants find AI-generated code suggestions distracting and sometimes inaccurate, leading to a preference for disabling certain AI features.
- 🔄 The discussion points out AI's tendency to 'hallucinate' or generate incorrect information, necessitating verification of its outputs.
- 🔑 The importance of understanding AI's limitations is emphasized, such as its inability to understand the context as deeply as a human expert.
- 🔒 Privacy and security are considered when deciding to use local AI models to avoid uploading sensitive data to the cloud.
- 📈 The group sees potential in AI for reducing research time and providing meaningful responses for common queries found on the internet.
- 🛑 The discussion concludes with a cliffhanger about whether AI will replace embedded engineers, suggesting that it's currently far from happening.
Q & A
What is the main topic of discussion in the video?
-The main topic of discussion is the use of AI in embedded systems, focusing on how AI, particularly large language models, can be applied in this field.
How is Wasim using AI in his current work?
-Wasim is exploring the use of AI to read and chat with datasheets, although he has faced challenges with the accuracy of the responses.
What is the general consensus about the reliability of AI-generated responses for technical documentation?
-The consensus is that AI-generated responses can be hit or miss, often providing incorrect or incomplete information, which can be unreliable for technical documentation.
What alternative method is being explored for using AI with local PDFs?
-An alternative method involves using a local LLM model like LLaMA 3 and converting PDFs to text and embeddings for querying, thus avoiding cloud-based solutions for proprietary data.
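The retrieval step of that pipeline can be sketched in a few lines of standard-library Python. This is a toy illustration, not the tooling discussed in the episode: the bag-of-words `embed` function stands in for a real embedding model (such as one served locally by Ollama), and the datasheet chunks are invented.

```python
# Toy sketch of the PDF-chat pipeline: extract text, chunk it, embed
# each chunk, then answer a query by retrieving the most similar
# chunk. A real RAG setup would pass the retrieved chunk plus the
# query to a local LLM for answer generation.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Trivial bag-of-words "embedding"; a real system would call an
    # embedding model here instead.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks: list[str], query: str) -> str:
    # Return the chunk most similar to the query.
    q = embed(query)
    return max(chunks, key=lambda c: cosine(embed(c), q))

# Pretend these chunks came from a datasheet PDF converted to text.
chunks = [
    "The UART baud rate is configured via the BRR register.",
    "GPIO pins default to input mode after reset.",
    "The watchdog timer must be refreshed every 500 ms.",
]
print(retrieve(chunks, "which register sets the baud rate?"))
```

Keeping the whole loop local (text extraction, embedding, retrieval, generation) is what avoids uploading proprietary datasheets to a cloud service.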
What are some challenges mentioned regarding the use of AI for writing code?
-Challenges include AI generating incorrect code, creating distractions with irrelevant autocomplete suggestions, and sometimes hallucinating incorrect solutions.
What are the advantages of using AI for scripting languages mentioned in the discussion?
-AI is found to be helpful in writing scripts like Python and JavaScript, as well as generating boilerplate code for HTML and CSS.
How do the speakers use AI for generating unit tests?
-They use AI to generate basic unit tests by inferring from the code, which can help in testing all combinations of input data types and expected outputs.
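A minimal sketch of the kind of test scaffold an assistant typically infers from a small function: enumerate combinations of inputs and check each expected output. The function and the cases here are illustrative, not taken from the episode.

```python
# Hypothetical function an AI assistant might be asked to test.
def clamp(value: int, lo: int, hi: int) -> int:
    """Clamp value into the inclusive range [lo, hi]."""
    return max(lo, min(value, hi))

# (value, lo, hi, expected) covering below, inside, and above the
# range, plus a degenerate range -- the "all combinations of input
# data types and expected outputs" style mentioned above.
cases = [
    (-5, 0, 10, 0),   # below range -> clamped to lo
    (5, 0, 10, 5),    # inside range -> unchanged
    (15, 0, 10, 10),  # above range -> clamped to hi
    (0, 0, 0, 0),     # degenerate range lo == hi
]
for value, lo, hi, expected in cases:
    assert clamp(value, lo, hi) == expected, (value, lo, hi)
print("all generated cases pass")
```

The generated scaffold is only as good as the inferred cases, so the speakers' point stands: the human still reviews which combinations are actually meaningful.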
What is one of the significant limitations of AI in coding, according to the discussion?
-A significant limitation is AI's inability to handle custom or complex codebases effectively, often generating more noise than useful logic.
What are the speakers' thoughts on the future improvement of AI in coding?
-They believe that as AI gets used more often and receives more feedback, its accuracy and usefulness in coding will improve over time.
What is a common problem with AI-generated technical solutions as highlighted in the video?
-A common problem is AI's tendency to hallucinate solutions that seem plausible but are actually incorrect, leading to confusion and mistrust in its responses.
Why do some speakers prefer to run AI models locally rather than using cloud-based solutions?
-They prefer local models to ensure the privacy and security of proprietary data, which might be at risk if uploaded to cloud-based AI services.
What is the perceived gap between AI's current capabilities and the potential to replace embedded engineers?
-The perceived gap is significant, as AI currently lacks the ability to fully understand and implement complex hardware and software integration, which is critical in embedded engineering.
Outlines
🤖 AI in Embedded Systems and Documentation Challenges
The group discusses the application of AI in embedded systems, starting with general AI usage and then focusing on its relevance in embedded systems. They share personal experiences with AI tools like ChatPDF for reading datasheets, highlighting the varying degrees of success and the challenges of finding precise information. The conversation also touches on the reliability of documentation and the potential of AI to assist in programming tasks.
🔍 Exploring Local AI Solutions for Proprietary Hardware
Members of the group explore the idea of using AI locally to interact with PDF documents, particularly for proprietary hardware where cloud-based solutions might not be ideal. They discuss the use of models like Llama 3 for local large language model (LLM) implementations and the process of converting PDFs into text and embeddings so the model can be queried against them. The summary also includes the challenges of getting AI to understand and interact with specific documents locally.
🛠 AI's Role in Code Writing and the Limitations Encountered
The discussion delves into the use of AI for writing boilerplate code, such as HTML and CSS, and the issues faced when attempting to write more complex, custom code. It is noted that AI can be distracting and sometimes generates incorrect code, leading to a lack of trust in its output. The group also shares anecdotes about AI-generated code that required correction and the subsequent challenges in getting accurate responses.
🔄 AI's Predictive Text Capabilities and the Risk of Hallucination
The conversation examines the predictive text capabilities of AI, noting its tendency to 'hallucinate' or generate incorrect information. The group discusses the analogy of AI trying to 'wing it' without a clear understanding of the task, leading to inaccuracies. They also touch on the potential for AI to backtrack and correct its predictions based on probability and user feedback.
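The "be more creative / be more accurate" settings mentioned in this part of the conversation map to the temperature parameter: the model's next-token scores are divided by the temperature before being turned into probabilities, so low values sharpen the distribution toward the top prediction and high values flatten it. A minimal sketch with made-up logits:

```python
# Toy illustration of temperature in next-token sampling. The logit
# values for the three candidate tokens are invented for the demo.
import math

def softmax_with_temperature(logits: list[float], temp: float) -> list[float]:
    scaled = [l / temp for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.2)  # low temp: near-greedy
hot = softmax_with_temperature(logits, 5.0)   # high temp: near-uniform
print([round(p, 3) for p in cold])
print([round(p, 3) for p in hot])
```

At low temperature almost all probability mass lands on the top-scoring token; at high temperature the choices become nearly equally likely, which is one reason "creative" settings also hallucinate more readily.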
📈 AI's Evolution and Its Impact on Software Development
The group reflects on the rapid development of AI and its impact on software creation, comparing it to the speed at which the COVID-19 vaccines were developed. They discuss the importance of user feedback in improving AI and the potential for AI to become more accurate and creative over time. The conversation also includes the use of AI for summarizing documents and conversations, as well as its limitations in certain areas like code writing.
🍽️ Wrapping Up the Discussion and Looking Forward to Future AI Topics
The group concludes the discussion with a humorous note about AI replacing embedded engineers, acknowledging that while AI has come far, it is not yet capable of completely replacing human engineers. They plan to continue the conversation in a follow-up session, where they will explore the potential of AI to replace certain roles and the ethical considerations of AI development.
👋 Final Thoughts and Goodbyes
The final paragraph captures the group's light-hearted sign-off, with a member humorously referencing the 'monkeys typing' scenario to illustrate the potential for AI to eventually generate meaningful output. The group agrees to reconvene for further discussions on AI's role and capabilities.
Keywords
💡AI in embedded systems
💡Large language models (LLMs)
💡Chat PDF
💡Data sheets
💡Proprietary hardware
💡Local AI models
💡Code generation
💡Code bloat
💡Unit tests
💡Documentation gap
💡AI-generated errors
Highlights
The group discusses the use of AI in embedded systems and large language models.
Wasim explores using AI to read and chat with data sheets, with mixed results.
The group debates the reliability of AI-generated information compared to human-written documentation.
AI's limitations in understanding and providing accurate information from data sheets are discussed.
Exploration of using local AI models like Llama 3 for proprietary hardware documentation.
The process of converting PDFs to text and training AI models with the text is outlined.
Google IO's announcement of AI models that understand and respond to PDF documents is mentioned.
AI's utility in writing boilerplate code like HTML and CSS is highlighted, with limitations in C code.
The group shares experiences with AI-generated code inaccuracies and the need for manual correction.
AI's struggle with custom codebase constructs and the resulting 'noise' in code predictions.
The analogy of AI as a human trying to 'wing it' without a clear direction is used to describe its limitations.
The importance of feedback loops for AI improvement and the comparison to software development cycles.
AI's potential to replace embedded engineers is humorously dismissed by the group.
The group acknowledges the rapid pace of AI development, drawing parallels to the COVID-19 vaccine.
The practical applications of AI in summarizing documents and conversations are praised.
The group discusses AI's role in creating unit tests and its potential impact on software development.
Licensing and attribution issues related to AI-generated code are raised as concerns.
The group concludes with a cliffhanger on whether AI will replace embedded engineers, to be continued in the next session.
Transcripts
but all right hey everyone we back again
the five us uh you know
finally had like intersection of free
time and we have decided this time
around to discuss on the use of AI in
embedded systems yes well I think we'll
just focus well or at least start from
how we are using AI in general the large
language models and then maybe go from
there and talk about you know how
relevant it is you know can we use it in
embedded systems to what degree and so on
and so
forth so yeah whoever feels like jumping
in go ahead jump
in yeah I think Wasim had one
interesting yeah so currently I'm
exploring throwing Wasim under the
bus good yeah
nice so currently I'm exploring like how I
can use AI to read or chat with the data
sheets so I tried but uh the AI model
which I was using it was ChatPDF so
some of the answers were very relevant
and some of the answers it was not able
to find out like the exact register
which register I should update
to and so on and so also is this a tool
that's online where you have to kind of
push your PDF yeah it is ChatPDF I
see okay so what's this I don't think
that is AI's problem uh if I give that
data sheet to the person who wrote that
data sheet he won't be able to find
relevant information in
either you know my my okay this is
interesting because my nephew is trying
to program the stm32 and for some reason
he had to program the DC or something
yeah and there there was
like okay I'm paraphrasing him but the
he reported that there were few bits
that were undocumented and he found it
on Reddit or somewhere you know finally
cracked the problem that way uh but yeah
I think the documentation is also
reliable to like a good degree but you
know not 100% yeah yeah that said you
know along the same lines of what Wasim
was mentioning I was also on the site
trying to explore uh you know AIS that
can or rather the llm models that can be
used to chat with the PDFs but I wanted
to do so like locally so I was
trying Ollama with Llama 3 and any reason for
the local way well I suppose I was
trying it out for our audience majorly
uh in the sense that you know if you're
working on a proprietary piece of
hardware and the data sheet for that is
like local to your
company then it wouldn't be like a good
idea to push it onto yeah some cloud
based AI so with Ollama at least I have
had like good Su well good success I
I've had some success running it locally
at least the llm models run and it
responds back to General queries answers
like I don't rely on the answers usually
the PDF part I have not cracked it uh
there are a lot of uh tutorials that I'm
watching that go around kind of you know
they call it RAG
retrieval augmented generation so what happens is
there is like a bunch of uh python
libraries that you need to kind of you
know use to First convert the PDF
document into just texts and then
convert that text into like vectors or
embeddings and then feed it
as a training data to the llms and then
after all of that you know go ahead we
can go ahead and ask it questions so
I've just seen few T tutorials didn't
get to the point where you know I was
feeding it the local PDF but yeah that
seems something useful and relevant for
us
yeah I wonder how so most of these
models have now started to become multimodal
right so we can attach the complete PDF
document can we can we upload the PDF
document into Bing or chat
GPT I don't know but in in you know
Google IO they mention that you can uh
you know provide a PDF document and it
will parse the document and then it will
you know act as it knows about that PDF
and then you can ask any questions and
it will you know respond to your queries
I think there is might be because so the
most of these chat most of these models
are constrained by the context window
right so Gemini 1.5 supports like a 2
million context window so it can parse
big
documents right
nice but then it would be consuming this
PDF out of the drive right the Google
Drive locally maybe locally ultimately
that D uploaded to Cloud somehow yeah I
think
okay well I if you ask me I use AI
mostly to write my boilerplate
code
like what I have found is it is really
good when you are writing uh scripts
like python or any kind of JavaScript
etc it's really helpful to write
boilerplate code like HTML and
CSS but what I have found is if you try
writing C code it can predict to some
extent but if you are working on a
custom codebase where
it's not a generic construct it
sometimes creates more noise than actual
logic yeah
so yes so most of these
code assists what they do is as you type
like Snippets like code Snippets or auto
complete will do the it will create some
uh code something in light gray giving
you a prediction you know maybe this is
what you are trying to type the
autocomplete feature in the IDEs but the
thing is those are limited to some
extent like some words or some or one or
two lines but AI what I have seen is
mostly with co-pilot and others like it
tries to be smart and create the
complete logic out of it and it become
and to me sometimes that logic is not
right and I get distracted by that
because I want to read that what it
generated and that shifts my focus from
what I was thinking to what it then I
found then I find out it's useless I
type I get context switch back into what
I was typing I type that and it
generates new a new autocomplete
suggestion big which again distracts me
and and ultimately what I do is so I
have to disable the auto complete of the
code assist and I use it in the console mode
like when I want it to work I go to
console and type it can you do this for
in the you know this this just reminded
me the whole point of it creating noise
so I also enabled this uh again Llama 3
based you know there's some extension in
vs code that can you know consume an
LLM model and then try and predict
the code and my God it's just a pain
yeah whatever Rajit mentioned is like
100% true you know I'm trying to type
something and it predicts like three
lines worth of garbage um and
you know pretty much the same situation
which is I think it's good for writing
scripts um to some degree it does like a
good job Python scripts or shell scripts
C code really
bad you know the worst scripts wherever
you want to write the functional logic
of the script right I have seen that I
I'm better off writing it myself what I
can do is this so for example even in my
C code what I use it for is let's say I
want a linked list implementation I just tell
it implement a linked list for me in which
the data structure should have XYZ
elements mhm and then it generates the
boilerplate code of what the node
structure should look like insert remove
delete print and then I
write the function logic what I whatever
I want to do with that list so for these
purposes AI is really
good yeah I think that's fair by the way
one other thing I want to call out is um
there were cases in which I asked it to
generate code and it kind of generated
some incorrect code and then I call out
saying hey you know these XYZ lines are
wrong and then it says oh yeah yeah yeah
you know sorry I made a mistake and goes
ahead writes some new code which also
doesn't work like 60% of the time it
doesn't work and then I ask it to again
correct it and it uses the same library
or the same you know incorrect
statements from the first response like
for example if you some okay I'm
forgetting the response um kind of example
that I asked
it uh I had yeah okay this is the one so
I was working on this course like videos
for the Linker script course and then at
some point I asked it hey you know how
can I ask the Linker to strictly follow
my Linker script because apparently
linkers what they do is um while
processing the Linker script they add
stuff that is convenient so for example
let
say um I wanted to demonstrate that I'm
only picking some section from a given
object file and what happens is that
object file has some content which is
kind of in available in another object
file like a global variable or something
what I wanted to demonstrate was that if
I specifically mention that only include
one object file it should only include
you know contents of that one object
file the Linker is very convenient if
you give it the inputs and the other
object file has the content it will just
add that in so I wanted to kind of you
know uh try and find out if there is a
flag compiler flag or not compiler but
Linker flag that specifically restricts
Linker to only process the Linker script
I provided so yeah you know the LLM
model says oh there's like this minus s
option that you can use I try it doesn't
work then I tell it that hey you know
this option doesn't work is like oh
sorry my bad my bad try a RX then I try
ARX and you know R is like an unknown
option or something like that happens
and then I'm like man this is also wrong
sounds like the tar command something like that
yes I think r r and a might be relevant
X also might be relevant but they don't
do what you know the model told me yeah
then I tell it that hey you know this is
also incorrect it's not working and then
it goes ahead and uses the minus s
option again and so this has happened
with me like this is one instance but
this has happened with me over and over
again enough times that I don't trust
any output it gives me I cross check it
on Google again hallucination is a big problem it
just confuses it just hallucinates like
hey it if you know let's say compiler
has an option then Linker must also have
a similar option just try out but you
know in that sense it sounds as though
it's becoming more and more human like
because it's like hey you know I'll just
wing it let's see how far this
goes oh this didn't work no problem try
RX
confidence
exactly it's good that we have textual
proof of what it said otherwise it would be
like hey man when
did I say that you know this reminds me
early days when this ChatGPT etc were
there and you know I was trying out so
at that time I was trying to work with
the trace 32 and something else uh so I
and very custom uh so that tool has a
very custom debug uh script uh syntax
and I'm like but it's quite common the
manuals are out there so I'm like you
know let's put AI to try it's uh rather
than me going and learning it I want
very small thing to do let me ask it and
so I asked uh I think I asked multiple
more model the same question and uh I
asked them can you generate a trace 32
script for me that can do XYZ
functionality and it generated the code
and I was like really amazed I'm like
okay it generated the code it is really
helpful I don't have to go through the
complete manual to understand it I run
that and the Trace32 tool complains that this
is not even the
syntax then I went back and read the
manual and that was not the
syntax
ah I think one thing that's starting to
surface now is that okay large
language models are more or less just
trying to predict the next word right
they're just trying to predict more and
more of the
statement and
uh yeah I mean if if you think of
someone who is trying to just predict
statements and you know that those
statements happen to be something that
look like
code uh doesn't mean that you know that
machine might be actually understanding
what it has written like so at least to
me that is super clear
now so one analogy comes to mind like
like The Office scene where Michael Scott
says right I just start a sentence
without knowing where I am
going and just you know wing it at
run time maybe yeah AI is also kind of
doing this who knows who knows or at
least well
fundamentally it just trying to predict
the next word the next word so not
everything it says is accurate
at least that's my learning and again
I'm only exp I'm not really expert on
this but is there any way it can
backtrack saying hey you know this
series of sentence doesn't lead me
anywhere let me backtrack to the like
start word and start again the do you
know if that is something how would you
decide when to terminate it and question
it like I would imagine like the next
words probability are so low like you
know even the best choice is
worse compared to the previous choices
like had you gone past you know some
words the probability list becomes so
narrow that you will you feel like hey
let me go back and try do you know if
that oh I I suppose that is where those
settings come in be more creative be
more
accurate uh I think temperature
yeah temperature settings and I suppose
there is also one to help the AI kind of
backtrack a little bit which is like
regenerate the response it is in ChatGPT
I've seen those options I think it's
it's there in every
model Gemini also like they produce three
responses like I see okay okay
interesting okay but uh I I do see a
thing that things can improve and I
think as as it gets used more and more
often it will improve
again what I think it's one and a half
year and that's too small for a software
life
cycle
yeah I I I don't think any of the big
softwares were amazing within one and a
half year I suppose I suppose the last
time something got built with this rapid
Pace was the covid
vaccines and we don't know where they
are like going or what what kind of side
effects they will have later in life or
even
now but okay not a
medic he's saying he doesn't what did
you say man you have never seen a
software which is quite perfect in one
year of its initiation one and a half
year yes one and a half year I will show
you my calculator which I built in
college perfect got it right first right
my hello I'm sure if I give it to some
product manager he will he'll find 500s
in it
and then we'll ask you to implement
standups and multiple cycles and that
will take another one track progress
Sprints all of this provide a heat map
of how the progress is going Sprint
planning all of that so the thing is um
when you use it and nothing nothing
against it it's in general the more and
more users use it the more and more
feedback comes in because everyone
because like like AI we also hallucinate
you know if YZ feature was there in this
it would be really amazing and that's
what the creative part of humans is
right yeah this calculator should
also show temperature you know
exactly all of that good stuff
yes yeah I I think like uh like right
now if you use AI for generic you know
questions which are very popular and
the content is present on the net then
it's like it will give you know precise
answer that okay this is how uh you
would want to go ahead but then like if
you are asking for some specific area
then it might give you some results
which are which you don't know okay what
is correct and then also it hallucinates
so you cannot really see okay you know
whatever you have received may not be
correct right so you'll have to verify
that but I think personally for me uh I
don't use that much but wherever I use
like on Google it generates that you
know summary SGE right mhm and for generic
queries like you know if you are asking
okay how do I develop a you know
particular driver or a firmware or a
particular you know maybe algorithm or
software it will give you uh you know
some some SGE uh content and that is you
know well formatted that you you know line
by line and then okay figure out okay
this is the content and then it will
give you some snippet of the code right
so that you can figure out okay what is
this code and then again some content
and then some lines of code right so
that gives you you know how you can use
it how you can learn it to uh you know
for for your benefits right yeah I think
I would agree with that you know it does
narrow the search space a lot in terms
of like doing the research it can give a
lot of meaningful responses it feeds you
the right keywords where you can go on
about looking yeah that that I think is
a major upshot of you you know the
llms uh wherein they kind of reduce the
research time yeah that's I I think um
so some of the use cases like people
have been highlighting are really
amazing apart from the code space if we
take it out from the code space like
summarizing documents summarizing
conversations those are really amazing
like so there were demos at the io
Google IO where they showcased its
integration with Gmail and workspace
where you can just rather than going to
each group and reading all the messages
you can add a chat bot into it and just
ask did anybody come to a conclusion on
whether the event should happen or not
it can and the answer will always be no
no one came to a
conclusion I think as a productivity tool it might
and as I said right even in code assist
what I have found is it's really good at
creating boilerplate code another use case
is creating unit tests for your code h
it can do really great job at that yeah
yeah I think that functional end like if
uh like the one where the where they say
swe right where the product manager put
a requirement and generates the code for
you and everybody's like you don't need
Engineers product managers can generate
yeah I think we are way far off from
that far away from
that man that's what Engineers would say
even if we are not like I mean just
yeah don't worry about it it's like far
away yes so so for unit test right you
cannot provide your actual code but you
have to specify okay what you are
looking for
right I I think you can you can actually
you can so what happens is how most of
these companies are offering like if you
look at co-pilot Etc your code base it
does not train data on your Cod or train
on your code base it infers on your code
base and even at they are not using it
for the generic co-pilot
training so never leaves your leaves the
server correct so uh you know the what
rajit is pointing to is given a code
that you have written in your ID the AI
will try and predict what kind of test
to write yeah it will do inferencing not
the training part okay and the kind of
inference it does then like the basic
ones is okay it figures out what are
like the data types of the inputs then
you know okay feed all combination and
check for all the return values and
done okay and I think the other problem
with the code assist kind of code assist
and I think that may resolve over time
is
licensing oh nice yeah so it's trained
on the data of the web and it can
generate code bases Etc and tell you and
you are not even aware where it actually
picked that came from
yeah attribution would become a problem
yeah definitely
Perfect by the way I have to call it out
right now and I think the internet will
also get to know but it's dinner time on
my end and my mom is calling me so can I
can I drop off I think we can uh you
know record Another followup where we
will have a question where okay will AI
replace the embedded engineer yeah we have a few
questions which we would like to answer
these are kind of questions that comes
regularly on the LinkedIn and this is
like a cliffhanger but in
podcast okay so next time around
we so next time around we chat on okay
the probability of like AI replacing
embedded engineers and and it is zero
zero just kidding just no one knows I
don't know no comments cool okay so
let's let comments next time maybe not
today not not now yes I I only believe
that they can just just I would say the
only way for AI to replace embedded
engineers is if it starts building its own
hardware oh because today the
documentation gap that comes from
designer to embedded engineer is the reason
why embedded engineers are needed in a lot
of don't give AI ideas
man I I I don't think I am giving ideas
people are already starting to think in
that direction yeah okay maybe Sam Altman
that's why Sam Altman needs 7 trillion to
design an AI chip maybe it's asking ChatGPT
to design that
chip could be could okay cool we discuss
next you know sorry I'll just make final
comment it looks to me like that you
know like that situation of if you let
the monkey put enough you know alphabets
in a
row
will long time yeah it will generate
that cool anyways maybe you know I'll go
have dinner and let's meet up next I see
a flying chappal on your
way you should
now
not sorry well okay
I didn't get one mommy one
mom cut cut
cut both of them are super amazing human
beings I love both of them yes why did
you why did you win at the I didn't I
did I any anyhoo let's let's drop off
for now and let's catch up next time
take care bye bye