AI for Embedded Systems | Embedded systems podcast, in Pyjama

Embedded Systems Podcast, in Pyjama!
1 Jun 2024 · 25:32

Summary

TL;DR: In this discussion, a group of five engineers explores the practical applications of AI in embedded systems. They examine the current capabilities of AI for tasks like reading and interacting with datasheets, with mixed results. The conversation covers the challenges of relying on AI for coding assistance, its limitations in understanding specific documentation, and its potential for generating code and unit tests. The group also touches on the broader implications of AI for the software development process, highlighting both its benefits and the need for cautious adoption.

Takeaways

  • 😀 The group discusses the use of AI in embedded systems and its current applications, focusing on large language models.
  • 🔍 Wasim shares his experience using AI to interpret datasheets, noting the model's mixed success in providing relevant information.
  • 📚 The conversation highlights the limitations of AI when dealing with poorly documented or proprietary hardware datasheets.
  • 🤖 One member explores using local AI models like Llama 3 to maintain data privacy, especially for company-specific hardware.
  • 🛠️ The group finds AI useful for writing boilerplate code such as HTML and CSS, but less effective for more complex or custom software engineering code.
  • 🔧 Some participants find AI-generated code suggestions distracting and sometimes inaccurate, leading them to disable certain AI features.
  • 🔄 The discussion points out AI's tendency to 'hallucinate', or generate incorrect information, necessitating verification of its outputs.
  • 🔑 Understanding AI's limitations is emphasized, such as its inability to grasp context as deeply as a human expert.
  • 🔒 Privacy and security drive the decision to use local AI models and avoid uploading sensitive data to the cloud.
  • 📈 The group sees potential in AI for reducing research time and providing meaningful responses to common queries found on the internet.
  • 🛑 The discussion concludes with a cliffhanger about whether AI will replace embedded engineers, suggesting that it is currently far from happening.

Q & A

  • What is the main topic of discussion in the video?

    -The main topic of discussion is the use of AI in embedded systems, focusing on how AI, particularly large language models, can be applied in this field.

  • How is Wasim using AI in his current work?

    -Wasim is exploring the use of AI to read and chat with datasheets, although he has faced challenges with the accuracy of the responses.

  • What is the general consensus about the reliability of AI-generated responses for technical documentation?

    -The consensus is that AI-generated responses can be hit or miss, often providing incorrect or incomplete information, which can be unreliable for technical documentation.

  • What alternative method is being explored for using AI with local PDFs?

    -An alternative method involves running a local LLM such as Llama 3 and converting PDFs to text and embeddings for querying, thus avoiding cloud-based solutions for proprietary data.

  • What are some challenges mentioned regarding the use of AI for writing code?

    -Challenges include AI generating incorrect code, creating distractions with irrelevant autocomplete suggestions, and sometimes hallucinating incorrect solutions.

  • What are the advantages of using AI for scripting languages mentioned in the discussion?

    -AI is found to be helpful in writing scripts like Python and JavaScript, as well as generating boilerplate code for HTML and CSS.

  • How do the speakers use AI for generating unit tests?

    -They use AI to generate basic unit tests by inferring from the code, which can help in testing all combinations of input data types and expected outputs.

  • What is one of the significant limitations of AI in coding, according to the discussion?

    -A significant limitation is AI's inability to handle custom or complex codebases effectively, often generating more noise than useful logic.

  • What are the speakers' thoughts on the future improvement of AI in coding?

    -They believe that as AI gets used more often and receives more feedback, its accuracy and usefulness in coding will improve over time.

  • What is a common problem with AI-generated technical solutions as highlighted in the video?

    -A common problem is AI's tendency to hallucinate solutions that seem plausible but are actually incorrect, leading to confusion and mistrust in its responses.

  • Why do some speakers prefer to run AI models locally rather than using cloud-based solutions?

    -They prefer local models to ensure the privacy and security of proprietary data, which might be at risk if uploaded to cloud-based AI services.

  • What is the perceived gap between AI's current capabilities and the potential to replace embedded engineers?

    -The perceived gap is significant, as AI currently lacks the ability to fully understand and implement complex hardware and software integration, which is critical in embedded engineering.

Outlines

00:00

🤖 AI in Embedded Systems and Documentation Challenges

The group discusses the application of AI in embedded systems, starting with general AI usage and then focusing on its relevance to embedded work. They share personal experiences with tools like ChatPDF for reading datasheets, highlighting varying degrees of success and the challenge of finding precise information. The conversation also touches on the reliability of documentation and the potential of AI to assist in programming tasks.

05:00

πŸ” Exploring Local AI Solutions for Proprietary Hardware

Members of the group explore the idea of using AI locally to interact with PDF documents, particularly for proprietary hardware where cloud-based solutions might not be ideal. They discuss the use of models like llama 3 for local language model (LLM) implementations and the process of converting PDFs into text and embeddings for AI training. The summary also includes the challenges of getting AI to understand and interact with specific documents locally.
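The pipeline the group describes — extract text from the PDF, turn chunks into vectors, retrieve the closest chunk for a question — can be illustrated with a deliberately tiny stand-in. This sketch uses bag-of-words counts in place of a learned embedding model; a real setup would use a PDF-to-text library and proper embeddings, and the datasheet fragments below are hypothetical.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words frequency vector.
    Real RAG pipelines use a learned embedding model instead."""
    return Counter(re.findall(r"[a-z0-9_]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks, question, k=1):
    """Return the k chunks most similar to the question; in a real
    setup these chunks would be text extracted from the datasheet PDF."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]

# Hypothetical datasheet fragments standing in for extracted PDF text.
chunks = [
    "GPIO configuration: set bits 0-3 of the MODER register",
    "ADC calibration requires writing the ADCAL bit in ADC_CR",
    "Power control: the PWR_CR register selects the voltage scale",
]
best = retrieve(chunks, "which register is used for ADC calibration?")
print(best[0])
```

The retrieved chunk is then pasted into the LLM prompt as context — the model is not retrained on the PDF, which is the point the tutorials the speaker mentions are making.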

10:01

🛠 AI's Role in Code Writing and the Limitations Encountered

The discussion delves into the use of AI for writing boilerplate code, such as HTML and CSS, and the issues faced when attempting more complex, custom code. It is noted that AI can be distracting and sometimes generates incorrect code, leading to a lack of trust in its output. The group also shares anecdotes about AI-generated code that required correction and the subsequent difficulty of getting accurate responses.

15:04

🔄 AI's Predictive Text Capabilities and the Risk of Hallucination

The conversation examines the predictive text capabilities of AI, noting its tendency to 'hallucinate' or generate incorrect information. The group discusses the analogy of AI trying to 'wing it' without a clear understanding of the task, leading to inaccuracies. They also touch on the potential for AI to backtrack and correct its predictions based on probability and user feedback.
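The "be more creative / be more accurate" knobs the group mentions usually come down to temperature scaling of the next-token probability distribution. A minimal sketch over toy logits (the numbers are illustrative): lower temperature sharpens the distribution toward the top token, higher temperature flattens it.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw next-token scores into probabilities.
    Lower temperature -> sharper (more 'accurate'),
    higher temperature -> flatter (more 'creative')."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy scores for three candidate next tokens.
logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, temperature=0.5)
hot = softmax_with_temperature(logits, temperature=2.0)
print(cold, hot)
```

At low temperature the top candidate dominates; at high temperature the probabilities drift toward uniform, which is why "creative" settings also make wrong-but-plausible continuations more likely.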

20:05

📈 AI's Evolution and Its Impact on Software Development

The group reflects on the rapid development of AI and its impact on software creation, comparing it to the speed at which the COVID-19 vaccines were developed. They discuss the importance of user feedback in improving AI and the potential for AI to become more accurate and creative over time. The conversation also includes the use of AI for summarizing documents and conversations, as well as its limitations in certain areas like code writing.

25:06

🍽️ Wrapping Up the Discussion and Looking Forward to Future AI Topics

The group concludes with a humorous note about AI replacing embedded engineers, acknowledging that while AI has come far, it is not yet capable of completely replacing human engineers. They plan to continue the conversation in a follow-up session, where they will explore the potential of AI to replace certain roles and the ethical considerations of AI development.

👋 Final Thoughts and Goodbyes

The final paragraph captures the group's light-hearted sign-off, with a member humorously referencing the 'monkeys typing' scenario to illustrate the potential for AI to eventually generate meaningful output. The group agrees to reconvene for further discussions on AI's role and capabilities.


Keywords

💡 AI in embedded systems

AI in embedded systems refers to the integration of artificial intelligence capabilities within devices that perform specific functions, such as sensors or home automation systems. In the video, the theme revolves around the application of AI in these systems, discussing how it can be utilized and its relevance in the context of embedded systems. For example, the participants discuss using AI to interact with data sheets and the potential of AI in improving the functionality of embedded devices.

💡 Large language models (LLMs)

Large language models are AI systems that have been trained on vast amounts of text data, enabling them to generate human-like text based on the input they receive. In the script, LLMs are mentioned in the context of their ability to process and interact with PDF documents, such as data sheets, which is a significant aspect of the discussion on how AI can be applied in various scenarios, including embedded systems.

💡 ChatPDF

ChatPDF is an AI tool that reads PDF documents and lets users ask questions about their content. In the video, one participant mentions using ChatPDF to extract information from datasheets, highlighting both the usefulness and the limitations of the AI's ability to understand and surface relevant information from these documents.

💡 Data sheets

Data sheets are documents that provide detailed information about a product's specifications, typically used in engineering and technical fields. The script discusses the use of AI to read and interpret data sheets, which is crucial for understanding how AI can assist in tasks such as hardware design and embedded system development.

💡 Proprietary hardware

Proprietary hardware refers to hardware that is unique to a company or individual and not available for public use. In the context of the video, the discussion about using AI locally for proprietary hardware data sheets emphasizes the importance of keeping sensitive information secure and within the company's control.

💡 Local AI models

Local AI models are AI systems that run on a user's own device or server, as opposed to cloud-based AI services. The script mentions the desire to use local AI models for handling proprietary data, which is important for maintaining the confidentiality of sensitive information and avoiding reliance on external cloud services.
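The local setup the speakers describe — Llama 3 served by Ollama on your own machine — is typically queried over Ollama's local REST API. A minimal sketch using only the standard library, assuming Ollama is running on its default port with the `llama3` model already pulled; the prompt is illustrative.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model, prompt):
    """Payload for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(prompt, model="llama3"):
    """Send a prompt to a locally running Ollama server.
    Nothing leaves the machine, which is the point when the
    question concerns a proprietary datasheet."""
    data = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires `ollama pull llama3` and a running Ollama server):
# print(ask_local_llm("Which register enables the ADC on this MCU?"))
```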

💡 Code generation

Code generation is the process by which AI systems create code snippets or entire programs based on user input or requirements. In the video, participants discuss using AI to generate code, particularly scripting and boilerplate code like HTML and CSS, but also highlight the challenges and inaccuracies that arise in more complex or custom code scenarios.
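The workflow one speaker describes — ask the AI for the scaffold, then write the interesting logic yourself — is easy to picture with his linked-list example. The sketch below (in Python rather than the C the speaker uses) is the kind of boilerplate such a prompt typically yields.

```python
class Node:
    """A node in a singly linked list: the sort of scaffold
    the speakers describe asking an AI assistant to generate."""
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def insert(self, data):
        """Insert a new node at the head."""
        node = Node(data)
        node.next = self.head
        self.head = node

    def remove(self, data):
        """Remove the first node holding `data`; return whether one was found."""
        prev, cur = None, self.head
        while cur:
            if cur.data == data:
                if prev:
                    prev.next = cur.next
                else:
                    self.head = cur.next
                return True
            prev, cur = cur, cur.next
        return False

    def to_list(self):
        """Collect values head-to-tail, handy for printing and tests."""
        out, cur = [], self.head
        while cur:
            out.append(cur.data)
            cur = cur.next
        return out

ll = LinkedList()
for x in (1, 2, 3):
    ll.insert(x)
ll.remove(2)
print(ll.to_list())  # head insertion reverses order: [3, 1]
```

The application-specific logic — what you actually do with the list — is the part the speakers say they still write by hand.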

💡 Code bloat

Code bloat refers to unnecessary growth in the size of a program's code, often due to redundant or overly complex code. The discussion mentions AI's tendency to generate bloated code that may not be efficient or optimal, and the need for engineers to carefully review and refine AI-generated code.

💡 Unit tests

Unit tests are a type of software testing methodology where individual components or units of a software are tested to determine if they are fit for use. The script discusses how AI can be used to generate unit tests, which can be beneficial in ensuring the quality and correctness of the code, although it also notes the limitations and the need for human oversight.
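The "all combinations of input data types and expected outputs" idea mentioned above can be sketched with `itertools.product`, which enumerates an input grid against a function under test. Both the `clamp` function and the grid here are hypothetical examples of the style an AI assistant often drafts.

```python
import itertools

def clamp(value, lo, hi):
    """Example function under test: restrict value to the range [lo, hi]."""
    return max(lo, min(hi, value))

# Enumerate combinations of inputs, the kind of exhaustive grid an
# AI assistant will happily draft from the function signature.
values = [-5, 0, 5, 15]
bounds = [(0, 10), (-10, -1)]
for value, (lo, hi) in itertools.product(values, bounds):
    result = clamp(value, lo, hi)
    assert lo <= result <= hi, (value, lo, hi, result)
    if lo <= value <= hi:
        assert result == value  # in-range inputs pass through unchanged

print("all clamp combinations passed")
```

As the Q&A notes, the generated grid still needs human review: exhaustive does not mean meaningful, and the expected-output side is where AI assistants most often go wrong.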

💡 Documentation gap

The documentation gap refers to the discrepancy between the information provided in design documents and what is actually needed by engineers to implement a system. In the video, the participants joke about the possibility of AI replacing embedded engineers, but acknowledge that the documentation gap is a significant reason why human engineers are still required in the development process.

💡 AI-generated errors

AI-generated errors refer to mistakes made by AI systems when they attempt to perform tasks such as code generation or data interpretation. The script provides examples of AI providing incorrect information or suggestions, such as incorrect linker flags, which highlight the need for verification and correction by human engineers.

Highlights

The group discusses the use of AI in embedded systems and large language models.

Wasim explores using AI to read and chat with data sheets, with mixed results.

The group debates the reliability of AI-generated information compared to human-written documentation.

AI's limitations in understanding and providing accurate information from data sheets are discussed.

Exploration of using local AI models like Llama 3 for proprietary hardware documentation.

The process of converting PDFs to text and embeddings for querying a model is outlined.

Google I/O's announcement of AI models that can ingest and respond to PDF documents is mentioned.

AI's utility in writing boilerplate code like HTML and CSS is highlighted, with limitations in C code.

The group shares experiences with AI-generated code inaccuracies and the need for manual correction.

AI's struggle with custom codebase constructs and the resulting 'noise' in code predictions.

The analogy of AI as a human trying to 'wing it' without a clear direction is used to describe its limitations.

The importance of feedback loops for AI improvement and the comparison to software development cycles.

AI's potential to replace embedded engineers is humorously dismissed by the group.

The group acknowledges the rapid pace of AI development, drawing parallels to the COVID-19 vaccine.

The practical applications of AI in summarizing documents and conversations are praised.

The group discusses AI's role in creating unit tests and its potential impact on software development.

Licensing and attribution issues related to AI-generated code are raised as concerns.

The group concludes with a cliffhanger on whether AI will replace embedded engineers, to be continued in the next session.

Transcripts

play00:04

but all right hey everyone we back again

play00:07

the five us uh you know

play00:09

finally had like intersection of free

play00:12

time and we have decided this time

play00:14

around to discuss on the use of AI in

play00:18

embedded systems yes well I think we'll

play00:21

just focus well or at least start from

play00:24

how we are using AI in general the large

play00:27

language models and then maybe go from

play00:30

there and talk about you know how

play00:31

relevant it is you know can we use it in

play00:33

embeded systems to what degree and so on

play00:36

and so

play00:37

forth so yeah whoever feels like jumping

play00:41

in go ahead jump

play00:44

in yeah I think Wasim had one

play00:48

interesting yeah so currently I'm

play00:51

[Music]

play00:52

exploring was same into the under the

play00:55

bus good yeah

play00:58

nice so current I'm exploring like how I

play01:01

can use AI to read or chat with the data

play01:06

sheets so I tried but uh the AI model

play01:11

which I was using it was chat PDF so

play01:15

some of the answers was very relevant

play01:17

and some of the answers it was not able

play01:20

to find it out like exact register which

play01:24

register I should update

play01:26

to and so on and so also is this a tool

play01:30

that's online where you have to kind of

play01:32

push your PDF yeah it is a chat PDF I

play01:36

see okay so what's this I don't think

play01:40

that is ai's problem uh if I give that

play01:43

data sheet to the person who wrote that

play01:44

data sheet he won't be able to find

play01:46

relevant information in

play01:51

either you know my my okay this is

play01:53

interesting because my nephew is trying

play01:55

to program the stm32 and for some reason

play02:00

he had to program the DC or something

play02:03

yeah and there there was

play02:05

like okay I'm paraphrasing him but the

play02:08

he reported that there were few bits

play02:10

that were undocumented and he found it

play02:12

on Reddit or somewhere you know finally

play02:15

cracked the problem that way uh but yeah

play02:17

I think the documentation is also

play02:20

reliable to like a good degree but you

play02:24

know not 100% yeah yeah that said you

play02:28

know along the same lines of what Wasim

play02:30

was mentioning I was also on the site

play02:33

trying to explore uh you know AIS that

play02:37

can or rather the llm models that can be

play02:39

used to chat with the PDFs but I wanted

play02:41

to do so like locally so I was try

play02:44

trying o lama lama 3 and any reason for

play02:47

the local way well I suppose I was

play02:51

trying it out for our audience majorly

play02:54

uh in the sense that you know if you're

play02:56

working on a proprietary piece of

play02:58

hardware and the data sheet for that is

play03:00

like local to your

play03:02

company then it wouldn't be like a good

play03:04

idea to push it onto yeah some cloud

play03:08

based AI so the old Lama at least I have

play03:12

had like good Su well good success I

play03:16

I've had some success running it locally

play03:19

at least the llm models run and it

play03:21

responds back to General queries answers

play03:24

like I don't rely on the answers usually

play03:27

the PDF part I have not cracked it uh

play03:29

there are a lot of uh tutorials that I'm

play03:33

watching that go around kind of you know

play03:35

they call it rag

play03:38

some generating so what happens is they

play03:44

there is like a bunch of uh python

play03:47

libraries that you need to kind of you

play03:50

know use to First convert the PDF

play03:52

document into just texts and then

play03:56

convert that text into like vectors or

play03:58

embeddings and then feed it

play04:00

as a training data to the llms and then

play04:03

after all of that you know go ahead we

play04:05

can go ahead and ask it questions so

play04:08

I've just seen few T tutorials didn't

play04:11

get to the point where you know I was

play04:12

feeding it the local PDF but yeah that

play04:15

seems something useful and relevant for

play04:18

us

play04:21

yeah I wonder how so most of these

play04:24

models have now start are become model

play04:27

right so we can attach the complete PDF

play04:32

document can we can we upload the PDF

play04:34

document into Bing or CH

play04:38

gity I don't know but in in you know

play04:43

Google IO they mention that you can uh

play04:46

you know provide a PDF document and it

play04:48

will pass the document and then it will

play04:51

you know act as it knows about that PDF

play04:55

and then you can ask any questions and

play04:57

it will you know respond to your queries

play05:00

I think there is might be because so the

play05:03

most of these chat most of these models

play05:05

are constrained by the context window

play05:08

right so it's a 2 Milli supports 1.5

play05:11

million context window so it can pass

play05:13

big

play05:14

documents right

play05:17

nice but then it would be consuming this

play05:20

PDF out of the drive right the Google

play05:23

Drive locally maybe locally ultimately

play05:26

that D uploaded to Cloud somehow yeah I

play05:29

think

play05:33

okay well I if you ask me I use AI

play05:37

mostly to write my blockware

play05:40

code

play05:43

like what I have find is it is really

play05:46

good when you're are writing uh scripts

play05:47

like python or any kind of JavaScript

play05:50

Etc it's really helpful to write

play05:53

blockware codes like HTML and

play05:55

CSS but what I have found is if you try

play05:58

writing SE code it can predict to some

play06:01

extent but if you are working on a

play06:03

custom database like very which where

play06:07

it's not generic construct it's very it

play06:10

sometimes create more noise than actual

play06:13

logic yeah

play06:15

so yes so what I so most of these Co

play06:19

code assists what they do is as you type

play06:22

like Snippets like code Snippets or auto

play06:24

complete will do the it will create some

play06:26

uh code something in light gray giving

play06:30

you a prediction you know maybe this is

play06:31

what you are trying to type the

play06:32

autocomplete feature in the IDS but the

play06:35

thing is those are limited to some

play06:38

extent like some words or some or one or

play06:40

two lines but AI what I have seen is

play06:43

mostly with co-pilot and others like it

play06:47

tries to be smart and create the

play06:48

complete logic out of it and it become

play06:51

and to me sometimes that logic is not

play06:54

right and I get distracted by that

play06:57

because I want to read that what it

play06:58

generated and that shifts my focus from

play07:01

what I was thinking to what it then I

play07:04

found then I find out it's useless I

play07:06

type I get context switch back into what

play07:09

I was typing I type that and it

play07:11

generates new a new autocomplete

play07:13

suggestion big which again distracts me

play07:15

and and ultimately what I do is so I

play07:18

have to disable the auto complete the

play07:20

Cod and I use it in the console mode

play07:25

like when I want it to work I go to

play07:26

console and type it can you do this for

play07:29

in the you know this this just reminded

play07:31

me the whole point of it creating noise

play07:33

so I also enabled this U again llama 3

play07:37

based you know there's some extension in

play07:39

vs code that can you know consume an LL

play07:42

uh llm model and then try and predict

play07:45

the code and my God it's just a pain

play07:50

yeah whatever rajit mentioned is like

play07:52

100% true you know I'm trying to type

play07:55

something and it predicts like three

play07:56

lines worth of garbage um and

play08:00

you know pretty much the same situation

play08:02

which is I think it's good for writing

play08:06

scripts um to some degree it does like a

play08:08

good job Python scripts or shell scripts

play08:12

C code really

play08:13

bad you know the worst scripts wherever

play08:17

you want to write the functional logic

play08:18

of the script right I have seen that I

play08:21

I'm better off writing it myself what I

play08:23

can do is this so for example even in my

play08:26

C code what I use it for is let's say I

play08:28

want a link implementation I just tell

play08:31

it Implement a link list for me in which

play08:33

the data structure should have XYZ

play08:34

elements mhm and then it generates the

play08:38

blockware code of what the node

play08:40

structure should look like insert remove

play08:43

insert delete except print and then I

play08:46

write the function logic what I whatever

play08:47

I want to do with that list so for these

play08:49

purposes AI is really

play08:51

good yeah I think that's fair by the way

play08:54

one other thing I want to call out is um

play08:57

there were cases in which I asked it to

play09:00

generate code and it kind of generated

play09:03

some incorrect code and then I call out

play09:05

saying hey you know these XYZ lines are

play09:08

wrong and then it says oh yeah yeah yeah

play09:10

you know sorry I made a mistake and goes

play09:12

ahead writes some new code which also

play09:15

doesn't work like 60% of the time it

play09:18

doesn't work and then I ask it to again

play09:21

correct it and it uses the same library

play09:24

or the same you know incorrect

play09:26

statements from the first response like

play09:28

for example if you some okay I'm

play09:30

forgetting the respon um kind of example

play09:33

that I asked

play09:35

it uh I had yeah okay this is the one so

play09:39

I was working on this course like videos

play09:42

for the Linker script course and then at

play09:44

some point I asked it hey you know how

play09:47

can I ask the Linker to strictly follow

play09:49

my Linker script because apparently

play09:51

linkers what they do is um while

play09:54

processing the Linker script they add

play09:57

stuff that is convenient so for example

play09:59

let

play10:00

say um I wanted to demonstrate that I'm

play10:04

only picking some section from a given

play10:08

object file and what happens is that

play10:11

object file has some content which is

play10:14

kind of in available in another object

play10:16

file like a global variable or something

play10:19

what I wanted to demonstrate was that if

play10:21

I specifically mention that only include

play10:23

one object file it should only include

play10:26

you know contents of that one object

play10:27

file the Linker is very convenient if

play10:30

you give it the inputs and the other

play10:32

object file has the content it will just

play10:34

add that in so I wanted to kind of you

play10:37

know uh try and find out if there is a

play10:40

flag compiler flag or not compiler but

play10:42

Linker flag that specifically restricts

play10:46

Linker to only process the Linker script

play10:48

I provided so yeah the you know lri

play10:51

model says oh there's like this minus s

play10:54

option that you can use I try it doesn't

play10:56

work then I tell it that hey you know

play10:58

this option doesn't work is like oh

play10:59

sorry my bad my bad try a RX then I try

play11:03

ARX and you know R is like an unknown

play11:05

option or something like that happens

play11:07

and then I'm like man this is also wrong

play11:09

sounds like tar Comm something like that

play11:12

yes I think r r and a might be relevant

play11:16

X also might be relevant but they don't

play11:17

do what you know the model told me yeah

play11:21

then I tell it that hey you know this is

play11:23

also incorrect it's not working and then

play11:25

it goes ahead and uses the minus s

play11:27

option again and so this has happened

play11:30

with me like this is one instance but

play11:32

this has happened with me over and over

play11:34

again enough times that I don't trust

play11:36

any output it gives me I cross check it

play11:38

on Google again or huc is big problem it

play11:44

just confuses it just hallucinate like

play11:46

hey it if you know let's say compiler

play11:49

has an option then Linker must also have

play11:51

a similar option just try out but you

play11:54

know in that sense it sounds as though

play11:56

it's becoming more and more human like

play11:58

because it's like hey you know I'll just

play12:00

wing it let's see how far this

play12:05

goes oh this didn't work no problem try

play12:10

RX

play12:12

confidence

play12:14

exactly it's good that we have a textual

play12:17

proof that what it said otherwise

play12:19

likey man when

play12:24

did that's you know this reminds me

play12:27

early days when this chat gbt Etc were

play12:29

there and you know I was trying out so

play12:32

at that time I was trying to work with

play12:34

the trace 32 and something else uh so I

play12:37

and very custom uh so that tool has a

play12:40

very custom debug uh script uh syntax

play12:44

and I'm like but it's quite common the

play12:46

manuals are out there so I'm like you

play12:49

know let's put AI to try it's uh rather

play12:52

than me going and learning it I want

play12:54

very small thing to do let me ask it and

play12:56

so I asked uh I think I asked multiple

play12:58

more model the same question and uh I

play13:01

asked them can you generate a trace 32

play13:04

script for me that can do XYZ

play13:06

functionality and it generated the code

play13:08

and I was like really amazed I'm like

play13:10

okay it generated the code it is really

play13:12

helpful I don't have to go through the

play13:14

complete manual to understand it I run

play13:16

that and the TR to complains that this

play13:18

is not even the

play13:20

syntax then I went back and read the

play13:22

manual and that was not the

play13:24

syntax

play13:25

ah I think one thing that's starting to

play13:28

surf is now is that the okay large

play13:31

language models are more or less just

play13:34

trying to predict the next word right

play13:37

they're just trying to predict more and

play13:39

more of the

play13:40

statement and

play13:42

uh yeah I mean if if you think of

play13:45

someone who is trying to just predict

play13:47

statements and you know that those

play13:49

statements happen to be something that

play13:51

look like

play13:52

code uh doesn't mean that you know that

play13:54

machine might be actually understanding

play13:56

what it has written like so at least to

play14:00

me that is super clear

play14:02

now so one analogy comes to mind like

play14:05

like the office scene where Michael spot

play14:07

says right I just start a statement

play14:09

knowing where without knowing where am I

play14:11

going and just you know wing it on at

play14:14

run time maybe yeah is also kind of

play14:17

doing this who knows who knows or at

play14:19

least well

play14:21

fundamentally it just trying to predict

play14:24

the next word the next word so not

play14:26

everything it says is accurate

play14:29

at least that's my learning and again

play14:32

I'm only exp I'm not really expert on

play14:35

this but is there any way it can

play14:36

backtrack saying hey you know this

play14:39

series of sentence doesn't lead me

play14:40

anywhere let me backtrack to the like

play14:43

start word and start again the do you

play14:45

know if that is something how would you

play14:47

decide when to termine it and question

play14:49

it like I would imagine like the next

play14:51

words probability are so low like you

play14:54

know the best choic is even the

play14:57

worst compared to the previous choices

play15:00

like had you gone past you know some

play15:04

words the probability list becomes so

play15:07

narrow that you will you feel like hey

play15:08

let me go back and try do you know if

play15:11

that oh I I suppose that is where those

play15:15

settings come in be more creative be

play15:17

more

play15:18

accurate uh I think tempure temperature

play15:22

yeah temperature settings and I suppose

play15:24

there is also one to help the AI kind of

play15:27

backtrack a little bit which is like

play15:29

regenerate the response it is in ch Chad

play15:33

i' seen seen those options I think it's

play15:35

it's there in every

play15:38

mod Gemini also like they produce three

play15:41

responses like I see okay okay

play15:47

Interesting, okay. But I do see that things can improve, and I think as it gets used more and more often, it will improve. Again, I think it's been one and a half years, and that's too short for a software life cycle.

Yeah, I don't think any of the big software products were amazing within one and a half years. I suppose the last time something got built at this rapid a pace was the COVID vaccines, and we don't know where they are going, or what kind of side effects they will have later in life, or even now. But okay, I'm not a medic.

He's saying he doesn't... what did you say, man? That you have never seen a piece of software which is quite perfect within one year of its initiation? One and a half years. Yes, one and a half years. I will show you my calculator which I built in college. Perfect, got it right the first time. My hello world, too. I'm sure if I give it to some product manager, he'll find 500 issues in it, and then he'll ask you to implement standups and multiple cycles, and that will take another... track progress, sprints, all of this, provide a heat map of how the progress is going, sprint planning, all of that.

So the thing is, when you use it, and nothing against it, in general the more users use it, the more feedback comes in, because everyone, well, like AI, we also hallucinate: you know, if XYZ feature were there in this, it would be really amazing. And that's the creative part of humans, right? Yeah, this calculator should also show the temperature, you know. Exactly, all of that good stuff.

Yes. Yeah, I think, like, right now, if you use it for generic questions, ones which are very popular and whose content is present on the net, then it will give you a precise answer: okay, this is how you would want to go ahead. But if you are asking about some specific area, then it might give you results where you don't know what is correct, and it also hallucinates, so you cannot really be sure; whatever you have received may not be correct, right? So you'll have to verify that.

But personally, I don't use it that much. Where I do use it, like on Google, it generates that summary, SGE, right? And for generic queries, like if you are asking, okay, how do I develop a particular driver, or firmware, or a particular algorithm or piece of software, it will give you some suggested content that is well formatted, line by line, so you can figure out, okay, this is the content, and then it will give you a signature of the code, so you can figure out what the code is, and then again some content and some more lines of code. So that shows you how you can use it and how you can learn from it for your benefit.

Yeah, I think I would agree with that. It does narrow the search space a lot in terms of doing the research. It can give a lot of meaningful responses; it feeds you the right keywords so you can go about looking further. Yeah, that, I think, is a major upshot of the LLMs: they reduce the research time.

Yeah. I think some of the use cases people have been highlighting are really amazing outside the code space. If we take it out of the code space, things like summarizing documents and summarizing conversations are really amazing. There were demos at Google I/O where they showcased its integration with Gmail and Workspace, where rather than going to each group and reading all the messages, you can add a chatbot to it and just ask, did anybody come to a conclusion on whether the event should happen or not? And the answer will always be: no, no one came to a conclusion.

I think as a productivity tool it might.

And as I said, even in code assist, what I have found is that it's really good at creating boilerplate code. Another use case is creating unit tests for your code. It can do a really great job at that.

Yeah, I think at that functional end, like the one where they say SWE, where the product manager puts in a requirement and it generates the code for you, and everybody says you don't need engineers, product managers can generate it themselves. Yeah, I think we are far off from that. Far away from that, man. That's what engineers would say even if we were not. I mean, just... yeah, don't worry about it, it's far away.

Yes. So, for unit tests, right, you cannot provide your actual code, but you have to specify, okay, what you are looking for, right?

I think you can, actually. So what happens is, the way most of these companies are offering it, if you look at Copilot etc., it does not train on your code base; it infers on your code base. And even then, they are not using it for the generic Copilot training.

So it never leaves the server. Correct. So, you know, what Rajit is pointing to is, given code that you have written in your IDE, the AI will try to predict what kind of tests to wrap around it. Yeah, it will do inferencing, not the training part. Okay. And the kind of inference it does, the basic version, is: it figures out the data types of the inputs, then, you know, feeds in all combinations and checks all the return values, and done.

Okay. And I think the other problem with code assist, and I think this may resolve over time, is licensing. Oh, nice. Yeah, so it's trained on data from the web, and it can generate code bases etc. and present them to you, and you are not even aware where it actually picked that code up from. Yeah, attribution would become a problem. Yeah, definitely.
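The basic inference described above, figure out the input types, feed in combinations, and check the return values, can be sketched as a crude test generator. This is a toy illustration only, not how Copilot or any real assistant works; the `clamp` function and the probe values are made up for the example.

```python
import inspect
import itertools

def clamp(x: int, lo: int, hi: int) -> int:
    """Hypothetical function under test."""
    return max(lo, min(x, hi))

# Probe values per parameter type, the way a naive generator might
# pick boundary-ish inputs after inspecting the signature.
PROBES = {int: [-1, 0, 1, 100]}

def generate_cases(func):
    """Read the annotated input types, feed in all combinations of
    probe values, and record the return value for each one."""
    sig = inspect.signature(func)
    pools = [PROBES[p.annotation] for p in sig.parameters.values()]
    cases = []
    for args in itertools.product(*pools):
        cases.append((args, func(*args)))  # pin down current behaviour
    return cases

cases = generate_cases(clamp)
# 4 probe values for each of 3 int parameters -> 64 recorded cases.
```

Tests produced this way only pin down the current behaviour (characterization tests); a human still has to judge whether, say, `clamp(5, 10, 0)` returning 10 for inverted bounds is actually the intended behaviour.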

Perfect. By the way, I have to call it out right now, and I think the internet will also get to know: it's dinner time on my end and my mom is calling me, so can I drop off? I think we can record another follow-up where we will take up the question, okay, will AI replace the embedded engineer? Yeah, we have a few questions which we would like to answer, the kind of questions that come up regularly on LinkedIn, and this is like a cliffhanger, but in a podcast.

Okay, so next time around we chat on the probability of AI replacing the embedded engineer. And it is zero. Zero. Just kidding. No one knows. I don't know. No comments. Cool, okay, so let's leave the comments for next time, not today, not now.

Yes. I only believe that they can just... I would say the only way for AI to replace embedded engineers is if it starts building its own hardware, because today, the documentation gap between the designer and the embedded engineer is the reason why embedded engineers are needed in a lot of places. Don't give AI ideas, man!

I don't think I am giving it ideas; people are already starting to think in that direction. Yeah, okay, maybe that's why Sam Altman needs 7 trillion to design an AI chip. Maybe he's asking ChatGPT to design that chip. Could be, could be.

Okay, cool, we'll discuss that next time. You know, sorry, I'll just make one final comment. It looks to me like that situation of, if you let the monkey put enough, you know, letters of the alphabet in a row... Given a long time, yeah, it will generate that.

Cool. Anyway, maybe, you know, I'll go have dinner, and let's meet up next time. I see a flying chappal on your way... You should... now... not... sorry, well, okay. I didn't get one. Mommy! Mom! Cut, cut, cut. Both of them are super amazing human beings, I love both of them. Yes. Why did you... why did you win at the... I didn't. I did. I... Anyhoo, let's drop off for now and let's catch up next time. Take care, bye-bye.
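As a footnote to the monkey-and-typewriter aside above, the "given a long time" part is easy to put a number on. A quick sketch, assuming a monkey hitting one of 26 lowercase keys uniformly at random (the target word "embedded" is just an arbitrary example):

```python
# Infinite-monkey arithmetic: a uniform typist hits one of 26 keys
# per keystroke, so a specific n-letter word fills a given window of
# n keystrokes with probability (1/26)**n.
ALPHABET = 26
word = "embedded"

p = (1 / ALPHABET) ** len(word)           # chance one window matches
expected_windows = ALPHABET ** len(word)  # ~2.09e11 windows on average

# At 2 keystrokes per second, treating windows as independent, the
# expected wait works out to thousands of years for one 8-letter word.
seconds = expected_windows / 2
years = seconds / (60 * 60 * 24 * 365)
```

So "a long time" means on the order of three thousand years for a single eight-letter word, which is part of why LLMs, which weight the next keystroke by learned probabilities instead of typing uniformly at random, are such a different proposition.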


Related Tags
AI, Embedded Systems, Hardware, Development, Data Sheets, Code Generation, Software Engineering, LLM Models, Chat PDF, Technical Discussion