Using Gemini 1.5 Pro to Automatically FIX GitHub Issues (Insane) (Part 2)
Summary
TLDR: In this video, the speaker shows how to solve GitHub issues with Google Gemini 1.5 Pro. He downloads the complete Ollama repository and asks Gemini 1.5 Pro to resolve GitHub issues against it. By uploading the entire Ollama folder to Google AI Studio, he can ask questions and receive answers that match the project's documentation and source code. The demonstration covers both simple and more complex issues, and the results show that the AI can be genuinely helpful in resolving GitHub issues, serving as a first line of support that saves maintainers of open-source projects time.
Takeaways
- 🌟 Google Gemini 1.5 Pro can be used to help resolve GitHub issues.
- 🔍 It resolves issues by searching the project's documentation and source code and surfacing the relevant information.
- 🚀 Its effectiveness comes from its ability to take in and understand a project's entire codebase.
- 📚 It can serve as a useful pair programmer and save time for developers who maintain open-source projects.
- 🛠️ It lets you upload a project repository and answers questions grounded in the provided source code and documentation.
- 📈 Google Gemini 1.5 Pro offers a 1-million-token context window for processing requests.
- 📌 It can also create or modify code, for example by editing and updating existing scripts.
- 🔧 It proposes solutions to specific GitHub issues by generating answers based on the provided code and documentation.
- 💡 Google AI Studio, powered by Gemini 1.5 Pro, assists with problem-solving by answering questions and giving instructions.
- 🔄 It can help update installation scripts by recognizing the existing script and modifying it accordingly.
- 💬 Interacting with Google Gemini 1.5 Pro can improve the quality of support on GitHub issues.
Q & A
What is Google Gemini 1.5 Pro and how can it be used?
-Google Gemini 1.5 Pro is an AI model that can be applied to resolving GitHub issues. Because it can take in a project's entire codebase, it can serve as a useful programming assistant or as a first line of support when triaging GitHub issues.
How do you apply Google Gemini 1.5 Pro to GitHub issues?
-Upload the project's entire code so the model has the full context, then paste specific questions or problems from GitHub issues to get proposed solutions.
What is the advantage of Google Gemini 1.5 Pro for resolving GitHub issues?
-Its advantage is that it can understand a project's entire codebase, which helps it produce more precise and effective solutions to reported problems.
What kinds of questions can Google Gemini 1.5 Pro answer?
-It can answer a wide range of questions about the codebase, as well as questions about specific problems discussed in GitHub issues.
How long does it usually take for Google Gemini 1.5 Pro to respond?
-Response time varies with the complexity of the question and the amount of context the model is processing. In some cases it can take a while before an answer is produced.
Can Google Gemini 1.5 Pro process and understand the entire Ollama codebase?
-Yes. The entire Ollama codebase can be uploaded, and the model processes and understands it well enough to answer specific questions or propose fixes.
Which kinds of GitHub issues does Google Gemini 1.5 Pro solve best?
-It does best on simple to moderately difficult issues that are clearly covered by the project's documentation or code.
Could Google Gemini 1.5 Pro also be used to create or modify code?
-Yes. It can generate new code or modify existing code, based on its ability to understand the context and requirements.
How do you use Google AI Studio with Google Gemini 1.5 Pro?
-Upload the entire codebase and documentation into Google AI Studio, then ask specific questions or make requests to get solutions or information.
What role can the AI play in supporting open-source projects?
-It can act as a first line of support by providing automated answers on GitHub issues, which speeds up resolution and saves developers time.
Are there limitations to using Google Gemini 1.5 Pro?
-There may be limits on the complexity of problems it can solve and on the accuracy of the proposed solutions. It also has to pick up the right context and avoid hallucinating.
Outlines
🤖 Solving GitHub Issues with Google Gemini 1.5 Pro
This section shows how to solve GitHub issues with Google's Gemini 1.5 Pro. The speaker explains that he has downloaded the complete Ollama repository and will ask Gemini 1.5 Pro to resolve GitHub issues for him. He demonstrates with a simple GitHub issue about uninstalling Ollama from Ubuntu WSL; the model returns an accurate answer that matches the documentation. He also explains how he downloaded the repository and uploaded it to Google AI Studio to illustrate the workflow.
🔍 Testing Simple and More Complex GitHub Issues
The speaker walks through solving both simple and somewhat harder GitHub issues with Google Gemini 1.5 Pro. He picks an easy problem from the issue tracker and shows how to prompt the AI toward a solution, then tries a somewhat harder one, where the answer is less convincing. He also explains how he selects GitHub issues and how to phrase questions to the AI to get solutions.
💡 Applying AI to Resolving GitHub Issues
This section shows how AI can help resolve GitHub issues. The speaker selects various issues and asks Google AI Studio for solutions. He also discusses modifying the installation script to support features such as auto-detecting the init system on Linux. He stresses that AI can be a useful tool for resolving GitHub issues, or at least for providing a first line of support for open-source projects.
Mindmap
Keywords
💡GitHub
💡Google Gemini 1.5 Pro
💡Software Engineering
💡Ollama
💡Ubuntu WSL
💡AI Studio
💡Conversation History
💡Docker
💡CUDA
💡Installation Script
💡OpenRC
💡Automated Support
Highlights
Google Gemini 1.5 Pro is used to solve GitHub issues, showcasing its capabilities on real-world software engineering tasks.
Anyone with 1.5 Pro access should be able to replicate the demonstrations shown in the video.
A new GitHub issue about uninstalling Ollama from Ubuntu WSL is used as a test case for the AI's problem-solving skills.
The AI provides a correct solution that matches the official documentation, demonstrating its ability to ground answers in project docs.
The Ollama repository is downloaded and uploaded to Google AI Studio; its MIT license permits this use.
Google Gemini 1.5 Pro offers a 1-million-token context window, which is used to ingest and process the uploaded Ollama folder.
The AI answers both simple and complex questions based on the Ollama source code and documentation, showing versatility across query types.
The AI successfully walks through the process of running an LLM with Ollama, providing accurate and detailed instructions.
The AI addresses a GitHub issue about stopping conversation history from influencing responses, offering a potential (if unverified) solution.
The AI's response to a "response format not supported" GitHub issue shows its ability to understand and correct API request mistakes.
The AI suggests creating a Dockerfile with the necessary CUDA libraries to address a GitHub issue about building CUDA-ready Docker images.
The AI proposes an auto-detect feature for the installation script to identify the current init system on Linux machines, showcasing its code-modification skills.
The AI's ability to understand and edit code is demonstrated by its attempt to update the install.sh script based on the detected init system.
The AI's performance in solving GitHub issues suggests its potential as a supportive tool for developers and maintainers of open-source projects.
The AI's capacity to propose solutions to GitHub issues points to a promising future for AI in software development and debugging.
The video demonstrates the practical application of Google Gemini 1.5 Pro in coding and software engineering tasks, beyond benchmark numbers.
The AI's interaction with the Ollama codebase and GitHub issues offers a glimpse of its potential as a collaborative programming tool.
Transcripts
I can solve GitHub issues with Google Gemini 1.5 Pro. Everyone seems to be super impressed with Devin, supposedly the world's first AI software engineer, and one of the important benchmarks Devin used is real-world software engineering performance: the percentage of GitHub issues resolved. I can solve GitHub issues with Google Gemini 1.5 Pro too, and if you have 1.5 Pro access, you should be able to replicate pretty much everything I'm showing in this video.

I'm going to take the entire Ollama repo and ask Gemini 1.5 Pro to solve GitHub issues for me. First I'll pick easier issues; over the course of the video we'll also see whether it can handle slightly more complicated ones. Let me quickly show you a demo.

This is a new GitHub issue that was opened just yesterday: "How can I uninstall Ollama from my Ubuntu WSL?" I take it, paste the same text into Google AI Studio, and say: solve the following GitHub issue — how can I uninstall Ollama from my Ubuntu WSL? It then gives me the answer: first stop the service, then disable it, then remove the files at the paths where Ollama lives — which is exactly what the documentation says, and that literally solves this particular GitHub issue.
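As a concrete sketch, the steps the model returned correspond to the commonly documented Linux uninstall commands. The function below only prints the commands rather than running them, so it is safe to execute anywhere; the exact paths may differ on your system.

```shell
# Print (rather than run) the typical Ollama-on-Linux uninstall steps,
# so this sketch can be executed safely. Paths are the commonly
# documented defaults and may differ on your machine.
print_ollama_uninstall() {
    cat <<'EOF'
sudo systemctl stop ollama          # stop the running service
sudo systemctl disable ollama       # keep it from starting on boot
sudo rm /etc/systemd/system/ollama.service
sudo rm "$(which ollama)"           # remove the binary
sudo rm -r /usr/share/ollama        # remove models and data
EOF
}

print_ollama_uninstall
```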
Successfully, that's one out of one solved with Google Gemini 1.5 Pro. How did we get here? That's something I want to share. What I did was download the entire Ollama repo — thankfully it's MIT licensed, so I'm not violating any copyright here. Downloading a repo isn't hard: you can either git clone it to your local machine, or go to GitHub and download the zip file. After downloading the zip, I unzipped it and got the whole folder. This is the complete Ollama folder: it has the documentation, the source code, the examples — basically everything you need to understand Ollama.

Google Gemini 1.5 Pro comes with a 1-million-token context window. As you can see, I've already used close to 300,000 of the 1 million tokens, because the first thing I did was upload the entire Ollama folder to Google AI Studio, which is now powered by Gemini 1.5 Pro. After that, I can ask any question, starting simple and moving on to more complicated ones.

For brevity I'll edit out the response time. It can be long depending on how many tokens you've got, so I won't make you watch the whole wait, but understand that it takes a little while to run. The first question is a simple one: how to use Ollama for running an LLM? It's a very poorly formed question, but as you can see, it is sent to this session, which has knowledge of the Ollama source code and documentation. The model will draw on both to produce an answer, which I'll evaluate against the documentation. Meanwhile, I can open the documentation myself: there's a Linux guide, there's a readme.md with a quick start — download a model like this, then just run it.

When we go back to Google AI Studio, it is guiding us through exactly that. To the question "how to use Ollama for running an LLM?" it says: first install Ollama, then pull the model, then run the model and ask your question. That is 100% the right way to run Ollama.
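The quick-start flow the model described boils down to two commands. The model name below is just an example, and the snippet is guarded so it degrades to printing the commands on machines where the `ollama` CLI is not installed:

```shell
# Ollama quick start: pull a model, then run it with a prompt.
# MODEL is an example name; substitute whichever model you want.
MODEL="llama2"

if command -v ollama >/dev/null 2>&1; then
    ollama pull "$MODEL"
    ollama run "$MODEL" "Why is the sky blue?"
else
    # No ollama on this machine: just show what would be run.
    echo "would run: ollama pull $MODEL && ollama run $MODEL"
fi
```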
In fact it goes further and gives me a couple of invocation options — interactive mode among them — and then it gives me the path to the model file. Let me click the file it points to: hopefully it's not hallucinated... it's not, and I have the entire Modelfile for customizing the LLM with, say, my own system prompt, my own temperature value, my own hyperparameters — which also checks out completely. It also mentions GPU acceleration: if you have a compatible NVIDIA GPU, Ollama can leverage it for faster inference, provided you have the necessary CUDA drivers and configuration.
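A Modelfile like the one the model pointed to is how you bake in a system prompt and sampling hyperparameters. A minimal sketch (model name and values are illustrative):

```
FROM llama2

# Sampling hyperparameter: lower values make answers more deterministic.
PARAMETER temperature 0.7

# System prompt baked into the custom model.
SYSTEM "You are a concise assistant that answers only the question asked."
```

You would then create and run the customized model with `ollama create mymodel -f Modelfile` followed by `ollama run mymodel`.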
So the first, very simple test it passed successfully. The second thing I want to do is pick one of the GitHub issues. I'll open the Issues tab and start with easy ones, filtering by the "question" label. Let's find a question that's easy to read. Here's one — integrated GPU support — but the answer goes back and forth and back and forth, so I'm not picking that kind of issue; it would be difficult for us to validate whether a proposed fix actually works.

Let's pick another issue. The maintainer's answer on it says: there is no way to disable this in the REPL, although if you want to clear out the current context you can load the model again. Okay, so there's some issue here. I take the question, come back to AI Studio, and write: "Here is a GitHub issue on the Ollama repo — help me solve it. Paste the solution." I paste the issue and send it; I'm not putting much effort into formatting, the whole text just goes in.

Meanwhile, let me read the issue myself for my own understanding. The author asks how conversational history is fed back into the template from the model file: "Here is my model file. I'm able to do conversational question answering in the terminal, but I'm not sure how the template takes care of the history. How do I disable this behavior? I want to run a fine-tuned model that answers based only on the current question; I don't want it influenced by previous questions." The maintainer's answer is that right now there's no way to clear this out: you can freshly load the fine-tuned model — the model stays in memory so it isn't re-downloaded — and reloading makes it forget everything from before. That's the only option.

Let's see if Google AI Studio figures that out. It gives a solution, though I don't think it has fully figured it out. It explains how the conversation is fed back — okay, cool. On disabling the history's influence, it says you can reduce the num_ctx context parameter (I don't think that's valid), and that you can clear the context in the terminal with a "set context" command to reset it. I'm not sure how correct that is — will it work? It says that if you want to completely isolate each question from previous interactions, you can manually clear the context before each new question in the terminal, like this. Honestly, I don't know if it will work; if you think it will, let me know in the comment section. But it did give me an answer.

Let's pick the next GitHub issue. This one's title is "Response format not supported": when sending a request to the OpenAI-style endpoint, the author doesn't get the requested JSON. The answer on GitHub is: it looks like you may be sending a response schema in the response format; this is not supported by Ollama — currently only the format "json" is supported. So that's the solution, and somebody else added a workaround. Let's try the same thing. I paste the entire issue, and it gives me a response back. Once again the response is slightly more verbose — maybe we can control that with an instruction — but one of its answers is exactly what the GitHub answer says: use the format "json". As you can see, it says: correct the format parameter — double-check that the format parameter in your API request is set to "json"; this is essential for requesting JSON output. It even gives me example code that sends the request and gets back exactly that format. I'm not validating that last part, but it explains how to do it and essentially solves the issue.
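For reference, a minimal request body to Ollama's native generate endpoint (`POST http://localhost:11434/api/generate`) that asks for JSON output looks roughly like this; the model name and prompt are illustrative:

```json
{
  "model": "llama2",
  "prompt": "List three primary colors as a JSON array under the key \"colors\".",
  "format": "json",
  "stream": false
}
```

The key point, matching the answer on GitHub, is that `format` takes the string `"json"` — not a full response schema.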
Let's pick one more GitHub issue — one of the closed ones this time, because we know it has a solution: build a CUDA-ready Docker image. The issue says the official container image currently does not contain the necessary CUDA libraries, which is inconvenient: "I see you have provided ROCm images for AMD GPUs; can you provide CUDA images? If that's not feasible, how about providing a specific Dockerfile? I'm using this Ollama container, and the CUDA libraries are missing." Let's see if Google AI Studio can figure out the answer. I send the same question — currently the official container image does not include CUDA libraries — and wait.

I've got a response back. It agrees that this isn't possible out of the box, and it gives me the option of creating a Dockerfile that starts from the current Ollama image, installs whatever CUDA libraries are required, then builds and runs that image. There's also a second option: use NVIDIA's Container Toolkit. Let me click the link it gives and see if it works — it does; it's not a hallucinated link. And we have the response back.
Once again I'm not sure if it works, but it looks compellingly correct. So far we have tried a bunch of easy issues, and in many cases I found the responses really good. Now, instead of taking a question-labeled GitHub issue, let's take something like a feature request. I'll pick this one: OpenRC init support. It says it would be really nice if the installation script had an auto-detect feature to identify the currently running init system on the Linux machine. I paste the entire thing into AI Studio, ask it to solve this, and send it. Because this session already knows the code, on top of its external knowledge, it will be really interesting to see whether it can create new code or modify the existing install.sh script.

It took about 50 seconds to respond. It says it agrees that an auto-detect feature in the install.sh script would be helpful, and here is a possible approach: detect the init system — check for systemd, or check for OpenRC, which is exactly what this person asked for — then install the appropriate Ollama service file accordingly, integrating OpenRC support into install.sh. It tells us to replace the existing init-system-specific install logic in the install.sh file. And if we go to the repo, we can indeed find that install.sh script.

Now what I'm going to do is say: replace it yourself and give me the new install.sh file. The reason I'm asking is that I want to know whether it can actually fetch install.sh, make the changes in the right place, and return the whole file. That's important, because anybody can give this suggestion; the fact that it knows all this information makes it far superior only if it can actually edit the code and hand it back. We got the updated code — though honestly, the updated file looks much smaller than the original, so it probably didn't take the code directly from the repo and has likely dropped a lot of parts. Maybe we should explicitly tell it to go fetch the file first. Still, I think it did a decent job of identifying the init system and, based on that, emitting the corresponding installation steps for whatever init system the machine has.
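The detection step the model proposed can be sketched as a small POSIX shell function — a hand-written illustration of the idea, not the code the model actually produced. It checks PID 1 for systemd and falls back to probing for OpenRC:

```shell
# Sketch of the init-system auto-detection the feature request asks for.
# Illustrative only; not the model's actual output.
detect_init_system() {
    if [ -r /proc/1/comm ] && grep -q '^systemd$' /proc/1/comm 2>/dev/null; then
        # PID 1 is systemd.
        echo systemd
    elif command -v rc-service >/dev/null 2>&1; then
        # rc-service is the OpenRC service-manager front end.
        echo openrc
    else
        echo unknown
    fi
}

detect_init_system
```

install.sh could then branch on the returned value to install either a systemd unit or an OpenRC service script.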
At this point, I just wanted to give you a glimpse of how to solve GitHub issues with AI. I think it is extremely helpful to have an AI system that can understand the entire codebase of your own project: it can be a useful pair programmer, and it can help in solving GitHub issues — or at least provide a first line of support, with an AI-generated answer added automatically to the GitHub issue, which could save time for the developer maintaining an open-source project. If you are interested in me checking out any other library, let me know in the comment section, but I'm super interested in trying Google Gemini 1.5 Pro for more coding.