GPT-4 Makes Old ChatGPT Look Like a JOKE!
Summary
TL;DR: In this video, Nick explores the newly released GPT-4 model by OpenAI, testing its capabilities in software engineering and solution architecture. As a ChatGPT Plus member, he dives into the model's advanced reasoning and conciseness, setting it a challenge to create a REST API for movies using C# and .NET 6, with PostgreSQL as the database. Impressed by its detailed and accurate responses, including handling DB update concurrency and suggesting secure practices for connection strings, Nick contemplates the potential impact on junior and mid-level developers' jobs, as GPT-4 demonstrates the ability to perform tasks traditionally requiring human expertise.
Takeaways
- 🧑💻 The video introduces the new GPT-4 model by OpenAI, which is designed for chat and is available to the speaker as a ChatGPT Plus member.
- 🔍 The speaker plans to test the GPT-4 model's capabilities in software engineering, solution architecture, and common questions that junior and senior developers would need to answer.
- 📝 The GPT-4 model is tasked with creating a REST API in C# with .NET 6, using PostgreSQL for the database and Entity Framework for data access, demonstrating its ability to generate code and handle complex tasks.
- 🛡️ The model correctly suggests using secret managers or environment variables for secure handling of connection strings, instead of storing them in plaintext.
- 🔒 It provides detailed instructions for implementing secure practices with Azure Key Vault and suggests using environment variables for production settings.
- 📚 The GPT-4 model offers comprehensive responses, including code examples and explanations, which are impressive for their depth and accuracy on the first attempt.
- 📝 It also provides guidance on writing integration tests for the API using a web application factory, which could save developers significant time and effort.
- 🔧 The model demonstrates knowledge of performance testing tools like k6 and provides a basic setup for load testing an API, showing adaptability to different testing scenarios.
- 🏗️ It can generate Infrastructure as Code (IaC) scripts for deploying an API in AWS using Terraform, including services like RDS, ECR, and ECS with Fargate.
- ⚠️ The model acknowledges the limitations of a single instance of Fargate tasks and suggests the need for scaling groups and an Application Load Balancer (ALB) to handle high traffic.
- 🤖 The video concludes with the speaker expressing both excitement and concern about the capabilities of GPT-4, as it could potentially replace the work of junior and mid-level developers with proper oversight.
Q & A
What is the main topic of the video by Nick?
-The main topic of the video is to explore and evaluate the capabilities of the newly released GPT-4 model by OpenAI, specifically in the context of software development tasks such as creating a REST API with C# and .NET 6.
What does Nick mention about his previous experience with the older model of GPT?
-Nick mentions that his previous experience with the older model of GPT was impressive but not without its challenges. He had to try many times until it provided answers that made sense, indicating it wasn't perfect.
What is the significance of the GPT-4 model's ability to handle DB update concurrency exceptions?
-The ability to handle DB update concurrency exceptions is significant because it shows the model's advanced understanding of common software development issues and its capacity to provide solutions that are robust and considerate of real-world scenarios.
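The concurrency handling praised here follows a standard EF Core pattern. A minimal sketch, assuming illustrative names (`Movie`, a `_context` DbContext field, a `/movies` route) that are not taken verbatim from the video:

```csharp
// Sketch: catching an optimistic-concurrency failure during an update.
// All names here are illustrative assumptions, not the video's exact code.
[HttpPut("movies/{id}")]
public async Task<IActionResult> UpdateMovie(int id, Movie movie)
{
    if (id != movie.Id)
        return BadRequest();

    // Mark the incoming entity as modified so EF Core issues an UPDATE.
    _context.Entry(movie).State = EntityState.Modified;

    try
    {
        await _context.SaveChangesAsync();
    }
    catch (DbUpdateConcurrencyException)
    {
        // Another request changed or deleted the row first.
        if (!await _context.Movies.AnyAsync(m => m.Id == id))
            return NotFound();
        throw; // genuine concurrency conflict: let the caller retry
    }

    return NoContent();
}
```

The point of the pattern is distinguishing "row no longer exists" (404) from a genuine write conflict, which many hand-written controllers forget to do.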
What is the role of REST API in the context of the video?
-In the context of the video, a REST API is used as a test case for the GPT-4 model to demonstrate its capability to generate code for creating, retrieving, updating, and deleting movie data, similar to platforms like Rotten Tomatoes or IMDb.
Why does Nick consider the GPT-4 model's response to be impressive?
-Nick considers the GPT-4 model's response impressive because it provided a complete and correct code solution for creating a REST API on the first try, including handling of concurrency exceptions and proper RESTful route naming conventions.
What security issue does Nick identify with the GPT-4 model's initial code example?
-Nick identifies a security issue with the GPT-4 model's initial code example where the connection string is stored as plain text in the app settings, which is considered a bad practice.
What is the GPT-4 model's suggestion to improve the security of the connection string?
-The GPT-4 model suggests using secret managers like Azure Key Vault or AWS Secrets Manager, and loading sensitive information as environment variables to improve security.
How does Nick feel about the potential impact of AI tools like GPT-4 on software developers' jobs?
-Nick expresses a mix of skepticism and fear about the potential impact of AI tools like GPT-4 on software developers' jobs, as he believes the tool could replace many tasks that junior and even mid-level developers perform, making the process more efficient.
What is the role of integration tests in the context of the video?
-In the context of the video, integration tests are used to validate the functionality of the REST API created by the GPT-4 model, ensuring that the API works as expected when all components are combined.
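A minimal sketch of such a test, assuming an xUnit test project and a hypothetical `/movies` route; `Program` refers to the API's entry point class exposed for testing:

```csharp
// Sketch of an integration test with WebApplicationFactory.
// "Program" and the /movies route are assumptions based on the video,
// not its exact code.
public class MoviesApiTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _client;

    public MoviesApiTests(WebApplicationFactory<Program> factory)
        // CreateClient boots the whole API in memory and returns
        // an HttpClient wired to it.
        => _client = factory.CreateClient();

    [Fact]
    public async Task GetMovies_ReturnsSuccess()
    {
        var response = await _client.GetAsync("/movies");
        response.EnsureSuccessStatusCode();
    }
}
```

Because the factory hosts the real pipeline (routing, DI, middleware), these tests exercise the API end to end rather than mocking the controller in isolation.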
What does Nick explore beyond .NET specific tasks in the video?
-Beyond .NET specific tasks, Nick explores the use of the GPT-4 model for infrastructure as code (IaC) scripting, performance testing with k6, and scaling solutions to handle high loads, demonstrating the model's versatility.
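A k6 script of the kind described typically has the shape below; it runs only under the k6 CLI (`k6 run script.js`), and the endpoint URL and stage durations are assumptions, not taken from the video:

```javascript
// Sketch of a k6 load test: ramp up, hold normal load, ramp down.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '30s', target: 20 }, // ramp up to 20 virtual users
    { duration: '1m', target: 20 },  // hold normal load for a minute
    { duration: '30s', target: 0 },  // ramp back down
  ],
};

export default function () {
  // Hypothetical local endpoint for the movies API.
  const res = http.get('http://localhost:5000/movies');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```

The staged ramp matches the load-test profile the video describes; soak or spike tests would use the same structure with different `stages`.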
Outlines
😲 Exploring GPT-4's Capabilities in Software Development
In this paragraph, Nick introduces his experiment with OpenAI's newly released GPT-4 model, emphasizing his excitement and lack of prior experience with this version. As a ChatGPT Plus member, he has immediate access to the model and plans to test its capabilities in software engineering and solution architecture. He intends to ask the AI a series of questions that a junior or senior developer might encounter in their daily tasks. The paragraph highlights Nick's anticipation and the potential efficiency of GPT-4 compared to human developers.
🔒 Addressing Security Concerns with Connection Strings
The second paragraph delves into the issue of security when handling connection strings in application settings. Nick acknowledges the advice from a friend and seeks GPT-4's suggestions for more secure practices. The AI recommends using secret managers like Azure Key Vault for local and production environments, providing code examples for implementing these solutions. It also covers platform-specific settings and the importance of scoping the secret management tools to the service in use.
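The environment-variable approach recommended here can be sketched in a .NET 6 `Program.cs` as follows; the connection-string name `MoviesDb` and the DbContext type are hypothetical, not from the video:

```csharp
// Sketch: keep the connection string out of appsettings.json by
// reading it from configuration, which ASP.NET Core also populates
// from environment variables (ConnectionStrings__MoviesDb maps to
// the configuration key "ConnectionStrings:MoviesDb").
var builder = WebApplication.CreateBuilder(args);

var connectionString = builder.Configuration.GetConnectionString("MoviesDb")
    ?? throw new InvalidOperationException(
        "Connection string 'MoviesDb' is not configured");

// Npgsql provider for PostgreSQL, as used in the video.
builder.Services.AddDbContext<MovieDbContext>(
    options => options.UseNpgsql(connectionString));

var app = builder.Build();
app.Run();
```

In production the variable would come from the platform's secret store (Azure Key Vault, AWS Secrets Manager) rather than being set by hand, which is exactly the scoping the video stresses.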
🚀 Scaling Infrastructure for High-Demand Applications
In the third paragraph, Nick discusses the need for infrastructure that can handle a high volume of requests, specifically 10,000 requests per second. He asks GPT-4 to provide Infrastructure as Code (IAC) scripts for deploying an API in AWS using Terraform. The AI suggests using RDS for the database, ECR for Docker image registry, and ECS with Fargate for running the service. It also addresses the need for scaling by recommending the use of an ECS service with an Application Load Balancer (ALB) to manage the load.
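The scaling setup the AI recommends could be sketched in Terraform roughly as follows; the resource names, capacities, and CPU target are assumptions for illustration, not the video's actual script:

```hcl
# Sketch: autoscale the ECS/Fargate service behind the ALB.
# All names and thresholds here are illustrative assumptions.
resource "aws_appautoscaling_target" "api" {
  service_namespace  = "ecs"
  resource_id        = "service/${aws_ecs_cluster.main.name}/${aws_ecs_service.api.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 2
  max_capacity       = 20
}

resource "aws_appautoscaling_policy" "cpu" {
  name               = "api-cpu-scaling"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.api.service_namespace
  resource_id        = aws_appautoscaling_target.api.resource_id
  scalable_dimension = aws_appautoscaling_target.api.scalable_dimension

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
    # Add tasks when average CPU exceeds 60%, remove them below it.
    target_value = 60
  }
}
```

This is the piece missing from a single-task Fargate deployment: the ALB spreads traffic, but without an autoscaling target and policy the desired count never grows under load.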
Keywords
💡GPT-4
💡REST API
💡C# and .NET 6
💡Postgres
💡Entity Framework
💡Startup.cs
💡Concurrency
💡Infrastructure as Code (IaC)
💡Terraform
💡ECS and Fargate
💡Load Testing
Highlights
Introduction of the new GPT-4 model by OpenAI and its potential to outperform human developers in certain tasks.
The presenter, Nick, is a ChatGPT Plus member and has access to the GPT-4 model for testing.
Comparison of the GPT-4 model with the default and Legacy models, highlighting its advanced reasoning and conciseness.
Rate limiting of the GPT-4 model to 100 messages every four hours due to computational intensity.
Creation of a REST API using C# and .NET 6 for managing movies and ratings, demonstrating GPT-4's coding capabilities.
Inclusion of proper async programming and handling of DB update concurrency exceptions in the API code.
Correct naming conventions for REST API routes and proper use of location headers for resource creation.
Provision of a comprehensive explanation and documentation alongside the generated code.
Discussion on the security implications of storing connection strings in plaintext and suggestions for improvement.
Recommendation to use Azure Key Vault or other secret managers for securing connection strings.
The model's ability to generate detailed code for AWS integration, including the use of AWS Secret Manager.
Generation of integration tests for the API using the Web Application Factory, saving developers time.
Performance testing of the API using k6, with the model providing a valid and detailed test script.
Infrastructure as Code (IAC) scripts for deploying the API in AWS using Terraform, showcasing the model's versatility.
Discussion on scaling the API to handle 10,000 requests per second with ECS and ALB configurations.
Reflection on the potential impact of GPT-4 on the job market for developers and the efficiency it could bring.
The presenter's mixed feelings of skepticism, fear, and excitement about the capabilities of GPT-4.
Transcripts
this basically eliminates the need of
having some mid or Junior developer
doing the work for you and in many cases
this might actually be faster and more
efficient than that developer hello
everybody I'm Nick and in this video I'm
going to try the brand new GPT-4 model
that was just released by OpenAI in
ChatGPT now I am a ChatGPT Plus member so
I can already use it as you can see it
is an option in this drop down however I
haven't tried it at all I just woke up
to the announcement and I'm walking into
this blind the previous video I made on
ChatGPT with the old model was
impressive but at the time of recording
I had already tried the model and knew
what it could and could not do and I had
to try many many times until it
eventually got to the right answers that
made sense needless to say it wasn't
great in this video I will only give it
a single chance and I have a set of
questions I want to ask it both in
software engineering and solution
architecture as well as questions that
Junior mid and Senior developers would
need to answer as part of their day job if
you like this kind of content and you want to
see more make sure you subscribe ring
that notification Bell and for more
training check out nickchapsas.com
alright so let's see what we have here
so we have these three models right the
default which is supposed to be fair
in reasoning very fast and not really
concise then you have the Legacy model
which I don't know if it was the
original GPT or not but it seems to be a
previous ChatGPT Plus model which has
different characteristics and then GPT-4
which is supposed to be the most advanced
where you have excellence in reasoning
and conciseness but it's not as fast and
we're gonna use that oh interesting so
the first thing I see here is that
there's a cap of 100 messages for every
four hours so I guess they're rate
limited because of how
computationally intensive it is so what
I'm going to ask it as the first
question is please create a REST API
with C# and .NET 6 that allows
users to create retrieve update and
delete movies and also rate them
think Rotten Tomatoes or IMDb an API for
those things where you can list a movie
and rate it and then use Postgres for
the database and Entity Framework for the
data access Entity Framework is like an
ORM in .NET so let's see what it can do with
that so straight away just like before
we have the same stream response and I
would say it's equally as slow so let's
wait and see what it spits out profiles
are good everything here is things I
would do so we have the settings we get
the whole file previously sometimes you
would get it sometimes you would only
get the line you need to add you have
the model and it is using NT framework
which allows you to have nested objects
that point into other objects and
generated database on demand then it
knows to make a movie DB context as well
movies and ratings great now it's going
to use the Startup.cs which we don't
really use anymore in C# but given
when this was trained this was very much
relevant now one of the things I
wouldn't do in that Startup.cs is run
migrations there I would actually have
them in the Program.cs but that is not a
huge mistake I'd say it goes straight away
into using a controller it injects the
DB context and then it uses async
programming properly oh that's
impressive for update movie it also
handles DB update concurrency exceptions
which many people would actually forget
to cover so that's pretty cool and then
it knows to name things like the routes
appropriately as a rest API should which
is the plural of the entity you're
trying to deal with or the resource
you're trying to deal with and then it
correctly points to the action that is
supposed to give you the location header
to indicate that on movie creation
that's where you can find your movie so
that's really really impressive oh and
fantastic in the end we also get an
explanation like a documentation of
everything I'm gonna tell you what
except for the thing where I had to
explicitly say continue when the thing
stopped this is a perfectly valid
response that you can actually use and
you wouldn't be in any way wrong now
would this be done better sure but
you're walking into more opinionated
routes this is one of the most generic
way you can do that and it's absolutely
fine and it's the first try all right
let's see how we can make it go further
one of the biggest problems with this is
that the connection string is a raw
string with passwords and usernames as
plain text in the app settings.json file
usually what you would do is you would
actually use a Secrets manager like
Azure key Vault or AWS Secrets manager
or you would load them as environment
variables scope to that specific service
so what I want to ask it is what every
developer might ask hey so my mate said
that having connection strings in the
settings as a clear text is a bad
practice any suggestions on how to make
that more secure
let's see what it says your mate is
correct oh cheers mate interesting so it
acknowledges that it is not secure and
it suggests secret managers like Azure
key Vault or other tools interesting
that it is azure biased and I didn't get
any other suggestion like
um AWS ones oh that's so cool so for
local development use secret manager
tool and what am I supposed to use for
production oh environment variables it
is not wrong as long as there's a scope
to the thing that's loading them that's
absolutely fine and it also suggests
Azure key Vault and it gives me the code
that is nuts I'm wondering if the
Microsoft money makes it be more Azure
biased than AWS biased okay I'm gonna go
a bit off the scripted questions because
I want to see if it can do this with AWS
as well actually we are using AWS in my
company and I'm just gonna tell you that
in the previous oh my God straight away
AWS secret manager in the previous video
I had to really guide it and really put
things into context for it to act
actually give good results
this just goes into hey here's the
documentation read about Secrets manager
add the NuGet package oh my God and not
only that it goes way more in depth into the
thing it is talking about the previous
model would just give you way less text
and basically tell you for the rest just
figure it out here we're getting
everything actually nuts
I I I have no words Jesus the detail it
goes into even Windows specific and then
I guess Linux or Mac specific settings
like these are right things if I copy
this it will take me to my credentials
file for AWS this is nuts okay what if I
want integration tests write integration
test for the API using the web
application Factory let's just see some
tests the reason why I'm so excited
about this is this will save me so much
time there is so much code I and many
developers I'm sure have to write that
is like this that to a degree is roughly
the same but not exactly the same So
based on how good this is It's Gonna
Save Me so much time and it makes the
money I pay for ChatGPT Plus
actually worth it and here's what I
think about this up until now when
people were asking me will this replace
people's jobs I would say no I don't
think so you still need someone who
knows enough to babysit it and review
things but thinking about this I can
totally see someone making a tool that a
developer an actual developer can go
into a GitHub repo create an issue say
that's what the issue is go fix it then
a ChatGPT bot using the API can go
in read the issue act and then create a
pull request and then it's just the
developer reviewing the pull request and
ChatGPT fixing it automatically as
you go this basically eliminates the
need of having some mid or Junior
developer doing the work for you and in
many cases this might actually be faster
and more efficient than that developer
so
I'm a bit skeptical and scared okay
everything it said in principle here is
correct I'm pretty sure I can ask it to
say add more tests and it can do that
for me now would I validate here for
headers like this specific header I
wouldn't I would check for the response
and see that the object is what I want
it to be so it is not awesome but it
gives you a framework to work on okay
fine now when I go a bit outside of dot
net specific things and say okay write
performance tests for this API using k6
k6 is a performance testing tool that
was actually bought by Grafana straight
away links to the documentation and
actually is this a valid link
it is a valid link that is so cool yeah
that looks like exactly how a k6 project
looks like how a file looks like it even
ramps up the virtual users slowly I
didn't give it any instructions so this
is more of a load test where you
gradually go into your normal load
maintain 20 virtual users then stay
there for a minute and then go lower and
then go all the way down and it knows
how to call it because it has context on
those API endpoints oh and like I said
yeah it acknowledges that this is a load
test scenario you can have other tests
as well like soak tests Spike tests so
many okay let's go beyond what a mid
or Junior developer would do and let's
talk about some infrastructure as code
where it is time to deploy the
application and we need to script out
our infrastructure because you wouldn't
go into the Azure portal or the AWS
console and do things manually you would
use something like terraform to script
out the API so I'm going to say it's
time to deploy the API in AWS my company
is using terraform can you write the IAC
scripts necessary to deploy the API in
AWS let's say that's something I've
never asked in the past I've never asked
in the previous video I go off rails now
to see how far I can push it and it
looks like it just does it it chooses
RDS for postgres ECR for the registry
for the Docker image so it will actually
build our Docker image and Elastic
Container Service which is the thing I
have used in the past to run such apis
it even keeps the context allow inbound
traffic for the movie API oh and it's
also gonna use fargate which is an
excellent option for something like this
these are all valid environment
variables that is nuts and it gets the
connection string as an environmental
variable for the container which is a
decent approach are there better ones
sure you can use the Secrets Manager
and then the value here would be the key
of the secret in Secrets Manager so we
can actually ask it to improve on this
but I'm not gonna do that because I'm
sure it will do it and even this is not
a bad approach especially if you use the
right access policies but what I'm
really curious to do is I'm gonna tell
it here's the thing my manager said this
needs to handle 10 000 requests per
second
is this good enough because I can see
from the desired count that this will only
run one service one instance of the
service in Fargate and even though there
is a load balancer there isn't any auto
scaling from what I can see here so
let's wait for this to finish and let's
ask it about scaling okay it is finished
and I am so impressed let's ask it that
question 10 000 requests per second can
you handle it
the provided Terraform script deploys a
single instance of Fargate tasks which
might not be able to handle 10,000
requests per second to handle
such load you need an auto scaling group
for the ECS service and configure an ALB
look I'm having a bit of an existential
crisis here because I cannot tell you
how hard I had to try with the previous
model to get it to give good results it
would do great in small things but in
the bigger picture it would just lose it
this is
actually insane like truly what I'm
seeing here is it can do everything a
junior developer can do a mid developer
can do and to be honest more than
what a senior developer would also do so
as long as you have someone who's
competent enough to review things this
could replace a lot of people and it
sucks to say but it is true and
companies are opportunistic even if what
I'm seeing here is not 100% right
conceptually it is and I can take that
and slightly modify it and be there so
I'm skeptical I'm scared but I'm excited
to see what's going on web3 never had
the chance this does look I'm going to
stop here because I'm sure that at this
point anything I ask it will do I think
it answered every single one of those
questions at first try perfectly
definitely way better than the previous
model so I'm just gonna leave this here
and ask you a question what do you think
about this like the old ChatGPT
was one thing but this is a whole
different Beast well that's all I have
for you for this video thank you very
much for watching and as always keep
coding Jesus