Excellent DevOps and Cloud Engineer Interview Questions for ~2 Years' Experience, Including Feedback
Summary
TL;DR: In this interview, Harish, a DevOps engineer with over two years of experience, discusses his role in a network communication project using AWS services and Jenkins pipelines. He shares insights on managing different AWS environments and touches upon security practices and cost optimization. The interviewer provides feedback on Harish's understanding and suggests exploring certifications and broader AWS knowledge to enhance his consulting skills for future roles.
Takeaways
- 😀 Harish, a DevOps engineer from Hyderabad, has been working in the IT space for two and a half years and is currently working on a network communication project in a microservices architecture.
- 📚 Harish graduated in 2021 and started his career with AWS training, progressing to a Junior DevOps engineer role, and is experienced with AWS services such as EC2, VPC, and S3.
- 🛠️ Harish has practical experience in writing Docker files and Jenkins pipeline files, which are instrumental in building Docker images and automating the deployment process.
- 🔄 The project Harish is involved in uses GitHub for data storage and Git as the version control system, emphasizing the importance of version control in DevOps practices.
- 🤝 Harish has had limited direct interaction with clients, mainly participating in internal pipeline calls, indicating a need for more client-facing experience in a consulting role.
- 🔍 The interview explores Harish's understanding of AWS environments and deployment strategies, revealing that he has primarily worked within development environments rather than production.
- 🚀 Harish's role involves managing network elements with a GUI for network administrators, showcasing the significance of user interface design in system administration.
- 💡 The interview highlights the need for cost optimization strategies, suggesting that Harish should explore AWS Lambda functions for potential cost savings.
- 🔒 Security is a critical aspect of Harish's role, with the use of security groups, network ACLs, and IAM policies to ensure the protection of AWS resources.
- 🔍 Harish has used SonarQube in the pipeline for code quality and vulnerability checks, underlining the importance of code quality in the development process.
- 📈 The interview suggests that Harish should broaden his knowledge beyond his current role, consider obtaining AWS certifications, and gain a deeper understanding of various AWS services and best practices.
Q & A
What is Harish's current role in his IT career?
-Harish is currently working as a DevOps engineer in a project that involves network communication and is based on microservices.
What AWS services is Harish experienced with?
-Harish has expertise in various AWS services, including EC2, VPC, S3, and he has experience in writing Docker files and Jenkins pipeline files.
What is the project Harish is working on about?
-The project Harish is involved in is a network communication project for a US-based company, which provides a UI for network administrators to manage customer services.
How does Harish's team manage different AWS environments for development, staging, and production?
-They use a multi-branching strategy with different pipelines for each branch, and they use AWS to create instances for developers to test their code locally.
What version control system does Harish's team use?
-Harish's team uses Git as their version control system, storing all data in GitHub.
What is the role of an AWS technical consultant at Infosys?
-The role involves working with multiple customers, solving their problems, understanding their situations, identifying gaps in their systems, and implementing solutions based on automation or consulting skills.
How does Harish handle deployments from different branches in his current project?
-Harish uses different pipelines for different branches, with the main branch having a separate pipeline for production and feature branches having their own pipelines.
What is Harish's experience with AWS Lambda and serverless functions?
-Harish has seen AWS Lambda used in his organization for tasks that don't require a server to remain active after execution, such as cost-optimization cleanup, but he admits he has not implemented a Lambda function himself.
What security measures does Harish implement in his project?
-Harish implements security measures such as configuring security groups, network access control lists, proper IAM permissions, bucket policies, and multi-factor authentication.
What feedback does the interviewer provide to Harish regarding his understanding and experience?
-The interviewer suggests that Harish has a high-level understanding of areas he has worked on but struggles with areas outside of his experience. The interviewer recommends exploring certifications and gaining a broader understanding of various scenarios.
What advice does the interviewer give Harish for improving his consulting skills?
-The interviewer advises Harish to go beyond his current role, explore different options, get certified, and build projects outside of his organization to gain experience in various scenarios.
Outlines
👋 Introduction and Initial Discussion
The interview begins with a warm welcome to Harish, who has over two years of experience in the IT space. Harish introduces himself as a Hyderabad-based professional who graduated in 2021 and has been working as a DevOps engineer. He discusses his journey starting with AWS training, progressing to a Junior DevOps engineer, and currently working on a network communication project based in the US. The project is a microservices-based initiative involving network element management, with a focus on UI/GUI for network administrators. Harish also mentions his expertise in various AWS services, Docker, and Jenkins pipeline files.
🔍 Deep Dive into AWS Technical Skills
The conversation shifts to Harish's AWS skills, focusing on his management of different AWS environments such as Dev, Staging, and QA. He explains the use of Git branches, multi-branching pipeline strategies, and how AWS instances are created for developers. Harish also details the deployment process from Jenkins to AWS, including the use of Docker images, ECR, and EKS clusters. The discussion touches on the differentiation between development and production environments and the challenges of managing multiple AWS accounts.
🛡️ Security and Network Configuration
Harish discusses the importance of security in his projects, highlighting the use of security groups, network access control lists (NACLs), and proper IAM permissions. He explains the concept of public and private subnets within a VPC, detailing how private subnets enhance security by not having direct public IP access. The conversation also covers the use of Bastion hosts, NAT Gateways, and load balancers to manage traffic and maintain security. Harish admits to being less experienced in cost optimization and suggests using AWS Lambda for monitoring and controlling costs.
📈 Autoscaling and Monitoring
The discussion moves to autoscaling and monitoring, where Harish explains how autoscaling groups and CloudWatch are used to manage and monitor instances. He describes how logs are captured using CloudWatch and the importance of ensuring logs are enabled for applications running on Docker or Tomcat. Harish also talks about his experience with AWS Lambda, emphasizing its serverless nature and potential use cases, though he admits to not having implemented any Lambda functions himself.
🏗️ High Availability and Security Best Practices
Harish discusses high availability in AWS, explaining the use of multiple regions and availability zones to ensure application uptime. He also outlines five security best practices implemented in his projects, including configuring security groups, network access control lists, proper IAM permissions, bucket policies, and multi-factor authentication. The conversation includes the use of SonarQube in the pipeline for code quality and vulnerability checks, highlighting the importance of developer notifications based on SonarQube analysis.
💡 Feedback and Future Recommendations
The interviewer provides feedback to Harish, noting his high-level understanding of his work but a lack of broader knowledge outside his immediate responsibilities. The feedback emphasizes the need for Harish to expand his understanding of production environments, cost optimization, security, and autoscaling. The interviewer suggests obtaining AWS certifications and engaging in projects beyond his current role to enhance his consulting skills. Harish is encouraged to explore options and certifications to improve his knowledge and employability.
👋 Closing Remarks
The interview concludes with the interviewer wishing Harish the best for his current interview and suggesting they stay connected. Harish thanks the interviewer, and they both say their goodbyes, wrapping up the conversation.
Keywords
💡DevOps
💡AWS
💡Microservices
💡Version Control
💡Docker
💡Jenkins
💡ECR
💡EKS
💡Auto Scaling
💡Security Groups
💡Multi-factor Authentication
💡SonarQube
💡High Availability
💡RDS
💡Terraform
Highlights
Harish introduces himself as a DevOps engineer with two and a half years of experience.
Harish completed his graduation in 2021 from Lovely Professional University and has been working at Capgemini.
He started his career with AWS DevOps training and progressed to a Junior DevOps engineer.
Harish is currently working on a network communication project, which is a microservices-based US-based project.
The project involves managing network elements and providing a UI for network administrators to manage customer services.
Harish has expertise in various AWS services like EC2, VPC, and S3, and experience in writing Docker and Jenkins pipeline files.
He discusses the role of an AWS technical consultant, which involves working with multiple customers and solving their problems.
Harish admits he has not directly worked as a consultant but has had meetings with clients.
The client company, HSV, is US-based and focuses on network communication devices.
Harish explains the use of Git branches and multi-branching pipeline strategies in their development process.
He describes the process of deploying code from different branches to AWS environments using Jenkins pipelines.
Harish mentions the use of Docker images, ECR, and EKS clusters in their deployment process.
He discusses managing different AWS accounts and the use of public and private subnets for security.
Harish explains how autoscaling works and the use of CloudWatch for monitoring and logging.
He talks about the use of AWS Lambda for serverless functions and cost optimization.
Harish discusses high availability in AWS and the use of multiple regions and availability zones.
He outlines five security best practices implemented in his current project, including security groups and IAM permissions.
Harish mentions the use of SonarQube in the pipeline for code quality and vulnerability checks.
The interviewer provides feedback on Harish's understanding and suggests areas for improvement, including certifications and broader knowledge.
Harish is encouraged to explore beyond his current role and consider certifications and external projects to enhance his consulting skills.
Transcripts
So hi Harish, welcome to this interview. We will record this and publish it on social media, okay? Good morning, sir. Good morning. So, Harish, you have around two and a half years of experience in the IT space. Can you brief me about yourself and what sort of experience you have, and then we can start the discussion from there?

Yeah, sure sir. Thank you for giving me this opportunity to introduce myself. I'm Harish, I'm from Hyderabad. I completed my graduation in 2021 from Lovely Professional University, and since then I have been working at Capgemini as a DevOps engineer. I started my journey with AWS DevOps training and then became a Junior DevOps engineer. I'm currently working on a network communication project; it is a US-based project and it is a microservices project. In the project, what we do is manage network elements and devices, and we make sure there is a UI, a GUI, which is useful for the network administrators to manage the customer services, like who has which connection. We store all the data in GitHub and we use Git as our version control system. I have expertise in various AWS services like IAM, EC2, VPC, and S3, and I have experience in writing Dockerfiles and Jenkins pipeline files, which are used for building Docker images and pipelines. I also have experience with Ansible. Yeah, that's it, sir.
So, this role is that of an AWS Technical Consultant with Infosys. Let me give you a brief about the role: you will be expected to work with multiple customers and solve their problems, understand their situation, find certain gaps in their systems, and then, based on automation or your consulting skills, implement those solutions as well. Have you done that sort of work in your current job?

No sir, actually we had meetings with the clients, but I never did this directly. Our pipeline team has a call every week; I mean, there is an internal call every day, but we have a pipeline call every Tuesday. So we had meetings with the clients, but not consulting like this.

So is it more of a product, one product, that you are working on specifically? Yes sir. Actually the company builds network communication devices; it is a US-based company, HSV.

And you have experience with only one customer. Do you understand that different customers will have different requirements? Yeah, I know different customers have different requirements, but from the beginning I've been on only this one. Actually, once I joined, I got the training for this project only, and I'm continuing in the same project. This project is going to end next March, but after that maybe I can get a new project if I stay here.
Okay, let's discuss your AWS skills then, because the role is mainly focused on AWS consulting. How do you manage your different AWS environments — your dev, staging, QA, and prod environments — in your company?

So we follow Git branches in Git. There are different branches, like a dev branch, and mainly one master branch, and then we have different feature branches; developers work on the different features, and the main code goes to the production server. We have different pipelines for each branch — we use a multibranch pipeline strategy — so whenever a developer commits code, it goes through the complete lifecycle. And we use AWS to create instances for the developers whenever they want to test their code; we give them the IAM permissions they require, and we also write Terraform files to provision the EC2 instances for them. We manage it like this.

But then how do you deploy this? How are the pipelines connected to AWS accounts? From Jenkins itself, sir. Once we finish the continuous integration process, we deploy the code through continuous delivery; right now we are using Docker Hub.

Give me a complete overview of your pipeline. I understood there are feature branches and you deploy from the main branch to production, but how do you deploy from a feature branch? Do you deploy to production from a feature branch? How does that branch-to-AWS-environment relationship work?

Okay sir, I will tell you the complete lifecycle of our pipeline. Whenever a developer commits code, we have webhook triggers, so they trigger the Jenkins pipeline. Once triggered, Jenkins starts the pipeline, and there are different stages. First there is the Git checkout — it will pass anyway because it is directly connected to Git — and then we have the build stages; it is actually a Java-based project, so the build is done through Maven. Then we have unit testing, then code scanning through SonarQube, and then we build the Docker images and push the image to a private registry — previously we were using Docker Hub, now we push to ECR; recently we migrated to ECR — and then we deploy that to the EKS cluster. That's how we are connected from the Jenkins pipeline to AWS. Up to uploading the image to the registry it is continuous integration; after that we use continuous deployment into EKS.

Right, but you still didn't tell me how you differentiate between your dev, QA, and prod accounts. How will your pipeline know, "I have to deploy from a feature branch, and I have to deploy to some dev account"?

So actually there are different pipelines. Generally the master branch has its own separate pipeline, and each feature branch has its own separate pipeline. Whenever we want to go for a release, we merge the feature branch into the main branch, and then it goes to production. Likewise.

Okay, so these are different pipelines? Yes. Or is it the same pipeline with some sort of branch awareness? No sir, they are completely different. I mean, we use a branch strategy — the pipeline shows the branch and our repository name, and in the branches view it shows the feature branches, the main branch, and any sub-branches.

Okay, all right. And how do you manage different AWS accounts? Do you have your dev cluster and production cluster in the same AWS account? I don't have much experience with the production account, but right now we have the same one only, sir.

So you are mainly a DevOps engineer for the lower accounts, is it? You don't deploy to production? No sir, we rarely deploy to production; we mainly focus on the dev environment. Are you aware of who does that and how that process works?
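The branch-to-environment mapping discussed above is usually encoded in the multibranch pipeline itself. As a minimal sketch (the branch prefixes and environment names here are illustrative assumptions, not the project's actual conventions), the routing logic looks like this:

```python
def target_environment(branch: str) -> str:
    """Map a Git branch to the AWS environment it deploys to.

    Mirrors a common multibranch convention: feature branches go to
    dev, release branches to staging, and only the main (master)
    branch reaches production.
    """
    if branch in ("main", "master"):
        return "prod"
    if branch.startswith("release/"):
        return "staging"
    if branch.startswith(("feature/", "bugfix/")):
        return "dev"
    raise ValueError(f"no deployment target for branch {branch!r}")
```

In a real Jenkins multibranch pipeline the same decision is typically expressed in Groovy with `when { branch 'main' }` conditions; the point is that the deployment target is derived from the branch name rather than hard-coded into each pipeline.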
I'm slightly aware, sir. Actually our senior DevOps engineer does that, and we take care of the development environments.

But do you never interact with them? Don't they tell you how they merge and how they deploy? I know the process — how they create the clusters and how they deploy into the production servers. We actually use Fargate for our containers, which is serverless, so with Fargate they deploy the code. They create ingress controllers there, and with the ingress controller they deploy the code.

Do you use ECS Fargate, or do you use EKS? EKS only, sir; we are not using ECS. Why Fargate? Fargate is a serverless thing, so we don't need to manage the containers. If you manage the containers yourself, you need to monitor everything — I mean, if you use EC2 instances instead of Fargate, you have to scale up and scale down, but with Fargate we don't need to take care of that; AWS takes care of scaling up and down based on the traffic. So you define certain policies and it scales based on those? Yes sir, yes.

Now tell me: if one of our customers says their Fargate ECS/EKS cost is going up dramatically, what steps would you take to control their cost? I'm not sure about that right now. No awareness of cost optimization? Yeah, cost optimization we can do with AWS Lambda functions — we can write a Lambda function and integrate Lambda with CloudWatch; we have to give the necessary AWS roles there, and based on that, CloudWatch always monitors, so according to the metrics…

And can everything that is in EKS, your Fargate workloads, be converted to AWS Lambda? Yeah, maybe — I'm not sure. No worries, just study this aspect, because with the recession a lot of customers will be looking for cost optimization, so we need some consulting there: what strategies are possible in the cost-optimization area. Yeah, sure sir.
Okay, now tell me a little bit about public and private subnets — are you aware of what they are?

Yes sir. There are public and private subnets. First of all, there is one VPC — VPC is one of the services from AWS — and in the VPC what we do is create a private network inside the public cloud. So if we want to deploy any application, our own application, we first create a VPC; in that VPC we have a public subnet and a private subnet, and mostly we deploy our application inside the private subnet, because if you create an instance in the private subnet, it won't get a public IP directly, so there is no connection from the public world to our instance. That's why we use public and private subnets. The instances in the private subnet don't have a public IP address, and if we want to connect to the private instances, we can use a Bastion host which sits in the public subnet. That is very secure. We can also apply NACLs at the subnet level, which increases the security level of our instances. And there is a NAT Gateway; the NAT Gateway is used to route traffic from the private subnet to the outside internet — it does the network address translation.

But how will users hit my website? If everything is private, how do they get access to the instance? Yes sir, there is the NAT Gateway — it has a static IP, and users can hit only that static IP; they are never exposed to the IPs of our private instances. And we also configure the load balancer in the public subnet — we don't configure the load balancer in the private subnet — and we configure the auto scaling group in the private subnet. With the auto scaling group, the number of instances varies according to the traffic, and the load balancer routes the traffic. And if you want to reach an instance, you can go via the Elastic IP of the NAT Gateway; it is a fixed IP, and it is very secure compared to deploying the instance publicly.

Apart from the Bastion host, are there other options to connect to private-subnet instances? I think that is the only way we can connect; if they are in the same VPC we can connect. But a Bastion host is costly for me — I need another instance running all the time just to connect to my instances, and I don't want to pay for that machine. Okay, then? No idea. No worries, just understand that there are additional options available, so you can study them.
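The public/private distinction Harish describes comes down to CIDR membership: an instance belongs to a subnet when its private IP falls inside that subnet's CIDR block. A small sketch with made-up CIDRs (the 10.0.x.0/24 ranges are illustrative, not from the project):

```python
import ipaddress

# Hypothetical VPC layout: one public and one private subnet.
PUBLIC_SUBNET = ipaddress.ip_network("10.0.1.0/24")
PRIVATE_SUBNET = ipaddress.ip_network("10.0.2.0/24")

def subnet_of(ip: str) -> str:
    """Return which subnet a given private IP address belongs to."""
    addr = ipaddress.ip_address(ip)
    if addr in PUBLIC_SUBNET:
        return "public"
    if addr in PRIVATE_SUBNET:
        return "private"
    return "outside VPC"
```

Strictly speaking, what makes a subnet "public" in AWS is a route to an Internet Gateway in its route table, not the CIDR itself; the sketch only shows the membership test.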
You mentioned autoscaling — machines get created automatically and destroyed automatically. Say my current situation is that I have two EC2 instances; there is an autoscaling event because of high traffic; you configured your policies correctly, everything is fine, and it created ten instances. Then something happened in your environment — eight machines crashed or there was some error — and you are back to two machines. How would you investigate what happened to those eight machines? Because they are not available anymore.

Even though they are not available, CloudWatch will have all the logs, so we can go to CloudWatch and check what happened to those instances. Generally, if you configure an auto scaling group, it will scale instances automatically based on the policy, but in such cases we can go to CloudWatch, check what happened to those instances, debug based on that, and make sure it doesn't happen next time.

So what steps would you take to make sure the logs are available to you? I have my Docker application running — how do I get those logs? Or my web application is running on, say, Tomcat — whatever the situation, how do I get my application logs? Whenever we create an instance, we get the option to enable CloudWatch logs, so we have to enable that, otherwise we won't get logs. But if you enable CloudWatch logs, how will CloudWatch know that I have a Dockerfile, that I have an application running on this port, and that my logs are going to a folder called, say, /var/harish? How would CloudWatch know it has to pick up logs from there? Every application is different, right? Yes sir, logs are different. So how would you get those logs? Yeah, I need to look into that.
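The gap the interviewer is pointing at is that CloudWatch does not discover application log files on its own: on an EC2 instance, the CloudWatch agent must be configured with the exact file paths to ship (on EKS, a log router such as Fluent Bit plays the equivalent role, usually collecting container stdout). A minimal agent config fragment, with an illustrative path and log group name rather than anything from the project:

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/myapp/app.log",
            "log_group_name": "myapp-logs",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```

Containerized applications are typically configured to log to stdout instead, so the container runtime (for example, Docker's `awslogs` log driver) can forward the logs without the agent needing to know the application's internal paths.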
Have you created AWS Lambda serverless functions? Yes sir, I have created Lambda services. Can you give an example of when you needed a serverless function and what steps you took?

Generally there are two types of compute services: server-based compute and serverless. Server-based is EC2, and for serverless we have Lambda and Fargate. Coming to Lambda, we can use it for functions which don't require any server — I mean, if you want to perform a task and afterwards want the instance to shut down automatically, at that time we can use serverless functions like Lambda. We can use Lambda for cost optimization, or for any number of things. For cost optimization, say there are unused EBS snapshots: we can write a simple Lambda function for that. It will check all the instances, check all the volumes, check the snapshots, and we can write conditions there — if the snapshot is not connected to a volume, we can delete it; otherwise, if the snapshot is connected to a volume but the volume is not attached to an instance, we can also delete it. Likewise we can make sure everything unused is deleted. This is one type of cost optimization. We can even write a Lambda function for S3: if none of the objects in a bucket has been accessed for a certain number of days, we can delete the S3 bucket.

So this setup — I have seen it in my organization, but I never implemented any function. You haven't implemented one yourself? No sir, no. What languages are possible — what platform would you choose if you had to write a function? Python. At least I would suggest you write one Lambda function with your own hands. It could be any use case — the use case you explained is fine — but at least build it yourself. Yeah, sure sir, I will definitely try.
Can you tell me what high availability is and how to achieve it in AWS?

High availability means we have to make sure our application is available all the time. To achieve it, we use multiple Regions and multiple Availability Zones. There are Regions in AWS — AWS has servers throughout the world — and every Region has multiple Availability Zones. When we deploy our application, we make sure we deploy into multiple Availability Zones, so that even when one Availability Zone goes down, our application stays up and running from another Availability Zone. That is how we ensure the high availability of our application.
Okay. Can you explain to me at least five security best practices that you have implemented in your current project?

Yeah, sure sir. For security on an instance, first we need to configure security groups. Security groups control the incoming and outgoing traffic; mostly we control only the inbound traffic, allowing only what is required — for example, if you want only port 80, we just allow port 80, and anyway we allow the SSH connection. This is one type of security measure. One more we can apply at the subnet level, which is the network access control list (NACL). We can create rules about which IPs may come in, and deny rules are also available — we can deny what should not reach our instances. Even if something is allowed at the security group level, if you block it at the NACL level it gets blocked at the NACL level, so it acts as an extra layer for our instances. Another thing is giving proper IAM permissions: if you don't configure IAM properly, any user can have access to all the services, and we don't want that, so we grant only the required permissions and attach proper policies. Take an S3 bucket: we write policy documents — I mean bucket policies — about who can access the bucket; even if someone has access to S3 in general, we can make sure our bucket won't be deleted, by creating policies for that particular bucket. That is how we ensure our security in AWS.

Okay, all right, that's the security side. Any other security product that you have used? No sir. And one more thing: for user security, we can use multi-factor authentication — that is one type of security I forgot to mention. Even if our username and password are compromised, we don't let anyone access the account, because we have multi-factor authentication enabled; they would need the registered device, otherwise they can't log in. Any security product you have implemented in the pipeline? No, sir.
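NACL behavior differs from security groups in a way worth making concrete: NACL rules are evaluated in ascending rule-number order, the first matching rule wins, and an implicit deny applies when nothing matches (security groups, by contrast, are allow-only and stateful). A small sketch of that evaluation order, restricted to a port check for simplicity, with made-up rule numbers:

```python
def nacl_allows(rules, port):
    """Evaluate network-ACL-style rules the way AWS does:
    ascending rule-number order, first match wins, implicit
    deny if no rule matches.

    rules: list of (rule_number, (low, high), action) where the
    port range is inclusive and action is "allow" or "deny".
    """
    for _, (low, high), action in sorted(rules):
        if low <= port <= high:
            return action == "allow"
    return False  # the implicit "*" deny rule
```

This ordering is why a deny placed at the NACL level blocks traffic even when the instance's security group allows it, as Harish notes.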
Have you used SonarQube in the pipeline? Yes sir, we use SonarQube in the pipeline. It is used to check the code quality, it checks for any code bugs, and it checks for code vulnerabilities — this is one way to verify the code. Can you give me two or three examples of what sort of information you get from the SonarQube analysis? Yeah — if there are any duplicated lines it tells us, along with what the problem with those lines is, and whether you have any bugs; there is also the option of code smells, and all those things we can check in the SonarQube analysis. Then, based on that, we notify the developer. If there is a failure in any pipeline, first we try to debug it; if it is an intermittent issue we just rerun the build, and if it is related to the code, according to SonarQube we flag it to the developer and he takes care of it.

Okay. Now, here is a scenario: the customer comes back to you and says the application is not performing well in the production environment — maybe the database is not responding properly. What options do we have in terms of scaling databases in AWS? For databases we have the RDS service, so we can use RDS, and inside that we can use MySQL and MongoDB… For example, you created an RDS cluster with, say, a t3.medium instance, and your customers are complaining — now what options do you have? Sorry sir, no idea about that.
the I I got some idea about your profile
and the feedback that I want to give you
is you have very high level
understanding of things you have only
understanding of the things that you
have worked on if it is fargate or some
pipeline you know about it but if we go
anything outside anything that you have
not worked um that's where you are
struggling yes so if it is you haven't
worked in production environment you
don't know about that you haven't
touched base with your seniors in the
same company you haven't understood the
complete process being in the same
organization for like more than two
years you should probably have a good
understanding of that if you have to
work on cost optimization or any other
security aspects in terms of Dev SEC Ops
you don't understand similarly with
autoscaling uh of databases or you you
are not aware of anything that you
haven't worked on are you certified on
any of the AWS
exams no sir any any terraform or
kubernetes certification h no sir so
Explore that opportunity also, because certifications show you multiple scenarios and then tell you how you would solve them in the real world. So certifications are a good way. Don't use any dumps, don't use any shortcuts; study for them properly. AWS certifications especially are very well planned. So that's my honest suggestion: you need to go beyond your role and look for other options, especially when you are applying to companies like Infosys and other service companies. Even in the future, if you apply to banks or somewhere else, you need to show your skills as a consultant. You need to understand the situation. You may not be providing a solution right away, but you should have the options ready: "Okay, this is your problem; we have three different solutions for you. I will try all these things, I'll do a PoC, and I'll make sure the best solution gets implemented." In your case you are not aware of those options at all, and just being aware of the options is the primary skill of a consultant.

Right, so I'm sure you will do well. For this interview, just go for it in your current state; don't be hesitant based on this interview. Look at these things: how you can improve your consulting skills, how you can suggest different options in an interview. Give some options to the employer. If you have to do some scaling, talk about, say, horizontal scaling and vertical scaling, and ask them questions: "Okay, can I bring downtime to the application?" Because if you have a downtime window, in the case of database scaling you can snapshot the database and create a bigger instance. You can also go for other scaling options available in RDS, which will be more expensive but useful for short-term scaling. So look at all aspects, and try to get certified.

Yeah, okay sir.
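The scaling options the interviewer outlines (snapshot and restore to a bigger instance class, vertical scaling, offloading reads to a replica) map onto a few RDS API calls. A minimal sketch, assuming the keyword arguments of boto3's `create_db_snapshot`, `modify_db_instance`, and `create_db_instance_read_replica`; all identifiers and instance classes below are hypothetical:

```python
# Build the request parameters for the three RDS scaling moves discussed.
# The dict keys mirror boto3's keyword arguments (an assumption to check
# against the boto3 RDS documentation before use).

def snapshot_request(instance_id: str, snapshot_id: str) -> dict:
    # Take a snapshot first if a downtime window is available, so the
    # current state can be restored to a bigger instance.
    return {"DBInstanceIdentifier": instance_id,
            "DBSnapshotIdentifier": snapshot_id}

def vertical_scale_request(instance_id: str, new_class: str,
                           apply_immediately: bool = False) -> dict:
    # ApplyImmediately=False defers the change to the maintenance window,
    # avoiding an unplanned outage.
    return {"DBInstanceIdentifier": instance_id,
            "DBInstanceClass": new_class,
            "ApplyImmediately": apply_immediately}

def read_replica_request(source_id: str, replica_id: str) -> dict:
    # A read replica offloads read traffic without touching the writer.
    return {"SourceDBInstanceIdentifier": source_id,
            "DBInstanceIdentifier": replica_id}

# With boto3 this would be wired up roughly as (not executed here):
#   rds = boto3.client("rds")
#   rds.create_db_snapshot(**snapshot_request("app-db", "app-db-pre-scale"))
#   rds.modify_db_instance(**vertical_scale_request("app-db", "db.r6g.large"))
#   rds.create_db_instance_read_replica(
#       **read_replica_request("app-db", "app-db-ro-1"))

req = vertical_scale_request("app-db", "db.r6g.large")
print(req["DBInstanceClass"])  # db.r6g.large
```

Vertical scaling is the quick fix the interviewer hints at; the replica is the cheaper long-term option when the bottleneck is read traffic.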
Understood. Any questions you have that you want to clarify?

Yes sir. Consulting basically means we need to give suggestions, right? I mean, from your answer I got this: we should be in a state where we can answer any situation, where we can tackle any situation.

Yes. We can't solve every situation on the spot; just think, take your time, ask questions, maybe follow-up questions, and based on that try to come up with a solution. If you were a certified person, you would probably have got at least some ideas, because the certifications bring up such scenarios.

Yeah.

And try to build projects outside of your organization also. Don't rely on the company to give you all the scenarios; try to use other resources as well. Do some complex projects, maybe cost optimization, DevSecOps, or Kubernetes, anything you can do outside the company, because eventually you will be applying for jobs in the future also, and having good hands-on experience will help.

Yeah, got it sir.

Anything else before we wrap up?

Yeah, everything is fine sir; I prepared my best.

Do share with us how you go with this current interview, and let's stay connected.

Yeah, sure sir. Thank you sir.

Thanks, bye.

Bye sir.