Excellent DevOps and Cloud Engineer interview questions for ~2 years' experience, including feedback

Interviews - DevOps Engineer and Cloud Engineer
28 Dec 2023 · 25:42

Summary

TL;DR: In this interview, Harish, a DevOps engineer with over two years of experience, discusses his role in a network communication project using AWS services and Jenkins pipelines. He shares insights on managing different AWS environments and touches upon security practices and cost optimization. The interviewer provides feedback on Harish's understanding and suggests exploring certifications and broader AWS knowledge to enhance his consulting skills for future roles.

Takeaways

  • 😀 Harish, a DevOps engineer from Hyderabad, has been working in the IT space for two and a half years and is currently working on a network communication project in a microservices architecture.
  • 📚 Harish graduated in 2021 and started his career with AWS training, progressing to a Junior DevOps engineer role, and is experienced with AWS services such as EC2, VPC, and S3.
  • 🛠️ Harish has practical experience in writing Docker files and Jenkins pipeline files, which are instrumental in building Docker images and automating the deployment process.
  • 🔄 The project Harish is involved in uses GitHub for data storage and Git as the version control system, emphasizing the importance of version control in DevOps practices.
  • 🤝 Harish has had limited direct interaction with clients, mainly participating in internal pipeline calls, indicating a need for more client-facing experience in a consulting role.
  • 🔍 The interview explores Harish's understanding of AWS environments and deployment strategies, revealing that he has primarily worked within development environments rather than production.
  • 🚀 Harish's role involves managing network elements with a GUI for network administrators, showcasing the significance of user interface design in system administration.
  • 💡 The interview highlights the need for cost optimization strategies, suggesting that Harish should explore AWS Lambda functions for potential cost savings.
  • 🔒 Security is a critical aspect of Harish's role, with the use of security groups, network ACLs, and IAM policies to ensure the protection of AWS resources.
  • 🔍 Harish has used SonarQube in the pipeline for code quality and vulnerability checks, underlining the importance of code quality in the development process.
  • 📈 The interview suggests that Harish should broaden his knowledge beyond his current role, consider obtaining AWS certifications, and gain a deeper understanding of various AWS services and best practices.

Q & A

  • What is Harish's current role in his IT career?

    -Harish is currently working as a DevOps engineer in a project that involves network communication and is based on microservices.

  • What AWS services is Harish experienced with?

    -Harish has expertise in various AWS services, including EC2, VPC, S3, and he has experience in writing Docker files and Jenkins pipeline files.

  • What is the project Harish is working on about?

    -The project Harish is involved in is a network communication project for a US-based company, which provides a UI for network administrators to manage customer services.

  • How does Harish's team manage different AWS environments for development, staging, and production?

    -They use a multi-branching strategy with different pipelines for each branch, and they use AWS to create instances for developers to test their code locally.
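The branch-to-environment routing described above can be sketched as plain selection logic. The branch names and environment labels below are illustrative assumptions, not details from the interview:

```python
# Sketch: map a Git branch to the AWS environment its pipeline deploys to.
# Branch naming conventions and environment labels are assumed for illustration.

def target_environment(branch: str) -> str:
    """Return the deployment environment for a given branch."""
    if branch in ("main", "master"):
        return "production"     # only the main branch reaches production
    if branch == "develop" or branch.startswith("feature/"):
        return "dev"            # feature pipelines deploy to the dev account
    if branch.startswith("release/"):
        return "staging"
    raise ValueError(f"no pipeline configured for branch {branch!r}")

print(target_environment("feature/login-ui"))  # dev
print(target_environment("master"))            # production
```

In a Jenkins multibranch setup, each discovered branch gets its own pipeline run, so logic like this would typically live inside a shared pipeline step rather than as separate hand-maintained jobs.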

  • What version control system does Harish's team use?

    -Harish's team uses Git as their version control system, storing all data in GitHub.

  • What is the role of an AWS technical consultant at Infosys?

    -The role involves working with multiple customers, solving their problems, understanding their situations, identifying gaps in their systems, and implementing solutions based on automation or consulting skills.

  • How does Harish handle deployments from different branches in his current project?

    -Harish uses different pipelines for different branches, with the main branch having a separate pipeline for production and feature branches having their own pipelines.

  • What is Harish's experience with AWS Lambda and serverless functions?

    -Harish discusses AWS Lambda serverless functions, which suit tasks that don't require a server to remain active after execution, and suggests using them for cost-optimization work.
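A minimal sketch of the kind of cost-control logic such a Lambda might run, assuming CloudWatch-style average-CPU readings arrive in the event; the field names and threshold are hypothetical:

```python
# Sketch of a cost-control Lambda: given per-instance average-CPU readings,
# pick the instances idle enough to stop. Field names and the 5% threshold
# are assumptions for illustration.

def select_idle_instances(metrics, cpu_threshold=5.0):
    """Return IDs of instances whose average CPU is below the threshold."""
    return [m["instance_id"] for m in metrics if m["avg_cpu"] < cpu_threshold]

def lambda_handler(event, context):
    # In a real function, boto3 would fetch CloudWatch metrics and call
    # ec2.stop_instances; here the metrics arrive in the event for illustration.
    idle = select_idle_instances(event["metrics"])
    return {"stopped": idle}

sample = {"metrics": [
    {"instance_id": "i-aaa", "avg_cpu": 2.1},
    {"instance_id": "i-bbb", "avg_cpu": 71.4},
]}
print(lambda_handler(sample, None))  # {'stopped': ['i-aaa']}
```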

  • What security measures does Harish implement in his project?

    -Harish implements security measures such as configuring security groups, network access control lists, proper IAM permissions, bucket policies, and multi-factor authentication.
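One of these measures, security-group review, can be illustrated with a small audit helper; the rule shape and the list of sensitive ports are assumptions for illustration:

```python
# Sketch: flag security-group ingress rules open to the whole internet on
# sensitive ports. The rule dictionary shape and port list are assumed.

def risky_rules(rules):
    """Return ingress rules exposing SSH/RDP to 0.0.0.0/0."""
    sensitive = {22, 3389}
    return [r for r in rules
            if r["cidr"] == "0.0.0.0/0" and r["port"] in sensitive]

rules = [
    {"port": 443, "cidr": "0.0.0.0/0"},    # fine: public HTTPS
    {"port": 22,  "cidr": "0.0.0.0/0"},    # risky: SSH open to the world
    {"port": 22,  "cidr": "10.0.0.0/16"},  # fine: SSH only from inside the VPC
]
print(risky_rules(rules))  # [{'port': 22, 'cidr': '0.0.0.0/0'}]
```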

  • What feedback does the interviewer provide to Harish regarding his understanding and experience?

    -The interviewer suggests that Harish has a high-level understanding of areas he has worked on but struggles with areas outside of his experience. The interviewer recommends exploring certifications and gaining a broader understanding of various scenarios.

  • What advice does the interviewer give Harish for improving his consulting skills?

    -The interviewer advises Harish to go beyond his current role, explore different options, get certified, and build projects outside of his organization to gain experience in various scenarios.

Outlines

00:00

👋 Introduction and Initial Discussion

The interview begins with a warm welcome to Harish, who has over two years of experience in the IT space. Harish introduces himself as a Hyderabad-based professional who graduated in 2021 and has been working as a DevOps engineer. He discusses his journey starting with AWS training, progressing to a Junior DevOps engineer, and currently working on a network communication project based in the US. The project is a microservices-based initiative involving network element management, with a focus on UI/GUI for network administrators. Harish also mentions his expertise in various AWS services, Docker, and Jenkins pipeline files.

05:01

🔍 Deep Dive into AWS Technical Skills

The conversation shifts to Harish's AWS skills, focusing on his management of different AWS environments such as Dev, Staging, and QA. He explains the use of Git branches, multi-branching pipeline strategies, and how AWS instances are created for developers. Harish also details the deployment process from Jenkins to AWS, including the use of Docker images, ECR, and EKS clusters. The discussion touches on the differentiation between development and production environments and the challenges of managing multiple AWS accounts.
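The push-to-ECR step implies image URIs of ECR's standard form; a small helper (account, region, and repository values are made up) shows how a pipeline stage might construct them:

```python
# Sketch: build the ECR image URI a pipeline's push/deploy stages would use.
# The account ID, region, repo name, and tag below are illustrative.

def ecr_image_uri(account_id: str, region: str, repo: str, tag: str) -> str:
    """Return an image URI in ECR's standard registry format."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"

print(ecr_image_uri("123456789012", "us-east-1", "network-ui", "v1.4.2"))
# 123456789012.dkr.ecr.us-east-1.amazonaws.com/network-ui:v1.4.2
```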

10:04

🛡️ Security and Network Configuration

Harish discusses the importance of security in his projects, highlighting the use of security groups, network access control lists (NACLs), and proper IAM permissions. He explains the concept of public and private subnets within a VPC, detailing how private subnets enhance security by not having direct public IP access. The conversation also covers the use of Bastion hosts, NAT Gateways, and load balancers to manage traffic and maintain security. Harish admits to being less experienced in cost optimization and suggests using AWS Lambda for monitoring and controlling costs.
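The subnet layout Harish describes can be modelled with Python's stdlib `ipaddress` module; the CIDR ranges below are assumed for illustration:

```python
import ipaddress

# Sketch: a VPC carved into a public and a private subnet (CIDRs assumed).
vpc     = ipaddress.ip_network("10.0.0.0/16")
public  = ipaddress.ip_network("10.0.1.0/24")  # load balancer, NAT gateway, bastion
private = ipaddress.ip_network("10.0.2.0/24")  # app instances, no public IPs

def subnet_of(ip: str) -> str:
    """Classify an address relative to this VPC layout."""
    addr = ipaddress.ip_address(ip)
    if addr in public:
        return "public"
    if addr in private:
        return "private"
    return "unallocated" if addr in vpc else "outside vpc"

print(subnet_of("10.0.2.17"))  # private
print(subnet_of("8.8.8.8"))    # outside vpc
```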

15:04

📈 Autoscaling and Monitoring

The discussion moves to autoscaling and monitoring, where Harish explains how autoscaling groups and CloudWatch are used to manage and monitor instances. He describes how logs are captured using CloudWatch and the importance of ensuring logs are enabled for applications running on Docker or Tomcat. Harish also talks about his experience with AWS Lambda, emphasizing its serverless nature and potential use cases, though he admits to not having implemented any Lambda functions himself.
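The scale-out/scale-in behaviour can be sketched as a simple capacity decision, loosely in the spirit of an Auto Scaling policy; the thresholds and bounds are illustrative assumptions:

```python
# Sketch: one evaluation step of a CPU-based scaling policy. Real Auto Scaling
# groups evaluate CloudWatch alarms; thresholds and bounds here are assumed.

def desired_capacity(current: int, avg_cpu: float,
                     scale_out_at: float = 70.0, scale_in_at: float = 30.0,
                     minimum: int = 2, maximum: int = 10) -> int:
    """Add or remove one instance per check, staying within [minimum, maximum]."""
    if avg_cpu > scale_out_at and current < maximum:
        return current + 1
    if avg_cpu < scale_in_at and current > minimum:
        return current - 1
    return current

print(desired_capacity(2, 85.0))   # 3  (high load: scale out)
print(desired_capacity(3, 12.0))   # 2  (idle: scale in, floor at minimum)
```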

20:05

🏗️ High Availability and Security Best Practices

Harish discusses high availability in AWS, explaining the use of multiple regions and availability zones to ensure application uptime. He also outlines five security best practices implemented in his projects, including configuring security groups, network access control lists, proper IAM permissions, bucket policies, and multi-factor authentication. The conversation includes the use of SonarQube in the pipeline for code quality and vulnerability checks, highlighting the importance of developer notifications based on SonarQube analysis.
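The multi-AZ idea reduces blast radius by spreading instances across zones; a minimal round-robin placement sketch (the zone names are assumed):

```python
# Sketch: assign instances to availability zones round-robin so that the
# failure of a single zone never takes down the whole fleet.

def spread_across_azs(instances, azs):
    """Map each instance to an AZ, cycling through the zone list."""
    return {inst: azs[i % len(azs)] for i, inst in enumerate(instances)}

placement = spread_across_azs(["i-1", "i-2", "i-3", "i-4"],
                              ["us-east-1a", "us-east-1b"])
print(placement)
# {'i-1': 'us-east-1a', 'i-2': 'us-east-1b', 'i-3': 'us-east-1a', 'i-4': 'us-east-1b'}
```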

25:07

💡 Feedback and Future Recommendations

The interviewer provides feedback to Harish, noting his high-level understanding of his work but a lack of broader knowledge outside his immediate responsibilities. The feedback emphasizes the need for Harish to expand his understanding of production environments, cost optimization, security, and autoscaling. The interviewer suggests obtaining AWS certifications and engaging in projects beyond his current role to enhance his consulting skills. Harish is encouraged to explore options and certifications to improve his knowledge and employability.

👋 Closing Remarks

The interview concludes with the interviewer wishing Harish the best for his current interview and suggesting they stay connected. Harish thanks the interviewer, and they both say their goodbyes, wrapping up the conversation.

Keywords

💡DevOps

DevOps is a set of practices that combines software development (Dev) and IT operations (Ops) to shorten the systems development life cycle and provide continuous delivery of high-quality software. In the video, Harish works as a DevOps engineer, which is central to the theme of IT infrastructure management and continuous integration/delivery pipelines.

💡AWS

AWS stands for Amazon Web Services, which is a comprehensive cloud computing platform provided by Amazon. It is relevant to the video as Harish mentions his experience with AWS services like EC2, VPC, and S3, indicating the reliance on cloud services for managing IT infrastructure and storage.

💡Microservices

Microservices is an architectural style that structures an application as a collection of loosely coupled services, which implement business capabilities. Harish discusses working on a microservices-based project, which is a key concept in modern software architecture allowing for scalable and maintainable applications.

💡Version Control

Version control is a system that keeps track of every modification to the code in a software project. In the context of the video, Harish mentions using Git as their version control system, which is essential for managing changes and collaborating in software development.

💡Docker

Docker is a platform for developing, shipping, and running applications in containers. Harish talks about writing Docker files, which are used to build Docker images, an important practice in containerization that allows for portable and consistent application deployment.

💡Jenkins

Jenkins is an open-source automation server that helps to automate parts of the software development process with continuous integration and facilitation of continuous delivery. The script mentions Jenkins pipelines, which are a series of steps that a software change must go through to be deployed, illustrating the CI/CD process.

💡ECR

ECR stands for Amazon Elastic Container Registry, a service that allows storing and managing Docker container images. Harish mentions migrating to ECR from DockerHub for storing Docker images, highlighting the move towards more integrated and scalable cloud solutions.

💡EKS

EKS is Amazon Elastic Kubernetes Service, a managed service that makes it easy to run Kubernetes on AWS without needing to manage the infrastructure. The script refers to deploying Docker images to an EKS cluster, which is part of the video's theme of cloud-native application deployment.

💡Auto Scaling

Auto Scaling is a feature in AWS that automatically adjusts the number of EC2 instances in response to demand, ensuring applications run within a specified performance range. The video discusses auto-scaling groups, which are crucial for maintaining application performance and managing costs.

💡Security Groups

Security groups in AWS act as a virtual firewall for EC2 instances to control inbound and outbound traffic. Harish discusses configuring security groups to enhance security, which is a fundamental concept in cloud infrastructure management to protect resources.

💡Multi-factor Authentication

Multi-factor Authentication (MFA) is a security system that requires more than one method of authentication from independent categories of credentials to verify the user's identity for a login or other transaction. The video mentions MFA as a security measure to protect user accounts, emphasizing the importance of robust security practices.

💡SonarQube

SonarQube is an open-source platform for continuous inspection of code quality, which helps to detect bugs, vulnerabilities, and code smells in the code. The script refers to using SonarQube in the pipeline for code quality checks, which is a key part of ensuring the reliability and maintainability of the software.
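A quality-gate check of this kind can be sketched against a simplified payload shaped like SonarQube's `/api/qualitygates/project_status` response (the real response carries more fields):

```python
# Sketch: decide pass/fail from a SonarQube-style quality-gate payload and
# build the developer notification. The payload is simplified; only the
# "status" field of the real API response is modelled.

def gate_passed(report: dict) -> bool:
    return report["projectStatus"]["status"] == "OK"

def notification(report: dict, project: str) -> str:
    verdict = "passed" if gate_passed(report) else "FAILED"
    return f"SonarQube quality gate {verdict} for {project}"

report = {"projectStatus": {"status": "ERROR"}}
print(notification(report, "network-ui"))
# SonarQube quality gate FAILED for network-ui
```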

💡High Availability

High availability is the characteristic of a reliable system being available to users at all times. In the video, Harish discusses achieving high availability in AWS by deploying applications across multiple regions and availability zones, which is a critical strategy for ensuring service continuity.

💡RDS

RDS stands for Amazon Relational Database Service, a managed database service that makes it easier to set up, operate, and scale a relational database in the cloud. The script mentions RDS as a solution for database performance issues, indicating the use of managed services for database management.

💡Terraform

Terraform is an infrastructure as code software tool that provides a consistent way to create, change, and improve infrastructure safely and efficiently. In the video, Harish mentions writing Terraform files to provision EC2 instances for developers, an example of managing infrastructure as code.

Highlights

Harish introduces himself as a DevOps engineer with two and a half years of experience.

Harish completed his graduation in 2021 from Lovely Professional University and has been working at Capgemini.

He started his career with AWS DevOps training and progressed to a Junior DevOps engineer.

Harish is currently working on a network communication project, which is a microservices-based US-based project.

The project involves managing network elements and providing a UI for network administrators to manage customer services.

Harish has expertise in various AWS services like EC2, VPC, and S3, and experience in writing Docker and Jenkins pipeline files.

He discusses the role of an AWS technical consultant, which involves working with multiple customers and solving their problems.

Harish admits he has not directly worked as a consultant but has had meetings with clients.

The client company, HSV, is US-based and focuses on network communication devices.

Harish explains the use of Git branches and multi-branching pipeline strategies in their development process.

He describes the process of deploying code from different branches to AWS environments using Jenkins pipelines.

Harish mentions the use of Docker images, ECR, and EKS clusters in their deployment process.

He discusses managing different AWS accounts and the use of public and private subnets for security.

Harish explains how autoscaling works and the use of CloudWatch for monitoring and logging.

He talks about the use of AWS Lambda for serverless functions and cost optimization.

Harish discusses high availability in AWS and the use of multiple regions and availability zones.

He outlines five security best practices implemented in his current project, including security groups and IAM permissions.

Harish mentions the use of SonarQube in the pipeline for code quality and vulnerability checks.

The interviewer provides feedback on Harish's understanding and suggests areas for improvement, including certifications and broader knowledge.

Harish is encouraged to explore beyond his current role and consider certifications and external projects to enhance his consulting skills.

Transcripts

play00:00

[Music]

play00:03

so hi Harish um welcome to this

play00:05

interview we will um record this and

play00:07

publish on social media okay sir good

play00:10

morning sir good morning so uh Harish

play00:13

you have around two and a half years of

play00:15

experience in it

play00:17

space can you brief about yourself um

play00:21

what sort of experience you have and

play00:23

then we can start discussion from there

play00:25

yeah sure sir thank you for giving me

play00:26

this opportunity to introduce myself I'm

play00:28

Harish I'm from Hyderabad I completed my

play00:30

graduation in 2021 in from LLY

play00:32

professional University since then I

play00:34

have been working in cap jimy as a deop

play00:36

engineer I started my journey as AWS

play00:38

deop training and then Junior devops

play00:40

engineer so I'm currently working in a

play00:42

project which is a network communication

play00:44

project it is a US based project and it

play00:46

is micros service project and in the

play00:48

project what we do is we will manage the

play00:50

network elements such as there will be W

play00:53

and devices so we will make sure that we

play00:55

will there a UI GUI for that and it is

play00:58

useful for the network ad administrators

play01:00

to manage the customer services like who

play01:03

has the which connection and it is a

play01:04

micros service based project and we

play01:06

store all the data in GitHub and we use

play01:09

G as our version control system and I

play01:11

have expertise in various a services

play01:13

like IM ec2 VPC um and um S3 all those

play01:18

things and um I have experience in um

play01:23

writing Docker files and Jen is pipeline

play01:25

files which are used to building and

play01:27

building doer image and pipeline okay

play01:30

and I have I have experience in Asel

play01:32

also yeah that's it sir so this role is

play01:35

that of AWS technical consultant with

play01:38

infosis uh just let me give you a brief

play01:42

about this role in this role you will be

play01:45

expected to work with multiple customers

play01:47

and solve their problems understand

play01:50

their situation find out certain gaps in

play01:52

their systems and then based on maybe

play01:55

automation or your Consulting skills you

play01:58

will be implementing those Sol Solutions

play02:00

as well have you done that that sort of

play02:02

work in your current job uh no sir

play02:05

actually we had meetings with the

play02:06

clients but uh I never this directly

play02:10

actually we our pipeline team will have

play02:12

every week call I mean internal call

play02:14

will be there every day but um we have

play02:16

pipeline call every Tuesday so we had

play02:18

meeting with the clients but not

play02:20

Consultants like this so um is it more

play02:23

of a product that one product that you

play02:26

are working on uh specifically yes sir

play02:29

it's a

play02:30

actually the company has the oalt and

play02:33

devices I mean it is an US based company

play02:36

HSV so and you have like only experience

play02:38

with one customer do you understand that

play02:40

different customers will have different

play02:42

uh requirements different yeah

play02:45

they different yes yeah you you go ahead

play02:49

yeah I know different customers have

play02:50

different requirement but from the

play02:52

beginning I've been the one only

play02:54

actually once I joined I got the

play02:56

training for this project only and I'm

play02:58

in continuing in the same project this

play03:00

project is going to end in the next

play03:02

march but um yeah after that maybe I can

play03:05

get new project if I stay here okay

play03:07

let's let's discuss about your AWS

play03:09

skills then uh because the role is uh

play03:11

mainly focusing on AWS Consulting uh

play03:14

role so yeah um what sort of how do you

play03:18

manage your different um AWS

play03:21

environments how do you manage your Dev

play03:23

staging QA Brad environments in your

play03:26

company so we have U we follow good

play03:29

branches in git so they will have

play03:31

different branches like de branch and

play03:34

mly one master branch is there so and

play03:36

then we have different feature branches

play03:38

and developers work on the different

play03:39

features and and the mind mind code will

play03:42

go to the production

play03:44

server sorry sir yeah mine will go to

play03:47

the production server and we have

play03:49

different pipelines for the each branch

play03:50

we use multi branching strategy multi

play03:52

branching pipeline strategy so whenever

play03:55

developer commits a code it will go to

play03:57

the complete life cycle and um we use um

play04:01

AWS to create the instances for the

play04:03

developers whenever they want to test

play04:05

their code locally we use the AWS and

play04:08

we'll be giving the IM permissions which

play04:10

are required for them and we also write

play04:12

terraform files to uh give them the I

play04:15

mean um E2 instances like that we manage

play04:19

a like this but then is it how do you

play04:22

deploy this how pipelines are connected

play04:25

to AWS accounts uh from the Jenkin

play04:27

itself sir once we got the country

play04:29

integration process we will deploy the

play04:31

code into the continuous delivery I mean

play04:34

right now we are using doer Hub I mean

play04:36

um complete overview of your pipeline I

play04:39

I understood there are future branches

play04:41

then you deploy from main branch in

play04:43

production but how do you deploy from

play04:45

future branch and do you deploy to

play04:47

production from future Branch how how

play04:49

does that branch and AWS environment

play04:53

relationship happening so okay sir I

play04:55

will tell the complete life cycle of our

play04:57

pipeline so whenever developer a code uh

play05:01

we'll be having web web web web hook

play05:03

triggers so it trigger the jenin

play05:06

pipeline once it trigger Jen pipeline

play05:08

will start the pipeline so there will be

play05:11

different stages like um first it will

play05:13

be get checkout anyway it will be pass

play05:14

because it is directly connected to the

play05:16

G and then we have build stages it will

play05:19

done through it actually Java based

play05:20

project it will be done through the m

play05:22

and then we have unit testing and then

play05:24

we have code scanning through sonar and

play05:27

then we will be building the docker

play05:28

images with the doer and then we will be

play05:31

push that image to the any private

play05:33

registry previously we were using

play05:34

dockerhub now we are pushing to the ECR

play05:37

recently we migrated to ECR and then we

play05:39

will deploy that to eks eks Cluster

play05:43

that's how we um connected from Jun

play05:45

pipeline to I mean AWS till uploading

play05:49

the image to the docker Hub it is

play05:51

continuous integration after then we

play05:53

will be using contest deployment which

play05:55

in the E right but you still didn't tell

play05:57

me how do you differentiate between your

play05:59

Dev q and fraud account how will your

play06:02

pipeline know that I have to deploy from

play06:05

a future branch and I have to deploy to

play06:08

some devb account so actually there will

play06:11

be different different pipelines right

play06:12

whenever if you um um

play06:15

generally feature Branch will only I

play06:17

mean um Master Branch will have separate

play06:20

pipeline every time and feature Branch

play06:22

will have have separate pipeline

play06:24

whenever U if you want to go for the

play06:27

release then we will merch the feature

play06:29

to the m Branch then it will go for the

play06:31

production likewise okay so these are

play06:33

different pipelines yes or is it same

play06:37

pipeline but some sort of branching

play06:39

awareness no sir completely different

play06:41

repositor will be there um I mean um we

play06:45

use Branch strategy right there will be

play06:47

it will show branch and um our

play06:49

repository name and there will be um in

play06:52

one stage it will um in in the branches

play06:54

it is showing feature branches mind

play06:56

branch and any sub branches so it will

play07:00

there okay all right and how do you

play07:02

manage different AWS accounts do you

play07:05

have your Dev cluster in and production

play07:08

cluster in same AWS account uh I don't

play07:10

have much experience with the production

play07:12

account but right now we have the same

play07:14

same one only sir so you are mainly as a

play07:16

as a devops engineer for lower accounts

play07:19

is it you don't deploy to production no

play07:21

sir we rarely deploy to the production

play07:23

we mainly focus on the dev environment

play07:26

are you aware who does that and how that

play07:28

process works

play07:29

uh slightly aware sir actually our

play07:31

senior Dev engineer will do that and we

play07:34

we'll take care of the development

play07:35

environments but do you never interact

play07:38

with them or they don't tell you what

play07:40

what's how do they merge and how do they

play07:42

deploy uh I know the process like how

play07:45

they create the Clusters and how do they

play07:48

um deploy the into production servers we

play07:51

use actually the fargate um we don't use

play07:54

we use farget for the our containers I

play07:56

mean which is serverless so with far

play07:58

farget they will deploy the code I mean

play08:00

they create Eng controllers there and

play08:03

with Eng controller they have they will

play08:05

deploy the code to the you use ECS

play08:07

farget or you use eks uh eks only sir we

play08:11

we are not using ECS why farget uh yeah

play08:14

farget is a serverless thing and we

play08:16

don't need to manage the containers

play08:18

actually if you manage the containers

play08:19

there will be we need to monitor

play08:21

everything I mean to say if you use the

play08:24

E2 instance instead of forgate we have

play08:26

to scale up and scale down but if you

play08:28

use the forgate we no need to take care

play08:30

of that ec2 will sorry a will take care

play08:33

of them scaling up and down based on the

play08:35

traffic so you define certain policies

play08:37

and based on that it yes sir yes now

play08:40

tell me if one of our customers says

play08:42

that my um fargate ECS eks cost is going

play08:47

up dramatically what steps would you

play08:49

take to control their

play08:51

cost I'm not sure about it right no

play08:54

awareness on cost optimization yeah cost

play08:57

optimization we can do with the a Lambda

play09:00

functions so we can write a Lambda

play09:02

function based on we can integrate with

play09:05

a Lambda with the cloud watch so we we

play09:07

have to give some necessary AWS roles

play09:09

there so based upon that AWS cloudwatch

play09:12

always monitor so according to the

play09:14

metrics and can everything that is in

play09:16

eks uh your farget can that be converted

play09:20

to AWS Lambda yeah maybe so I'm not sure

play09:24

no worries just study about this aspect

play09:27

because with the recession and all that

play09:30

lot of customers will be looking for

play09:31

cost optimization so need some

play09:34

Consulting there what what strategies

play09:36

are possible in the cost optimization

play09:38

area yeah sure sir okay now uh tell me a

play09:42

little bit about uh these public and

play09:45

private subnets are you aware what they

play09:47

are yeah yes sir actually um there will

play09:50

be public and private subnet we will be

play09:53

uh first of all there will be one V VPC

play09:55

is there VPC is one of the service by

play09:57

the AWS in the VPC what we do is we will

play10:00

create private Network inside the public

play10:03

Cloud so likewise if you want to deploy

play10:06

any other applications or um our own

play10:08

application we will first create VPC in

play10:11

that VPC we will have Public Sub private

play10:13

subet and mostly we will deploy our

play10:15

application inside the private sub

play10:17

because if you create if you create any

play10:19

instance in the private subet it won't

play10:21

get the public IP directly so there

play10:23

won't be um the connection to the public

play10:25

world to them our instance that's why we

play10:27

use public and private IP sorry public

play10:29

and private subnets and in private the

play10:33

instan which are in the private subnet

play10:34

don't have IP address if we want to

play10:36

connect to the public private instances

play10:38

we can use the Bastion host which is in

play10:39

the public IP or else um um that two

play10:43

that is very secure private IP we can do

play10:45

the knackles at the end at the subnet

play10:47

level it it will increase the security

play10:50

level of to our instances and there will

play10:52

be a n Gateway N Gateway used to traffic

play10:55

Route the traffic

play10:57

from I mean private trans to the outer

play11:00

internet I mean it just it do the

play11:02

network translation but how will the

play11:04

users will hit my website if if

play11:06

everything is private then how will they

play11:09

get access to the instance yes sir there

play11:11

will be n Gateway the N Gateway will

play11:13

have the static IP and static users can

play11:17

hit only static IP they they won't be

play11:19

exposed with the IP of our private

play11:22

instances and we will also configure the

play11:24

load balanc in the public subnet we

play11:26

don't configure load balance in the

play11:28

private subnet and we will configure

play11:30

Autos scaling group in the private

play11:31

subnet with the Autos scaling group in

play11:34

may come up I mean inst may vary

play11:36

according to the traffic and load load

play11:38

balancer will rout the traffic and if

play11:40

you want to go to that instance we can

play11:42

go to the elastic IP of the N Gateway so

play11:46

it can um it is it is fixed IP it yeah

play11:49

it is very secure compared to um

play11:51

deploying instance in the from basan

play11:53

host are there other options to connect

play11:55

to private subnet instances um I think

play11:58

that is is the only way we can connect

play12:00

if if are if they are in the same VPC we

play12:03

can connect create but I best in host is

play12:06

like it's costly for me I need to have

play12:08

another instance running all the time

play12:10

just to connect to my instances I don't

play12:12

want to pay for that machine okay then

play12:15

no IDE no worries just understand these

play12:19

there are additional options available

play12:21

so you can can study about them yeah uh

play12:24

Interviewer: You mentioned autoscaling: the machines get created automatically and destroyed automatically. So say my current situation is that I have two EC2 instances, and there is an autoscaling event because of high traffic. You configured your policies correctly, everything is fine, and it created ten instances. Then something happened in your environment: some eight machines crashed, or there was some error, and you are back to two machines. How would you investigate what happened to those eight machines? Because they are not available anymore.

Harish: Even though they are not available, CloudWatch watches all the logs, so we can go to CloudWatch and check what happened to those instances. Generally, if you configure an Auto Scaling group, it will make instances scale automatically based on the policy, but in such cases we can go to CloudWatch, check what happened to those instances, and debug based on that, so we can make sure it doesn't happen next time.

Interviewer: So what steps would you take to make sure the logs are available to you? I have my Docker application running: how do I get those logs? Or my web application is running on, say, Tomcat; whatever the situation, how do I get my application logs?

Harish: Whenever we are creating an instance, we get the option to enable CloudWatch logs, so we have to enable that; otherwise we won't get logs.

Interviewer: If you enable CloudWatch logs in AWS, how will CloudWatch know that I have a Dockerfile, that I have an application running on this port, and that my logs are going to a folder called /var/harish? How would CloudWatch know that it has to pick up logs from there? Every application is different, right?

Harish: Yes, yes sir, the logs are different.

Interviewer: Yeah, so how would you get those logs?

Harish: Yeah, I need to look into it.
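The missing piece in the answer above is the CloudWatch agent: CloudWatch only receives the log files you explicitly map in the agent's configuration on the instance. A minimal sketch of such a mapping (the file path and log group name are illustrative):

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/harish/app.log",
            "log_group_name": "my-app",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```

The agent tails each listed file into the named log group; for containers, Docker's `awslogs` log driver is another common route.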

Interviewer: So, have you created AWS Lambda serverless functions?

Harish: Yes sir, I created Lambda services.

Interviewer: Can you tell me when you needed a serverless function and what steps you took?

Harish: Generally there are two types of compute services: server-based compute and serverless. Server-based is EC2, and for serverless we have Lambda and Fargate. Coming to Lambda, we can use it for functions which don't require any server. I mean, if you want to perform a task and after that you want the instance to shut down automatically, at that time we can use serverless functions like Lambda. We can use Lambda for cost optimization, or any number of things, but mainly we use it for cost optimization. For cost optimization, say there are unused EBS snapshots: we can write a simple Lambda function for that. It will check all the instances, all the volumes, and all the snapshots, and we can write conditions there: if the snapshot is not connected to a volume, we can delete it; otherwise, we check whether the snapshot is connected to a volume but the volume is not connected to an instance, and then we can delete it too. Likewise we can make sure everything unused is deleted. That is one type of cost optimization. We can even write a Lambda function for S3: if none of the objects has been accessed for a certain number of days, we can delete the S3 bucket. I have seen this setup in my organization, but I never implemented any function myself.

Interviewer: You haven't implemented it yourself?

Harish: No, sir.

Interviewer: What sort of languages are possible, what platform would you choose if you had to write such a function?

Harish: Python.

Interviewer: I would suggest you write at least one Lambda function with your own hands. It could be any use case; the use case you explained is fine, but at least build it yourself.

Harish: Yeah, sure sir, I will definitely try.
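The snapshot-cleanup rule described above can be sketched as a pure decision function. In a real Lambda you would fill these collections from boto3 (`describe_snapshots`, `describe_volumes`, `describe_instances`) and call `delete_snapshot` on the result; the data shapes and names here are illustrative, not AWS's API:

```python
def snapshots_to_delete(snapshots, existing_volumes, attached_volumes):
    """Pick snapshot IDs that are safe to delete.

    snapshots:        dict of snapshot_id -> source volume_id (None if unknown)
    existing_volumes: set of volume IDs that still exist
    attached_volumes: set of volume IDs attached to some instance
    """
    doomed = []
    for snap_id, vol_id in snapshots.items():
        # Rule 1: the snapshot's volume no longer exists -> delete.
        if vol_id is None or vol_id not in existing_volumes:
            doomed.append(snap_id)
        # Rule 2: the volume exists but is attached to no instance -> delete.
        elif vol_id not in attached_volumes:
            doomed.append(snap_id)
    return doomed


if __name__ == "__main__":
    snaps = {"snap-1": "vol-a", "snap-2": "vol-b", "snap-3": None}
    print(snapshots_to_delete(snaps, {"vol-a", "vol-b"}, {"vol-a"}))
    # ['snap-2', 'snap-3']: vol-b is unattached, snap-3's volume is gone
```

A real handler would wrap this with pagination over the EC2 API responses and run on a schedule (for example via an EventBridge rule).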

Interviewer: Can you tell me, what is high availability and how do you achieve that in AWS?

Harish: High availability means we have to make sure our application is available all the time. To make sure we have high availability, we use the option of multiple regions and multiple availability zones. There are regions in AWS: AWS has its servers throughout the world, and every region has multiple availability zones. So when we deploy our application, we make sure we deploy in multiple availability zones; even when one of the availability zones goes down, we can make sure our application is up and running from another availability zone. Likewise we can ensure the high availability of our application.
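The multi-AZ idea above can be illustrated with a toy placement function: spread instances round-robin across zones, and most capacity survives a single-zone failure. Zone names and counts are illustrative:

```python
from itertools import cycle

def spread(instances, zones):
    """Assign instances to availability zones round-robin."""
    placement = {zone: [] for zone in zones}
    for instance, zone in zip(instances, cycle(zones)):
        placement[zone].append(instance)
    return placement

if __name__ == "__main__":
    zones = ["ap-south-1a", "ap-south-1b", "ap-south-1c"]
    placement = spread([f"i-{n}" for n in range(6)], zones)
    # Simulate losing one zone: 4 of the 6 instances keep serving.
    survivors = sum(len(v) for z, v in placement.items() if z != "ap-south-1a")
    print(survivors)  # 4
```

In AWS this spreading is what an Auto Scaling group does for you when you give it subnets in several availability zones.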

Interviewer: Okay, can you explain to me at least five security best practices that you have implemented in your current project?

Harish: Yeah, sure sir. For security of the instance, first we need to configure security groups. Security groups are the ones which control the incoming traffic and outgoing traffic; mostly we control only the inbound traffic, and we allow only the required traffic. For example, if we want only port 80, we just allow port 80, and anyway we allow the SSH connection. That is one type of security measure. One more security control we can apply at the subnet level, which is the network access control list (NACL): we can create certain rules for which IPs can come in, and deny rules are also available, so we can also deny what should not come to our instances. Even if something is allowed at the security group level, if you blocked it at the NACL level, it will be blocked at the NACL level only, so it is like an extra layer for our instances. One more thing is giving proper IAM permissions. If you don't have proper IAM controls, any user can have access to all the services, and we don't want that, so we give only the required permissions and attach proper policies. If you take the S3 bucket, we give policy documents, I mean bucket policies: who can access the bucket. Even though they have access to S3, we can make sure our bucket won't be deleted; we can create policies for a particular bucket. Likewise we can ensure our security in AWS.
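The NACL-before-security-group layering described above can be modeled as a tiny evaluation sketch: a deny (or a missing allow) at the stateless subnet NACL blocks traffic even when the instance's security group would allow it. The rule sets are illustrative:

```python
def nacl_allows(port, nacl_rules):
    """NACL rules are numbered; the first matching rule wins."""
    for _, action, rule_port in sorted(nacl_rules):
        if rule_port == port or rule_port == "*":
            return action == "allow"
    return False  # implicit deny when nothing matches

def sg_allows(port, allowed_ports):
    """Security groups have no deny rules: only an allow list."""
    return port in allowed_ports

def reaches_instance(port, nacl_rules, allowed_ports):
    # Traffic crosses the subnet boundary first, then hits the instance.
    return nacl_allows(port, nacl_rules) and sg_allows(port, allowed_ports)

if __name__ == "__main__":
    nacl = [(100, "allow", 80), (200, "deny", "*")]
    print(reaches_instance(80, nacl, {80, 22}))  # True
    print(reaches_instance(22, nacl, {80, 22}))  # False: blocked at the NACL
```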

about um the the security thing any any

play18:59

other security product that you have

play19:00

used V or no sir and one more thing if

play19:04

if you want security for the users we

play19:06

can use multiactor authentication that

play19:08

is one type of security I forgot to tell

play19:10

so even though our password username

play19:12

password got compromised and we don't

play19:14

let anyone to access our account because

play19:16

we can we are enabling the multiactor

play19:17

authentication with that they should

play19:20

have the device so that otherwise they

play19:22

can't log in any security product you

play19:24

have implemented in the pipeline uh no

play19:27

sir uh have you used sonar cube in the

play19:30

Harish: Yes sir, we use SonarQube in the pipeline. It is used to check the code quality: it checks the code quality, checks for any code bugs, and checks for code vulnerabilities. That is one way to check the code.

Interviewer: Can you give me two or three examples of what sort of information you get from a Sonar analysis?

Harish: Yeah. If there are any duplicated lines, it will tell us, and we will get what the problem is with those lines. If there are any bugs, and there is also the option of code smells: all those things we can check in the SonarQube analysis, and based on that we notify the developer. If there is a failure in any pipeline, first we try to debug it; if it is an intermittent issue, we just rerun the build; if it is related to the code according to SonarQube, we mention it to the developer and he will take care of it.
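The pass/fail decision described above is what SonarQube calls a quality gate. A toy sketch of the idea, comparing analysis metrics against thresholds (metric names and limits are illustrative; real gates are configured in the SonarQube server, not in code):

```python
# Illustrative quality-gate thresholds; real gates live in SonarQube itself.
GATE = {
    "bugs": 0,
    "vulnerabilities": 0,
    "duplicated_lines_pct": 3.0,
    "code_smells": 50,
}

def quality_gate_passes(metrics):
    """Fail the build if any metric exceeds its threshold."""
    return all(metrics.get(name, 0) <= limit for name, limit in GATE.items())

if __name__ == "__main__":
    print(quality_gate_passes({"bugs": 0, "duplicated_lines_pct": 1.2}))  # True
    print(quality_gate_passes({"vulnerabilities": 2}))  # False: gate violated
```

A CI step would fetch the real metrics from the SonarQube web API after analysis and break the pipeline when the gate fails.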

Interviewer: Okay, now in one scenario, the customer comes back to you and says the application is not performing well in the production environment; maybe your database is not responding properly. What options do we have in terms of scaling databases in AWS?

Harish: For databases we have the RDS service, so we can use RDS, and inside it we can use MySQL and MongoDB.

Interviewer: For example, you created an RDS cluster with, say, a t3.medium instance, and your customers are complaining. Now what options, what things do you have?

Harish: Sorry sir, no idea about that.

Interviewer: Okay, so here it is. I got some idea about your profile, and the feedback I want to give you is: you have a very high-level understanding of things, and you only understand the things that you have worked on. If it is Fargate or some pipeline, you know about it, but if we go anywhere outside, anything that you have not worked on, that's where you are struggling. If you haven't worked in a production environment, you don't know about it; you haven't touched base with your seniors in the same company; you haven't understood the complete process. Being in the same organization for more than two years, you should probably have a good understanding of that. If you have to work on cost optimization, or on security aspects in terms of DevSecOps, you don't understand them; similarly with autoscaling of databases. You are not aware of anything that you haven't worked on. Are you certified in any of the AWS exams?

any of the AWS

play21:58

exams no sir any any terraform or

play22:01

kubernetes certification h no sir so

play22:04

explore that opportunity also because

play22:06

that uh shows you multiple scenarios and

play22:10

then tells you how you would solve that

play22:12

in in real world so certifications are

play22:15

are a good way uh don't use any dumps

play22:18

don't use any shortcuts study about it

play22:20

they are especially AWS certification

play22:22

very well planned so uh so that's my

play22:24

honest uh suggestion you need to go

play22:27

beyond your role you need to go beyond

play22:30

uh you know look for other options

play22:31

especially when you are applying for

play22:32

companies like infosis uh service

play22:35

companies and the these are your target

play22:37

even in future maybe you apply for banks

play22:39

or somewhere you need to uh show your

play22:43

skills as a consultant you need to

play22:46

understand the situation you may not be

play22:48

providing solution right away but you

play22:49

should have the options in yes sir okay

play22:52

this is your problem we have three

play22:53

different problem solution for you uh I

play22:56

will try all these things I I'll do POC

play22:58

I'll do something and I'll I'll you know

play23:00

make sure that the best solution gets

play23:02

implemented in your case you are not

play23:04

aware of those options at all just being

play23:06

aware of options as a consultant is the

play23:09

primary skill okay right so I'm sure you

play23:13

will do well for this interview just go

play23:15

Interviewer: In the current state, don't be hesitant based on this interview. Look at these options: how you can improve your consulting skills, how you can suggest different options in the interview. Give some sort of options to the employer. If you have to do some scaling, talk about, say, horizontal scaling and vertical scaling. Ask them questions: "Can I bring downtime to the application?" Because if you have a downtime window, in the case of database scaling you can snapshot it and create a bigger instance, or you can go for the other scaling options available in RDS, which will be expensive but useful for your short-term scaling needs. So look at all aspects, and try to get certified.

Harish: Okay sir, understood.

Interviewer: Any questions you have that you want to clarify?

Harish: Yes sir. Consultant service means basically we need to give suggestions, right? I mean, from your answer I understood that we should be in a state where we can answer any situation, tackle any situation.

Interviewer: Yes, we have to. We can't solve every situation on the spot: just think, take your time, ask questions, maybe further questions, and then based on that try to come up with a solution. If you were a certified person, you would probably have gotten at least some ideas, because in the certifications they bring up such scenarios.

Harish: Scenarios, yeah.

Interviewer: Yeah. And try to build projects outside of your organization also; don't rely on the company to give you all the scenarios and everything. Try to use certain other resources also: some sort of complex projects, maybe cost optimization, DevSecOps, Kubernetes, or anything you can do outside the company. Because eventually you will be applying for jobs in future also, so having good experience will help.

Harish: Yeah, got it sir.

Interviewer: Anything else before we wrap up?

Harish: Yeah, everything is fine sir. I'll prepare my best.

Interviewer: Do share with us how you go with this current interview, and let's stay connected.

Harish: Yeah, sure sir. Thank you, sir.

Interviewer: Thanks, bye.

Harish: Bye, sir.
