DevOps Interview Questions and Answers | DevOps Tutorial | DevOps Training | Intellipaat

Hello everyone, welcome to Intellipaat.
Today in this session we are going to discuss top DevOps interview questions
that could be asked in your next DevOps interview. So let's go
ahead and get started with the first slide, which talks about
the agenda. Basically, we have divided all our interview questions under
these domains: continuous development, then virtualization and
containerization, continuous integration, configuration
management, continuous monitoring, and in the end continuous
testing. We are going to follow this sequence while discussing the
questions. So let's go ahead and start with the first domain, which is
continuous development, and see what our first question is. Our first
question asks us, "Can you explain the Git architecture?" Now this is a fairly
important question, the reason being that only if you understand the underlying basics of
how Git works will you be able to troubleshoot a problem when you face one
while working as a DevOps engineer in a company. All right, so let us
try to explain what Git basically is and what its architecture is. Now most of you
might know that Git is a distributed version control system. Now what is a
distributed version control system? Let us explain it using a diagram. In a
distributed version control system, your repository is
distributed among the people who are contributing to it,
and that is why it is called distributed. That means anyone who wants to
make a change to the code present in this repository has to first
copy the repository onto their local system, commit the changes to their local
copy of the repository, and only then can they push the code changes, feature
additions, and so on to the remote
repository, right? Nobody works directly on the remote repository, and
this is the main principle of how Git works.
And that is the reason it is called a distributed version control system, right?
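To make this concrete, here is a minimal sketch of that clone-commit-push flow, simulated entirely on one machine: a local bare repository stands in for the remote, and all the file and user names here are made up for the example.

```shell
# A bare repo plays the role of the remote (e.g. GitHub).
git init --bare remote.git

# 1. Copy the repository onto your "local system".
git clone remote.git local
cd local
git config user.email dev@example.com
git config user.name dev

# 2. Commit the change to the LOCAL repository first.
echo "new feature" > feature.txt
git add feature.txt
git commit -m "add feature"

# 3. Only then push it to the remote repository.
git push origin HEAD
```

Anyone who clones remote.git after this push will see feature.txt; nobody ever edits the remote repository directly.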
If you were to talk about the lifecycle, that is, the steps to follow if
somebody wants to upload or change some code present in our remote
repository: the first thing they have to do is pull the repository from the
remote system. Once they pull the repository it becomes their local
repository; they change whatever files they want to change, and once they are
done with the changes they do a git commit, that is, they
commit the files to the local repository. Once the files have been committed, they
have to be pushed to the remote repository so that they become
visible to anyone and everyone who pulls this project the next time. All
right? And this is how the whole Git architecture works. Now I hope you
understand the working of Git and what exactly its architecture
is. Moving forward, let's talk about the next question, which says, "In Git, how
can you revert a commit that has already been pushed and made public?"
Right, so basically you have made some changes in the code, committed those
changes to your local repository, and also pushed the changes to the
GitHub repository. Now, if you have a CI/CD pipeline in place, which basically
means that the moment you commit to Git it automatically takes the code and
deploys it on a server, if that is the kind of configuration you have,
then probably the code that you pushed has also been deployed on a
server, and that is when you come to your senses and realize
the code is wrong and you quickly have to change it so that
everything works again. Now, this is a very quick fix that probably
every DevOps engineer employs whenever there is a problem on the
production server. So what is that quick fix? The quick fix basically says:
whatever last commit was working perfectly, just roll back to that, so that
everything becomes normal until you have pushed new, fixed commits. That is the
basic intention behind the revert procedure. Now, how can you implement the
revert procedure? It can be implemented using the git revert command, and let me
show you a quick demo of the git revert command so you know how you can
implement it on your computer. All right, so this is my terminal, guys. I will SSH
into an AWS server, and I am in. So I have a GitHub repository that I created
for demo purposes. Like we discussed, the first stage in the lifecycle of Git
is to clone the repository, so we'll just copy its address,
then come here and type git clone
and then the address, okay? Now this project is basically a website that I
created. It's a small website that I created. Now, in order to see that website
we will have to put this code inside our Apache folder, so let us go inside the
Apache folder, which is present in this directory, all right? Now I'll do a quick
git clone along with the repository address and hit enter, and now if I do an ls
you can see there is a folder called DevOps IQ which has been created inside
this folder. I will go inside DevOps IQ and do an ls, and you can see there is one
more folder called DevOps IQ. All right, let's go inside that, and now if
I do an ls, these are the two files which are present inside my codebase, okay? Now,
if you want to see what this website actually looks like right now, I can just
go here and type in the IP address, then slash
DevOps IQ and slash DevOps IQ. All right, this is how the website looks for
now. Now I have to make some changes so that the background becomes
a little better. So what I can do is go back here, do a nano,
and change the code of the website; I have an image
in the images folder, so let me change it to 1.jpg. All right, let us save it, and
once you've saved it, the next thing you have to do is commit the changes to
your local repository, so let us do that. First I'll have to add the files with
git add, then I'll commit the changes, and the message will
say "changed background". All right, so the changes have been committed, and
now I'll push these changes to the remote repository; the username is hshar and the
password is this. Now, before making these changes, let me quickly show you the code
that you are currently going to see, before I push anything to this
repository. So you can see the code is images/2.jpg, and I've changed it
to 1.jpg, so let me hit enter and see if our code gets changed
over here. So now if I do a refresh, you can see the code has
been changed just now; it says the code was changed 44 seconds ago, awesome.
So now, because my code has been changed, if I go to this website and hit
enter, you can see the background is now changed; it is now a different background.
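As a quick aside, the git revert command we discussed can be tried safely in a throwaway repository; here is a minimal, self-contained sketch (the file contents are just stand-ins for this example):

```shell
# Set up a toy repo with two commits.
git init revert-demo
cd revert-demo
git config user.email dev@example.com
git config user.name dev

echo "images/2.jpg" > index.html
git add index.html
git commit -m "original background"

echo "images/1.jpg" > index.html
git commit -am "changed background"   # the "bad" commit

# Roll back the last commit: revert creates a NEW commit
# that undoes it, so history is preserved.
git revert --no-edit HEAD
cat index.html                        # back to images/2.jpg
```

After the revert you would push as usual with git push origin master, so the remote repository gets the rollback too.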
Now, I realize that this change I did is probably wrong, and I want to revert
to the older commit that was actually working. All right, so what I can do is
come back to my terminal and clear the screen. The first thing you do is a
git log, so now we get a log of all the commits that you have made to this
particular repository. Now, this is the commit that you just applied,
and this is what is causing you a problem, so just copy
the commit ID for it, then go ahead and do a git revert,
passing the ID which you just copied, and hit enter. Once
you do that, it will show you the information about this particular commit;
just review everything, and then you can see that the commit has been
reverted. So now I have not pushed the changes, but if I come back here and
hit enter, you can see the older website comes back, because the code has
now been changed. And if I want to make these changes on the remote repository
as well, all I have to do is git push origin master; it asks me for the
credentials, and the changes have been pushed. Now if I come here and
do a refresh, you can again see that the code
has changed back to 2.jpg, which was our earlier code before we made
the change. All right, so guys, this is how you can revert a
commit and a push that you have made to your remote repository as well. So
if you encounter any problem while working as a DevOps
engineer, remember this session, where I taught you how to revert
a particular commit. All right, so with that let's move on to our next question,
which says have you ever encountered failed deployments and how have you
handled them? Now see, any DevOps engineer in the world will have faced a
problem in which things did not go according to plan; it
absolutely happens. And if somebody asks you in a DevOps
interview whether you have ever made mistakes, you should not
say no just to impress them. If you have truly never made
mistakes, that's awesome, but I know that every engineer
working in the industry will have faced a problem at work that
comes down to a mistake they made while deploying things. All
right, now, the important takeaway from this kind of
learning should be that whatever mistake you make, you learn from it
and you never commit it again, and that is basically the intent behind this
question as well: the interviewer wants to know, if
you made mistakes, what did you learn from them? Okay, so if an
interviewer were to ask me this question: obviously I have
encountered failed deployments, and as for what I have learned from them, I'll just give
you the best practices that I think are viable for any DevOps engineer
working in the industry. So the first thing that everyone should follow, and
should make a thumb rule, is that you should automate code testing. Not only
does it save time, because your tester does not have to wait for
the developer to push the code and then check it; the tests can
check it in real time because you have written a script for the
application, and all the major tests which are pretty
common can be done using automated code testing. Now, like I said, it's not
only about saving time; it also removes the part where a human
error can occur. When you work with people, people make mistakes, but if you can write code
which tests each and every functionality, that code will never make
a mistake, and that is why you should always automate things as far as
possible. Like in my example, what happened was that there was a
commit to the repository which was basically a feature addition,
and the tester did not check all the functionalities of the code, only some
of the functionalities, which impacted the other components of my product, and
because of that, when it got pushed to production, disaster happened and
everything stopped working. And that was only because the testing did
not happen properly. So for all the critical processes of your website or
your product, you can create code which will test that website, and
that would basically close most of the doors to
mistakes. All right, the next thing is: you should always use Docker
for the same environment. This is basically the ideology behind DevOps:
those kinds of problems where a developer says
everything works fine on his system but the tester cannot run
the code on her computer; Docker basically solves that problem.
So use Docker as much as you can for the same-environment problem that
you might face. Then, we should always use microservices. Now, when you are working
in a company, it could be that the product is in the legacy phase and
hence is a monolithic kind of architecture, but you should never encourage
that kind of architecture. The reason being: say you did a bad commit or
a bad push to the production server; it should not impact the
other components of your product. If you have changed something in
search, and it's a bad commit or a bad push, the only functionality that should
be impacted should be your search functionality, and not the other
functionalities. That is the sole reason why we should use
microservices, that is, we should divide our application into different small
products which we deploy on servers, and these products should be
independent of each other. When you talk about the monolithic
architecture, all these components are coexisting with each other and have
dependencies on each other, but when you talk about a microservices kind of
architecture, you remove that dependency, so that even if one component fails it
does not impact the whole application. The fourth point is: you should always
overcome risks to avoid failures. Now, this basically means that if there is a
code change or a feature addition which works sometimes and
sometimes does not, and you are not able to figure out exactly why that
is happening, it is better to wait and troubleshoot it than to push it just to
meet your dates, because the latter can cause you a big problem
in production. When you are in a company
like Samsung or Ericsson, where each second of their website's
uptime brings in money, if your website is down for 30 seconds that
could amount to a huge loss, and that will be on you. So, for you to not
face that kind of situation, always be 100% sure before you make a change or a
release to the production server. All right, so this is the end of the domain
of continuous development; let's go ahead now and talk about virtualization and
containerization. All right, so let's start with the first question of this
domain, which says: what is the difference between virtualization and
containerization? Now, this is a very important question, guys, because most of
us get confused between virtualization and containerization. Let's see what
the differences between these two things are.
So, virtualization is nothing but installing a new operating
system on top of virtualized hardware. What does that mean? Basically, there
is software, like a hypervisor, which specializes in
virtualizing hardware. So if you have a server which has around 64 GB of RAM
and a thousand TB of hard disk space, with software like a hypervisor
you can take that capacity and divide it
among multiple operating systems: you can deploy multiple operating
systems on the same hardware by virtualizing it. Say you carved out 1 GB
of RAM from this whole system and around 100 GB of storage; the operating
system will think that it only has 1 GB of RAM and 100 GB of
storage space available to it, and it cannot go beyond that, the reason being
that it does not know of the hardware which is beyond the
hypervisor software. All right, so in virtualization you basically have
a hypervisor which sits on top of your operating system
and virtualizes the hardware beneath it. Then you have guest operating
systems: once you have virtualized the hardware, you install
guest operating systems on top of that. The best example of this
would be VirtualBox: you install VirtualBox and then you can install
operating systems on it with whatever specs you decide.
Once you have installed the guest operating systems, on top of them there
are the binaries and the libraries that you download or that came with the
operating system, and on top of that you have the applications which run.
So the key takeaway for virtualization should be that
the whole operating system is installed: from
the kernel level to the application level, everything is fresh, everything is
new. Now let's talk about containerization. The thing in
containerization is that on top of the host operating system you install a
software called the container engine. Now, the container engine is just like any
other software; like you have a hypervisor, you have a container engine. But
the container engine does not involve installing a whole operating system. For
example, if you want to run a container for Ubuntu on, say, a Mac machine, you can
do that, but in that container
you will have only the bare minimum libraries that amount to the
Ubuntu operating system minus the kernel. So in a container you do not have
a kernel; the kernel used is always that of the host operating system, and
this is the main difference between virtualization and containerization:
in virtualization you have a separate kernel present for the virtual operating
system, but in containerization you do not have that, and that is the reason
that containers are very small. They have the bare minimum libraries required for
the container to behave as a particular operating system, but the container
itself does not contain any operating system; it runs on the same
kernel on which the host operating system resides. All right, and this is
basically the main difference between virtualization and containerization.
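One quick way to convince yourself of this kernel-sharing point, assuming you have Docker installed and the ubuntu image available, is to compare kernel versions; this is a sketch, not part of the original demo:

```shell
# Kernel version reported on the host.
uname -r

# Kernel version reported inside an Ubuntu container:
# it prints the SAME string, because the container has no
# kernel of its own and reuses the host's.
docker run --rm ubuntu uname -r

# A VirtualBox/hypervisor guest, by contrast, would report
# whatever kernel was installed inside that guest OS.
```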
Moving forward, the next question says: without using Docker, that is,
without using Docker to get into a container, can we see the processes that
are running inside a container of the Docker container engine? All right, so
this basically relates to the same fact: if you can see the
processes of a container, which are running inside the Docker container
engine, from the outside, that means the
processes are running on the same kernel as the host operating system.
The processes that are running in the Docker container engine would be
basically an addition to whatever is running on the host operating system,
and you can see them using the ps aux command. For the host
operating system, a container process is just like any other software or process
that it has to run, but the container basically thinks that it is running
inside its own operating system, which it actually is not. So, can you see the
processes? The answer is yes, you can see the processes which are running in a
Docker container, and how can you see them?
Let me demonstrate it to you. Okay, so we have come back to our
AWS machine, so let me clear the screen. All right, so if I do a docker ps right now, you
can see that there are no containers running on this system as of
now. Now what I'll do is run a container for Ubuntu, so I'll do a docker
run -it, then -d, and then ubuntu. All right, this ran a container for me,
and if I do docker ps now, you can see that there is a container running which
is basically of the ubuntu image. So if I go inside this container now, I'll do
a docker exec -it, then, sorry, I forgot
the container ID, so the container ID and then bash. If I do that, I am inside the
container, and there is no process running inside this container as of now.
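Collected in one place, the commands used in this demo so far look like this; container IDs will differ on your machine, and this assumes the Docker daemon is running:

```shell
docker run -it -d ubuntu              # start an Ubuntu container in the background
docker ps                             # list running containers, note the CONTAINER ID
docker exec -it <container-id> bash   # open a shell inside the container
ps aux                                # inside the container: almost nothing is running
```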
Now, let me duplicate this: let me quickly do an
SSH into the same server again, so that I am on the host. Okay, great. So if I do a ps
aux, these are all the processes which are running on the operating system
right now. But let us make it a little simpler: let me
see all the processes which have the word watch in them, so let me make it
clearer for you. These are the processes which have the watch keyword
inside of them, okay? So there are basically four processes
running which have the keyword watch inside of them. Now what I am going to do
inside this container is launch a watch process. What is that watch
process? It is basically going to watch a particular command at a
set interval of time, and what is that command? Let's
say the ls -l command. So what is it doing? It is keeping a watch on the
command ls -l every one second; you can see the time over here, it is
incrementing every second, and basically it is keeping a watch on all these files
which are there inside the container, continuously. Okay, now again, this is
the dollar prompt, which says we are outside the container right now. Now, if I
again run the same command, that is, I again search for processes which have the
word watch in them, I can actually see that there is a new process running
over here, and this process is running inside the container, which I am able to
see from the host operating system level. So the host operating system is
basically treating this particular process as if it were running
on its own system; because the container and the host operating system
are sharing the same kernel, the host operating system takes this
process as if it were running inside of it. But if we look closely,
this watch command is running inside the container. Let
me just quickly stop it; you can see we are still inside the
container and we have stopped the watch command, and if I go here and
refresh, you can see that this watch command is now gone, whereas it was being
shown over here before. And this is exactly what we wanted: we basically
wanted to see a process which was running inside a container from outside
the container, that is, from the host operating system, and that is exactly
what we just did. All right, so for the question whether, without using Docker,
you can see the processes that are running inside a container:
the answer is yes, you can. All right, so the next question is: what is
the Dockerfile used for? What do you basically use a Dockerfile for? So, a
Dockerfile is nothing but a text document used to create an
image, starting from a base image and adding some files into it. It's
basically like a script that you run in Linux which can do all the required
things for you. For example, I might need an Apache image, and I want my
website to be put inside the /var/www/html folder of that
Apache container. Now, if I were to do that without a Dockerfile,
I would have to first pull the Apache image, so I would probably type
docker run -it -d and then the Apache image; once I have done that, I would exec into
the container, go to the directory /var/www/html, probably
do a git clone of the website that I want, and then my website would be
available in that container and I would be able to use it. That is one
way. The second way is that I can create a Dockerfile which would build this
image for me, without me having to do all these manual things
which I just told you about. All right, so let's see how we can do that. Let me just
exit this container, and let me remove the containers which were running
on my system. Okay, if I do a docker ps now, it's clean. Now what I want to do is run
a particular container: docker run -it -p, I want to basically map the
host's port 83 to this container's port 80, and I want to run it as a daemon so that it
runs in the background, and there it is, okay? So I have the container running,
which is this, and what I want to do is copy the website into
this container. So let me do a docker exec into this container: -it,
the container ID, and then bash. I want to
go inside this particular folder, so if I do an ls
over here, you can see that there is an index.html and an index.php.
So it is exposed on port 83, which basically means
that if I go to a browser and visit this IP address on port 83, I should
be able to see this Apache page, and this is basically the container which I just
ran over here, okay? What I want to do is copy the code of my website inside
this particular directory. Now let's see how we can do that.
So let me just exit this container, do a docker ps, and do a docker stop
on this particular container. What would this do? Basically, if my Apache
was running over here, it should stop once I have stopped this container. Okay,
so it's stopped, and if I do a refresh over here, you can see "this site can't
be reached"; this is exactly what we want. Okay, now let me do a git clone of my
GitHub repository; the clone is done, all right, awesome. Now I'll go inside
this folder, and basically I want to copy this particular folder inside the
container. So for doing that, let us create a Dockerfile, and what
I want to say is: in the image hshar/webapp, I want to add the folder DevOps
IQ, and where do I want to add it inside the container? I want to add it in this
particular directory, inside DevOps IQ. Okay, fair enough, and that is it, that
is all you have to do. I'll just come out
of this editor, and I'll now do a docker build of this Dockerfile with the name
test. So it says it successfully built an image, and it has been tagged as test.
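For reference, the Dockerfile written in this demo amounts to something like the following; the base image hshar/webapp and the paths are the ones from this example, and the folder is assumed to be named DevOpsIQ on disk:

```dockerfile
# Start from the Apache-based image used in this demo.
FROM hshar/webapp

# Copy the website folder from the build context into
# Apache's document root inside the container.
ADD DevOpsIQ /var/www/html/DevOpsIQ
```

It is built and run with docker build -t test . and docker run -it -d -p 84:80 test, matching the next step of the demo.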
Great, now I'll run this image: docker run -it -p, I run it on port
84, run it as a daemon, and run the image, okay? Great, so if I go to port 84
now, let's see if the container is working first;
yes, the container is working. Now if I go inside DevOps IQ, what do I see? Great,
I can see the website. So basically my website is now available inside the
container, simply by writing a Dockerfile to do that, and this is exactly what
we wanted. Awesome, guys. So, what is the Dockerfile used for? It is basically used
for creating an image without having to do all the manual work of adding your
files and everything. All right, now, once this image of yours is ready, you can
push it to Docker Hub, and anybody in the world can download it and basically
use your website on their local system. Great, now the next question is: explain
container orchestration. Okay, so till now we have seen that we
can deploy a container, we can use it, we can probably deploy an application on it,
and we can use it in the web browser. But it is not that simple when we
talk about a website like Amazon or Google; it has a lot
of components to it. For example, on Amazon you would see that you have a
comment section; then on the home page you see that there are a lot of products,
which have prices and ratings. Now, each and every component, the prices, the
ratings, the name of the product, the image of the product, the comment section,
each and every one is basically a microservice: a small part of an
application which is running independently of all the other parts of
the website. And all of this is possible using containers: basically,
what they would have done is run each and every component inside
a container. Now, the problem here is that when you have a website like Amazon,
you will be dealing with a minimum of around 10 or 11
containers for one particular copy, or one particular instance,
of that website. When you are dealing with ten or eleven containers, these
containers have to work in conjunction with each other; they should be
in sync with each other; they should be able to communicate with each other;
and we should also be able to scale a
particular container in case it goes down. For example, say the comment-section
container goes down for some reason; we
have to keep a watch on it, and we have to redeploy it if it goes down,
and all of these activities which I just told you about come under container
orchestration. If you were to manually deploy these containers on
Docker, you would have to keep a manual check on all of them; but
imagine when you are dealing with thousands or tens of thousands of containers:
in those kinds of scenarios you need container orchestration. Now,
container orchestration can be done using various software: you have a
software called Kubernetes, and before that there was
a software called Docker Swarm, which basically made life easier
by doing all the manual work for us. That is, it will check the containers' health;
it can scale them in case they become unhealthy; it can also notify
the administrators by email in case something happens; it can
also run a monitoring software for you, which basically gives you a
report on the health status of all the containers which are running under
it. This is a very small part of what a container
orchestration tool can do. So basically, if you were to sum up what
container orchestration is: like I said, when you work with multiple
containers you have to take care of a lot of things, and that is possible
using container orchestration tools like Kubernetes and Docker Swarm. Okay, so
the next question is: what is the difference between Docker Swarm and
Kubernetes? Now, they are both container orchestration tools, we just saw that, but
if I were to choose between Kubernetes and Docker Swarm, which one
should I choose? All right, so let's look at the differences between each one of
them so the first difference which is probably the most important difference
or probably I’ll say is the deciding factor whether you know you should go
ahead with this tool given that you have a short deadline and you have to deploy
a project so installing docker swarm is very easy it comes prepackaged with the
docker software so if you are installed dhoka dhoka swarm is already installed
on your system you don’t have to worry about anything on the other hand
darling Cuban ”tis is a very tough job right there are a lot of dependencies
for Cuban at ease you’ll have to see the system you’ll have to see the operating
system on which it is running and a host of other things right it has a lot of
dependencies and hence it is very tough to install but the moment you install it
it becomes a very helpful that as Humanities becomes very helpful because
of the features that it offers which brings us to our second point docker
swamp is faster than Cuban it is reason being that it has less features than
Cuban at ease and therefore making it a very light software and hence faster
than Cuban at ease so if you want to use the Orcas form you should be reading
about what docker swarm does not offer and what cuban ities offers and if you
feel you do not need all the features that cuban ities is offering you can go
ahead with Dorcas form and deploy your application in a faster manner but like
I said Cuban ”tis it is is complex and does a lot of services and features
because of which it is its deployments are a little slower when we compare it
to Dockers one third point which is most important point is docker swarm does not
give you the functionality of water scaling meaning if your containers go
down or if your containers are basically performing at their peak capacity there
is no option in Dhaka swamp to scale those containers on the other hand
because of cuban it is monitoring services and the host of other features
you have that option of providing auto scaling to your containers which
basically means you can automatically scale the containers up and down as and
when they are required and this is an amazing thing that cuban ities handles
Alright guys, so those were the questions around the domain of virtualization and containerization. Moving ahead, our next domain is continuous integration, so let's shine a light on what continuous integration is. The first question itself is: what is continuous integration? Continuous integration is basically a development practice, or I'd say a stage, which connects all the other stages of the DevOps lifecycle. For example, you push your code to Git, like in the example we took; when you push the code to Git, you might have provisions which allow that, the moment the code is pushed onto the remote repository, it automatically gets deployed on the servers as well. If that is the case, that would be possible using integration tools that integrate your Git repository with your remote server, and that is exactly what Jenkins does: it's a continuous integration tool which helps us integrate the different DevOps lifecycle stages together so that they work like one organism. This is what continuous integration means. So, because
we discussed what continuous integration is, the next question says: create a CI/CD pipeline using Git and Jenkins to deploy a website on every commit to the main branch. So on every push that you make to the remote repository, the code should automatically get deployed on a remote server. Alright, this is something that we're going to do just now, but before going ahead, let's see what the architecture for this kind of a thing is. This is how the whole thing is going to work: the developer commits the code to his GitHub repository; GitHub, once it sees a change in the branch that we mentioned, is going to trigger Jenkins, which in turn will take the website from the GitHub repository and push it onto the build server on which we want the website to be deployed. Alright, sounds awesome!
great now let’s go ahead and do this demo so for that we will have to SSH
into our server so let us do that okay so I’m in now let me clear the screen so
first let’s check if I Jenkins is running on this so so let me check the
status for Jenkins so if I do a service Jenkins status I can see that the
Jenkins service is active awesome so I’ll just go here and I’ll go to the
Jenkins website which is basically available on 8080
alright so I’ll enter my credentials and this is how the dashboard for Jenkins
look like now our questioners or our aim is to create a job which basically will
push a website that we are uploading to get up on a particular server alright so
let’s create a new job first so let’s call our job as demo job ok and let’s
name it as a freestyle project and click OK
So this will create a job in Jenkins for us. Alright, our job has now been created. What we want to do is take code from GitHub, so I'll have to specify the GitHub repository over here, and similarly I will have to say that I want to trigger the build the moment anything is pushed to my remote repository. Alright, and this should be it. Great, so I've mentioned that anything that is pushed onto my master branch should trigger a build on Jenkins. Okay, and what set of commands do I want to run once the build is triggered? First I want to remove all the containers which are running inside my system, so I'm going to clean up; for that I'll say sudo docker rm -f followed by the list of all container IDs, which is going to clean all the containers currently running on the system. Once this is done, I want to build my container which is going to have my website.
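Collected in one place, the build steps we are configuring might sit in the job's "Execute shell" box roughly like this. This is a fragment of the job configuration, not something shown verbatim on screen: the workspace path, image name, and port are assumptions based on what the demo does.

```shell
# Jenkins "Execute shell" build step (reconstruction of the demo's job config):
# 1. remove every container currently on the box (|| true in case none exist)
sudo docker rm -f $(sudo docker ps -a -q) || true
# 2. build an image from the Dockerfile pulled into this job's workspace
sudo docker build -t jenkins /var/lib/jenkins/workspace/demo-job
# 3. run the freshly built image, detached, exposing the site on port 87
#    (the demo later switches to 82 because browsers block 87 as unsafe)
sudo docker run -itd -p 87:80 jenkins
```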
Alright, now how can we do that? For that I'll have to push the code to my GitHub repository, which will have the Dockerfile as well. Okay, so we created a Dockerfile; here it is, our Dockerfile created in the DevOps IQ folder which was there in my home directory. Now what I want to do is push it. So what is there inside this Dockerfile? We saw that we could create a Dockerfile by writing something like this, and this would basically create an image with our code which is there on GitHub.
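That "something like this" is only flashed on screen, so here is a rough reconstruction, under the assumption that the image installs Apache and clones the site from GitHub; the base image, repository URL, and paths are all guesses.

```dockerfile
# Hypothetical reconstruction of the demo's Dockerfile:
# an Apache container that pulls the website code from GitHub into the web root.
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y apache2 git
# repository URL is a placeholder for the demo's DevOps IQ repo
RUN git clone https://github.com/<your-user>/devopsIQ.git /var/www/html/devopsIQ
EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
```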
Alright, so we'll just push this code to the remote repository, with the commit message that we have added the Dockerfile, and push it to our remote repo. Great, it has been pushed, and now if I just go here and check whether my changes have come through, let me quickly refresh: yes, I have the Dockerfile in my GitHub repository right now, committed 42 seconds ago. Awesome. Great, so now what I'll do is
I'll come to my Jenkins and say that the build step should do a sudo docker build of the Dockerfile. Now where is that Dockerfile? It will basically be downloaded into the Jenkins workspace, that is /var/lib/jenkins/workspace, followed by the name of the job, which is demo job, and that is it. Inside this I will have my Dockerfile, and Jenkins will build it and name the image as, say, jenkins. In the next step I'll do a sudo docker run -it, then -p, and say I want to deploy it on, say, port 87. Okay, and what do I want to deploy? The jenkins image we just built. So this should basically do all the stuff: in the first command we're removing any container which is running on the system; in the second command we're going to build the Dockerfile which is available in this workspace, and this workspace will basically have my GitHub project from the link I've specified over here, so it will just pull the project and save it in the workspace of demo job, and somewhere inside it there is the Dockerfile, so we are building that Dockerfile and naming the created image jenkins; and then we are running this image and exposing it on port 87. Okay, so let's save it. Awesome. Now what we have to do
is configure a webhook. When I say a webhook, basically you want your GitHub to interact with your Jenkins whenever there is a push to a particular repository. So in your repository go to Settings and then go to Webhooks. This is a webhook that I created for my Jenkins; let me create it again for you. All you have to do is click on Add webhook and enter the URL for your Jenkins over here, so in my case it is this; I'll just enter it, followed by the keyword github-webhook, and that is it. Once you specify that, just go down and click on Add webhook, and this should send a request to Jenkins; if everything goes well it will say the last delivery was successful. Okay, so any changes that I make to my GitHub should now trigger a build over here. Now let me delete this other job, because I think even that job gets triggered when any changes are made to my GitHub; so let me delete that project. Okay, great, so I just have this one job now. Awesome.
Now let us see how it actually works. What I'm going to do is come back to my terminal, do an ls, go inside DevOps IQ, and make some changes in the code. So I'll go into nano index.html; the first thing that I do is change the title of the website, so I call it Jenkins test website, and I change the image from 2 to 1.jpg, and that is it. Let us see what will happen if I just push this website onto my server. So, first I have to add these changed files to my repository with git add, then git commit, and let me label this commit as test push. Okay, done. Now let's push this to the remote repository, git push origin master, and give the credentials. Awesome. Now if you wait here, it should start a job, and as you can see there is a job queued for demo job, which got automatically triggered by my GitHub. Okay, let me refresh this. Okay, so the moment it gives you a red ball, that basically means that your job has failed. So let's see what has just happened, why our job failed. If you go here you can see the console output, just like this. Okay, so basically we forgot to add a sudo here, and that is causing us a problem. We can fix this by just going down and adding a sudo here; save it, and again we'll have to change the code. Let's call it Jenkins test2 website this time.
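For reference, the add, commit, push cycle we keep repeating can be sketched like this; a hypothetical scratch repository stands in for the real DevOps IQ one, and the push and revert lines are left commented because they need a real remote.

```shell
cd "$(mktemp -d)"          # hypothetical scratch repo instead of DevOps IQ
git init -q .
echo '<title>Jenkins test website</title>' > index.html
git add index.html         # stage the changed file
git -c user.name=demo -c user.email=demo@example.com commit -q -m "test push"
git log --oneline          # the commit we would push (and could later revert)
# git push origin master   # the push that fires the GitHub webhook
# git revert <commit-hash> # undoes a commit, as we do at the end of the demo
```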
We'll do a Ctrl+X, Y, and now let's add our files to the local repository with git add, commit as test push 2, and push this to our master; I'll enter the credentials, and this should be it. Okay, let's see: our second job got triggered automatically, and it gives us a blue ball now; blue means that your job was executed successfully. So let's check what happened. We were deploying it on some port; I'm not sure which one we specified, so let's check: the port is 87. Okay, so let's go to port 87. It's giving us an unsafe port error. For our troubleshooting, let's check if the container is running: yes, the container is running on port 87, but the browser refuses 87 as an unsafe port (Chrome blocks a list of restricted ports), so what we can do is change it to, say, 82, and now let's just try to build the job from here; we'll click on Build Now. The job has been completed, and the port was 82: yes, Apache is working. Now let's try going inside the DevOps IQ folder, and there you go, you have your website with the title which you pushed to GitHub. Now, one more time for testing purposes, let us push our code once more and see what happens. I will say that this website is the test 3 website, and I change the image as well, to 2.jpg. Okay, save it, do a git add, commit it as test push 3, and now git push origin master and enter the credentials. Great, now let's check what will happen. Our build has been started and it has been completed. Great, so if I refresh, it now says Jenkins test3 website and the background also has been changed. So congratulations guys, we have successfully completed the demo.
Basically, if you change anything in your GitHub repository, the website automatically gets deployed on your build server. And on top of this, just to make it more interesting, what we can do is a git log, and we can revert the commit that we saw earlier. Okay, so let's do a git revert, paste the commit hash, agree to everything, then push to master and enter the credentials. Everything has been pushed, the job is getting triggered, the job is completed, and if I go here again, my website got reverted to the previous version. Awesome guys, so we have completed a demo which asked us to create a CI/CD pipeline using Git and Jenkins to deploy a website on every commit to the main branch, and we've done it successfully. Awesome, let's move on to our next domain, which talks about configuration management and continuous monitoring. So what is
configuration management and what is continuous monitoring? Let's understand. Our first question is: what is the difference between Ansible, Chef, and Puppet? Before getting to the differences: these are basically configuration management tools. What is configuration management? If you have, say, around 200 servers and you want to install a particular software on each of these servers, what will you do? One way is to go to each and every one of these servers and run a script, which will install the software on that one server only. The other way is to install a configuration management software, using which you can deploy or install all these softwares, or control the configuration of all these servers, from one central place, and that is exactly what configuration management means. Now, in configuration management you have many tools like Ansible, Chef, Puppet, etc., and these are the three top tools used in the industry. So the question is: what is the difference between Ansible, Chef, and Puppet? Let's go ahead and see their
differences. Alright, let's first talk about Ansible. Ansible is very easy to learn because it is based on Python, so you don't have to sweat much on learning the commands for Ansible; if you know Python, Ansible is going to be a cakewalk for you. It is preferred for environments which are designed to scale rapidly. Basically, with Ansible you don't have to install any Ansible client software on the systems on which you want to deploy the configuration; Ansible just has to be installed on the master, and that is it, no other setup required. You can directly control the configuration of the client server, given you have access to it. So it offers simplified orchestration, the reason being, like I just told you, that you don't have to worry about installing software on the client machines; Ansible can stand alone and take care of all the complications that come up when you are deploying configurations without installing a particular software on the clients. A disadvantage of Ansible is that it has a very underdeveloped GUI, that is, you mostly only get the CLI to work with, and it has limited features when we compare it with Puppet and Chef. Now
let's talk about Chef. How is Chef different from Ansible? It is Ruby based, and hence it's difficult to learn; Ruby is a language that not many people are acquainted with, and so people might find it difficult to get versed with the commands of Chef. The initial setup is complicated when I compare it to Ansible: the Ansible setup was very easy because I just had to install Ansible on the host machine, and on the client machine I didn't have to install any software; with Chef you have to do that, and hence it becomes a little complicated. But once all the setup is done, Chef is very stable; since it's a community product that has been well contributed to, it's a very stable product and it offers you resiliency. So of course, when you're working on production servers, working with Chef would probably be a better idea than Ansible, because Ansible does not have as great a community as Chef when you compare the two. And Chef is the most flexible solution for OS and middleware management; middleware basically means the software management part. Chef proves to be a great choice for configuration management, the reason being it is very reliable and very mature: it was probably among the first configuration management tools to come out, and because the community has contributed a lot to this project, it is very mature in its development as well. Now let's talk about Puppet. Puppet can be tough for people who are
beginners in the DevOps world, because it uses its own language called Puppet DSL. The setup part is smooth when you compare it with Chef, but it's a little harder than Ansible, because when you're using Puppet you use a master and an agent as well, so you will have to install the Puppet agent on the client machine, and only then will Puppet be able to interact with that client. Now, it has strong support for automation, so if you are planning some configuration management that you want to automate, Puppet is very capable there; you can easily do the automation part using Puppet. But it is not well suited for a rapidly scaling deployment: if you have, say, around 50 or 60 servers and you plan to add more in the future, Puppet would probably not be the right choice for that kind of architecture. It is good to have when you have a stable infrastructure where you're not adding servers now and then; but if you are working on cloud and you do not know the capacity that you would be running, Puppet would probably not be a good choice to manage the configuration on your clients. Okay, our next question is: what is the difference between asset
management and configuration management? Asset management basically deals with resources and hardware, which we have to plan so that our IT workforce can work with maximum efficiency: planning your hardware, how many resources a particular team might need, and giving the right resources to the right people is what asset management covers. Configuration management, on the other hand, concerns not the hardware but the software component: what software is required by a particular employee of a team, and what software is required by another person. Rather than taking the blanket approach of installing every software on every machine, which should not be done because some softwares are licensed, configuration management basically means installing the right software on the right system, on which a particular person or a particular workload is going to run. So our next question is:
what are NRPE plugins in Nagios? NRPE plugins are basically extensions to Nagios which help you monitor the local resources of the client machines, so you don't have to SSH into the client machines to see how much memory or CPU is being used. Nagios being a monitoring tool, you just have to install the NRPE extension on the client machine and it will give you real-time data of the resources being consumed on that particular client machine. And obviously, when you are working in a production environment you will be monitoring multiple machines, and with the NRPE plugins installed on each of those machines you can easily monitor their resources from one central place, and that is exactly what NRPE
plugins give us. Our next question is: what is the difference between an active check and a passive check in Nagios? In Nagios, if the monitoring data that you're getting from your clients is being delivered by a Nagios agent, it is called an active check, the reason being that Nagios is actively involved in collecting all the data from your clients. But when you're dealing with systems which do not allow you to install any other software, or where the software itself can generate monitoring logs, what happens is that, rather than Nagios, the software component pushes the logs to the Nagios master, which can take those logs and create a graph or the metric for you in the dashboard. So basically, using those logs which are being published by some other software, Nagios will create a report of the health monitoring of your client systems, and that is why it is called a passive check: Nagios is not involved on the client side at all, it is the software's own services which push the logs to the Nagios master, and hence it's called a passive check. Now, if you talk about the architecture, or the lifecycle of how this actually works: on the master itself, the logs which are published are actually published to a queue, and whether it's an active check or a passive check, the logs have to be published to that queue so that the Nagios master can pick them up and create the monitoring metric which is required. Alright.
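To make the passive side concrete: passive results typically reach Nagios through its external command file, with a third-party tool writing one line per result. The host name, service name, and file path below are assumptions; only the line format is standard.

```shell
# Hypothetical third-party submitter for a passive check result.
# Format: [timestamp] PROCESS_SERVICE_CHECK_RESULT;<host>;<service>;<status>;<output>
now=$(date +%s)
line="[$now] PROCESS_SERVICE_CHECK_RESULT;web01;Disk Usage;0;OK - 40% used"
echo "$line"
# In a real setup the line is appended to the external command file instead, e.g.:
#   echo "$line" >> /usr/local/nagios/var/rw/nagios.cmd
```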
So, in a passive check and in an active check, the queue is going to be there either way; the only difference is in the agents: in an active check the Nagios agents are involved, but in a passive check third-party software tools are involved, which publish the logs to the Nagios master. Alright, so our next
question says: create an Ansible playbook to deploy Apache on a client server. Basically we have to do configuration management: without going inside the client system, we have to install a particular piece of software on it. Okay, so let me quickly do an SSH into my AWS machine. What I'm going to do is this: I have a slave machine which I have already configured, which can interact with my master; that is, if I do an Ansible ping call, you can see that there is a server1 that I have configured, which has successfully responded to my master's request. Okay, now let me show you the server which is being managed: this is the server which is configured with my master, this is my client machine, and on this machine I'll have to install Apache. If I go right now to the IP address of this machine, it says connection refused, the reason being
that there is nothing installed on the server, or rather, there is no Apache software installed on this particular server right now. Alright, so let's install Apache. To do that, you will have to write a playbook. Now what is a playbook? A playbook looks something like this: it's basically a YAML file that you'll have to create, and I have created one. Where you want to install the Apache software is the part where you will have to specify the hosts; basically, my client machine is part of a group called servers that I've created, so the hosts are "servers". And where can you actually specify which group your machine is a part of? You can specify that in /etc/ansible/hosts. Okay, so as you can see over here, this is the group name, that is "servers", and inside it I have specified a server1 client machine which has this IP address; this is the IP address of my slave, and if I compare it with my slave, it matches. This has been configured over here, so I can refer to my server1 as "servers", or I can refer to it as "server1". So if I clear the screen over here, I can say ansible server1 -m ping and it will reach out to my server1, or I can target "servers" as well, because server1 is part of the group servers. Okay, so this is how it works. Now I want to
install Apache, and for installing Apache I have to write a playbook which looks like this. Basically it's a YAML file, so you start with the three dashes, and then you specify hosts: I've specified that every machine which is inside the servers group should have Apache installed on it. And what is the task? "install apache", which is basically a name, you can specify anything over here; then I've specified apt, because I want to use the apt package manager to install Apache at the latest version. Okay, now what I will do is type ansible-playbook and then apache.yml, and hit Enter; now it has started to install everything: it is running on the servers group, it is gathering the facts, it saw that it is able to communicate with server1, and now it is accomplishing the task of installing Apache.
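The inventory and playbook we just walked through would look roughly like this. The group name, host alias, and task mirror the demo; the IP is a placeholder (it is only flashed on screen), and the become line is an assumption, added because apt needs root.

```yaml
# /etc/ansible/hosts -- the inventory (IP shown is a placeholder):
#   [servers]
#   server1 ansible_host=<slave-ip>

# apache.yml -- run with: ansible-playbook apache.yml
---
- hosts: servers
  become: yes          # escalate privileges, since apt needs root
  tasks:
    - name: install apache
      apt:
        name: apache2
        state: latest
```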
Alright, so it has been done successfully. If I go to my Chrome browser now and refresh the address, you can see that Apache is installed on this server automatically; I didn't have to SSH into the server at all, it all happened automatically. And if I had, say, five or six computers running on AWS, and I wanted to install the software on them using Ansible, it would have been done the same way; it's only that in the servers group I would have specified more IP addresses which my Ansible master could talk to. Okay, so this was the task of deploying an Ansible playbook on a client server without doing an SSH into that client server, doing it all from a central location. Alright, so this is
done. Now let's move on to our next domain, which is continuous testing. What is continuous testing? We talked about continuous development, which is done using GitHub; we talked about continuous integration, which is possible using Jenkins; we talked about continuous configuration management, which can be done using Ansible; and next in line we have continuous testing. So once the code has been integrated with Jenkins and deployed on a server, the next thing is automated testing, which we discussed in the best practices before, and it can be done using a tool called Selenium, a software called Selenium WebDriver. The first question is: list out the technical challenges with Selenium. The Selenium tool is widely used for automated testing, but what are the problems that you get
with Selenium? If you're using Selenium, mobile testing cannot be done: if you have developed an application for mobile, you cannot test it using Selenium. The reporting capabilities of Selenium are very limited. If your web application deals with pop-up windows, Selenium would not be able to recognize those pop-up windows and work with them. And again, Selenium is only limited to web applications, so if you have an application that runs on the desktop, say a software that you have designed, you cannot test that software using Selenium; Selenium is only for those applications which run inside a browser. And if you want to check whether there is some image in your web page, and that image should have some particular content, that is a little difficult to implement in Selenium. It is not impossible: you'll have to import some libraries and other things like that, but natively Selenium does not support image testing, so you'll have to work around it by importing libraries which can do it for you. So our next
question is: what is the difference between the verify and assert commands? Let us see the differences. If you're using assert in Selenium and the command fails, the whole execution comes to a halt, whereas with verify it does not come to a halt; it keeps on executing the rest of the lines written in the code. Now, how is it helpful to put execution on halt whenever an error occurs on a particular line? It is helpful when you're dealing with critical cases. For example, if there are five cases, and say case three fails, and cases four and five cannot execute because they have a dependency on case three, in that situation you would have to use assert with case three. But in the same example, cases one and two do not create a dependency for any other test cases that have to run; cases three, four, and five are not dependent on cases one and two. For those, we can use the verify command, which will not stop even if the test case fails, and this is done to see, in one shot, what all is working and what is not in our testing program. So for those cases you would use verify, but where you are testing critical functionality, and you do not want to waste time testing other things if one of your cases fails, you will use assert. So like I said, the assert command is used to validate critical functionality, and verify is used to validate functionality of the normal-behavior kind, that is, functionality which does not create a dependency for other things to stop working.
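The assert-versus-verify idea can be mimicked in plain Python, with no Selenium needed; the function names here are made up for illustration, not Selenium API.

```python
# "verify" records a failure and carries on; "assert" aborts on the first failure.
failures = []

def verify(condition, message):
    """Verify-style: log the failure but keep executing the remaining checks."""
    if not condition:
        failures.append(message)

def check(condition, message):
    """Assert-style: stop the whole run immediately on a critical failure."""
    if not condition:
        raise AssertionError(message)

verify(1 + 1 == 3, "case 1 failed")   # non-critical: execution continues
verify(2 + 2 == 4, "case 2 failed")   # passes: nothing recorded
print(failures)                        # ['case 1 failed']

try:
    check(False, "critical case 3 failed")   # assert-style: halts here
except AssertionError as e:
    print("run stopped:", e)
```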
Alright, so our next question is: what is the
difference between the setSpeed and sleep methods? setSpeed is basically used for executing tasks at a particular interval that we specify; for example, if I want to echo hello world at intervals of five seconds, I can specify that using setSpeed. The sleep method, on the other hand, suspends the execution of the whole program once, for a particular interval. For example, if you're doing a Selenium web test and the webpage takes around three seconds to load, and you don't want testing to start immediately, you can specify a sleep of around three seconds: it waits three seconds for the website to load, and only then does it start executing the tests which follow that particular line. So this is the difference between setSpeed and sleep.
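The two behaviours can be sketched in plain Python; note that setSpeed itself belongs to the older Selenium RC API, so this is only an illustration of the semantics, with hypothetical helper names.

```python
import time

def run_with_speed(actions, delay=0.01):
    """setSpeed-style: pause after every single action."""
    for act in actions:
        act()
        time.sleep(delay)

def run_with_single_sleep(actions, pause=0.03):
    """sleep-style: one suspension up front, then run everything at full speed."""
    time.sleep(pause)
    for act in actions:
        act()

log = []
run_with_speed([lambda: log.append("a"), lambda: log.append("b")])
run_with_single_sleep([lambda: log.append("c")])
print(log)   # ['a', 'b', 'c']
```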
Alright guys, so with that we come to the end of our DevOps interview questions. I hope this session was useful to you; we tried to give you examples with hands-on demos, and I hope that helped you understand the concepts better. With that note I'll take leave; I wish you all the best for the future interviews that you're going to prepare for. Have a great day ahead, guys. Thank you for watching the video; like and share it, and if you have any questions, comment below and we'll respond to them as soon as possible. Also do subscribe to the Intellipaat channel so that you can keep yourself updated on the latest technologies. You can also go through our other related videos and tutorials, and for more information visit our website. Keep learning, keep improving!
