diff --git a/Days/day1.md b/Days/day1.md
new file mode 100755
index 0000000..a592a37
--- /dev/null
+++ b/Days/day1.md
@@ -0,0 +1,22 @@
+On the first day, I learned the following things about Git.
+
+- `git init` will only track a particular directory in which git is initialized.
+- `git status` will show the status of the files that are newly created, modified or deleted.
+- `git add filename` OR `git add .` will add a particular file or all the files to the staging area, where they can be tracked before being committed to Git.
+- `git commit -m "add a message"` will commit the staged changes to Git with the given message.
+- `git commit -am "add a message"` will add the modified tracked files to the staging area and commit them in one step.
+- `git restore --staged filename` will move a file out of the staging area back to the unstaged state. In this way, the `git add` is reverted.
+- `git log` will show the history of all the git commits.
+- `git reset hash-value` will move the commits after the given hash out of the history and back into the unstaged area. Provide the hash of the commit you want to keep if you want to remove the ones that came after it.
+- `git stash` will store uncommitted changes temporarily. The changes won't be committed; they first have to be added to the staging area, and from there they can go to the stashing area.
+- `git stash push -m "add a message"` will store the data temporarily in the git stash.
+- `git stash list` will show the list of data that are temporarily stored.
+- `git stash clear` will delete the list of data that are present in the git stash.
+- `git stash apply index-number` will re-apply the stash at the given index so you can continue working on those changes.
+- `git stash drop index-number` will delete a specific entry from the stash list.
+- `git stash pop index-number` will take the entry at the given index out of the stashing area so that it can be committed further. It means the data can now be added to the staging area and committed as well.
+- `rm -rf .git` will uninitialize Git. It means that the whole local repository, including all branches, will be deleted.
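+
+A minimal sketch of the workflow above, assuming a scratch directory and a file called `notes.txt` (both names are just examples):
+
+    git init                          # start tracking this directory
+    echo "first note" > notes.txt
+    git add notes.txt                 # move the file to the staging area
+    git commit -m "add first note"    # record the staged change
+    echo "work in progress" >> notes.txt
+    git add notes.txt
+    git stash push -m "wip note"      # park the staged change temporarily
+    git stash list                    # see the stashed entry
+    git stash pop                     # bring it back so it can be committed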
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [1/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day10.md b/Days/day10.md
new file mode 100755
index 0000000..619304e
--- /dev/null
+++ b/Days/day10.md
@@ -0,0 +1,9 @@
+On the tenth day, I learned the following things about Networking.
+
+Click Here:
+
+- 🌐 [Day No. 10 of Learning Networking](../PDFs/Computer-Networking-7.pdf)
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [10/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day11.md b/Days/day11.md
new file mode 100755
index 0000000..db00d7e
--- /dev/null
+++ b/Days/day11.md
@@ -0,0 +1,9 @@
+On the eleventh day, I learned the following things about Networking.
+
+Click Here:
+
+- 🌐 [Day No. 11 of Learning Networking](../PDFs/Computer-Networking-8.pdf)
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [11/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day12.md b/Days/day12.md
new file mode 100755
index 0000000..5412a2c
--- /dev/null
+++ b/Days/day12.md
@@ -0,0 +1,48 @@
+On the twelfth day, I learned the following things about Linux.
+
+- The command line interface communicates with the kernel: it passes your input to the kernel to perform a certain task, and as a result the kernel carries out that operation.
+- In the command prompt, the path shown to you has two parts.
+
+    **1.** The first is the user part and
+
+    **2.** the second is the host part. In between them, there is a separator **"@"**.
+
+
+
+
+
+- `where command-name` will show you the list of locations where that command is found.
+- `open .` will open the current directory (and the files in it) in the file manager (Finder on macOS).
+- `echo $PATH` will display the directories, separated by colons **:**, that the system searches when you type a command to run; the executable has to be in one of these paths.
+- `echo "Hey" > file.txt` will overwrite the text in a file.
+- `echo "Hey" >> file.txt` will append the text to a file.
+- `export MY_PATH="Bilal"` will create an environment variable that contains the string. But this is not permanent; it lasts only for the current shell session.
+
+- `pwd` will show the present working directory in which you're currently present.
+- `ls` will show you the list of all the files present in a specific directory.
+- `ls -a` will show you the list of all the files, including hidden ones, present in a specific directory. Hidden files start with a dot **.**
+- `ls -l` will show you the list of files with long details present in a specific directory.
+- `ls -la` will show you a long listing of all the files, including hidden files, present in a specific directory.
+- `ls -R` will list all the folders and their subfolders, and so on, recursively.
+- Dot **.** means the current directory; double dot **..** means the parent directory.
+- `cd` (change directory) will change the current location from one directory to another.
+- `cat filename` (concatenate) will print all the content of a file to standard output.
+- `cat > filename` will create a new file if it is not present and also allow us to enter text into it.
+- `tr` will translate characters from one set to another.
+- `cat lower.txt | tr a-z A-Z > upper.txt`, the output of the first command is the input of the second command.
+- `man command-name` will show you the details of a specific command.
+- `mkdir directory-name` will create a new directory.
+- `mkdir -p random/middle/hello` will create the whole nested path, including the middle directory between the two; `-p` stands for "parents".
+- `touch filename` will create a new file.
+- `cp file.txt copy_file.txt` will make a copy of the file.txt
+- `cp -R test random` will copy the test directory into the random directory.
+- `mv file.txt random` will move the file.txt to the random folder.
+- `mv file.txt new_file.txt` will rename the file.txt to new_file.txt.
+- `mv test renamedTest` will rename the directory.
+- `rm file.txt` will remove a file from your computer permanently.
+- `rm -R directory-name` will remove the directory recursively.
+- `rm -rf directory-name` will forcefully remove the directory.
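+
+A short sketch tying the redirection and `tr` commands above together, assuming a scratch directory (the file names are just examples):
+
+    echo "hello world" > lower.txt          # create/overwrite the file
+    echo "second line" >> lower.txt         # append another line
+    cat lower.txt | tr a-z A-Z > upper.txt  # translate to upper case into a new file
+    cat upper.txt                           # prints HELLO WORLD and SECOND LINE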
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [12/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day13.md b/Days/day13.md
new file mode 100755
index 0000000..308ebc3
--- /dev/null
+++ b/Days/day13.md
@@ -0,0 +1,76 @@
+On the thirteenth day, I learned the following things about Linux.
+
+- `sudo` (super user do) is used when a command requires administrative permission. If you want to access files that require admin permission, you use `sudo`, enter the password, and get the permission.
+- `df` is used to find the disk storage capacity.
+- `df -m` will show the data size in MBs.
+- `df -mh` will show the data in MBs and in human readable format.
+- `du` will estimate the disk space usage statistics.
+- `du -h` will give the file usage space in human-readable format.
+- `head` will print the first few lines of a file.
+- `head -n 4 file-name` will print the first four lines of a file.
+- `tail` will print the last few lines of a file.
+- `diff` will compare the contents of files line by line and show any differences.
+- `locate` will find the files by name.
+- `find` will search for all the files present in a directory.
+- `find . -type d` will only find the directories that are present in a current directory.
+- `find . -type f` will only find the files that are present in a current directory.
+- `find . -type f -name "file-name"` will only find the files that are present in a current directory and the name of the file is given.
+- `-name` is case-sensitive; if you don't want a case-sensitive match, use `-iname`.
+- `mmin` (last modified n minutes ago) `find . -type f -mmin -20` will show only the files that are modified in a current directory less than 20 minutes ago.
+- `find . -type f -mmin +15` will show only the files that are modified in a current directory more than 15 minutes ago.
+- `find . -type f -mmin +2 -mmin -10` will show only the files that are modified in a current directory more than 2 minutes ago and less than 10 minutes ago.
+- `find . -type f -mtime -10` will show only the files that are modified in a current directory less than 10 days ago.
+- `find . -type f -maxdepth 1` will only show the files that are present in the current directory itself; it does not go into subdirectories recursively.
+If you want to search deeper, set the maxdepth to 2, 3, and so on.
+- `find . -size +1M` will find all the files that have a size of more than 1 MB.
+- `find . -empty` will find all the empty files.
+- `find . -perm 777` will find the files whose user, group, and other permissions are all 7 (read + write + execute).
+
+
+If you look at the long listing of a file, the permission string is split into three groups. The first group shows the permissions of the user (owner), the second shows the permissions of the group, and the third shows the permissions of all other users. Within each group, read has the value 4, write has the value 2, and execute has the value 1; `0` stands for no permission.
+
+Let's say a file has only read and write permission; then its numeric value will be 4+2=6.
+
+You can change the permissions by writing
+- `chmod u=rwx,g=rx,o=r file-name`
+
+If I write `777`, it will assign read, write, and execute permission to all three groups (user, group, and others) of a file.
+- `chmod 777 file-name`
+
+If I write `577`, it will assign read and execute permission (4+1=5) to the user and read, write, and execute permission (7) to the group and to the other users.
+- `chmod 577 file-name`
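+
+A small worked example of how the numbers map onto the permission string, assuming a scratch file called `report.txt`:
+
+    touch report.txt
+    chmod 640 report.txt   # user: 4+2 = rw-, group: 4 = r--, others: 0 = ---
+    ls -l report.txt       # shows something like: -rw-r----- 1 user group 0 ... report.txt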
+
+How do you perform an operation on many files?
+Let's say you want to find multiple files and delete them, as shown below.
+
+- `find . -type f -name "*.txt" -exec rm -rf {} +` will find all the matching files and remove them. `{}` is replaced with the list of found files.
+
+- `whoami` will print the username of the person who is logged in.
+
+**grep(global regular expression print)**
+
+- `grep "text" filename` will find the text in a file and it is case-sensitive.
+
+- `grep -w "text" filename` will find the complete word written in a file.
+
+- `grep -i "text" filename` will find the text in a file even if it is not case-sensitive.
+
+- `grep -n "text" filename` will find the text with a line number in a file.
+
+- `grep -B 3 "text" filename` will print the 3 lines that come before each matching line, along with the match.
+
+- `grep -rwin "text" .` will find the text recursively in the current directory.
+
+- `grep -wirl "text" .` will list all the files that contain the given text.
+
+- `grep -wirc "text" .` will show, for each file, a count of the lines that contain the given text.
+
+- `history` will show the history of all the commands that have been used.
+
+- `history | grep "ls"` will show all the **ls** commands that have been used, taken from the history.
+
+- `grep -P "regular expression" filename` will find the specific text in a file using a Perl-compatible regular expression (regex).
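+
+A short sketch that combines `find` and `grep` from the lists above, assuming you are searching a project directory (the paths and the search term are just examples):
+
+    find . -type f -name "*.txt" -mmin -60   # text files modified in the last hour
+    grep -rwin "error" .                     # case-insensitive whole-word search with line numbers
+    grep -wirl "error" . | head -n 5         # just the first five matching file names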
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [13/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day14.md b/Days/day14.md
new file mode 100755
index 0000000..2ba3f4c
--- /dev/null
+++ b/Days/day14.md
@@ -0,0 +1,74 @@
+On the fourteenth day, I learned the following things about Linux.
+
+- In Linux, an alias is **a shortcut that references a command.**
+Aliases are mostly used to replace long commands, improving efficiency and avoiding potential spelling errors.
+
+- `alias` will show you all the aliases that are generated.
+
+- `alias alias-name="command"` will make a new alias; after that, when you type the alias name, the command stored as its value will be executed.
+
+- `unalias alias-name` will remove an alias from memory; after that, the alias name will no longer work.
+
+**Q. What if you want to create multiple aliases at once?**
+
+**A.** You can store them in a file.
+
+- `nano ~/.zshrc` will open up a *zsh* file in which you can store the aliases.
+
+- `alias alias-name="command"` is the format for adding an alias to the file.
+
+- `source ~/.zshrc` will activate the alias that you saved in a file. Without activation you can't execute the alias.
+
+**Note:** The same pattern will be applied to the bash terminal also. You just need to write `~/.bashrc` instead of `~/.zshrc`.
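+
+A minimal sketch of the whole flow, assuming a bash shell and an alias name `gs` chosen just for this example:
+
+    echo 'alias gs="git status"' >> ~/.bashrc   # store the alias in the shell's config file
+    source ~/.bashrc                            # activate it in the current session
+    gs                                          # now runs "git status"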
+
+- `Ctrl+A` will move you to the first point of the command.
+- `Ctrl+E` will move you to the end point of the command.
+- `Ctrl+U` will delete everything from the cursor back to the start of the line.
+- `Ctrl+K` will delete everything from the cursor to the end of the line.
+- `Ctrl+R` will search for the previous command.
+- `TAB button` will auto complete the command without writing it as a whole.
+- `!number` will take that number from the history and run the corresponding command.
+- `!!` will take the command that was used last time from the history and run it again.
+- `;` will help you to add multiple commands in one line.
+- `sort file-name` will sort the data in an alphabetical order.
+- `sort -r file-name` is the reverse of the sort.
+- `sort -n file-name` will return the data in a numerical order.
+- `jobs` will display all the current processes that are running.
+- `ping website` will send data packets to a particular server and display the replies.
+- `wget url` will download files from the internet.
+- `wget -O file-name url` will save the downloaded file under a different name.
+- `top` will show how many processes are running and how much CPU and memory they are consuming.
+- `kill process-ID` will terminate the process with the given ID.
+- `uname` will display the kernel name.
+- `uname -o` will print the operating system.
+- `uname -m` will print the architecture.
+- `uname -v` will print the kernel version.
+- `cat /etc/os-release` will give the information of your operating system.
+- `lscpu` will give you the CPU details.
+- `free` will show you the free memory.
+- `vmstat` will display the virtual memory state.
+- `id` will print the user ID, the group IDs, and the groups the user belongs to.
+- `getent group name` will look up the given name in the system's group database on Linux.
+- `zip zip-file.zip text-file.txt` will zip *text-file.txt* into *zip-file.zip*.
+- `unzip zip-file.zip` will unzip the file.
+- `hostname` will show the hostname.
+- `hostname -i` will show the ip-address.
+- `useradd username` will add a new user.
+- `passwd username` will give a password to the user.
+- `userdel username` will delete a username.
+- `lsof` will list all the open files.
+- `lsof -u username` will list the files that are opened by a username.
+
+- `nslookup website` will give the ip-address of a website.
+- `netstat` will give the details of all the active ports.
+- `ps aux` will give a snapshot of the current processes.
+- `cut -c 1-2 file-name` will print only characters 1-2 of each line; `cut` extracts sections from each line of a file.
+- `ping website1 & ping website2` - the **"&"** operator runs the first command in the background, so both sites are pinged at the same time.
+- `echo "first" && echo "second"` - the **"&&"** operator means: if the first command succeeds, then execute the second command.
+- `echo "hey" && { echo "hi"; echo "I am good"; }` means: if the first echo succeeds, then execute the commands inside the curly braces.
+- `echo "first" || echo "second"` - the **"||"** operator means: if the first command fails, then execute the second one.
+- `| (pipe)` will send the output of the first command to the second command as its input.
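+
+A tiny sketch combining these operators, assuming a throwaway directory name `demo`:
+
+    mkdir demo && echo "created demo"          # the echo runs only if mkdir succeeded
+    mkdir demo || echo "demo already exists"   # the echo runs only because mkdir failed this time
+    ls -la | grep "demo"                       # the listing is piped into grep as input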
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [14/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day15.md b/Days/day15.md
new file mode 100755
index 0000000..c9948d6
--- /dev/null
+++ b/Days/day15.md
@@ -0,0 +1,9 @@
+On the fifteenth day, I learned the following things about YAML.
+
+Click Here:
+
+- ⌨️ [Day No. 15 of Learning YAML](../PDFs/YAML-1.pdf)
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [15/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day16.md b/Days/day16.md
new file mode 100755
index 0000000..38cbc2c
--- /dev/null
+++ b/Days/day16.md
@@ -0,0 +1,124 @@
+On the sixteenth day, I learned the following things about YAML.
+
+## Starting and ending point
+
+ '---' is the starting point
+ '...' is an ending point
+
+**Note: YAML doesn't support multi-line comments. It only supports writing '#' at the start of each line, like this:**
+
+ # This is the first line
+ # This is the second line
+ # This is the third line
+
+## Write key value pair
+
+**Syntax:** ***key: value***
+
+ ---
+ name: Bilal Khan
+ 1: This is a number
+ {fruit: mango, age: 12}
+ ...
+
+## Write a list
+ ---
+ - apple
+ - mango
+ - orange
+ - Apple
+ ...
+
+## Write data in block style
+
+ ---
+ cities:
+ - city1
+ - city2
+ - city3
+
+ cities: [city1, city2, city3]
+ ...
+
+## String values
+**YAML provides 3 ways of writing string values: plain, double-quoted, and single-quoted**
+
+ ---
+ name: Bilal Khan
+ fruit: "this is a mango"
+ job: 'software engineering'
+ ...
+
+## Write multi-line data that keeps its line breaks by inserting the '|' sign
+ ---
+ bio: |
+ "My name is Bilal"
+ "I am a developer"
+ ...
+
+## Write a single line across multiple lines by inserting the '>' sign
+
+ ---
+ message: >
+ this is a single
+ line that I am
+ writing in multiple
+ lines
+ ...
+
+## YAML will automatically detect the data type
+
+ ---
+ number: 43
+ float: 34.65
+ boolean: Y # y, true, True, n, N, false, False
+ ...
+
+## Write data types
+**Integer data type**
+
+ ---
+ zero: !!int 0
+ positiveNum: !!int 54
+ negativeNum: !!int -54
+ binaryNum: !!int 0b1011
+ octalNum: !!int 05346
+ hexaDecNum: !!int 0x54
+ commaVal: !!int 25_000 # is equal to 25,000
+ exponentialVal: !!int 6.02E54
+ ...
+
+**Float data type**
+
+ ---
+ marks: !!float 54.65
+ infinite: !!float .inf
+ not a num: .nan
+ ...
+
+**String and boolean data type**
+
+ ---
+ string: !!str "this is a string"
+ ---
+ boolean: !!bool True
+ ...
+
+**Null data type**
+
+ ---
+ null: !!null Null # !!null Null ~
+ ~: this is also a key
+ ...
+
+**Date data type**
+
+ ---
+ date: !!timestamp 2002-12-14
+ pakistan time: 2001-12-15T02:59:43.10 + 8:13
+ not a time zone: 2001-12-15T02:59:43.1Z
+ ...
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [16/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day17.md b/Days/day17.md
new file mode 100755
index 0000000..a4e0b6d
--- /dev/null
+++ b/Days/day17.md
@@ -0,0 +1,154 @@
+On the seventeenth day, I learned the following things about YAML.
+
+## Write a sequence
+
+ ---
+ student: !!seq
+ - mark
+ - name
+ - roll_no
+ ...
+
+## A sequence with empty (null) entries is called a sparse sequence
+
+ ---
+ sparse seq:
+ - how
+ - where
+ -
+ - Null
+ - sup
+ ...
+
+## Write a nested sequence
+
+ ---
+ -
+ - mango
+ - orange
+ - apple
+ -
+ - car
+ - truck
+ - bus
+ ...
+
+## Key value pairs are called maps
+ !!map
+
+## Write nested mappings: map within a map
+
+ ---
+ name: Bilal Khan
+ roles:
+ age: 12
+ job: software engineer
+ ...
+
+### **same like this**
+
+ ---
+ name: Bilal Khan
+ roles: {age: 12, job: software engineer}
+ ...
+
+## Write pairs, which means that one key may appear multiple times with different values
+
+ !!pairs
+
+### **Example**
+
+ ---
+ pair example: !!pairs
+ - job: student
+ - job: teacher
+ ...
+
+### **same like this**
+
+ ---
+ pair example: !!pairs [job: student, job: teacher]
+ ...
+
+### **this will be an array of hashtables**
+
+## Set will allow you to have unique values
+
+ ---
+ names: !!set
+ ? Bilal
+ ? Ali
+ ? Ahmed
+ ...
+
+## Write a dictionary that represents a sequence of mappings as a value
+
+ !!omap
+
+### **Example**
+
+ ---
+ people:
+ - Bilal:
+ name: Bilal Khan
+ age: 25
+ role: software engineer
+ - Ali:
+ name: Ali Ahmed
+ age: 19
+ role: data scientist
+ ...
+
+## Re-use some properties using anchors
+
+ likings: &fruitsChoice
+ like: mango
+ dislike: banana
+
+ person1:
+ name: Bilal
+ <<: *fruitsChoice
+
+ person2:
+ name: Hamza
+ <<: *fruitsChoice
+
+ person3:
+ name: Ali
+ <<: *fruitsChoice
+
+### **Result**
+
+ person1:
+ name: Bilal
+ like: mango
+ dislike: banana
+
+## If you want to override some data, the merged value will be replaced with the new one
+
+ person1:
+ name: Bilal
+ like: mango
+ <<: *fruitsChoice
+ dislike: orange
+
+### **Result**
+
+ person1:
+ name: Bilal
+ like: mango
+ dislike: orange
+
+## Example of multiple nested sequence
+
+ Schools:
+ - name: DPS
+ principal: Someone
+ students:
+ - roll_no: 12
+ name: Bilal Khan
+ marks: 5
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [17/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day18.md b/Days/day18.md
new file mode 100755
index 0000000..3e81219
--- /dev/null
+++ b/Days/day18.md
@@ -0,0 +1,9 @@
+On the eighteenth day, I learned the following things about Docker and Containers.
+
+Click Here:
+
+- ⌨️ [Day No. 18 of Learning Docker and Containers](../PDFs/Docker-1.pdf)
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [18/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day19.md b/Days/day19.md
new file mode 100755
index 0000000..a5558ca
--- /dev/null
+++ b/Days/day19.md
@@ -0,0 +1,95 @@
+On the nineteenth day, I learned the following things about Docker and Containers.
+
+
+
+
+
+## **DevOps**
+
+In the **Dev** part, you
+1. Create an application
+2. Create and write a Dockerfile
+3. Create an image
+4. Create a container
+
+In the **Ops** part, you
+1. Download an image
+2. Run that image
+3. Operate on it
+
+
+
+
+
+## **Commands**
+
+- `docker run hello-world` will run the hello-world image.
+
+It has three parts.
+1. `docker` is a Docker CLI.
+2. `run` will run an image to create a container.
+3. `hello-world` is an image that is taken from the online cloud docker registry called docker hub.
+
+When you run a Docker image for the first time, it will take some time to download and run it. After it has been downloaded, it will run fast.
+
+- `docker run -it ubuntu` will run an Ubuntu image. `-it` stands for interactive terminal, which takes you straight inside the container.
+
+- `docker images` will show you all the images that are present on the local machine.
+
+**Note:** An image also contains the operating system files and dependencies (like a mini OS), so that an application works in an isolated environment without interacting with the operating system outside the container.
+
+- `docker pull ubuntu` will only download an image without running it.
+- `docker pull ubuntu:20.04` will download the specified version of an image.
+
+- `ps aux` will show you the processes that are currently running.
+
+- `docker ps` is a Docker command to list the running containers.
+- `docker container ls` is a Docker command to list the running containers.
+
+- `docker container exec -it container_id bash` will execute an interactive bash shell in the container. It allows multiple terminals to run against one container.
+
+- `docker start container_id` will start a stopped container.
+
+- `docker stop container_id` will stop the container.
+
+- `docker ps -a` will show the list of stopped containers.
+
+- `docker rm stopped_container_id` will remove the specified stopped container.
+
+- `docker container prune -f` will delete all the stopped containers. `-f` means force: don't ask for confirmation.
+
+- `docker inspect image_name/container_id` will give all the information about the container.
+
+- `docker run alpine ping www.google.com` will ping the website. Alpine is much smaller in size and provides almost all the functionality that the Ubuntu image does.
+
+If containers have to run for a long time in the background, as servers typically do, Alpine is a good choice for them; once the process inside the Alpine container stops, the container stops as well.
+
+- `docker run -d alpine ping www.google.com` will run the container in detached mode, which means the container keeps running in the background and you don't need to stay on that terminal.
+
+- `docker run ubuntu echo Hey` will print "Hey" on the terminal.
+
+- `docker logs full_container_id` will show all the output that the container has produced so far.
+
+**FFI = first_few_characters_from_the_container_id**
+
+- `docker logs FFI` will give you the history of that container.
+
+- `docker logs --since 5s FFI` will show only the container's logs from the last 5 seconds.
+
+- `docker stop FFI` will stop the container.
+
+- `docker rm FFI` will remove the stopped container.
+
+- `docker rmi image_name -f` will remove an image.
+
+- `docker run -d -p 8080:80 nginx` will give you access to the container on your local port. `-d` is for detached, `-p` is for port.
+
+This is used to access, from your computer, something that is running inside a container on that computer. Whatever traffic arrives on the host's port 8080 is forwarded to port :80 inside the nginx container.
+
+Whenever you go to localhost:8080, the request is forwarded to the container's port :80. Here 8080 is the **host port**, which you can access on your machine, whereas port 80 is the **container port**, which is not accessible outside the container.
+
+**Port forwarding** or port mapping redirects a communication request from one IP address and port number combination to another.
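+
+A quick sketch of the port-mapping flow described above (the container name `web` is just an example):
+
+    docker run -d -p 8080:80 --name web nginx   # host port 8080 -> container port 80
+    curl http://localhost:8080                  # the request is forwarded into the container
+    docker stop web && docker rm web            # clean up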
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [19/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day2.md b/Days/day2.md
new file mode 100755
index 0000000..b0b5f42
--- /dev/null
+++ b/Days/day2.md
@@ -0,0 +1,11 @@
+On the second day, I learned the following things about Git.
+
+- `git branch` will show the list of branches in Git. Branches are created to separate different functionalities or use cases of the code.
+- `git branch branchname` will create a new branch.
+- `git checkout branchname` will help you switch to another branch.
+- `git rebase -i hash-value` will open an interactive rebase that lets you merge (squash) multiple commits into a single commit.
+- `git merge branchname` will merge one branch into another. Here the branch name is the branch that you want to merge into the branch you are currently on.
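+
+A minimal sketch of the branch-and-merge flow above, assuming the default branch is called `main` and a feature branch name chosen just for this example:
+
+    git branch feature-login        # create a new branch
+    git checkout feature-login      # switch to it and commit some work there
+    git checkout main               # switch back to the target branch
+    git merge feature-login         # merge the feature branch into main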
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [2/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day20.md b/Days/day20.md
new file mode 100755
index 0000000..07010bb
--- /dev/null
+++ b/Days/day20.md
@@ -0,0 +1,55 @@
+On the twentieth day, I learned the following things about Docker and Containers.
+
+- `docker commit -m "add a message here" container_id new_image_name` will commit and transfer a container’s file data or settings into a new image.
+
+- `docker images -q` will give you the IDs of docker images.
+
+- `docker images -q --no-trunc` will list the hash values of all the docker images.
+
+- `docker rmi $(docker images -q) -f` will remove all the images at once. Images that are in use by running containers won't be deleted.
+
+- Images are built in layers. Each layer is immutable (it can't be changed or renamed), and an image is a collection of these layers of files and directories.
+Different images may contain common layers. So instead of downloading the common layers again for each image, Docker skips the ones that are already present and only downloads the layers that are missing. Sharing these common layers across images makes the process fast.
+
+- You can create your own image by simply creating a Dockerfile. Inside the Dockerfile, write the following things.
+
+
+
+
+
+The following points need to be noted about the above file:
+
+- The first line "#This is a sample Image" is a comment. You can add comments to the Dockerfile with the help of the `#` character.
+
+- The next line has to start with the FROM keyword. It tells Docker which base image you want to base your image on. In our example, we are creating an image from the Ubuntu image.
+
+- The next line specifies the person who is going to maintain this image. Here you use the MAINTAINER keyword and mention the email ID.
+
+- The RUN command is used to run instructions against the image. In our case, we first update our Ubuntu system and then install the nginx server on our Ubuntu image.
+
+- The last command is used to display a message to the user.
+
+- `.dockerignore` will ignore the files that are not required when creating an image.
+
+- `docker build -t myimage:1.01 .` will create your own docker image.
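+
+A short sketch of building and trying out the image, assuming the Dockerfile described above sits in the current directory and using the tag from the command above:
+
+    docker build -t myimage:1.01 .   # build the image from the Dockerfile in this directory
+    docker images                    # the new image shows up in the list
+    docker run myimage:1.01          # start a container from the freshly built image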
+
+
+## **Docker Engine**
+
+ |-----------------------------------------------------------------------------------------------------|
+ | ----------------- ---------- -------------- | => Shim | => runc | => container |
+ | | Docker Client | >> | Daemon | >> | Containerd | >> | => Shim >> | => runc >> | => container |
+ | ----------------- ---------- -------------- | => Shim | => runc | => container |
+ |-----------------------------------------------------------------------------------------------------|
+
+- The Docker daemon talks to containerd via the gRPC protocol.
+- If the daemon is stopped, all the running containers would be stopped. The shim avoids this situation, so containers keep running even if the daemon is stopped.
+- If these functions are performed by containerd, then why is the daemon present? The daemon also performs other functions, such as managing images, networking, etc.
+- Daemon has a binary file **dockerd**
+- Containerd has a binary file **docker-containerd**
+- Shim has a binary file **docker-containerd-shim**
+- runc has a binary file **docker-runc**
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [20/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day21.md b/Days/day21.md
new file mode 100755
index 0000000..8186d6f
--- /dev/null
+++ b/Days/day21.md
@@ -0,0 +1,9 @@
+On the twenty-first day, I learned the following things about Kubernetes.
+
+Click Here:
+
+- ☸ [Day No. 21 of Learning Kubernetes](../PDFs/Kubernetes-1.pdf)
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [21/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day22.md b/Days/day22.md
new file mode 100755
index 0000000..6c19803
--- /dev/null
+++ b/Days/day22.md
@@ -0,0 +1,9 @@
+On the twenty-second day, I learned the following things about Kubernetes.
+
+Click Here:
+
+- ☸ [Day No. 22 of Learning Kubernetes](../PDFs/Kubernetes-2.pdf)
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [22/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day23.md b/Days/day23.md
new file mode 100755
index 0000000..c5fa503
--- /dev/null
+++ b/Days/day23.md
@@ -0,0 +1,163 @@
+On the twenty-third day, I learned the following things about Kubernetes.
+
+
+
+
+
+
+
+
+
+- First update the machine by writing `sudo apt-get update`.
+
+- Once the instance is created, write `sudo su` to go to the root user.
+
+- Next, install Docker by writing `sudo apt update && apt -y install docker.io`
+
+- Then install the kubectl by writing `curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x ./kubectl && sudo mv ./kubectl /usr/local/bin/kubectl`
+
+- Then install the minikube by writing `curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin`
+
+- Once minikube is installed, before starting it, first write `apt install conntrack`
+
+- After that, type `minikube start --vm-driver=docker` and it will give you the following error.
+
+
+
+
+
+- It is saying that Docker should not be used with root privileges. To solve this problem, press `CTRL+D` to exit the root shell.
+
+- After that, type `minikube start --vm-driver=docker` again, and it will give you another error, shown below.
+
+
+
+
+
+- To solve this problem, type the following commands.
+
+ sudo groupadd docker
+ sudo usermod -aG docker $USER
+ newgrp docker
+
+- If it is still not working, then visit this [website](https://linuxhandbook.com/docker-permission-denied/#:~:text=deal%20with%20it.-,Fix%201%3A%20Run%20all%20the%20docker%20commands%20with%20sudo,the%20Docker%20daemon%20socket%27%20anymore.); it will show you more ways to fix it.
+
+- Once the commands are executed successfully, you will get the following result.
+
+
+
+
+
+- If you type `minikube status`, it will show you running status.
+
+**Note:** The above screenshots were taken on an AWS instance, but if you run these commands on your local machine they will also be executed successfully; I installed minikube on my local machine using the same commands.
+
+- You can get more info about **kubectl** installation by visiting this [page](https://kubernetes.io/docs/tasks/tools/).
+
+- You can get more info about **minikube** installation by visiting this [page](https://minikube.sigs.k8s.io/docs/start/).
+
+- `minikube version` will show the version of minikube.
+
+- `minikube dashboard` will show you the minikube dashboard in your browser.
+
+- `minikube docker-env` will print the environment variables needed to point your local Docker client at the Docker daemon running inside minikube.
+
+- `minikube ssh` will take you inside the minikube.
+
+- `docker container ls` will show you the list of containers that are required for Kubernetes.
+
+- `docker ps` will show you the list of containers that are required for Kubernetes.
+
+- `kubectl get pods` will show you the pods that are running.
+
+- `kubectl get nodes` will show you the nodes that are running.
+
+- `kubectl describe node node-name` will show you the information about a particular node.
+
+**Data in YAML file**
+
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: testpod
+ spec:
+ containers:
+ - name: c00
+ image: ubuntu
+ command: ["/bin/bash", "-c", "while true; do echo Hello-Bilal; sleep 5; done"]
+ restartPolicy: Never #Defaults to Always
+
+- `kubectl apply -f pod.yml` will create the objects that are defined in the YAML file.
+
+- `kubectl get pods -o wide` will show you the exact location of the pods with their ip addresses.
+
+- `kubectl describe pod pod-name` OR `kubectl describe pod/pod-name` will show each and every detail of a pod.
+
+- `kubectl logs -f pod-name` will stream the logs of the container(s) in a specific pod.
+
+- `kubectl logs -f pod-name -c container-name` will stream the logs of a specific container in a specific pod.
+
+- `kubectl exec pod-name -it -c container-name -- hostname -i` will run `hostname -i` inside the given container and show the IP address of the pod that contains it.
+
+- `kubectl delete pod pod-name` OR `kubectl delete -f pod.yaml` will delete a specific pod. A pod can either be deleted by pod-name or a filename that contains the pod information.
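+
+A quick sketch of the apply-inspect-delete cycle with the manifest above (the file name `pod.yml` and the pod name `testpod` match the ones used in these notes):
+
+    kubectl apply -f pod.yml        # create the testpod defined above
+    kubectl get pods -o wide        # confirm it is running and note its IP
+    kubectl logs -f testpod         # follow the echo loop's output
+    kubectl delete -f pod.yml       # clean up when done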
+
+- Now write annotations for the description of a pod.
+
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: testpod
+ annotations:
+ description: Our first test pod is created.
+ spec:
+ ...
+
+**Data in YAML file for multiple containers**
+
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: testpod2
+ spec:
+ containers:
+ - name: c00
+ image: ubuntu
+ command: ["/bin/bash", "-c", "while true; do echo Hello-Bilal; sleep 5; done"]
+ - name: c01
+ image: ubuntu
+ command: ["/bin/bash", "-c", "while true; do echo Hello-Khan; sleep 5; done"]
+
+- `kubectl exec pod-name -it -c container-name -- /bin/bash` will move you inside the container.
+
+- `ps -ef` will show you the processes that are running inside the container.
+
+**Writing environment variables in YAML file**
+
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: environment
+ spec:
+ containers:
+ - name: c00
+ image: ubuntu
+ command: ["/bin/bash", "-c", "while true; do echo Environment-variables; sleep 5; done"]
+ env:
+ - name: MYNAME
+ value: Bilal
+
+- After going inside the container by this command `kubectl exec pod-name -it -c container-name -- /bin/bash`, type `env` to get the environment variables.
+
+- `echo $MYNAME` will print the value of that environment variable.
+
+- `kubectl config view` will show you the information about the cluster.
+
+- `kubectl config current-context` will display the current context.
+
+- `kubectl get all` will show you the pods, services, deployment, replicaset etc.
+
+- `minikube stop` will stop the minikube.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [23/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day24.md b/Days/day24.md
new file mode 100755
index 0000000..f504996
--- /dev/null
+++ b/Days/day24.md
@@ -0,0 +1,193 @@
+On the twenty-fourth day, I learned the following things about Kubernetes.
+
+## **Labels and Selectors**
+
+- Labels are attached to an object and identify it, while a selector selects objects by their labels, with the help of commands, in order to fetch them.
+
+- Labels are the mechanism you use to organize the kubernetes objects.
+
+- A label is a key-value pair without any predefined meaning that can be attached to any object not just pod but node also.
+
+- It is used for quick reference.
+
+- You're free to choose labels as you need them, for example to refer to an environment used for development, testing, or production, or to refer to a product group like DevelopmentA, DevelopmentB, CompanyC, etc.
+
+- Multiple labels can be added to a single object.
+
+## **There are two methods for writing a label**
+
+### 1. **Declarative method:** You can write a label inside your manifest file.
+
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: testpod
+ labels:
+ env: developments
+ class: pods
+ spec:
+ containers:
+ - name: c00
+ image: ubuntu
+ command: ["/bin/bash", "-c", "while true; do echo Hello-Bilal; sleep 5; done"]
+
+
+- After creating a pod, apply it in kubectl by writing `kubectl apply -f pod.yml`, and write `kubectl get pods --show-labels`. It will show you the labels of the pod.
+
+### 2. **Imperative method:** If you want to add a label to an existing pod without updating the manifest file then type in the terminal `kubectl label pods pod-name label-key=label-value`. After that, run `kubectl get pods --show-labels` to show the labels.
+
+- `kubectl get pods -l label-key=label-value` will give you the list of pods that have the specified key and value.
+
+- `kubectl get pods -l label-key!=label-value` will give you the list of pods that do not have the specified value.
+
+- `kubectl delete pod -l label-key!=label-value` will delete pods based on the label; it won't delete the pods that have the specified value.
+
+- Unlike name/UIDs, labels do not provide uniqueness, as in general we can expect many objects to carry the same label.
+
+- Once labels are attached to objects, we need filters to narrow them down, and these filters are called label selectors.
+
+- The API currently supports two types of selectors: equality-based selectors and set-based selectors.
+
+ **Example of equality based selector:** [=, !=]
+
+ - ` kubectl get pods -l key-label=value-label` will find a pod according to the given key and value.
+ - `kubectl get pods -l key-label!=value-label` will not find the pods according to the given key and value.
+
+ **Example of set based selector:** [in, notin, exists]
+
+    - `kubectl get pods -l 'key-label in (value-label1, value-label2, ...)'` will find all the pods that match the specified key and values. Even if only one of the listed values matches a pod's label, that pod will be fetched and shown.
+    - `kubectl get pods -l 'key-label notin (value-label1, value-label2, ...)'` will exclude all the pods whose label value is in the given list and show the rest.
+
+### **Node selector**
+
+- Generally the scheduler will do a reasonable placement and put a pod on any available node, but you can make a pod run on a specific node based on a label.
+
+- Once the node is labelled, you can use the label selector to specify the pods to run on a specific node.
+
+- We can use labels to tag nodes.
+
+- First give a label to the node. Then use a node selector to run a pod on a specific worker node.
+
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: nodelabels
+ labels:
+ env: developments
+ spec:
+ containers:
+ - name: c00
+ image: ubuntu
+ command: ["/bin/bash", "-c", "while true; do echo Hello-Bilal; sleep 5; done"]
+ nodeSelector:
+ hardware: t2-medium
+
+- `kubectl describe pod nodelabels` will give you the information about the pod, including its node selector and scheduling events.
+
+- Check the kubectl nodes by typing `kubectl get nodes`
+
+- `kubectl label nodes node-name node-key=node-value` will give a label to the node and the pod will be assigned to that node due to the label.
+
+- `kubectl get pods` will show you the pods that are running.
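+
+A short sketch of the node-selector flow above on a single-node minikube cluster (the node name `minikube` and the file name `nodelabels.yml` are assumptions made for this example):
+
+    kubectl get nodes                                  # find the node name
+    kubectl label nodes minikube hardware=t2-medium    # give the node the label the pod expects
+    kubectl apply -f nodelabels.yml                    # create the pod with the nodeSelector above
+    kubectl get pods -o wide                           # the pod should now be scheduled on that node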
+
+## **Scaling and Replication**
+
+- By default, plain pods in Kubernetes do not give you scaling and replication, but we do have objects that provide this kind of functionality.
+
+- The old pod won't restart after termination. Instead, a new pod will be created with the same specification as the previous pod.
+
+- Kubernetes was designed to orchestrate multiple containers and replication.
+
+- The need for multiple containers/replication helps us with the following:
+
+- **Reliability:** If one pod fails, another will automatically be created to keep the process reliable.
+
+- **Load Balancing:** If one node is overloaded, some of the pods will be transferred to another node to balance the load.
+
+- **Scaling:** If a movie is being streamed and many users arrive, the application requires more containers to handle them. In this case, Kubernetes will automatically create the extra containers, and when the movie ends and the users leave, Kubernetes will delete those containers again. It will also create nodes/instances if required so that the extra pods can be scheduled.
+
+### **Replication Controller**
+
+- A replication controller is an object that enables you to create multiple pods and then makes sure that this number of pods always exists. If you set `replicas=2` and one pod crashes, it will automatically create a new pod and keep the number of pods equal to 2.
+
+- If a pod is created using an RC, it will automatically be replaced if it crashes, fails, or is terminated.
+
+- It is not a default object of kubernetes. You have to manually write it yourself in your YAML file.
+
+- The replica count of an RC should be set to a minimum of 1 or 2 (so that a copy is always present), and you can increase the number further as well.
+
+**Example**
+
+ kind: ReplicationController ---> It creates an object of replication type.
+ apiVersion: v1
+ metadata:
+ name: myreplica
+ spec:
+ replicas: 2 ---------> It defines the desired number of pods.
+ selector: -----------> It selects and tells the controllers, which pods to watch/belong to this RC.
+ myname: Bilal -----> It watches the labels.
+ template: -----------> It defines a template to launch a new pod.
+ metadata:
+ name: testpod2
+ labels: -----------> The above selector value needs to be matched with this label value.
+ myname: Bilal
+ spec:
+ containers:
+ - name: c00
+ image: ubuntu
+ command: ["/bin/bash", "-c", "while true; do echo Hello-Bilal; sleep 5; done"]
+
+- `kubectl get rc` will give you the details of replication controller like name, desired, current etc.
+
+- `kubectl describe rc replica-name` will show you the brief details of replica set.
+
+- By writing `kubectl get pods`, you will get 2 pods because `replicas:2`. If you delete any of the pod by writing `kubectl delete pod pod-name`, another one will be automatically created.
+
+- `kubectl scale --replicas=5 rc -l myname=label-name` will scale up/increase the size of replicas to 5.
+
+- `kubectl scale --replicas=1 rc -l myname=label-name` will scale down/decrease the size of replicas to 1.
+
+- You can't get rid of a replicated pod just by deleting it, because the RC will automatically create a new pod. Instead, you have to delete the RC's file by typing `kubectl delete -f file-name.yml`.
+
+### **Replica Set**
+
+- Replica set is the advanced version of the replication controller.
+
+- The replication controller only supports equality-based selector whereas the replica set supports both equality-based and set-based selectors i.e. filtering according to the set of values.
+
+**Example**
+
+ kind: ReplicaSet
+ apiVersion: apps/v1
+ metadata:
+ name: myrs
+ spec:
+ replicas: 2
+ selector:
+ matchExpressions: --> these must match the labels
+ - {key: myname, operator: In, values: [Bilal, Khan]}
+ - {key: env, operator: NotIn, values: [production]}
+ template:
+ metadata:
+ name: testpod4
+ labels:
+ myname: Bilal
+ spec:
+ containers:
+ - name: c00
+ image: ubuntu
+ command: ["/bin/bash", "-c", "while true; do echo Hello-Bilal; sleep 5; done"]
+
+- `kubectl get rs` will give you the details of replica set like name, desired, current etc.
+
+- After creating the pod, delete it and you will see another pod automatically created.
+
+- `kubectl describe rs replicaset-name` will show you the brief details of replica set.
+
+- `kubectl scale --replicas=2 rs/myrs` will scale up and down the replicas.
+
+- `kubectl delete rs/myrs` will delete the replica set.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [24/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day25.md b/Days/day25.md
new file mode 100755
index 0000000..90425ad
--- /dev/null
+++ b/Days/day25.md
@@ -0,0 +1,116 @@
+On the twenty-fifth day, I learned the following things about Kubernetes.
+
+## **Deployment and Rollback**
+
+ --------------
+ | Deployment |
+ --------------
+ |
+ |
+ -------------------------------------------
+ | | |
+ | <<<<<<<<<<<<<<<<<< | <<<<<<<<<<<<<<<<<< |
+ ---------- ---------- <<<<<<<<< ----------
+ | RS | | RS | | RS |
+ | V1 | | V2 | | V3 |
+ ---------- ---------- ----------
+ /\ /\ /|\
+ / \ / \ / | \
+ / \ / \ / | \
+ -------- -------- -------- -------- -------- | --------
+ | Pod1 | | Pod2 | | Pod1 | | Pod2 | | Pod1 | | | Pod2 |
+ -------- -------- -------- -------- -------- | --------
+ |
+ --------
+ | Pod3 |
+ --------
+
+- The replication controller and the replica set are not able to update and roll back apps in the cluster.
+
+- A deployment works as a supervisor for pods, giving you control over how and when a new pod is updated or rolled back to the previous state.
+
+- When using a deployment object, we first define the desired state of an app, and then the K8s cluster schedules the app onto specific individual nodes.
+
+- K8s keeps monitoring; if a node goes down or a pod is deleted, the deployment controller replaces it.
+
+- A deployment provides declarative updates for pods and replicasets. The deployment sends a request to the replicaset, and the replicaset applies the change to the pods.
+
+- **Note:** If the deployment is rolled back from the version 3 replicaset to the version 2 replicaset, the pods will be recreated under the version 2 replicaset; the code they run is version 2, even though the pods themselves are new ones rather than the original version 2 pods.
+
+## **Use cases of Deployment**
+
+- The deployment rolls out the replicaset. The replicaset creates pods in the background, and you can check the status of the rollout to see whether it succeeded or not.
+
+- If a new version of the replicaset is created, the previous version's replicaset stops working and its pods are deleted. If the previous version's replicaset is activated again, its pods will be newly created.
+
+- Scale the deployment up or down to handle more or less load.
+
+- Pause the deployment if you want to remove some errors, and then resume it to start a new rollout.
+
+- Clean up older replicasets that you don't need anymore.
+
+- If there are problems in the deployment, Kubernetes will automatically roll back to the previous version; however, you can also explicitly roll back to a specific version.
+
+ kind: Deployment
+ apiVersion: apps/v1
+ metadata:
+ name: mydeployment
+ spec:
+ replicas: 2
+ selector: # tell the controller which pods to watch/belongs to
+ matchLabels:
+ name: deployment
+ template:
+ metadata:
+ name: testpod
+ labels:
+ name: deployment
+ spec:
+ containers:
+ - name: c00
+ image: ubuntu
+ command: ["bin/bash", "-c", "while true; do echo Bilal Khan; sleep 5; done"]
+
+- **Note:** The name of a replicaset always follows the format [deployment-name]-[random-string].
+
+- `kubectl get deploy` will show you the list of deployments and their status, whether they're created or not.
+
+- `kubectl describe deploy deployment-name` will show how the deployment creates the RS and pods.
+
+- `kubectl get rs` will give the replicaset.
+
+- `kubectl scale --replicas=1 deploy deployment-name` will scale up or scale down the deployment.
+
+- `kubectl logs -f podname` will show what is running inside the containers.
+
+- Change the image in the file to centos and apply it again; a new replicaset and new pods will be created.
+
+- After changing the image in the file, to check which OS image is running, you can write `kubectl exec deployment-pod-name -- cat /etc/os-release`.
+
+- `kubectl rollout status deployment deployment-name` will give you the current status of the roll out.
+
+- `kubectl rollout history deployment deployment-name` will give you the history of deployment.
+
+- `kubectl rollout undo deploy/deployment-name` will help you roll back to the previous version.
+
+- `kubectl rollout undo deploy/deployment-name --to-revision=revision-number` will help you roll back to a specific revision by specifying it with `--to-revision`.
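+
+A minimal sketch of an update-and-rollback cycle using the deployment above (the deployment name `mydeployment` and container name `c00` come from the manifest; centos is the image change suggested in these notes):
+
+    kubectl set image deployment/mydeployment c00=centos   # roll out a new image version
+    kubectl rollout status deployment mydeployment         # watch the rollout complete
+    kubectl rollout history deployment mydeployment        # list the recorded revisions
+    kubectl rollout undo deploy/mydeployment                # go back to the previous revision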
+
+**Deployment failure**
+
+Your deployment may get stuck trying to deploy its newest replicaset without ever completing. This can occur due to the following factors.
+
+- Insufficient Quota(Insufficient space in node)
+
+- Readiness probe failures (pods were not ready to receive traffic)
+
+- Image pull errors (the image referenced in the manifest file could not be pulled)
+
+- Insufficient permission(No permission to fetch)
+
+- Limit ranges(Limits are exceeded)
+
+- Application runtime configuration(App didn't run)
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [25/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day26.md b/Days/day26.md
new file mode 100755
index 0000000..ea8791b
--- /dev/null
+++ b/Days/day26.md
@@ -0,0 +1,359 @@
+On the twenty-sixth day, I learned the following things about Kubernetes.
+
+## **Kubernetes Networking**
+
+Kubernetes networking addresses four concerns.
+
+- Containers within a pod use networking to communicate via loopback.
+
+- Cluster networking provides communication b/w different pods.
+
+- A container in a pod on node1 cannot talk to a container in a pod on node2 over localhost; both containers have to be in the same pod to talk to each other that way.
+
+- A service lets you expose an application running in pods so that it is reachable from outside the cluster, e.g. from a browser or the internet.
+
+- You can also use a service to publish an application only for consumption inside your cluster.
+
+### **Container to container communication**
+
+- Container to container communication within the same pod happens through localhost inside the containers.
+
+ Localhost
+ |
+ --------------------------|---------------------------
+ | ----------------------|----------------------- |-------> Node
+ | | ------------- | ------------- |---|-------> Pod
+ | | | C00 | <-------> | C01 |----|---|-------> Container
+ | | ------------- ------------- | |
+ | ---------------------------------------------- |
+ ------------------------------------------------------
+
+
+- The containers of one pod will never be spread across different nodes; they always run on the same node.
+
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: testipod
+ spec:
+ containers:
+ - name: c00
+ image: ubuntu
+ command: ["/bin/bash", "-c", "while true; do echo Hello-Bilal; sleep 5; done"]
+ - name: c01
+ image: httpd
+ ports:
+ - containerPort: 80
+
+- After creating the pod, type `kubectl exec pod-name -it -c container-name -- /bin/bash`. It will take you inside the container that is present in a pod.
+
+- `apt update && apt install curl` will update and install the curl package inside your container.
+
+- `curl localhost:80` will show you a message that it works. It will show you that two containers are communicating successfully.
+
+### **Pod to pod communication**
+
+- Pod to pod communication on the same worker node happens through the pods' IP addresses.
+    - If you use pod A's IP address, you'll communicate with pod A, and vice versa.
+    - If container A wants to access container B in another pod, go inside container A and use the other pod's IP address; the request will reach container B.
+
+- By default, a pod's IP address will not be accessible outside the node.
+
+
+ IP address IP address
+ | |
+ ------------|-----------------------------|------------
+ | --------|---------- ----------|-------- |-------> Node
+ | | ------------- | | ------------- |---|-------> Pod
+ | | | C00 | | <-----> | | C01 |--|---|-------> Container
+ | | ------------- | | ------------- | |
+ | ------------------- ------------------- |
+ -------------------------------------------------------
+**First file**
+
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: testpod1
+ spec:
+ containers:
+ - name: c01
+ image: nginx
+ ports:
+ - containerPort: 80
+
+**Second file**
+
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: testpod2
+ spec:
+ containers:
+ - name: c02
+ image: httpd
+ ports:
+ - containerPort: 80
+
+- After making pods, type `kubectl get pods -o wide` to take the ip address.
+
+- `kubectl exec pod-name -it -c container-name -- /bin/bash` will take you inside the container that is present in a pod.
+
+- `apt update && apt install curl` will update and install curl in case it is not installed; run it in the pod's terminal.
+
+- `curl ip-address:80` will give you the response from that IP address. Type this command inside the pod's terminal.
+
+### **Node to node communication**
+
+**Service object**
+
+- When using RC, pods are terminated and created during scaling or replication operations.
+
+- When using deployments, while updating the image version the old pods are terminated and new pods take their place.
+
+- Pods are dynamic, i.e. they come and go on the k8s cluster on any of the available nodes, and it would be difficult to access the pods as a pod's IP changes once it is recreated.
+
+ - **Problem**
+
+ Each pod gets its own ip address, however in a deployment the set of pods running could be different in one moment from the set of pods running in another moment. This leads to a problem that if a pod & it's ip address is not permanent to its place and is changing everytime, how would another pod keep track of the ip address of the target pod?
+
+ - **Solution**
+
+ - The solution is the service object that is a logical bridge b/w pods and the end users which provides virtual ip address(VIP) and with the help of VIP, the communication will happen even if the pod's ip addresses are changing.
+ - Each object will have its own virtual ip address.
+ - Service allows clients to reliably connect to the containers running in the pod using the VIP.
+ - The VIP is not an actual IP address connected to the network interface but the purpose is to forward traffic to one or more pods.
+ - Kube proxy will keep the mapping b/w the VIP and pod's ip address. If a pod new ip address is created, then kube proxy will map it with the VIP to build the connection.
+
+ - **Problem**
+
+ Although each pod has a unique ip address, these ip addresses are not exposed outside the cluster. They can only communicate inside the cluster.
+
+ - **Solution**
+
+ - Services help to expose the VIP(that is mapped to the pods) and allow application to receive traffic outside the cluster (browser etc).
+ - You have to define labels on both pod side and services side that will select the specific pods(from thousands of pods) to put under a service.
+ - Creating a service will create an endpoint that will access the pods/application in it.
+ - Services can be exposed in four different ways by specifying a type in the service specification.
+ 1. Cluster IP
+ 2. NodePort
+ 3. LoadBalancer
+ 4. Headless
+ - By default, service can only run b/w ports 30,000 - 32,767.
+ - The set of pods targeted by a service is usually determined by a selector.
+ - NodePort is an upper layer of the cluster ip and Load balancer is the upper layer of the nodeport and headless is top part.
+
+**1. Cluster IP**
+
+ Cluster IP: ip-address:port-number
+ ---------------------------------------------------------------
+ | -------------------- -------------------- |----> Cluster
+ | | ------------ | | ------------ |---|----> Node
+ | | | |---|---------------|-->| |---|---|----> Pod
+ | | | Pod1 | | | | Pod2 | | |
+ | | | |<--|---------------|---| | | |
+ | | ------------ | | ------------ | |
+ | -------------------- -------------------- |
+ ---------------------------------------------------------------
+
+- The nodes will be mapped with a VIP (which is fixed) so that pod A on node A can communicate with pod B on node B without needing to know the specific IP address of each pod.
+- ClusterIP exposes the VIP so that it is reachable only from within the cluster. The VIP can't be accessed outside the cluster.
+
+**deployment.yml**
+
+ kind: Deployment
+ apiVersion: apps/v1
+ metadata:
+ name: mydeployments
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ name: deployment
+ template:
+ metadata:
+ name: testpod2
+ labels:
+ name: deployment
+ spec:
+ containers:
+ - name: c00
+ image: httpd
+ ports:
+ - containerPort: 80
+
+- After creating a pod, write `kubectl get pods -o wide` to see the ip address of the pod.
+
+- `kubectl exec pod-name -it -c container-name -- /bin/bash` will take you inside the container that is present in a pod.
+
+- `apt update && apt install curl` will update and install curl incase if it is not installed in the pod terminal.
+
+- `curl ip-address:80` will give you the details of that ip address. Type this command inside the pod terminal.
+
+**service.yml**
+
+ kind: Service --> Defines to create service type object
+ apiVersion: v1
+ metadata:
+ name: demoservice
+ spec:
+ ports:
+ - port: 80 --> Containers port exposed
+ targetPort: 80 --> Pods port
+ selector:
+ name: deployment --> Apply this service to any pods which has the specific label
+ type: ClusterIP --> Specifies the service type i.e. ClusterIP or NodePort
+
+- `kubectl apply -f file-name.yml` will apply the changes.
+
+- After creating a pod, type `kubectl run any-pod-name --image=curlimages/curl -i --tty -- sh`. It will fetch the curl image from the DockerHub.
+
+- `kubectl exec -i --tty pod-name -- sh` will execute and take you inside the curl terminal if you exit from it.
+
+- Take the cluster ip and type `curl ip-address:port` in the curl terminal to show you the success message.
+
+- `kubectl get services` OR `kubectl get service` OR `kubectl get svc` will give the list of services and the Cluster IP that is basically a virtual ip address.
+
+- `kubectl describe services service-name` will show you the details of the service.
+
+**2. NodePort**
+
+ ------------
+ |-------------------------> | Internet |
+ | ------------
+ |
+ | Cluster IP: ip-address:port-number
+ httpd | ---------------------------------------------------------------
+ | | -------------------- -------------------- |----> Cluster
+ | | | ------------ | | ------------ |---|----> Node
+ | | | | | | ------------> | | |---|---|----> Pod
+ ----|---| | Pod1 | | | | Pod2 | | |
+ | | | | | <------------ | | | | |
+ | | ------------ | | ------------ | |
+ | -------------------- -------------------- |
+ ---------------------------------------------------------------
+
+- Nodeport access a service from outside the cluster via internet.
+
+- Attach a port number that is assigned to you with public DNS and that port number is attached to the virtual ip address and the VIP will contact the pod inside a node. As a result, the container inside pod will be shown to us via internet.
+
+- First take the deployment.yml file from above and apply it in the kubectl. Then apply the service.yml file below.
+
+**service.yml**
+
+ kind: Service --> Defines to create service type object
+ apiVersion: v1
+ metadata:
+ name: demoservice
+ spec:
+ ports:
+ - port: 80 --> Containers port exposed
+ targetPort: 80 --> Pods port
+ selector:
+ name: deployment --> Apply this service to any pods which has the specific label
+ type: NodePort --> Specifies the service type i.e. ClusterIP or NodePort
+
+- Make a deployment file and inside it, write the data that is written above in the deployment.yml file.
+
+- NodePort will make the pod accessible to the internet or outside the cluster.
+
+- After making a pod of this file, type `kubectl get svc`, you'll be provided a port number like this `80:/TCP`.
+
+- `kubectl describe svc demoservice` will show you the details of the service.
+
+- Browser will access the VIP that is attached to the port number. VIP will will access a pod and an application inside it.
+
+- `minikube ip` will give the ip address that will be used in the browser to work on.
+
+- After getting the minikube ip address and the port number from `kubectl get svc`, write `http://:` in the browser. It will give you the success message.
+
+- If you're using a cloud platform, you can take the DNS link and attach it with port-number to make it work.
+
+## **Volumes**
+
+- Containers are short lived in nature.
+
+- All the data stored inside a container is deleted if the container crashes. However the kubelet will restart it with a clean state, which means that it will not have any of the old data.
+
+- To overcome this problem, volume comes into picture and kubernetes uses it. A Volume in Kubernetes represents a directory with data that is accessible across multiple containers in a Pod.
+
+- In kubernetes, a volume is attached to a pod and shared among the containers of that pod.
+
+- The volume has the same life span as the pod and it will not be affected if the containers crashed. If new containers are created, then they will take the data from the volume that is already present.
+
+- If the pod is crashed then the volume will also be crashed.
+
+ Pod
+ --------------------------------
+ | --------- |
+ | | Vol | |
+ | --------- |
+ | / \ |
+ | --------- --------- |
+ | | C00 | | C01 | |
+ | --------- --------- |
+ --------------------------------
+
+### **Volume Types**
+
+- A volume decides the properties of the directory/pod like size, content etc. Some of the volume types in which you can store the data are.
+
+- None-local such as EmptyDir or host path.
+
+- File sharing type such as NFS.
+
+- Cloud provider such as AWSElasticBlockStore, AzureDisk etc.
+
+- Distributed filesystem types, e.g. glusterfs or cephfs.
+
+- Special purpose types like secret, gitrepo.
+
+- **EmptyDir**
+
+ - When a pod is newly created and assigned to a node then an emptydir volume is also created and exist as long as that pod is running on that node.
+
+ - Use emptydir if we want to share content b/w multiple containers on the same pod and not to the host machine.
+
+ - As the name says, it is initially empty.
+
+ - After the containers are created, they will be mounted/attached with same volume.
+
+ - When a pod from a node is deleted, the data in the emptydir will be deleted forever.
+
+ - A container crashing does not remove a pod from a node. The data in an emptydir volume will be safe if the container crashes.
+
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: myvolemptydir
+ spec:
+ containers:
+ - name: c1
+ image: centos
+ command: ["/bin/bash", "-c", "sleep 15000"]
+ volumeMounts: ---> Mount definitions inside the containers
+ - name: xchange
+ mountPath: "/tmp/xchange"
+ - name: c2
+ image: centos
+ command: ["/bin/bash", "-c", "sleep 10000"]
+ volumeMounts:
+ - name: xchange
+ mountPath: "/tmp/data"
+ volumes:
+ - name: xchange
+ emptyDir: {}
+
+ - After creating a pod, type `kubectl exec pod-name -it -c c1 -- /bin/bash`. It will take you inside the pod container 1 terminal.
+
+ - `cd tmp/xchange` will take you inside the xchange directory. Create any kind of file and write something in it.
+
+ - Type `kubectl exec pod-name -it -c c2 -- /bin/bash`. It will take you inside the pod container 2 terminal.
+
+ - `cd tmp/data` will take you inside the data directory. If you type `ls`, it will show you the same file that was created in the first container.
+
+ - If you modified it, it will show you the changes in another container also.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [26/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day27.md b/Days/day27.md
new file mode 100755
index 0000000..00ba683
--- /dev/null
+++ b/Days/day27.md
@@ -0,0 +1,197 @@
+On the twenty-seventh day, I learned the following things about Kubernetes.
+
+# **Jobs, Init containers and Pod lifecycle**
+
+## **Jobs**
+
+- Jobs is another object but it's purpose is different from pod.
+
+- We have replicasets,, daemonsets, and deployments etc. They all share one common property. They always make sure that the pod is running. The controller restarts or reschedules the pod to make sure the application in a pod always keeps running.
+
+- There is another object that is called Jobs. It is not bound to be running every time. Instead, if the task is completed, the jobs will stop. The pod can be recreted again if required but the jobs will stop if the work is finished.
+
+- You can schedule it and it will be created and deleted multiple times.
+
+- `sleep 5` will stop the container after 5 seconds. It will not delete the job.
+
+ apiVersion: batch/v1
+ kind: Job
+ metadata:
+ name: myjob
+ spec:
+ template:
+ metadata:
+ name: myjob
+ spec:
+ containers:
+ - name: c01
+ image: centos:7
+ command: ["bin/bash", "-c", "echo Bilal Khan; sleep 5"]
+ restartPolicy: Never
+
+- `watch kubectl get pods` will give you the jobs with Ready 1/1 but after 5 seconds completion, it will show you the Ready 0/1 because the container is now deleted and it is not created again and the status is completed.
+
+- Jobs will not get deleted by itself. You have to delete it by writing `kubectl delete -f file.yml`.
+
+### **Parallelism**
+
+- It will create multiple pods and run them parallely and delete them after some given time.
+
+ apiVersion: batch/v1
+ kind: Job
+ metadata:
+ name: testjob
+ spec:
+ parallelism: 5
+ activeDeadlineSeconds: 10
+ template:
+ metadata:
+ name: testjob
+ spec:
+ containers:
+ - name: c01
+ image: centos:7
+ command: ["bin/bash", "-c", "echo Bilal Khan; sleep 30"]
+ restartPolicy: Never
+
+- `parallelism: 5` will run 5 pods parallely.
+
+- `sleep 30` will terminate the containers after 30 seconds.
+
+- `activeDeadlineSeconds: 10` will delete the pods after 40 seconds. The container will be deleted after 30 seconds and after 10 seconds of container deletion, the pods will also be deleted.
+
+- After applying, if you type `watch kubectl get pods`, you will see that all the pods are deleted after 40 seconds.
+
+### **CronJob**
+
+ apiVersion: batch/v1
+ kind: CronJob
+ metadata:
+ name: bilal
+ spec:
+ schedule: "* * * * *"
+ jobTemplate:
+ spec:
+ template:
+ spec:
+ containers:
+ - image: ubuntu
+ name: bilal
+ command: ["/bin/bash", "-c", "echo Bilal Khan; sleep 5"]
+ restartPolicy: Never
+
+- The `* * * * *` in the `schedule` shows the minutes. Each star represents one minute. It means that new pod will be created after every one minute and the container inside that pod will be terminated after every 5 seconds.
+
+- After applying this file, type `watch kubectl get pods` and you will see that new pod is created after every one minute and from 0 to 5 seconds the Ready state will be equal to 1/1 but after 5 seconds, the Ready state will be equal to 0/1. It means that the container is deleted after 5 seconds.
+
+## **Init Container**
+
+- Init container is an initialized or the starting container that will run and is required before running the main container. Let's say that the main container is an application but Init container is the process to first install that application, log in and then run that application in the main container.
+
+- You can specify the processes in init container before creating the main container. When the main container is created, the init container will be deleted.
+
+- If the init container fails then kubernetes will repeatedly create it again until it is succeeded.
+
+### **Use cases**
+
+- Making a format of the database before inserting the values in it.
+
+- Delaying the applications to launch until the dependencies are ready.
+
+- Clone the git repository into the volume to get the necessary files.
+
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: initcontainer
+ spec:
+ initContainers:
+ - name: c1
+ image: centos
+ command: ["/bin/sh", "-c", "echo LIKE AND SUBSCRIBE BILAL KHAN > /tmp/xchange/testfile; sleep 30"]
+ volumeMounts:
+ - name: xchange
+ mountPath: "/tmp/xchange"
+ containers:
+ - name: c2
+ image: centos
+ command: ["/bin/bash", "-c", "while true; do echo `cat /tmp/data/testfile`; sleep 5; done"]
+ volumeMounts:
+ - name: xchange
+ mountPath: "/tmp/data"
+ volumes:
+ - name: xchange
+ emptyDir: {}
+
+- After applying this file, type `watch kubectl get pods`. It will show you `init:(0/1)`. It will show you that the container is first initializing. Then it will give you the Ready message `1/1` that the container is running.
+
+- After that, type `kubectl logs -f pods/pod-name` to print the message after every 5 seconds.
+
+- Type `kubectl describe pod/initcontainer`, you can get the condition of the pod that first this happened and then this happened and so on.
+
+- Type `kubectl describe pod/initcontainer | grep -A 5 Conditions` will only show you the conditions.
+
+## **Pod lifecycle**
+
+There are different phases of a pod.
+
+- Pending
+- Running
+- Succeeded
+- Failed
+- Unknown
+- Completed
+
+### **Pending**
+
+- The pod has been accepted by k8s system but it is in the process and not running yet.
+
+- One or more of the container images are still downloading.
+
+- If the resources are not found then it will search them and give you the success or failure message.
+
+### **Running**
+
+- If the pod is successfully created in the node.
+
+- All the containers have been created.
+
+- Atleast one container is still running or is in the process of starting or restarting.
+
+### **Succeeded**
+
+- All the containers have successfully completed their required task and are going to be terminated and will not be restarted.
+
+### **Failed**
+
+- All the containers in the pod have been terminated and atleast one container has terminated.
+
+- The container either exited with non-zero status or was terminated by the system.
+
+### **Unknown**
+
+- The master does not know the state of the pod.
+
+- Typically due to an error in network or communicating with the host of the pod.
+
+### **Completed**
+
+- The pod has run and completed the task and there is no need to keep it running.
+
+## **Pod conditions**
+
+These are possible types.
+
+- **PodScheduled** - The master has scheduled a pod to a node.
+
+- **Ready** - The pod is created successfully and the container(s) is also running in it.
+
+- **Initialized** - All init containers have started successfully.
+
+- **Unscheduled** - The scheduler cannot schedule the pod right now due to the lack of resources.
+
+- **ContainerReady** - All containers are ready in the pod.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [27/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day28.md b/Days/day28.md
new file mode 100755
index 0000000..c25af2f
--- /dev/null
+++ b/Days/day28.md
@@ -0,0 +1,107 @@
+On the twenty-eighth day, I learned the following things about Datree.
+
+# Datree
+
+- Datree will help developers to prevent misconfigurations in files before going to the production.
+
+- Before causing failures, datree will check the file and after checking out, it will forward it.
+
+- It will first perform validation before going to production.
+
+## Get started
+
+- Visit https://www.datree.io/ to get started with datree.
+
+- Click on the Quick start button, it will lead you to the https://hub.datree.io/.
+
+- Signup to the account. You can signup with GitHub or Google.
+
+- After signup, open https://hub.datree.io/ again and install the Datree CLI.
+
+- The installation commands varies according to the operating system but for Linux write this command in the terminal `curl https://get.datree.io | /bin/bash`.
+
+- You will get the following output in your CLI:
+
+
+
+
+
+- After installing datree, type ` cat ~/.datree/k8s-demo.yaml`. It will show you demo kubernetes file.
+
+- `datree test ~/.datree/k8s-demo.yaml` will show you the validation.
+
+## How to handle multiple rules?
+
+- In kubernetes, there are so many rules in kubernetes like for containers, deployment, cron jobs, networking etc.
+
+- Datree provides built-in rules that will be checked in the file. Now the question arises that how to avail those rules.
+
+**Below are the steps:**
+
+- After the account creation, there will be a blank page and it will ask you to click on the Setup button.
+
+- After clicking on the setup button, you will see a pop up window showing you three commands to run. First is to install the datree as I have done it previously. The second command is the below that I am going to run and the third command is to run the CLI.
+
+- To check it, write the second command `datree test ~/.datree/k8s-demo.yaml`. It will give you the result as it is present in the above picture.
+
+- There are some checks that is run by the test command. If you take a look at them, you will see that YAML is validated, and kubernetes schema is also validated but the 4 policies checks are failing.
+
+- If you close the popup as I have told you earler and refresh the datree page, you will see the image in the history of datree page.
+
+- The first error is `❌ Ensure each container image has a pinned (tag) version [1 occurrence]`. It means that the nginx inside the `~/.datree/k8s-demo.yaml` file does not have a specific version. Instead only the latest version is mentioned.
+
+- The second error is `❌ Ensure each container has a configured memory limit [1 occurrence]`. It means that in the demo file, the memory limit is not provided.
+
+- The third error is `❌ Ensure each container has a configured liveness probe [1 occurrence]`. It means that livenessProbe is not present. Although the readinessProbe is present but it is desiring livenessProbe.
+
+- The fourth error `❌ Ensure workload has valid label values [1 occurrence]`. It means that owner does not have a value.
+
+## Publish the rules and share them
+
+- Go to the policies page and find the `pinned (tag)`. You will see the pinned tag option. Now close this option and this time, if you run the `datree test ~/.datree/k8s-demo.yaml`. It will show you 3 failures instead of four.
+
+- You can enable the rule by enabling the checkbox there and it will be appeared again. This is the first method.
+
+
+
+
+
+- The second method is that if you want to enable that rule and show it again, go to the settings and enable the policy as code check box and then download the policies.yaml.
+
+
+
+
+
+- In the *policies.yaml* file, uncomment any of the rule and then pulish it by writing, `datree publish policies.yaml`.
+
+You can share your policy with others also. After downloading the file, give it to others and comment and uncomment the data that you want to show them.
+
+## Make changes in YAML file
+
+- First edit the file by writing `sudo vi ~/.datree/k8s-demo.yaml` and then add a curly braces or made other changes.
+
+- After writing `datree test ~/.datree/k8s-demo.yaml`, you will see that a new error is generated.
+
+## Create your own policy
+
+- Creating your own policy is useful for different stages of deployment like for testing environment you want particular kind of policy check etc.
+
+- Click on the Create policy and give it a name.
+
+- Initially it will contain no rules and you can give rules by enabling it.
+
+- After enabling the rules, type `datree test ~/.datree/k8s-demo.yaml -p `. It will give you all the errors that has occurred.
+
+## What is token?
+
+- Token will build a connection b/w CLI and the GUI of datree. Whenever something happens in the CLI, it will be updated in the Datree dashboard.
+
+- To access the token, click on the settings and open the token management. You will see a hidden token and you can copy it.
+
+- You can also access it using CLI by writing, `cat ~/.datree/config.yaml` and it will show you the token.
+
+- You can change the token by typing `datree config set token ` and it will be set.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [28/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day29.md b/Days/day29.md
new file mode 100755
index 0000000..569fcd0
--- /dev/null
+++ b/Days/day29.md
@@ -0,0 +1,89 @@
+On the twenty-ninth day, I learned the following things about Lens.
+
+# Lens
+
+- Lens is an open source application that will help you in managing and monitoring clusters in real time.
+
+- It is a powerful IDE for people to deal with clusters on their daily basis. Otherwise you will have to use command line tools and big YAML files.
+
+- With lens, you can see the setup, configuration and increase the visibility that what is going on inside the cluster. You can get the statistics and add your dashboard.
+
+## Installation
+
+- You can download it from the Lens [website](https://k8slens.dev/). The site will suggest the appropriate download for your system—Mac, Windows, or Linux.
+
+- When you first open the application, it will prompt for your Len ID.
+
+
+
+
+
+- Choose Lens ID if you already have a Lens ID or need to create one. Alternatively, you can select Activation Code to proceed with an air-gapped installation, if you have already set up an activation code.
+
+**Note:** If you wish to perform an air-gapped installation but don’t have an activation code yet, you will need to create a Lens ID on an internet-connected device—you can do that on the Lens ID site, following the instructions below for new account creation.
+
+- On the next page, either log in or select Create your Lens ID.
+
+
+
+
+
+- You will need to enter a username, password, and email. Alternatively, you can authenticate with a GitHub or Google account.
+
+
+
+
+
+- You will need to verify your email, then select Add Lens Subscription.
+
+Note: From this Lens ID management page, you will also be able to create an activation code for air-gapped installation.
+
+
+
+
+
+- Choose a Lens Personal or Lens Pro subscription. (A 30-day free trial of Lens Pro is available).
+
+
+
+
+
+- Now you're ready to get started with Lens!
+
+
+
+
+
+- Select Open Lens Desktop to open Lens. The application will check for updates, and then you’ll be ready to get started.
+
+## Connecting to a cluster
+
+Lens will search common directories for kubeconfig files. If you click Browse clusters in catalog on the welcome page (or select the catalog icon in the upper right-hand corner), you may already find some clusters listed—local development clusters, for example. You can simply click on these clusters to connect to them with Lens.
+
+
+
+
+
+- If the minikube cluster is not present, then it is the problem of configuration file.
+
+- Open the *~/.kube* directory and inside it, open the *config* file. If the data inside the config file is like the below data then it means that the minikube won't appear.
+
+ apiVersion: v1
+ clusters: null
+ contexts: null
+ current-context: ""
+ kind: Config
+ preferences: {}
+ users: null
+
+- The solution for this is to copy the data of the *admin.conf* file and paste it into the conf file. *admin.conf* file will be present in the $HOME directory.
+
+## Create a resource
+
+- Click on the plus button to open a terminal and click to create a resource.
+
+- After the terminal is opened, just select the type of resource that you want.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [29/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day3.md b/Days/day3.md
new file mode 100755
index 0000000..cf015af
--- /dev/null
+++ b/Days/day3.md
@@ -0,0 +1,30 @@
+On the third day, I learned the following things about GitHub.
+
+- Click on the plus button to create a new repository.
+- `git remote add origin repo-copy/link.git` will add the local storage or the origin into the remote server.
+- `git remote -v` will give the links of all the repositories from where the data was added or is to be added.
+- When the repository is first created, `git push origin branchname` will push the data from the branchname and upload it on GitHub.
+- Click on the commit link to see the history of commits.
+- Click on the fork button to make a copy of someone else's repository in your own account.
+- `git clone repository-link.git` will download the repository data into your local storage.
+
+**Origin Repository**
+
+The copy of the repository that is forked from someone else's account and now it is present in your own account will be called origin.
+
+**Upstream Repository**
+
+Upstream is the original repo that you have forked from an original account.
+
+- `git remote rm origin` will delete the origin data form your local repository.
+- `git remote add upstream original-repo/link.git` will add the data of the original repository.
+- `git push origin branchname` will send a pull request to the upstream account from the origin account so that upstream could merge those changes into itself.
+- In the pull request section, *Merge pull request* option will be appeared so that the owner of the repository could merge the requested data into itself.
+- `git fetch --all --prune` will fetch all the data from the upstream or the original account and transfer it to the origin account or a person's repository who forked it. *Prune* means that only the relevant data will be fetched.
+- `git reset --hard upstream/main` will delete all your local changes to main.
+- `git pull upstream main` will fetch the data from the upstream and delete all the local changes also.
+- Merge conflict will arise if there are multiple changes are committed on the same line.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [3/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day30.md b/Days/day30.md
new file mode 100755
index 0000000..305080c
--- /dev/null
+++ b/Days/day30.md
@@ -0,0 +1,55 @@
+On the thirtith day, I learned the following things about Monokle.
+
+# Monokle
+
+- Monokle will manage and debug your manifests before you deploy them to your cluster.
+
+- It will help you identify multiple kubernetes objects connection and their place in the manifests.
+
+- It is a great tool to inspect your Kubernetes manifests with an easy interface showing you how your manifests are connected to each other and how they translate to your existing cluster. It also allows your team to avoid drifts between your manifests and clusters as you keep adding more and more components.
+
+## Where monokle lies?
+
+- Monokle lies in b/w Dev and Ops. Let's take a look at this picture.
+
+
+
+
+
+- After development, the manifest will be given to the monokle for debugging and then it will be deployed.
+
+## Installation
+
+- Go to this [website](https://github.com/kubeshop/monokle) and download the monokle according to your operating system.
+
+- After installation, open the monokle. It will show you this screen.
+
+
+
+
+
+- Click on open a new/empty project and give it a name.
+
+## Work with YAML file
+
+- After insallation, open the documentation [website](https://kubeshop.github.io/monokle/) and click on working with Kustomize and it will lead you [here](https://github.com/argoproj/argo-rollouts/tree/master/manifests).
+
+- Click on the [argo-rollouts](https://github.com/argoproj/argo-rollouts) and copy the HTTPS link so that you can clone it.
+
+- Open the terminal and type `git clone https://github.com/argoproj/argo-rollouts.git` to clone the repository.
+
+- Open the documentation again and now click on working with Helm and it will lead you [here](https://github.com/emissary-ingress/emissary/tree/master/charts/emissary-ingress).
+
+- Click on the [emissary](https://github.com/emissary-ingress/emissary) and copy the HTTPS link so that you can clone it.
+
+- Open the terminal and type `git clone https://github.com/emissary-ingress/emissary.git` to clone the repository.
+
+## Initial page
+
+
+
+
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [30/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day31.md b/Days/day31.md
new file mode 100755
index 0000000..818d748
--- /dev/null
+++ b/Days/day31.md
@@ -0,0 +1,97 @@
+On the thirty first day, I learned the following things about Kubescape.
+
+# Kubescape
+
+- Kubescape scans K8s clusters, Kubernetes manifest files (YAML files, and HELM charts), code repositories, container registries and images, detecting misconfigurations according to multiple frameworks (such as the NSA-CISA, MITRE ATT&CK®), finding software vulnerabilities, and showing RBAC (role-based-access-control) violations at early stages of the CI/CD pipeline. It calculates risk scores instantly and shows risk trends over time.
+
+- You can create your own frameworks to secure the clusters.
+
+- It provides both the CLI and the GUI format to make the usibility easy.
+
+- Read the detailed article about Kubescpe [here](https://www.armosec.io/blog/kubescape-the-first-tool-for-running-nsa-and-cisa-kubernetes-hardening-tests/).
+
+## Installation
+
+- Go to this GitHub [repo](https://github.com/kubescape/kubescape) and install the kubescape according to your operating system.
+
+## Create YAML files
+
+- Open the monokle application and click on new project from template.
+
+- Give a project name and click on the create button.
+
+- Click on the advanced pod template button, click on the start, give it a name, namespace, image, click on the submit and done.
+
+- Once it is created, save and deploy it.
+
+- Open the Lens application, go to your catalog, click on the clusters and open the minikube node.
+
+- Once it is opened, click on the plus button on the side with terminal and click on the create resource.
+
+- Select a Deployment template and click on the create and close button on the upper right side of the terminal.
+
+- After creating the deployment, if you write, `kubectl get pods`, it will show you the pods running under the deployment.
+
+- If you type `kubectl get deployment`, it will show you the deployment also.
+
+## Performing scan
+
+- Once the kubescape is installed, pod and deployment are created, the next step is to type `kubescape scan --submit --enable-host-scan --verbose` to perform scanning.
+
+- Once the scanning is done, you will a lot of data in terminal showing you where the scanning is failed and passed.
+
+- It will also show you the severity of the data, whether it is high severity, medium or low.
+
+- At the end, it will show you the resource summary by pointing you the risk score and the number of resources.
+
+## Opening the GUI
+
+- At the end, it will give you a link and that link will lead you to the armosec.
+
+- After opening the link, the kubescape cloud page will be like this:
+
+
+
+
+
+- if you scroll down, there will be a bunch of failed statuses, their IDs, descriptions.
+
+- If you click on any of the failed status, it will give you a pop page like this one.
+
+
+
+
+
+- As you can see that these are the pods, deployments and other things that are present in my minikube.
+
+- You can also open and close the boxes if you want. It means that they will be ignored.
+
+- The wrench that you see attached with each object will take you to the file and show you the lines in which the errors are present.
+
+## Making your own framework
+
+- You make framework because you want your own rules to be present in the project. They are the custom rules like deployment, testing etc.
+
+- Click on the settings on the above right side of the kubescape.
+
+- On the side menu, under the posture heading, click on the frameworks and it will open a page for you.
+
+- Click on create a new framework. Give framework a name and a description and check some errors.
+
+- Open this [website](https://cloud.armosec.io/repositories-scan) and click on the repositories scan option.
+
+- Take a copy of the second option that is **Scan a cloned repository from a local directory**. The will be the copy `kubescape scan framework --submit --account `.
+
+- After running the above code, you will see the details of errors of your newly created framework.
+
+## Scan a YAML file
+
+- Go to the directory in which the YAML file is present and open the terminal there.
+
+- Write `kubescape scan filename.yaml` to scan one YAML file or if you want to scan all the files, write `kubescape scan *.yaml`.
+
+- Write `kubescape scan --submit` to scan the whole repository.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [31/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day32.md b/Days/day32.md
new file mode 100755
index 0000000..ed79766
--- /dev/null
+++ b/Days/day32.md
@@ -0,0 +1,126 @@
+On the thirty second day, I learned the following things about GitHub Actions.
+
+# GitHub Actions
+
+- GitHub actions will automate the software development workflows right in your repository.
+
+- Let's say you pushed some changes in a branch to GitHub and you want those changes to be merged but the new code that you have added, you do not want to break the existing code with it. In this way, GitHub actions come into picture.
+
+- It will run some checks before the code is pushed or pulled so that after only running those checks, the code will be pushed.
+
+- You can read the documentation related to workflow, events, runner etc in this [page](https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions).
+
+## Quickstart
+
+- Visit this [website](https://docs.github.com/en/actions/quickstart) and you will see all the steps in order to automate the workflow.
+
+- First create a directory and inside this directory initialize the git by writing `git init`.
+
+- Once the git is initialized, create the subdirectory by the name of github and command is `mkdir .github`.
+
+- If you want to check, type `ls -a`. It will show you *.git* and *.github* directories.
+
+- Create subdirectory inside the github directory by the name of the workflows by typing the command `mkdir .github/workflows`. Type `ls .github`. It will show you the workflow inside the *.github* directory.
+
+- After creating the *workflows* directory, create a file in it by writing `touch .github/workflows/github-actions-demo.yml`.
+
+- Open this newly created file and copy the data from the [website](https://docs.github.com/en/actions/quickstart) and paste it inside the file.
+
+ name: GitHub Actions Demo
+ run-name: ${{ github.actor }} is testing out GitHub Actions 🚀
+ on: [push]
+ jobs:
+ Explore-GitHub-Actions:
+ runs-on: ubuntu-latest
+ steps:
+ - run: echo "🎉 The job was automatically triggered by a ${{ github.event_name }} event."
+ - run: echo "🐧 This job is now running on a ${{ runner.os }} server hosted by GitHub!"
+ - run: echo "🔎 The name of your branch is ${{ github.ref }} and your repository is ${{ github.repository }}."
+ - name: Check out repository code
+ uses: actions/checkout@v3
+ - run: echo "💡 The ${{ github.repository }} repository has been cloned to the runner."
+ - run: echo "🖥️ The workflow is now ready to test your code on the runner."
+ - name: List files in the repository
+ run: |
+ ls ${{ github.workspace }}
+ - run: echo "🍏 This job's status is ${{ job.status }}."
+
+- This entire data inside this file is called a workflow.
+
+- The jobs that you see above will only run if a particular event is occured.
+
+- The `on` tag shows the event that if the code is pushed then run these jobs.
+
+## Work on the file
+
+- Make *names.txt* file in the workflow directory by writing `touch names.txt`.
+
+- Write `git status` to check the status.
+
+- Add all the data in the git by writing `git add .`
+
+- Commit it in the git by writing `git commit -m `.
+
+- Create a new public repository on GitHub.
+
+- Once the repo is created, write `git remote add origin https://github.com//.git` to add the repo in the origin.
+
+- After committing the data, write `git push origin master` to push the data in the repository.
+
+- After pushing the data, if you go to the Actions tab in the repository, you will se that the github actions are performed.
+
+- Write something in the *names.txt* and push it again by creating another branch and creating a pull request. If you went back to the Actions tab, you will again see that the github actions are performed.
+
+## Create another file
+
+- Create a file by the name *deployment.yml* and write the following data in it.
+
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: nginx-deployment
+ labels:
+ app: nginx
+ spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: nginx:1.14.2
+ ports:
+ - containerPort: 80
+
+- Create another file in the workflows directory by the name *kubescape-demo.yml* and write the following data in it.
+
+ name: Kubescape
+
+ on:
+ push:
+ branches: [ master ]
+ pull_request:
+ branches: [ master ]
+
+ jobs:
+ nsa-security-check:
+ runs-on: ubuntu-latest
+
+ steps:
+ - name: Checkout
+ uses: actions/checkout@v2
+
+ - name: Install Kubescape
+ run: curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash
+
+ - name: Scan YAML files
+ run: kubescape scan framework nsa *.yml
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [32/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day33.md b/Days/day33.md
new file mode 100755
index 0000000..a19ad43
--- /dev/null
+++ b/Days/day33.md
@@ -0,0 +1,103 @@
+On the thirty third day, I learned the following things about Prometheus.
+
+# Prometheus
+
+- Prometheus is a free software application that is used for monitoring and alerting.
+
+- The record(metrics) that is monitored is going to be stored in a time-series database and that record is built using a HTTP pull model.
+
+- Prometheus will hit the target to collect the metrics, show the data or even alert if some threshold are met.
+
+## Why Prometheus?
+
+- It contributes to the DevOps system by monitoring cloud native applications, infrastructure and hundreds of microservices.
+
+- It fits both machine centric monitoring as well as monitoring of highly dynamic service orientated architecture.
+
+- It is designed for reliability. It will quickly diagnose the problems.
+
+- Each prometheus server is standalone. It means that it is not dependent on network storage or other remote services.
+
+- You don't need an extensive infrastructure to use it.
+
+- The most important thing that differentiate prometheus from other time-series databases is that it is pull-based tool unlike nagios, amazon cloud watch, new relic etc.
+
+- When you're working with many microservices and each service is pushing their metrics to the monitoring system, it creates a high load of traffic within your infrastructure. The infrastructure is overloaded with constant push requests.
+
+- It pulls the targets in order to retrieve metrics for them. E.g: Node exporter or application exporter.
+
+- Prometheus retrieves the metrics via HTTP call. Node exporter and application exporter will listen to particular pod and then a prometheus server will initiate a HTTP call to this particular exporter and fetch system or appliction metrics from the end.
+
+## Continuous monitoring in prometheus
+
+- Monitoring applications and application servers is an important part of DevOps culture and process.
+
+- You continuously want to monitor applications and servers for application exception, server CPU, memory usage or storage spikes.
+
+- Prometheus will also give the notification if the cpu or memory goes up or down so that you can perform appropriate actions.
+
+- That's why prometheus is used in continuous monitoring.
+
+## Prometheus architecture
+
+- The core prometheus has a main component called prometheus server that does the actual monitoring service.
+
+- Prometheus server is made of three parts.
+
+ 1. Retrieval - It pulls the metric data
+ 2. Storage - It stores the metric data
+ 3. HTTP Server - It accept the queries
+
+- **Target:** The things that prometheus monitors are called targets. In short the prometheus server monitors the target.
+
+- **Metric:** Each unit of target such as current cpu status, memory usage or any other specific unit that you want to monitor is called a metric.
+
+- Prometheus server collects metric from the target over HTTP, stores them localy or remotely and then displays them back in the prometheus server.
+
+- Prometheus server scrapes a target for a specific interval and after that, it will store it in a time-series database.
+
+- You define the targets to be scraped and also define the time interval for scrapping metrics in the *prometheus.yml* configuration file.
+
+- You get the metric details by querying from the prometheus time-series database where the prometheus stores metrics and it uses a query language PromQL in the prometheus server to query metrics about the target.
+
+- In other words, you ask the prometheus server via PromQL to show us a status of a particular target at a one particular time.
+
+- **Retrieval:** Prometheus has a data retrieval type that is responsible for getting or pulling the metrics from applications, services, servers and other target resources and storing and pushing them into the database.
+
+- **Storage:** Prometheus has the time-series database that stores all the metrics data like the current cpu usage or the number of excptions in an application.
+
+- **HTTP Server:** Prometheus accepts the queries for the stored data, web server or the server api that is used to display it on a dashboard either through prometheus dashboard or other data visualization tool called Grafana.
+
+- Prometheus provides client libraries in a number of languages that you can use to provide health status of your application.
+
+- Here is the link of the [client library](https://prometheus.io/docs/instrumenting/clientlibs/).
+
+- Prometheus is not only about application monitoring. You can use exporter to monitor third-party systems also.
+
+- Exporter is a piece of software that gets existing metrics from a third-party system and eventually export them to metric format that the prometheus server can eventually understands.
+
+- Prometheus has a list of exporters for different services that you can find them [here](https://prometheus.io/docs/instrumenting/exporters/).
+
+## Prometheus metrics and its types
+
+- Each unit of a target such as the current cpu status, memory usage or any other specific unit that you want to monitor is called a metric.
+
+- Server collects the metrics and stores them locally or remotely and displays them on the prometheus graphical user interface.
+
+- Metrics has four different types
+
+ **1. Counter Type:** A counter is a cumulative metric that represents a single value that can only increase or be reset to zero on restart. E.g. To represent the number of request served, tasks completed or errors.
+
+ Don't use a counter to expose a value that decreases or don't use a counter for the number of currently using processes.
+
+ **2. Gauge Type:** A gauge is a metric that represents a single numerical value that can go up and down. It is used to measure temprature, current memory usage.
+
+ **3. Histogram Type:** It takes many measurements of a value to later calculate averages or percentile. You know what the range of values will be up front, so you can define your own. E.g: How long something took or how big the size of the request was.
+
+ **4. Summary Type:** It is similar to a histogram but you don't know what the range of the values will be up front, so you cannot use histogram.
+
+Read in detail [here](https://prometheus.io/docs/concepts/metric_types/).
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [33/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day34.md b/Days/day34.md
new file mode 100755
index 0000000..696d34c
--- /dev/null
+++ b/Days/day34.md
@@ -0,0 +1,130 @@
+On the thirty forth day, I learned the following things about Prometheus.
+
+## Prometheus Installation
+
+- Go to this [website](https://prometheus.io/download/) and download the prometheus according to your operating system.
+
+- For linux type, `wget ` and it will download the tar file for you.
+
+- Untar the file by writing, `tar -xvf ` and then enter the directory using `cd`.
+
+- Open the *prometheus.yml* file using the `cat` command and it will show the targets that needs to be scraped and also the time interval for scrapping metrics.
+
+- Once you open the rules, it will show you the scrape_interval, alertmanager configuration, the rules that you can setup yourself, and the scrape configuration that is available on a particular port number.
+
+- By default, scrape configuration contains only one endpoint to scrape. It means that the prometheus itself will be scraped on a given port number.
+
+- `./prometheus` will execute prometheus. After activation, if you type `localhost:9090` in the browser, it will redirect you to the prometheus dashboard.
+
+- To run prometheus as a service, first type `sudo cp -r . /usr/local/bin/prometheus`. It will copy the prometheus directory data to the `/usr/local/bin/prometheus` directory.
+
+- After that create a *.service* file inside */etc/systemd/system* directory. Simply write `sudo vi /etc/systemd/system/prometheus.service` in the terminal and enter the following data in it.
+
+ [Unit]
+ Description=Prometheus Service
+ After=network.target
+
+ [Service]
+ Type=simple
+ ExecStart=/usr/local/bin/prometheus/prometheus --config.file=/usr/local/bin/prometheus/prometheus.yml
+
+ [Install]
+ WantedBy=multi-user.target
+
+- Save the data inside the file and start the service file by writing `sudo service prometheus start`.
+
+- After that, check the status of the service by typing `sudo service prometheus status`. It will show you that the service is running.
+
+- If you write `localhost:9090` in the browser, you will see that the prometheus dashboard is opened.
+
+- You can click on the check boxes to enable the local time or enable query history and much more.
+
+- You will see the search bar in which you will type the expression that you would like to execute.
+
+- The expression will be executed in the table and in the graph form.
+
+- In the end of the page, you will see a button by the name **Add Panel**, through which you can add multiple panels if you want. In this way, you can execute different queries all at once.
+
+- On the search bar, if you move to right side, you will see the small world map. If you click on it, it will give you a pop window of metrics explorer. Scroll down and click on the **up** option. It will give you this message `up{instance="localhost:9090", job="prometheus"}`.
+
+- You can check the data both in the form of table and graph.
+
+- If you type **go_info** in the search bar, you will you will get the result with the go version also `go_info{instance="localhost:9090", job="prometheus", version="go1.19.2"}`.
+
+- If you click on alerts, it will show you nothing because we don't have one. Alerts are the conditions that you have to specify. If the conditions are satisfied, then the alert will be shown to the maintainer/manager.
+
+- Inactive - If the condition is not satisfied.
+- Pending - If the condition is satisfieid.
+- Firing - If it exceeds the critical limit.
+
+Click on the status button. It will show seven different options.
+
+**1. Runtime and build information:** It will show the information of prometheus.
+
+**2. TSDB status:** TSDB is the time-series database. This option allows to check the databases statuses like the number of series or the number of chunks. You can take the name and search it as a query
+
+**3. Command line flags:** These are the flags that you can use to make changes in your service file and show the information of the service file.
+
+**4. Configuration:** Configuration contains the data that is present in the *prometheus.yml* file.
+
+**5. Rules:** Rules are the conditions that you specify.
+
+**6. Targets:** At first, only one endpoint `http://localhost:9090/metrics` is getting monitored. If you write `http://localhost:9090/metrics` in the browser, you will see nice format of the prometheus metrics.
+
+- **Help** is the description of what the metric is. It helps in the readibility.
+- **Type** will give you the type of a metrics.
+
+**7. Service Discovery:** It will show you the endpoint that is scraped and discover the labels.
+
+## Prometheus Node exporter
+
+- Node exporter is a way to measure various machine resources. Machine resources could be your memory, disk, cpu utilization.
+
+- Go to this [website](https://prometheus.io/download/) and scroll down to find the node exporter. Copy the tar file link and write `wget `.
+
+- After downloading the file, untar the file by writing `tar -xvf `.
+
+- List all the files inside that tar file and you will see that *node_exporter* file there. Copy the *node_exporter* file to the */usr/local/bin* directory by typing `sudo cp node_exporter-1.4.0.linux-amd64/node_exporter /usr/local/bin`.
+
+- Create a service for the node exporter also by typing `sudo vi /etc/systemd/system/node-exporter.service` and type the following data inside that file.
+
+ [Unit]
+ Description=Prometheus Node Exporter Service
+ After=network.target
+
+ [Service]
+ Type=simple
+ ExecStart=/usr/local/bin/node_exporter
+
+ [Install]
+ WantedBy=multi-user.target
+
+- After that, reload the system by typing `systemctl daemon-reload`.
+
+- Once the system is reloaded, type `sudo service node-exporter start` to star the node exporter. You can check the status of it by writing `sudo service node-exporter status`.
+
+- Type `http://localhost:9100/` in the browser because the node runs in the *9100* port.
+
+- You can check the metrics by typing `http://localhost:9100/metrics` in the browser.
+
+- After the node exporter is running, let's configure the node exporter into our *prometheus.yml* file.
+
+- Type `cd /usr/local/bin` to go inside the *bin* directory and then go to the *prometheus* directory. Now edit the *prometheus.yml* file by typing `sudo vi prometheus.yml` and the add the following job in it and save it.
+
+ - job_name: "node-exporter"
+ static_configs:
+ - targets: ["localhost:9100"]
+
+- After saving the file, type `sudo service prometheus restart` to restart prometheus and then check the status of it by typing `sudo service prometheus status`.
+
+- Go to the browser and type `http://localhost:9090/`. Go to the status and open the Service Discovery. You will see that the node-exporter is added.
+
+- Go to the configuration and check the configuration file. After that check out the metrics in the search bar. You will see that new metrics are added by the name of node. Click on any of the node metrics and it will show you the details.
+
+- You can stop the node-exporter by typing `sudo service node-exporter start`
+
+- You can stop the prometheus service by typing `sudo service prometheus stop`.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [34/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day35.md b/Days/day35.md
new file mode 100755
index 0000000..7189aec
--- /dev/null
+++ b/Days/day35.md
@@ -0,0 +1,96 @@
+On the thirty fifth day, I learned the following things about Terraform.
+
+## DevOps tasks before automation
+
+- In the past, when you had tasks before automation and you wrote an application and wanted to deploy it on the server, for that you have to do many things like:
+
+ - Get the servers and set them up
+ - Configure networking on those servers.
+ - Create route tables.
+ - Install necessary softwares.
+ - Configure the software.
+ - Install database etc.
+
+- All these things were manually done by system administrators and as a result, there were more human resources cost and more time and effort.
+
+- The above points were just the setup phase. After that, you had to maintain them like update the versions, deploy new release of an application, DB backups and updates, recover app and servers after crash and add the new servers also, etc.
+
+## DevOps tasks after automation
+
+- After the tasks were automated using DevOps, you can now automate all the process with infrastructure-as-code.
+
+- Infrastructure as code automate all the tasks instead of doing them manually.
+
+- All the knowledge and expertise of system administrators and operations team are packed into various programs and applications that carry out all these tasks.
+
+- IaC is a concept but there are IaC tools and programs(Terraform, Ansible, Chef, etc) that carry out these tasks.
+
+### Why are there so many tools? Can't we have just one tool?
+
+- Currently there is no tool that is doing all the tasks. Instead different tools are doing different tasks and each of them is good in that specific area.
+
+## Main categories.
+
+- There are 3 main categories of such tasks
+
+1. Provisioning of infrastructure
+
+ - Spinning(Twisting) up new servers
+ - Doing network configuration
+ - Creating load balancers
+ - Configuring all the stuff on the infrastructure level
+
+2. Configuring already provisioned infrastructure
+
+ - Installing applications on the servers
+ - Managing those applications
+ - This step is required to prepare the infrastructure or servers with all the necessary stuff to deploy your application.
+
+3. Deployment of application on the configured infrastructure
+
+ - With docker, the configuration and deployment are merged together.
+ - You package the configured application in a container and deployment them on a server.
+
+- Infrastructure as code automate the tasks in different categories for different phases.
+
+- You will use the combination of 2 or more IaC tools to automate the whole process.
+
+
+
+
+
+- Terraform is used for provision and configure the infrastructure and it is made specifically for the infrastructure.
+
+- Ansible and other tools are used to install and deploy applications on that provision infrastructure and they are made specifically for the configuration.
+
+# Terraform
+
+- Terraform is an open-source Infrastructure-as-code(IaC) tool developed by HashiCorp and it helps companies with infrastructure-as-code and automation.
+
+- It is used to define and provision the complete infrastructure using an easy-to-learn language HCL(HashiCorp configuration Language).
+
+- You can write your infrastructure as code on any cloud platform. It means that you're not dependent on a specific cloud provider like AWS, Azure, GCP, etc. Terraform will work for all.
+
+## Installation
+
+- Visit this [website](https://developer.hashicorp.com/terraform/downloads) and download and install terraform according to your operating system.
+
+- Once the terraform is installed, check its version by typing `terraform --version`.
+
+## Hello World Terraform Configuration
+
+- Create a directory by the name of **terraform** and inside that directory, create a subdirectory by the name of **hello-world**.
+
+- Inside the **hello-world** directory, create a *first.tf* file by typing `vi first.tf`.
+
+- Write the following data inside the file.
+
+ output hello {
+ value = "Hello World! Enjoy"
+ }
+
+- After writing and saving the data, type `terraform plan` and it will show the key and values that you have printed and it will show you the result.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [35/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day36.md b/Days/day36.md
new file mode 100755
index 0000000..4c3024c
--- /dev/null
+++ b/Days/day36.md
@@ -0,0 +1,111 @@
+On the thirty sixth day, I learned the following things about Terraform.
+
+## Terraform Configurations in JSON Format
+
+- First create a directory by the name of **hello-world-json**.
+
+- Get into it by writing `cd hello-world-json` and create a file by the name of *first.tf.json* in it.
+
+- Inside that file, write the following data in the JSON format.
+
+ {
+ "output": {
+ "hello": {
+ "value": "Hello Bilal, nice meeting you."
+ }
+ }
+ }
+
+- After writing and saving the data, write a command `terraform plan` and it will show the keys and values as a result that you have written in a file.
+
+- Terraform also works with the JSON format to write the infrastructure.
+
+## Write Multiple Blocks in Single Terraform File
+
+- Create a directory by the name of **hello-world-multi-block**.
+
+- Get into it by writing `cd hello-world-multi-block` and create a file by the name of *first.tf* in it.
+
+- Inside the *first.tf* file, write the following data.
+
+ output "firstoutputblock" {
+ value = "this is the first hello world block"
+ }
+
+ output "secondoutputblock" {
+ value = "this is the second hello world block"
+ }
+
+ output "thirdoutputblock" {
+ value = "this is the third hello world block"
+ }
+
+- After writing and saving the data, write a command `terraform plan` and it will show the keys and values as a result that you have written in a file.
+
+## Write Multiple Terraform files in the Same Directory
+
+- Create a directory by the name of **hello-world-file-destructure**.
+
+- Get into it by writing `cd hello-world-file-destructure` and create a file by the name of *first.tf* in it.
+
+- Inside the *first.tf* file, write the following data.
+
+ output "firstoutputblock" {
+ value = "this is the first hello world block"
+ }
+
+- Create another file by the name of *second.tf* and write the following data in it.
+
+ output "secondoutputblock" {
+ value = "this is the second hello world block"
+ }
+
+- Create another file by the name of *third.tf* and write the following data in it.
+
+ output "thirdoutputblock" {
+ value = "this is the third hello world block"
+ }
+
+- After writing and saving the data, run `terraform plan`; it will show the keys and values that you wrote in the files.
+
+- The outputs are displayed in alphabetical order of their names.
+
+## Create a variable in a file
+
+- Create a directory by the name of **hello-variable** and get into it by writing `cd hello-variable`.
+
+- After that, create a file by the name of *hello-variable.tf* and write the following data into it.
+
+ variable username {}
+
+ output printname {
+ value = var.username
+ }
+
+- After writing and saving the data, run `terraform plan`; it will ask you for the username and then show the keys and values that you defined.
+
+- If you want to print something more alongside the username, write it inside the quotation marks using interpolation, like this:
+
+ variable username {}
+
+ output printname {
+ value = "Hello, ${var.username}"
+ }
+
+- After writing and saving the data, run `terraform plan`; it will ask you for the username and then show the keys and values that you defined.
+
+- Now separate the variables and the outputs in different files by simply creating another file by the name of *variable.tf* inside the **hello-variable** directory.
+
+- Cut the `variable username {}` line from *hello-variable.tf* and paste it into the newly created *variable.tf* file.
+
+- After saving both files, run `terraform plan`; it will still ask you for the username and then show the keys and values that you defined.
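+
+- For reference, a rough sketch of what the two files contain after this split (the contents are taken directly from the steps above):
+
+      # variable.tf
+      variable username {}
+
+      # hello-variable.tf
+      output printname {
+        value = "Hello, ${var.username}"
+      }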
+
+## Enter a variable value in the command
+
+- We have seen that in order to enter the username, you first have to write `terraform plan` and then it will ask you for the value to be printed.
+
+- You can also pass the value on the command line by writing `terraform plan -var "username=Bilal Khan"`; it will print the result without prompting you for the value.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [36/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day37.md b/Days/day37.md
new file mode 100755
index 0000000..00501f6
--- /dev/null
+++ b/Days/day37.md
@@ -0,0 +1,139 @@
+On the thirty-seventh day, I learned the following things about Terraform.
+
+## Set the default value
+
+- If you want to set a default value so that you are not asked for it every time, change the code as shown below.
+
+**Change this**
+
+ variable username {}
+
+**To this**
+
+ variable username {
+ default = "World"
+ }
+
+- Now the username is set to *World*. If you type `terraform plan`, it won't ask you to enter the value.
+
+- Instead, it will use the default value in the output. If you want to override it, you can still do so through the command `terraform plan -var "username=Bilal Khan"`.
+
+## Multiple variables
+
+- You can also define multiple variables and print them at the same time; the process is really simple.
+
+- In the *variable.tf* file, under the username variable, type another variable like this:
+
+ variable age {
+ default = "25"
+ }
+
+- Make changes in the *hello-variable.tf* file like this:
+
+ output printname {
+ value = "Hello, ${var.username}. Your age is ${var.age}"
+ }
+
+- Now if you type `terraform plan`, it won't ask you for the username or age; it will use the default values.
+
+- If you remove the default value of age, it will ask you to enter it when you type `terraform plan`.
+
+- You can then give the age by writing this command in the terminal `terraform plan -var "age=24"`.
+
+- If you want to give both the username and age on the terminal without being asked, you can do this by writing `terraform plan -var "username=Bilal Khan" -var "age=24"`.
+
+## Variable Types
+
+- You can also set a type on a variable. To check the list of available types, visit this [website](https://developer.hashicorp.com/terraform/language/expressions/types).
+
+- Add a type in a variable like this:
+
+ variable age {
+ type = number
+ default = "25"
+ }
+
+- If you type `terraform plan -var "age=fds"`, it will give you an error pointing out the invalid type.
+
+## List Variables
+
+- Create a directory by the name of **list-variable** and get into it by writing `cd list-variable`.
+
+- After that, create a file by the name of *first.tf* and write the following data into it.
+
+ variable users {
+ type = list
+ }
+
+ output printfirst {
+ value = "first user is ${var.users[0]}"
+ }
+
+- In the above code, the values are taken as a list and then the first value is printed.
+
+- Save the data in a file and run `terraform plan`. It will ask you for the values to enter. You can enter the values in a list like this: `["bilal", "ali", "khan"]`.
+
+- You can also give a list of values in the terminal by typing `terraform plan -var 'users=["bilal", "ali", "khan"]'`
+
+- You can set the default values by making the changes in a file like this:
+
+ variable users {
+ type = list
+ default = ["bilal", "ali", "khan"]
+ }
+
+- Now just run the `terraform plan` command and the result will be shown without asking for any input.
+
+## Functions in Terraform
+
+- The Terraform language includes a number of built-in functions that you can call from within expressions to transform and combine values.
+
+- Go to this [website](https://developer.hashicorp.com/terraform/language/functions) and take a look at all the functions.
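+
+- As a quick illustration (a sketch, assuming the same `users` list variable from above), values can also be transformed with other built-in functions such as `length`:
+
+      output usercount {
+        value = "there are ${length(var.users)} users"
+      }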
+
+### Join function
+
+- Let's take a look at an example by using some functions.
+
+ output printfirst {
+ value = "first user is ${join(", ", var.users)}"
+ }
+
+- If you save it and run the `terraform plan` command, you will get the following result.
+
+ printfirst = "first user is bilal, ali, khan"
+
+### Upper function
+
+- Convert the data to upper case using the `upper` function.
+
+- Make another output by the name of *UpperFunc* and convert the first user into upper case.
+
+ output UpperFunc {
+ value = "${upper(var.users[0])}"
+ }
+
+- Write `terraform plan` in the terminal and you will get the upper-case value.
+
+### Lower function
+
+- Make another output by the name of *LowerFunc* and convert the first user into lower case.
+
+ output LowerFunc {
+ value = "${lower(var.users[0])}"
+ }
+
+- Write `terraform plan` in the terminal and you will get the lower-case value.
+
+### Title function
+
+- Make another output by the name of *TitleFunc* and convert the first letter of each word to upper case.
+
+ output TitleFunc {
+ value = "${title(var.users[0])}"
+ }
+
+- Write `terraform plan` in the terminal and you will get the value with the first letter of each word in upper case.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [37/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day38.md b/Days/day38.md
new file mode 100755
index 0000000..6a05655
--- /dev/null
+++ b/Days/day38.md
@@ -0,0 +1,123 @@
+On the thirty-eighth day, I learned the following things about Terraform.
+
+## Map Variable
+
+- Create a directory by the name of *map-variable* and get inside it by typing `cd map-variable`.
+
+- After creating a directory, make a file in it by the name of *variable.tf* and write the following data inside it.
+
+ variable "userage" {
+ type = map
+ default = {
+ bilal = 25
+ ali = 20
+ }
+ }
+
+ output "userage" {
+ value = "my name is bilal and my age is ${lookup(var.userage, "bilal")}"
+ }
+
+- First a variable named *userage* is declared with type map, and its values are set as defaults; the default entries are the ages of bilal and ali.
+
+- After declaring the variables, let's print the output to show the age of a particular key using the `lookup` function.
+
+## Use map variable Dynamically
+
+- Instead of changing the key (bilal or ali) in the file every time, you can use the map variable dynamically.
+
+- Create another variable like this and write the following data into it.
+
+ variable "username" {
+ type = string
+ }
+
+- After the variable is created, make some changes in the output like this:
+
+ output "userage" {
+      value = "my name is ${var.username} and my age is ${lookup(var.userage, var.username)}"
+ }
+
+- Now if you write `terraform plan`, it will ask you for a key to enter. Once you enter a key, it will show you the age according to that key.
+
+- You can also write a key in the terminal like `terraform plan -var "username=bilal"` and it will show you the value according to it.
+
+## TFVARS files in Terraform
+
+- You may face a problem that every time you want a person's data, the command line asks you to enter the key before it returns the value.
+
+- You can automate this by putting the values in a file and writing the age and username there.
+
+- To make this happen, first make a directory by the name of **tf-var** and get into it by using the `cd` command.
+
+- After that, make a file by the name of *first.tf* and write the following data into it.
+
+ variable "username" {
+ type = string
+ }
+
+ variable "age" {
+ type = number
+ }
+
+ output printname {
+ value = "Hello, ${var.username}, your age is ${var.age}"
+ }
+
+- Once the data is saved in the file, make another file by the name of *terraform.tfvars*. Inside this file, write the following data so that Terraform never asks you again.
+
+ age=25
+ username="Bilal Khan"
+
+- After saving the data in both files, if you write `terraform plan`, it will give you the result without asking for the username and age.
+
+## TFVARS File With a Different Name
+
+- If you want to give the *tfvars* file a different name according to your needs, you can do that too.
+
+- First of all copy the data of **tf-var** directory into a new directory using the command `cp -rvf tf-var tf-var-custom`. Get into the newly created directory by writing `cd tf-var-custom`.
+
+- After that, make a *.tfvars* file with any name you want, like development or production, and write the following data into it.
+
+ age=20
+ username="Ali Ahmed"
+
+- After that, write `terraform plan -var-file=filename.tfvars` and it will show you the result using that particular file.
+
+- You can find more about this command and other commands by writing `terraform plan --help | less`.
+
+## Read Environment Variable in Terraform Configurations
+
+- If you want to declare a variable in your terminal as an environment variable and then use it in Terraform, you can do that as well.
+
+- Create a directory by the name of *env-variable* and get inside it by typing `cd env-variable`.
+
+- After creating a directory, make a file in it by the name of *first.tf* and write the following data.
+
+ variable "username" {
+ type = string
+ }
+
+ output printname {
+ value = "Hello, ${var.username}"
+ }
+
+- Once the file is saved, write `terraform plan` to get the result. You will see that it will ask you for the value to enter and then it will give you the result.
+
+- To avoid this, you have to first declare an environment variable and then if you run the `terraform plan` command again, it won't ask you for the value to enter. Instead it will fetch the value from the environment variable.
+
+- First of all, type `echo $username`. It will show you nothing because an environment variable is not declared yet.
+
+- To declare an environment variable, type `export username=Umar` and then if you type `echo $username` again, it will show you result.
+
+- After that, type `terraform plan` command. You will see that again it is asking you for the value to enter.
+
+- The problem is not the environment variable itself; the problem is that Terraform expects the environment variable to be declared with a specific name.
+
+- You need to declare an environment variable by the name of `TF_VAR_username` and then it will accept it.
+
+- First export it by writing `export TF_VAR_username=Umar` and then if you write `terraform plan` it won't ask you for the value to enter.
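+
+- Putting the steps above together, the flow looks roughly like this:
+
+      # a plain variable is NOT picked up by Terraform
+      export username=Umar
+      terraform plan        # still prompts for var.username
+
+      # the TF_VAR_ prefix is what Terraform looks for
+      export TF_VAR_username=Umar
+      terraform plan        # no prompt; uses "Umar"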
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [38/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day39.md b/Days/day39.md
new file mode 100755
index 0000000..b069683
--- /dev/null
+++ b/Days/day39.md
@@ -0,0 +1,102 @@
+On the thirty-ninth day, I learned the following things about Terraform.
+
+# Terraform Core and Terraform Plugin
+
+- In this part, we're going to create some resources in a GitHub repository.
+
+- Visit the [Terraform Registry](https://registry.terraform.io/). It shows all the providers with which you can create your Terraform infrastructure.
+
+## How does Terraform work in the infrastructure?
+
+- You can create resources on GitHub, AWS, Azure, etc but how does terraform create the resources? What is the process?
+
+
+
+
+
+- First a developer writes configuration files and asks Terraform to create resources from them. Terraform then creates the resources on AWS, DigitalOcean, and other cloud providers.
+
+- When Terraform creates the resources, it maintains their state in a *terraform.tfstate* file.
+
+- A question arises: how does Terraform actually create the resources? The answer is **plugins**. The resources are created with the help of provider plugins.
+
+
+
+
+
+- When you mention a cloud provider in your Terraform configuration, Terraform downloads the plugin for that cloud provider.
+
+- Terraform then tells the plugin that it wants a particular resource on that cloud provider.
+
+- Every cloud provider's plugin is different from the others.
+
+- If your organization has an in-house cloud platform for which no plugin exists, you can write your own plugin in Golang, and Terraform will then create resources through your custom plugin.
+
+## Create your first Terraform Resource in the GitHub repository
+
+- Create a directory by the name of **terraform-first-resource** and get inside it by writing `cd terraform-first-resource`. Create a file in the directory by the name of *terraform.tf*.
+
+- Go to the [Terraform Registry](https://registry.terraform.io/) and search for your cloud provider. I will use the GitHub provider, which you can find [here](https://registry.terraform.io/providers/integrations/github/5.7.0).
+
+- You have to mention the provider in your Terraform file so that Terraform can fetch and manage its resources.
+
+- Open the *terraform.tf* and write the following data into it.
+
+ provider "github" {
+
+ }
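+
+- Depending on your Terraform version, you may also need a `required_providers` block so that `terraform init` knows where to download the plugin from; a minimal sketch, assuming the `integrations/github` source shown on the registry page above:
+
+      terraform {
+        required_providers {
+          github = {
+            source  = "integrations/github"
+            version = "~> 5.0"
+          }
+        }
+      }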
+
+- Now we want to add resources for the GitHub provider. To do this, go to the GitHub provider's [documentation](https://registry.terraform.io/providers/integrations/github/latest/docs), click on Resources on the left side, and find *github_repository*.
+
+- Click on the *github_repository* and copy the example usage code, paste it in the *terraform.tf* file and make some changes in it.
+
+ resource "github_repository" "terraform-first-repo" {
+ name = "first-repo-from-terraform"
+ description = "My first resource from terraform"
+ visibility = "public"
+ auto_init = true
+ }
+
+- In this block of code, *github_repository* is the resource type, which tells Terraform that it will create a GitHub repository.
+
+- *terraform-first-repo* is the local name through which Terraform identifies this resource on the local machine.
+
+- *name* is the repository name that will be created.
+
+- *description* is the description of the repository.
+
+- *visibility* sets the repository to public.
+
+- *auto_init* will create a README file.
+
+- `terraform plan` reads the Terraform configuration in the current working directory and shows you what the configuration will do.
+
+- `terraform providers` will show you the providers that terraform is currently using.
+
+- If you type `ls -a`, it will give you only the *terraform.tf* file.
+
+- You don't need to download and install the provider plugins manually. You can simply write `terraform init` to initialize it.
+
+- If you type `ls -a` again, it will show one more file and one directory that were downloaded.
+
+- Explore the subdirectories with `ls`; you will see the providers used in the configuration, the plugin version, the operating system and architecture the plugin was built for, and so on.
+
+- Now type `terraform plan`; it will show all the things that will be added when the plan is applied.
+
+- If you type `terraform apply` and enter the value *yes*, it will give an authentication error because we haven't told the provider which GitHub account to create the repository in.
+
+- For that purpose, open your GitHub account, click on your avatar, and open Settings. Scroll down to Developer settings, then go to Personal access tokens and Tokens (classic).
+
+- Click on generate a new token, give the token a name and check all the boxes. Click on the generate token option.
+
+- You will be given a token; copy it and write it in the provider block like this:
+
+ provider "github" {
+ token=""
+ }
+
+- Save the file, go to the terminal, and run the `terraform apply` command again to create the repository on GitHub.
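+
+- To avoid hard-coding the token in the file, the GitHub provider can usually read it from an environment variable instead (check the provider documentation); a sketch, with the token value as a placeholder:
+
+      export GITHUB_TOKEN=<your-personal-access-token>
+      terraform apply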
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [39/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day4.md b/Days/day4.md
new file mode 100755
index 0000000..025abc6
--- /dev/null
+++ b/Days/day4.md
@@ -0,0 +1,9 @@
+On the fourth day, I learned the following things about Networking.
+
+Click Here:
+
+- 🌐 [Day No. 4 of Learning Networking](../PDFs/Computer-Networking-1.pdf)
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [4/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day40.md b/Days/day40.md
new file mode 100755
index 0000000..e519675
--- /dev/null
+++ b/Days/day40.md
@@ -0,0 +1,55 @@
+On the fortieth day, I learned the following things about Terraform.
+
+## Terraform .tfstate file and destroy Command
+
+- After creating the *terraform.tf* file and applying it to create a repository on GitHub, two more files are created: *terraform.tfstate* and *terraform.tfstate.backup*.
+
+- Open the *terraform.tfstate* file and it will show you all the resources that you have created in *terraform.tf* file.
+
+- If you want to create another repository, you can add it in the *terraform.tf* file.
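+
+- A sketch of what such a second block could look like (the repository name and description here are just placeholders; the local name *terraform-second-repo* is the one referenced by the destroy command later):
+
+      resource "github_repository" "terraform-second-repo" {
+        name        = "second-repo-from-terraform"
+        description = "My second resource from terraform"
+        visibility  = "public"
+        auto_init   = true
+      }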
+
+- After adding the second repository block to the file, run the `terraform plan` command. It will show the new repository resources to be added.
+
+- Run the `terraform apply --auto-approve` command. The auto-approve flag skips the *yes*/*no* prompt, and Terraform creates the second repository without changing the first one.
+
+- Open the *terraform.tfstate* file and it will show you the first repository resources and the second repository resources that you have created.
+
+- Open the *terraform.tfstate.backup* and it will show you the backup of the earlier resources that you have created.
+
+- **Tip:** Don't try to manually change the .tfstate file.
+
+## Terraform Destroy
+
+- If you want to destroy the resources, you can do it by writing `terraform destroy`. It will destroy all the resources and the repositories that you have created.
+
+- If you want to destroy a specific resource, you can do this by simply writing `terraform destroy --target github_repository.terraform-second-repo`.
+
+- If you then write `terraform plan`, it will show that the second repository's resources need to be added again.
+
+## Terraform Validate Command
+
+- Before showing you how to validate the terraform, let's make some changes in the terraform files.
+
+- Open the *terraform.tf* file and move the provider block into a new *provider.tf* file.
+
+- Remove the token from the *provider.tf* file and write the following data into it.
+
+ provider "github" {
+      token = var.token
+ }
+
+- Make a *variable.tf* file and write the following data into it.
+
+ variable token {}
+
+- Make a *terraform.tfvars* file and write the token inside it.
+
+ token=""
+
+- After making these changes, type `terraform validate` command and it will give you the message **Success! The configuration is valid.**
+
+- Now you can destroy and apply the configuration file.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [40/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day41.md b/Days/day41.md
new file mode 100755
index 0000000..14a8f15
--- /dev/null
+++ b/Days/day41.md
@@ -0,0 +1,73 @@
+On the forty-first day, I learned the following things about Terraform.
+
+## Terraform Refresh
+
+- First of all, remove the second repository block from the *terraform.tf* file.
+
+- Apply the changes by writing `terraform apply --auto-approve`. It will remove the second repository from GitHub.
+
+- Open the first repository on GitHub and make some changes in it, such as changing its description.
+
+- If you open the *terraform.tfstate* file, you will see that the real state of the GitHub repo and the state recorded in *terraform.tfstate* are now different.
+
+- To make the states the same, run the `terraform refresh` command and then check the *terraform.tfstate* file again; both states will now match.
+
+- Run the `terraform show` command to print the current state of the resources in the terminal.
+
+- Although both states are now the same, the problem is that the `terraform.tf` file still contains the original text. This means that if you apply the configuration, it will remove the newly made changes.
+
+- In this way, if somebody changes your repository manually but you want the original text to remain, you can run `terraform plan` to show you the changes. You will see that the new changes will be replaced with the original text.
+
+- After reviewing the changes, run the `terraform apply` command. The manual changes will now be reverted back to the original text.
+
+- If you open the *terraform.tf* file, delete the description, and then run the `terraform plan` command, it will show you the text that will be removed from the GitHub repository.
+
+- Run the `terraform apply` command; after applying, if you open the GitHub repository, the description will be removed.
+
+## Terraform Output
+
+- Open the *terraform.tf* file and write the following data into it.
+
+ output "terraform-first-repo-url" {
+ value = github_repository.terraform-first-repo.html_url
+ }
+
+- You can find the attributes [here](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/repository).
+
+- Once the output is added, run `terraform validate` first to validate it, then run the `terraform plan` command to see which things will be added.
+
+- After that, run the `terraform apply --auto-approve` command to apply it. Once it is applied, run `terraform output`. It will give you the output values.
+
+## Terraform console
+
+- Open the *variable.tf* file and write the following variables in it.
+
+ variable "username" {
+ default="bilal"
+ }
+
+ variable "age" {
+ default = 23
+ }
+
+ variable "city" {
+ default = "quetta"
+ }
+
+- After saving the file, close it and write `terraform console` command in the terminal.
+
+- If you type *var.city*, it will give you the city. If you type *var.username*, it will give you the username.
+
+- If you type `github_repository.terraform-first-repo.html_url`, it will give you the output link.
+
+- The console reads the data from the configuration and state in your current working directory.
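+
+- A rough sketch of such a console session, assuming the variables above (the exact output formatting may differ between Terraform versions):
+
+      $ terraform console
+      > var.city
+      "quetta"
+      > upper(var.username)
+      "BILAL"
+      > exit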
+
+## Terraform file indentation
+
+- Open the Terraform files and mess up the indentation by adding extra spaces.
+
+- After adding the spaces, go to the terminal and run `terraform fmt`; all the files will be re-formatted with the standard indentation.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [41/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day42.md b/Days/day42.md
new file mode 100755
index 0000000..77cb691
--- /dev/null
+++ b/Days/day42.md
@@ -0,0 +1,164 @@
+On the forty-second day, I learned the following things about Ansible.
+
+### **System Administrator**
+
+A system administrator is a person who manages all the systems in an organization: which applications to install on a particular machine, changing the IP address of an OS, creating new users on an OS, and so on.
+
+### **Problem**
+
+- Managing the systems is easy when there are only a few servers, but with 1000 or more servers it becomes very difficult for one person to manage all of them in a short period of time.
+
+- Hiring many system administrators helps, but humans make mistakes, and a company may not have the budget to hire enough of them.
+
+### **Solution**
+
+The solution is automation. The role of the system administrator is replaced by DevOps engineers.
+
+- **Configuration:** Each and every detail of your machine(server, storage).
+
+- **Management:** Delete, update, and create the details.
+
+Configuration management tools are part of the operations side of DevOps.
+
+Configuration management tools are of two types:
+
+1. Push-based
+2. Pull-based
+
+**1. Push-based**:
+
+- In a push-based setup, the configuration server pushes the configuration to the nodes.
+
+- All the data is present on one server, and that server pushes the data and version updates out to the other nodes.
+
+- Push-based configuration is used when you want full control over the nodes and only want to send data when you decide to.
+
+- Examples of push-based tools are Ansible and SaltStack.
+
+ -----
+ | |
+ -----
+ / \
+ |
+ |
+ ----- ---------- -----
+ | | <---- | Server | ----> | |
+ ----- ---------- -----
+ / \ | / \
+ |
+ -----
+ | |
+ -----
+ / \
+
+**2. Pull-based**
+
+- The nodes themselves check and contact the server periodically (after some time), fetch the configuration, and update their versions from it.
+
+- Examples of pull-based tools are Chef and Puppet.
+
+- Pull-based configuration is useful when you keep adding nodes: you don't need to configure them manually, because they pull their configuration automatically.
+
+ -----
+ | |
+ -----
+ / \
+ |
+ |
+ ----- ---------- -----
+ | | ----> | Server | <---- | |
+ ----- ---------- -----
+ / \ | / \
+ |
+ -----
+ | |
+ -----
+ / \
+
+## What is Ansible?
+
+- Ansible is a configuration management tool that automates processes on the nodes. It controls and manages the servers automatically so that you don't have to do it manually.
+
+- It is a push-based configuration tool.
+
+- Michael DeHaan developed Ansible, and the project began in February 2012.
+
+- Red Hat acquired it in 2015.
+
+- Ansible is available for RHEL, Debian, CentOS, Oracle Linux, Windows.
+
+- It can be used on-premises or in the cloud.
+
+- It turns your code into infrastructure: if you want a particular setup, write the code, run it, and Ansible builds it for you.
+
+## Structure
+
+ ssh --------
+ --------------> | Node |
+ ------------------- / --------
+ | | /
+ | | ssh --------
+ | | ----------------> | Node |
+ | | --------
+ | | \
+ ------------------- \ ssh --------
+ Ansible Server --------------> | Node |
+ --------
+
+- There is no middleman required in this mechanism.
+
+- Ansible uses YAML to transfer data.
+
+- It communicates with the help of ssh.
+
+- It is agentless. In Chef, each node runs a chef-client agent to communicate with the Chef server; with Ansible, no agent is required on the nodes.
+
+- The file in which the recipe (the tasks) is written is called a playbook.
+
+## Advantages
+
+- Ansible is free to use for everyone.
+
+- It is very lightweight and is not specific to any OS.
+
+- It is very secure due to its agentless capabilities and SSH security features.
+
+- Ansible does not need any special system administrator skills to install and use it.
+
+- It follows the push-based mechanism to push the data. The server does not have to accept incoming requests all the time; Ansible controls when the data is pushed.
+
+## Disadvantages
+
+- The user interface is limited. Ansible Tower provides a GUI, but it is still maturing.
+
+- You can't achieve fully hands-off automation with Ansible, because someone still has to trigger the push.
+
+- It is relatively new to the market, so only limited support and documentation are available.
+
+## Terms used in Ansible
+
+- **Ansible Server:** The machine where ansible is installed and from which all the tasks and playbook will be written.
+
+- **Module:** Basically a module is a command or a set of similar commands meant to be executed on the client side.
+
+- **Task:** A task is a section that consists of a single procedure to be completed.
+
+- **Role:** A way of organizing tasks and related files to be called in a playbook.
+
+- **Fact:** Information fetched from the client system and the global variables with the gather facts operation.
+
+- **Inventory:** File containing the data about the ansible client servers.
+
+- **Play:** Execution of a playbook.
+
+- **Handler:** Task which is called only if a notifier is present.
+
+- **Notifier:** Section attributed to a task which calls a handler if the output is changed.
+
+- **Playbooks:** It consists of a code in YAML format, which describes tasks to be executed.
+
+- **Host:** These are the nodes which are automated by ansible.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [42/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day43.md b/Days/day43.md
new file mode 100755
index 0000000..c7ea1a6
--- /dev/null
+++ b/Days/day43.md
@@ -0,0 +1,170 @@
+On the forty-third day, I learned the following things about Ansible.
+
+ ssh --------
+ --------------> | Node |
+ ------------------- / --------
+ | | /
+ | | ssh --------
+ | | ----------------> | Node |
+ | | --------
+ | | \
+ ------------------- \ ssh --------
+ Ansible Server --------------> | Node |
+ --------
+
+- The Ansible server contains the Ansible packages, and the updates are pushed from it to the nodes.
+
+## Steps
+
+- Create an AWS account. Go to the services on the upper left side. Click on the compute and then click on EC2.
+
+- Click on the Instances(Running) and then click on the Launch instance on the upper right corner.
+
+- First give a tag name, then change the number of instances to 3.
+
+- Under Application and OS Images, select Amazon Linux.
+
+- Scroll down and create a new key pair.
+
+- Further scroll down in the network settings and click on create a new security group. Check the SSH and HTTP boxes. Leave the IP as it is.
+
+- Scroll down to the advanced settings and write the following data in the user data field.
+
+ #!/bin/bash
+ sudo su
+ apt update -y
+
+- Click on the Launch instance button and then click on View instances.
+
+- After launching the instances, rename them: one will be the Ansible server and the other two will be nodes.
+
+- Now connect to the EC2 instances one by one from your local machine using SSH.
+
+- Select the server instance in AWS and copy its public IP address. Once the IP is copied, open the terminal and write `ssh ec2-user@<public-ip>`. It will ask you to answer yes or no; type yes and it will give you a permission denied message.
+
+- Go to the directory where the Ansible key is present and use it by writing `ssh -i ansiblekey.pem ec2-user@<public-ip>`.
+
+- It will give another error like this **Permissions 0664 for 'ansiblekey.pem' are too open.**
+
+- To counter this error, change the permissions by writing `chmod 0400 ansiblekey.pem` and then run `ssh -i ansiblekey.pem ec2-user@<public-ip>` again.
+
+- You can exit the instance by writing **exit** and log in again by writing `ssh -i ansiblekey.pem ec2-user@<public-ip>`.
+
+- Do the same for the nodes, then go to the Ansible server terminal and become the root user by writing `sudo su`.
+
+- Then download the ansible package by writing `wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm`
+
+- Type `ls` command and then install the file that is downloaded by writing `yum install file-name`
+
+- `yum update -y` will update the machine.
+
+- Now install the required packages by typing `yum install git python python-pip openssl ansible -y`.
+
+- After downloading the packages, type `ansible --version` to check the version of ansible.
+
+- Now open the hosts file on the Ansible server, `/etc/ansible/hosts`, and paste the private IP addresses of node1 and node2 inside a group. This way, the record of each node is kept on the Ansible server.
+
+      [group-name]
+      <node1-private-ip>
+      <node2-private-ip>
+
+- The hosts file will only be used if the `/etc/ansible/ansible.cfg` file is updated by uncommenting the following lines. Uncommenting them activates the hosts (inventory) file.
+
+      inventory = /etc/ansible/hosts
+      sudo_user = root
+
+## Create a user
+
+- First start all three instances and become the root user on each by typing `sudo su`.
+
+- Now create a user named ansible on all three instances by typing `adduser ansible`.
+
+- Now set the password for this user by typing `passwd ansible` and it will give you an option to enter the password.
+
+- Now switch to the ansible user by typing `su - ansible` in all three instances.
+
+- If you try to install a package as this user, e.g. `sudo yum install httpd -y`, it will ask you for the password but still not download the package, because the user doesn't have root privileges yet.
+
+- Exit from the ansible user by typing `exit`, then as the root user type `visudo` on all three instances.
+
+- Now go inside this file and change the following things.
+
+      # Allow root to run any commands anywhere
+ root ALL=(ALL) ALL
+ ansible ALL=(ALL) NOPASSWD: ALL
+
+- Become an ansible user again by typing `su - ansible`.
+
+- Now go to the Ansible server and try to install the httpd package as the ansible user; this time it will work without asking for a password.
+
+ `sudo yum install httpd -y`
+
+- Now establish a connection between the server and the nodes. Switch to the ansible user on all the instances. From the Ansible server, try to reach a node by typing `ssh <node-private-ip>`.
+
+- It will give you the `permission denied` message.
+
+- Now we have to make some changes to the sshd_config file on all three instances. As the root user, open the `/etc/ssh/sshd_config` file and uncomment/comment the following lines.
+
+ PermitRootLogin yes
+ PasswordAuthentication yes
+ #PasswordAuthentication no
+
+- Do the same on node1 and node2, then restart the SSH service on all the instances by typing `service sshd restart`.
+
+- Now become the ansible user by typing `su - ansible` on all the instances, and from the server type `ssh <node-private-ip>` to access a node as the ansible user.
+
+- It will ask you for the password, and after that you will be inside that particular node.
+
+- Create a file there and it will be present on that node.
+
+## Solve a password problem that gets asked everytime
+
+ ssh ------------------------
+ --------------> | Public Key in a node |
+ ------------------- / ------------------------
+ | | /
+ | Public Key | ssh ------------------------
+ | | ----------------> | Public Key in a node |
+ | Private Key | ------------------------
+ | | \
+ ------------------- \ ssh ------------------------
+ Ansible Server --------------> | Public Key in a node |
+ ------------------------
+
+- The public key is copied to all the nodes; they use it to authenticate the Ansible server, so there is no need to enter a password every time.
+
+- This is a trust relationship: root pairs only with root and a user pairs only with the same user, which is why you have to be the ansible user on all the nodes (`su - ansible`) to access the other nodes.
+
+- As the ansible user on the server, create the keys by typing `ssh-keygen`.
+
+- Now find the hidden files by typing `ls -a` and you will get the .ssh directory.
+
+- `cd .ssh` will get you into ssh directory.
+
+- `ls` will show the id_rsa, id_rsa.pub, and known_hosts files, which contain the private key, the public key, and the known hosts.
+
+- Now copy the public key to both nodes by typing `ssh-copy-id ansible@<node-private-ip>`.
+
+- Now verify it: go back out of the .ssh directory by typing `cd ..` and then type `ssh <node-private-ip>`.
+
+- You will get into the node without the password being asked.
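+
+- In short, the key-based login flow described above looks like this (run as the ansible user on the server; the node IP is a placeholder):
+
+      ssh-keygen                             # generates ~/.ssh/id_rsa and id_rsa.pub
+      ssh-copy-id ansible@<node-private-ip>  # copies the public key to the node
+      ssh <node-private-ip>                  # now logs in without a password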
+
+## What if I want to make changes in few nodes or a group of few nodes?
+
+- Switch to ansible server by typing `su - ansible`.
+
+- `ansible all --list-hosts` will give you the list of all the nodes that are connected to the ansible server.
+
+- `ansible groupname --list-hosts` will list the nodes of a specific group.
+
+- Nodes are indexed in ascending order starting from 0, and in descending order starting from -1.
+
+- `ansible groupname[0] --list-hosts` will give the first node of a particular group.
+
+- `ansible groupname[1:4] --list-hosts` will give the details from node 2 to node 5 of a particular group.
+
+- The details of multiple groups can be shown by joining the patterns with a colon in between, like `[1:3]:[4:3]`.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [43/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day44.md b/Days/day44.md
new file mode 100755
index 0000000..7f69e4d
--- /dev/null
+++ b/Days/day44.md
@@ -0,0 +1,111 @@
+On the forty-fourth day, I learned the following things about Ansible.
+
+There are three ways to push the code from ansible server to the node(s).
+
+1. Ad-hoc commands
+2. Modules
+3. Playbooks
+
+## 1. Ad-hoc commands
+
+- Ad hoc means temporary; these commands do things for a short period of time.
+
+- They can be run individually to perform quick functions like creating files, starting or stopping services, etc.
+
+- Ad-hoc commands are essentially Linux commands used for a temporary purpose.
+
+- These commands have no idempotency: they will repeat the same task every time they are run, which is the drawback of ad-hoc commands.
+
+- These ad-hoc commands are not used for configuration management and deployment because the commands are of one time usage.
+
+- Ansible ad-hoc commands use the `/usr/bin/ansible` command-line tool, which is used to automate a single task.
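+
+- The general form of an ad-hoc command is (the host pattern, module, and arguments are placeholders):
+
+      ansible <host-pattern> -m <module> -a "<module arguments>"
+
+- If `-m` is omitted, the `command` module is used by default, which is why plain Linux commands can be passed with just `-a`.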
+
+### Steps to use ad-hoc commands
+
+- Start the instances first, then go to the directory where the Ansible key is present and connect to all three instances by writing `ssh -i ansiblekey.pem ec2-user@<public-ip>`.
+
+- Switch to the ansible user by typing `su - ansible`.
+
+- `ansible groupname -a "ls"` will execute the `ls` command on all the nodes of a particular group. `-a` is the argument flag; whatever is written inside the quotation marks will be executed.
+
+- `ansible groupname[0] -a "touch filename"` will create a file on node1 of a particular group.
+
+- `ansible all -a "touch filename"` will create a file on all the nodes in all the groups.
+
+- `ansible groupname -a "ls -al"` will give a long listing, including hidden files, on all the nodes of a particular group.
+
+- `ansible groupname -a "sudo yum install httpd -y"` will install the httpd package on all the nodes of a particular group.
+
+- `ansible groupname -ba "yum install httpd -y"` will also install the httpd package on all the nodes of a particular group. Here `sudo` is not used; instead the `-b` (become) flag provides the `sudo` functionality.
+
+- `ansible groupname -ba "yum remove httpd -y"` will remove the httpd package from all the nodes of a particular group.
+
+- To check a package, write `which httpd`. It will show you nothing.
+
+## 2. Ansible Modules
+
+- Module is a single command or single work that will be executed one at a time.
+
+- Ansible ships with a number of modules(called module library) that can be executed on remote hosts or through playbooks.
+
+- It is present by default in the package.
+
+- Your library of modules can reside on any machine and there are no servers, daemons or database required.
+
+- Where the ansible modules are stored?
+
+ - The default location for the inventory file is `/etc/ansible/hosts`.
+
+### Steps to use modules
+
+- `ansible groupname -b -m yum -a "pkg=httpd state=present"` will install the httpd package on all the nodes of a particular group. The `-m` flag selects the module. After that, `-a` passes the argument, but this time it is not a Linux command; it is a module argument. `pkg=httpd` names the package and `state=present` installs it. You can also write `installed`, but `present` is the standard.
+
+ - To install = present
+ - To update = latest
+ - To uninstall = absent
+
+- To check a package, write `which httpd` in both the nodes. It will show the `httpd` package.
+
+- `ansible groupname -b -m yum -a "pkg=httpd state=latest"` will update the `httpd` package on all the nodes of a particular group. `state=latest` updates the package.
+
+- `ansible groupname -b -m yum -a "pkg=httpd state=absent"` will remove the `httpd` package from all the nodes of a particular group. `state=absent` removes the package.
+
+- To check a package, write `which httpd` in both the nodes. It will show you nothing.
+
+- `ansible groupname -b -m yum -a "pkg=httpd state=present"` will install the `httpd` package on all the nodes again so that the service can be started next.
+
+- To check a package, write `which httpd` in both the nodes. It will show the `httpd` package.
+
+- To check the status of the service in the nodes, type `sudo service httpd status`. It will show you the inactive message.
+
+- Type `ansible groupname -b -m service -a "name=httpd state=started"` on the Ansible server and it will start the `httpd` service on all the nodes of a particular group. `state=started` starts the service.
+
+- To check the status of the service in the nodes, type `sudo service httpd status` again. It will show you the active message.
+
+- `ansible groupname -b -m user -a "name=<username>"` will create a user on all the nodes of a particular group.
+
+- To check it, type `cat /etc/passwd` in the nodes terminal and they will show you the newly created user at the end.
+
+- First create a file in the ansible server to copy it.
+
+- `ansible groupname -b -m copy -a "src=<filename> dest=/tmp"` will copy a file from the Ansible server and paste it into the **/tmp** directory of all the nodes in a particular group.
+
+- If you want to copy a file to one particular node only, type `ansible groupname[0] -b -m copy -a "src=<filename> dest=/tmp"`.
+
+- To check the directory, type `ls /tmp/` in a particular node.
+
+- `ansible groupname -b -m user -a "name=<username> state=absent"` will remove a user from all the nodes of a particular group.
+
+### How does the Ansible server know that a package is already present?
+
+- It is possible with the help of idempotency. The Ansible server's setup module sends a request to the nodes and checks whether a package is present or not. If it is already present, it will not overwrite it; if it is not present, it will install it on those nodes.
+
+- The setup command will know whether a package is already present or not.
+
+- Run `ansible groupname -m setup` on the server and it will show you all the information about the nodes in a particular group.
+
+- `ansible groupname -m setup -a "filter=*ipv4*"` will give you the IP addresses of all the nodes in a particular group.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [44/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day45.md b/Days/day45.md
new file mode 100755
index 0000000..2c0da0f
--- /dev/null
+++ b/Days/day45.md
@@ -0,0 +1,219 @@
+On the forty-fifth day, I learned the following things about Ansible.
+
+# Playbook
+
+- A YAML file that contains more than one command is called a playbook; a playbook is a combination of modules.
+
+- Playbooks in Ansible are written in the YAML format.
+
+- YAML is a human-readable data serialization language, commonly used for configuration.
+
+- A playbook is a file where you write code consisting of vars, handlers, files, templates, and roles.
+
+- Each playbook is composed of one or more "modules" in a list; a module is a single unit of work to execute.
+
+- A playbook is divided into several sections.
+
+ - **Target section:** Defines the host against which playbook task has to be executed.
+
+ - **Variable section:** Defines variables. If there are multiple variables then just change a first variable and it will update all the values.
+
+ - **Task section:** List of all modules that need to run in an order.
+
+## YAML
+
+- For ansible, nearly every YAML file starts with a list.
+
+- Each item in a list is a list of key-value pairs commonly called a dictionary.
+
+- All the YAML files will begin with **---** and end with **...**
+
+- All members of a list lines must begin with some indentation level starting with space.
+
+ For example:
+
+ --- # A list of fruits
+ Fruits:
+ - Mango
+ - Strawberry
+ - Banana
+ - Grapes
+ - Apple
+ ...
+
+- A dictionary is represented in a simple key-value pair.
+
+ For example:
+
+ --- # Details of customer
+ Customer:
+ Name: Bilal
+ Job: YouTuber
+ Skills: DevOps
+ Exp: 1 year
+ ...
+
+ **Note:** There should be space b/w **:** and the value.
+
+## Steps
+
+### **File 1**
+
+- Start the instances and connect to the Ansible server and the nodes by using `ssh -i ansiblekey.pem ec2-user@<public-ip>`.
+
+- After that, go to ansible server by typing `su - ansible`.
+
+- Create a YAML file by the name of *target.yml* and write the following data in it.
+
+ --- # Target Playbook
+ - hosts: groupname
+ user: ansible
+ become: yes
+ connection: ssh
+ gather_facts: yes
+ ...
+
+- `become: yes` means to give the sudo privilege.
+
+- `gather_facts: yes` will gather the private ip addresses of the nodes.
+
+- Execute the file by writing `ansible-playbook target.yml`
+
+- To check the idempotency, execute the above command again; this time it will show `changed=0` because the task is not repeated.
+
+### **File 2**
+
+- Create a YAML file by the name of *task.yml* and write the following data in it.
+
+ --- # Task Playbook
+ - hosts: groupname
+ user: ansible
+ become: yes
+ connection: ssh
+ gather_facts: yes
+
+ tasks:
+ - name: Install httpd on Linux
+ action: yum name=httpd state=installed
+ ...
+
+- `yum` is the module name.
+
+- You can also write `present` as a state that will install a package.
+
+- Before executing a file, first remove the `httpd` package from all the nodes by typing `sudo yum remove httpd -y`.
+
+- Execute the file by writing `ansible-playbook task.yml` and it will show you `changed=1` because something new is added.
+
+- After executing a file, type `which httpd` in all the nodes to confirm the installation.
+
+## Variables
+
+- Ansible uses variables which are defined previously to enable more flexibility in playbooks and roles. They can be used to loop through a set of given values, access various information like the host name of a system and replace certain strings in templates with specific values.
+
+- Put variable section above tasks so that we define it first and use it later.
+
+### **Steps**
+
+- Go to ansible server and switch to ansible user.
+
+- Create a YAML file by the name of *vars.yml* and write the following data in it.
+
+ --- # Variables Playbook
+ - hosts: groupname
+ user: ansible
+ become: yes
+ connection: ssh
+ gather_facts: yes
+
+ vars:
+ - pkgname: httpd
+ tasks:
+ - name: Install httpd on Linux
+ action: yum name='{{pkgname}}' state=installed
+ ...
+
+- Before executing a file, first remove the httpd package from all the nodes by typing `sudo yum remove httpd -y`.
+
+- Execute the file by writing `ansible-playbook vars.yml`
+
+- After executing a file, type `which httpd` in all the nodes to confirm the installation.
+
+## Handlers Section
+
+- A handler is exactly like a task, but it runs only when it is called (notified) by another task.
+
+- The first task is executed and only then does control shift to the handler; without the first task completing, the handler won't run.
+
+- The first task must contain the notify directive to indicate which handler to call and to notify it that something changed.
+
+### **Steps**
+
+- Go to ansible server and switch to ansible user.
+
+- Create a YAML file by the name of *handlers.yml* and write the following data in it.
+
+ --- # Handlers Playbook
+ - hosts: groupname
+ user: ansible
+ become: yes
+ connection: ssh
+ tasks:
+ - name: Install httpd server
+ action: yum name=httpd state=installed
+ notify: restart HTTPD
+ handlers:
+ - name: restart HTTPD
+ action: service name=httpd state=restarted
+ ...
+
+- The handler's name must be equal to the `notify` value so that the handler runs only after the task is completed.
+
+- Before executing a file, first remove the httpd package from all the nodes by typing `sudo yum remove httpd -y`.
+
+- **Dry run:** Check whether the playbook is formatted correctly before an execution by typing `ansible-playbook handlers.yml --check`.
+
+- Execute the file by writing `ansible-playbook handlers.yml`.
+
+- After executing a file, type `which httpd` in all the nodes to confirm the installation.
+
+- For checking the status, write `sudo service httpd status` in all the nodes. If it is `active` then it means the `httpd` is running.
+
+## Loops
+
+- Sometimes you want to repeat a task multiple times. In computer programming, this is called a loop.
+
+- Common Ansible loops include changing ownership on several files and/or directories with the file module, creating multiple users with the user module, and repeating a polling step until a certain result is reached.
+
+### **Steps**
+
+- Go to ansible server and switch to ansible user.
+
+- Create a YAML file by the name of *loops.yml* and write the following data in it.
+
+ --- # Loops Playbook
+ - hosts: groupname
+ user: ansible
+ become: yes
+ connection: ssh
+ tasks:
+ - name: Add a list of users in the nodes
+ user: name='{{item}}' state=present
+ with_items:
+ - Bilal
+ - Ali_ahmed
+ - Zeeshan
+ - Rahmat
+ ...
+
+- It will go to node1 of the group and create the users one after the other, then go to node2 of the group and do the same.
+
+- The usernames should be written without spaces, otherwise it will give us an error.
+
+- Execute the file by writing `ansible-playbook loops.yml`.
+
+- To verify, go inside any node and type `cat /etc/passwd`.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [45/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day46.md b/Days/day46.md
new file mode 100755
index 0000000..a95f34a
--- /dev/null
+++ b/Days/day46.md
@@ -0,0 +1,154 @@
+On the forty-sixth day, I learned the following things about Ansible.
+
+# Ansible Conditions
+
+- Whenever we have different scenarios, we add conditions so that the right task runs in each scenario.
+
+## When Statement
+
+- If the nodes run different operating systems, a `when` condition makes Ansible skip a task on the nodes whose OS doesn't match and run it only on the ones that do.
+
+- Start the instances and connect to the Ansible server and the nodes by using `ssh -i ansiblekey.pem ec2-user@<public-ip>`.
+
+- After that, go to ansible server by typing `su - ansible`.
+
+- Create a YAML file by the name of *condition.yml* and write the following data in it.
+
+ --- # Condition Playbook
+ - hosts: groupname
+ user: ansible
+ become: yes
+ connection: ssh
+ tasks:
+ - name: install apache on debian
+ command: apt-get -y install apache2
+ when: ansible_os_family == "Debian"
+ - name: install apache on redhat
+ command: yum -y install httpd
+ when: ansible_os_family == "RedHat"
+ ...
+
+- On Amazon Linux and Red Hat, installing `httpd` installs the same Apache web server; only the package name is different.
+
+- Before executing a file, first remove the `httpd` package from all the nodes by typing `sudo yum remove httpd -y`.
+
+- Execute the file by writing `ansible-playbook condition.yml`; it will show you `changed=1` and `skipped=1` because one task made a change and one task was skipped.
+
+- After executing a file, type `which httpd` in all the nodes to confirm the installation.
+
+# Vault
+
+- Ansible allows you to keep sensitive data such as passwords or keys in encrypted form rather than as plaintext in your playbooks.
+
+- To create a new encrypted playbook, write `ansible-vault create vault.yml`.
+
+- Edit the encrypted playbook by writing `ansible-vault edit vault.yml`.
+
+- To change the password, write `ansible-vault rekey vault.yml`.
+
+- To encrypt an existing playbook, write `ansible-vault encrypt file.yml`.
+
+- To decrypt an encrypted playbook, write `ansible-vault decrypt file.yml`.
+
+- The **AES256** cipher is used for the encryption.
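+
+- A sketch of how a vaulted file is used in practice (`site.yml` is just a placeholder playbook name):
+
+      # create an encrypted file (prompts for a vault password)
+      ansible-vault create vault.yml
+
+      # run a playbook that uses the vaulted content, supplying the password interactively
+      ansible-playbook site.yml --ask-vault-pass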
+
+# Roles
+
+- We can use two techniques for running a set of tasks: one is includes and the other is roles.
+
+- Roles are good for organizing tasks, handlers, vars, etc. and encapsulating (putting in one box) the data needed to accomplish them.
+
+- When a task, handler, etc. is required, the role only reaches into that box; this keeps everything organized and distributed.
+
+- Some of the things that the roles contain are.
+
+ Ansible Roles
+ |
+ |
+ ----------------------------------------------------
+ | | | | | | |
+ | | | | | | |
+ Default Files Handlers Meta Templates Tasks Vars
+
+- We can organize playbooks into a directory structure called roles.
+
+- Adding more and more functionality to the playbook will make it difficult to maintain in a single file.
+
+ Playbook
+ -----------------------------------------------------------
+ | master.yml Roles |
+ | ------------------------ ------------------------ |
+ | | | | myrole | |
+ | | | | ------------------ | |
+ | | Target | | | -------------- | | |
+ | | | | | | Tasks | | | |
+ | | | | | | main.yml | | | |
+ | | | | | -------------- | | |
+ | | Roles | | | -------------- | | |
+ | | | | | | Vars | | | |
+ | | myrole | | | | main.yml | | | |
+ | | | | | -------------- | | |
+ | | | | | -------------- | | |
+ | | | | | | Handlers | | | |
+ | | | | | | main.yml | | | |
+ | | | | | -------------- | | |
+ | | | | ------------------ | |
+ | ------------------------ ------------------------ |
+ -----------------------------------------------------------
+
+- Every task will be defined in the myrole directory. If I want to go to a specific task, I will simply open that one instead of searching the whole playbook.
+
+- *master.yml* file will run myrole playbook and it will contact the directories that has a specific role assigned to it.
+
+ - Target contains the ansible server and the nodes that will run the task.
+ - myrole contains the tasks, vars, handlers etc that will be executed if required.
+
+ - Only the myrole directory name can be changed; the other directory names are fixed.
+
+## Different kinds of roles
+
+- **Default:** It stores default data about the role/application. For example, if you want to run on port 80 or 8080, that default variable is defined here.
+
+- **Files:** It contains files that need to be transferred to the remote VM(static files).
+
+- **Handlers:** They are triggers, i.e. tasks that run only after another task completes and notifies them.
+
+- **Meta:** It contains files that establish the role's metadata and dependencies, e.g. author name, supported platforms, and dependencies if any.
+
+- **Tasks:** It contains all the tasks that are normally in the playbook, e.g. installing packages, copying files, etc.
+
+- **Vars:** Variables for the role can be specified in this directory and used in your configuration files. Both vars and default store the variables.
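+
+- A typical on-disk layout for the structure described above (only the **tasks** directory is created in the steps below; the rest are optional):
+
+      playbook/
+          master.yml
+          roles/
+              webserver/
+                  tasks/
+                      main.yml
+                  handlers/
+                      main.yml
+                  vars/
+                      main.yml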
+
+## Steps
+
+- Make some directories by typing `mkdir -p playbook/roles/webserver/tasks`. `-p` is for the parent directory.
+
+- Move to the playbook directory by typing `cd playbook`.
+
+- Make a file inside the **tasks** directory by typing `touch roles/webserver/tasks/main.yml`.
+
+- Make a file inside the **playbook** directory by typing `touch master.yml`
+
+- Open the *main.yml* file and write the following things.
+
+ - name: install apache
+ yum: pkg=httpd state=latest
+
+- Open the *master.yml* file and write the target section and the roles section (mentioning the webserver role that is present inside the roles directory).
+
+ - hosts: groupname
+ user: ansible
+ become: yes
+ connection: ssh
+ roles:
+ - webserver
+
+- Before executing a file, first remove the `httpd` package from all the nodes by typing `sudo yum remove httpd -y`.
+
+- `ansible-playbook master.yml` will call the *master.yml* file. *master.yml* file will call the roles. Roles will call the webserver because it is mentioned in the *master.yml* file. After that, the *main.yml* file inside the **tasks** directory will be executed. If something is defined in **handlers** and **vars** directory, that will also be executed.
+
+- After executing a file, type `which httpd` in all the nodes to confirm the installation.
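+
+- Instead of logging in to every node, you could also verify the installation with a single ad-hoc command from the Ansible server. A small sketch, assuming the same inventory group name used in *master.yml*:
+
+    ansible groupname -m command -a "which httpd"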
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [46/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day47.md b/Days/day47.md
new file mode 100755
index 0000000..d2046e9
--- /dev/null
+++ b/Days/day47.md
@@ -0,0 +1,206 @@
+On the forty seventh day, I learned the following things about CI/CD Pipeline.
+
+# CI/CD Pipeline
+
+- A continuous integration and continuous deployment (CI/CD) pipeline is a series of steps that must be performed in order to deliver a new version of software.
+
+- CI/CD is a methodology. It's not a tool.
+
+- The code won't be deployed all at once, because that would force a developer to review a huge amount of it again after days or months. Instead, the code is divided into smaller parts that are deployed continuously, giving us feedback on whether each part succeeded or failed.
+
+- The process is automated, so if bugs appear in between, they are detected automatically and we can fix them right there.
+
+## Before continuous integration
+
+ Developer
+ () ---------- ---------- ----------
+ /\ -------> | | | | | |
+ | | ----------> | | ----------> | |
+ () -------> | | <------ | | | |
+ /\ ---------- | ---------- ----------
+ Repository | Integration/Build Testing
+ | |
+ | |
+ --<--------------<---------------<--
+
+- Before continuous integration, the process was very tedious.
+
+- The developers push the files (containing thousands of lines of code) to GitHub.
+
+- The next step is to build the code in one place.
+
+- The third step is to test the code from start to end to find the errors.
+
+## Problem
+
+- This process has a problem: if a file has even a minor issue, it goes back to the developers.
+
+- The developers will shift their focus from another project to this problem again.
+
+- They will have to find a minor bug inside a huge file.
+
+## After continuous integration
+
+ Developer ------------------------------------
+ () ---------- | ---------- ---------- ---------- |
+ /\ -------> | | | | | | | | | |
+ | | -----------> | | Build | | Test | | Deploy | |
+ () -------> | | | | | | | | | |
+ /\ ---------- | ---------- ---------- ---------- |
+ Repository ------------------------------------
+ ↑ CI Server
+ ↑ ↓
+ <-------------------<-----------------<
+ Notification: Success/Failure
+
+- The developers push the files (containing a few lines of code) to GitHub.
+
+- The whole application isn't handed to the repository and the CI server at once. Instead, the code written in one or two days is given to the CI server to check; the next day some more lines of code are pushed and checked, and this process goes on.
+
+- In this way, continuous integration = continuous build + continuous testing.
+
+## CI/CD Pipeline
+
+ Check the quality and assurance
+ -------------------------------------------------------------------------↑-----
+ / \ \ \ \ \ \ ↑ \
+ ------------| | Version | | | | Prod. | Measure |
+ | -------> | Dev. | Control | Build | Unit Test | Auto Test | env. | and |
+ | ↑ -------| | | | | | Deploy | Validate |
+ | ↑ | \ / / / / / / /
+ | | ------------ ------------ ----------- ---------- ----------- --------- --------
+ | | |↓| |↓| |↓| |↓| |↓| |↓|
+ | ↑ | |↓| |↓| |↓| |↓| |↓| |↓|
+ | ↑ -------------------- ------------ ----------- ---------- ----------- --------- |
+ | ↑ <-------------- <--------- <-------- <------- <-------- <------ |
+ -------------------------------------------------------------------------------------
+ Production feedback and testing at every stage
+
+- The developer will send the code to version control and then it will go through all the stages one by one.
+
+- At each stage, the code is tested and feedback is given.
+
+- If a bug is found at any stage, the pipeline stops there and feedback is given to the developer to correct it.
+
+- In this way, it is easy to check the code before deployment.
+
+
+## Jenkins
+
+- DevOps has different phases like the following.
+
+ --------
+ | Plan | <--
+ -------- |
+ | <--------------- ----------
+ | ↑ --> | Deploy |
+ -------- | ↑ | ----------
+ | Code | <-- --------------- ---> |
+ -------- | Integration | / |
+ | Tool | / | -----------
+ | Jenkins | --> | Operate |
+ --------- --------------- -----------
+ | Build | ↓ ↓ ↓
+ --------- <---------------- ↓ ↓
+ ↓ ↓ -----------
+ ↓ -----------------> | Monitor |
+ -------- ↓ -----------
+ | Test | <---------------------
+ --------
+
+- You will first plan and write the code and then push it to GitHub.
+
+- After that, build the code with tools like Maven, Gradle, etc.
+
+- For testing, use Selenium, JUnit, etc.
+
+- For deployment and operations, use Chef, Puppet, etc.
+
+- For monitoring, use a tool like Nagios.
+
+- Jenkins automates this transition from one phase to another. It means that after the work in one phase is completed, you don't have to move things to the next phase yourself; Jenkins will move through each of the phases one by one.
+
+- Jenkins is an open-source tool written in Java that runs on Windows, macOS and Linux. It is a free, community-supported tool for CI.
+
+- Jenkins automates the entire software development life cycle, as shown above.
+
+- Jenkins was originally developed by Sun Microsystems in 2004 under the name Hudson.
+
+- The project was later renamed Jenkins when Oracle bought Sun Microsystems.
+
+- Jenkins remained free, while the paid enterprise version kept the name Hudson.
+
+- It can run on any major platform without any compatibility issues.
+
+- There are other alternatives of Jenkins like Bamboo, Travis CI, Buildbot, etc.
+
+- Whenever developers write code, we integrate the code of all developers at that point in time, and we build, test and deliver/deploy it to the client. This process is called CI/CD.
+
+- Because of CI, bugs are reported quickly, so the entire development process moves faster.
+
+## Workflow of Jenkins
+
+ Developer -----------
+ () ---------- ----------- ----------> | Build |
+ /\ ----------> | GitHub | ----------> | Jenkins | <---------- | (Maven) |
+ ---------- ----------- -----------
+ ↓ ↓↑ ↑ ↓
+ ↓ ↓↑ ↑ ↓
+ <-------- ↓↑ ↑ ↓ -------------
+ ↓ ↓↑ ↑ -------------> | Testing |
+ ↓ ↓↑ <--------------- | (Selenium)|
+ ↓ ↓↑ -------------
+ ↓ ↓↑
+ ↓ --------------------
+ ↓ | QA(Checkstyle) |
+ ↓ -------------------- CI
+ --------------------------------↓-----------------------------------------------
+ ↓ CD
+ <---- ↓
+ ↓ ↓ ----------
+ ↓ ---> | Deploy |
+ ↓ ----------
+ ↓
+ ↓ -----------
+ ↓ -------> | Deliver |
+ -----------
+
+- Plugins are available for Jenkins that will help it to communicate with other tools.
+
+- The developer writes code and pushes it to GitHub.
+
+- Jenkins will pull the code from GitHub and give it to the build tool (Maven).
+
+- Jenkins will take the build from Maven and give it to testing (Selenium).
+
+- Jenkins will take the tested build from Selenium and give it to the artifactory (for archiving purposes, it stores the final artifact that is ready to be used). It will also give it to QA (Checkstyle).
+
+- Jenkins will take the result from Checkstyle and then deploy and deliver it.
+
+- After delivery, our work is finished, but if the end user or customer does not know how to deploy it, then our technical team provides support to deploy and deliver the software.
+
+- **Build:** Build will first compile the code. Then it will review it. Next, it performs unit testing, and after that, integration testing. At the end, it packages the result as a WAR or JAR, etc.
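+
+- As a rough illustration, with Maven those build stages map onto standard goals (this is a generic sketch, not a command set from this guide):
+
+    mvn compile     # compile the code
+    mvn test        # run the unit tests
+    mvn verify      # run integration tests and other checks
+    mvn package     # package the result as a JAR/WAR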
+
+## Advantages of Jenkins
+
+- It has a lot of plugins available that will help Jenkins to connect with other tools like GitHub, maven, etc.
+
+- You can write your own plugin.
+
+- You can use community plugins that are already present.
+
+- Jenkins is not a tool. It's a framework. You can do whatever you want. All you need are plugins.
+
+- Everything is automated, but if you want to do some things manually, like building or testing, you can do them manually.
+
+- Jenkins has a master and slaves architecture. Inside Jenkins there is one master and multiple slaves that you can connect, which will perform the jobs.
+
+- We can attach multiple slaves (nodes) to one master (Jenkins) that will do the work for us. The master instructs the slaves. If no slaves are available, Jenkins itself will do the job.
+
+- It can schedule tasks so that a task starts after a given amount of time.
+
+- It can create labels for the slaves so that a task is done by slave1 or slave2.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [47/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day48.md b/Days/day48.md
new file mode 100755
index 0000000..ae6176f
--- /dev/null
+++ b/Days/day48.md
@@ -0,0 +1,124 @@
+On the forty eighth day, I learned the following things about CI/CD Pipeline.
+
+## Installation of Jenkins
+
+- Visit this [website](https://www.jenkins.io/doc/book/installing/linux/) and first install Java before installing Jenkins; otherwise Jenkins won't start.
+
+- I am using Ubuntu, so I will run the following commands to install Java and Jenkins.
+
+**Install Java**
+
+ sudo apt update
+
+ sudo apt install openjdk-11-jre
+
+ java --version
+
+**Install Jenkins**
+
+ curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo tee \
+ /usr/share/keyrings/jenkins-keyring.asc > /dev/null
+
+ echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
+ https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
+ /etc/apt/sources.list.d/jenkins.list > /dev/null
+
+ sudo apt-get update
+
+ sudo apt-get install jenkins
+
+ sudo systemctl enable jenkins
+
+ service jenkins status
+
+ service jenkins stop
+
+## Access the Jenkins
+
+- Find your machine's IP address by writing `ifconfig`.
+
+- Copy it and write `sudo -i`. It will open the root user for you.
+
+- Include the PATH in the environment variable, otherwise it will give you this error: `'/usr/bin:/bin' is not included in the PATH environment variable.` The command for including it is:
+
+ export PATH="/usr/bin:$PATH"
+
+- Add the IP address to the `/etc/hosts` file by writing `echo "<ip-address> jenkins.local" >> /etc/hosts`.
+
+- If you type `ping jenkins.local`, it will ping the given address. If you want to access Jenkins through the browser, it will be reachable at `jenkins.local`.
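+
+- A consolidated sketch of those two steps, using the documentation address `192.0.2.10` as a stand-in for your own machine's IP:
+
+    # replace 192.0.2.10 with the address reported by ifconfig
+    echo "192.0.2.10 jenkins.local" | sudo tee -a /etc/hosts
+    ping -c 3 jenkins.local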
+
+- Enable Jenkins and start the service. Open the browser and type `http://jenkins.local:8080/` and it will open the setup window for you.
+
+- In the window, you have to enter the administrator password. Go to the terminal and write `sudo cat /var/lib/jenkins/secrets/initialAdminPassword`. It will give you the password that it has already created during the jenkins installation.
+
+- Paste the password and then you can install the plugins of your own choice or the suggested plugins. I will click on the suggested plugins.
+
+- Wait while it installs the plugins for you. Once the plugins are installed, click on the continue button, create the first admin user, and click continue. It will give you the Jenkins URL. Click on the Save and Finish button. You will get a message to start using Jenkins.
+
+## Dashboard Overview
+
+- After opening the dashboard, it will look like this.
+
+
+
+
+
+- Click on the New Item to create a new job.
+
+- Click on the People to get a user.
+
+- Click on the manage jenkins to set the system configuration, troubleshooting, etc.
+
+- Click on the Full name and click on the configure option to set up the user details.
+
+## Create your first job
+
+- Click on the new item to create your first job.
+
+- It will open a new window to create a new job and give it a name.
+
+- After that, click on the Freestyle project and press OK. These options come from the installed plugins.
+
+- Now the configuration settings will be open for you for a new job.
+
+
+
+
+
+**Print Hello World**
+
+- Scroll down and click on Add Build Steps. It will show you the multiple options.
+
+- Click on the Execute shell option and type `echo "Hello World"`.
+
+- Scroll down and click on the Apply and save button.
+
+- After applying, click on the upper-left option, Dashboard. The dashboard will show you the list of all jobs.
+
+- To run the job, click on the job and it will show you a drop-down arrow with the option to build it now.
+
+- Once the job is built, open the job and it will show you the build in the lower-left corner.
+
+- To check the output, click on the build and then click on the console output option.
+
+**Find the user and the path**
+
+- Click on the configure option of the job to add more options in the build like `whoami` and `pwd`.
+
+- Click on the build it now option, open the job again and click on the console output option. It will show you the result.
+
+- The user is jenkins, which you can find in the terminal by typing `cat /etc/passwd`. This user was created when Jenkins was installed.
+
+- The path of the job is `/var/lib/jenkins/workspace/demo-first`. If you type `ls /var/lib/jenkins/workspace/`, it will show you the **demo-first** directory.
+
+**Create a file**
+
+- Click on the configure option of the job to create a new file in the build by writing `touch sample.txt`.
+
+- Click on the build it now option, open the job again and click on the console output option. It will show you the newly created file.
+
+- You can check the file by going inside **demo-first** directory by typing `ls /var/lib/jenkins/workspace/demo-first`.
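+
+- For reference, a single *Execute shell* build step combining the commands used on this page might look like this (nothing here is new, it just gathers the earlier examples):
+
+    echo "Hello World"
+    whoami
+    pwd
+    touch sample.txt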
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [48/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day49.md b/Days/day49.md
new file mode 100755
index 0000000..e3f7ed7
--- /dev/null
+++ b/Days/day49.md
@@ -0,0 +1,107 @@
+On the forty ninth day, I learned the following things about CI/CD Pipeline.
+
+# Search Panel
+
+- Before working on the search panel, let's do the initial steps first.
+
+- First enable jenkins and start it by writing the following commands.
+
+ sudo systemctl enable jenkins
+
+ service jenkins start
+
+- Open the browser and type `http://jenkins.local:8080/` and it will open the jenkins dashboard for you.
+
+- Create another job and write `echo "Hello World"` in it.
+
+- The jobs will be shown to you like this.
+
+
+
+
+
+- As you can see in the picture above, there are two jobs: one of them has a tick sign and the other one has three dots.
+
+- The tick sign represents that the job is built and the three dots represent that this job is not built yet.
+
+- The sun shows the stability sign. Both of the jobs are stable now.
+
+- Last success shows the last time the job succeeded. The **#5** is the number of the build that succeeded.
+
+- Last failure shows the last time the job failed.
+
+- Let's configure the job and write a nonexistent command in the Execute shell.
+
+- After writing the nonexistent command, if you build the job, it will show you an error message like this.
+
+
+
+
+
+- Here you can see in the picture that the weather icon changed to a storm cloud because the command is wrong.
+
+- The last failure was 11 seconds ago, and it was that first build that failed.
+
+## Search panel
+
+- Now let's write demo in the search bar and click on it. It will show you all the jobs that start with the word demo. You can choose any job from them.
+
+- If you want to choose a specific name, you can also write it there.
+
+- If you want to search for a specific build in a job, then you can write, let's say, *demo-first 1* or *demo-first 4*.
+
+- If you want to get the console of the job then you can write like this *demo-first 1 console*. It will give you the first build result of a particular job.
+
+## Naming convention
+
+- You should follow the proper naming convention to give the jobs a name like:
+
+ 1. testproject-build
+ 2. testproject-deploy
+ 3. testproject-production
+
+## Manage jenkins
+
+- Click on the manage jenkins option and it will lead you to another page.
+
+- Read the description of each option and take overview about it.
+
+- Go to the system configuration page and, in the system message field, type **Hello World** and apply it.
+
+- If you want to access the configuration easily, write configure in the search bar and it will lead you there.
+
+- You will see an option after the system message section called **# of executors**. The number of executors determines how many jobs can run on the system at the same time. You can change its number.
+
+## Install plugins
+
+- Click on the manage jenkins option in the dashboard.
+
+- Open the manage plugins option under the system configuration section.
+
+- In the plugin manager, you will get the updates, available plugins, installed plugins, and the advanced tab through which you can deploy your own plugin.
+
+- Now, to install a plugin, let's say a theme, you can go to the Jenkins [website](https://www.jenkins.io/) and click on the plugins option.
+
+- You will get a search bar. Write the word theme in it and it will show you a bunch of themes to install.
+
+- Go to Manage Jenkins and click on the plugin manager. Enter the name of the plugin that you want in the available tab. If the plugin is found, select it and click on the install without restart button.
+
+- After installing it, it will show you the success message.
+
+- Check on the restart jenkins option and then login again.
+
+- After login, open the manage jenkins option and click on the configure system option.
+
+- If you scroll down, you will see three built-in themes. Click on any of them and apply them.
+
+## Create user
+
+- Go to the manage jenkins option and scroll down to click on the manage users under the security.
+
+- Click on the create user option at the left side and it will ask you the username, password, confirm password and the full name.
+
+- Save it, logout and now enter the username and password of a new user.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [49/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day5.md b/Days/day5.md
new file mode 100755
index 0000000..4a11c26
--- /dev/null
+++ b/Days/day5.md
@@ -0,0 +1,9 @@
+On the fifth day, I learned the following things about Networking.
+
+Click Here:
+
+- 🌐 [Day No. 5 of Learning Networking](../PDFs/Computer-Networking-2.pdf)
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [5/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day50.md b/Days/day50.md
new file mode 100755
index 0000000..611b05e
--- /dev/null
+++ b/Days/day50.md
@@ -0,0 +1,112 @@
+On the fiftieth day, I learned the following things about CI/CD Pipeline.
+
+## Jenkins Role Base Access Control (RBAC)
+
+- In the previous video, I have shown you how you can create a user but the problem was that the newly created user was also able to configure the jobs of the first user.
+
+- In this video, I am going to prevent this from happening.
+
+- This is possible with the help of role-based access control.
+
+- Go to this [website](https://plugins.jenkins.io/) and search for role-based. You will see that a *Role-based authorization strategy* plugin is shown to you.
+
+- Once the plugin is found, go to the manage plugins option and open the available plugin tab. Search the *Role-based authorization strategy*. You will find it there.
+
+- Check the plugin and click on the install without restart button.
+
+- Once the plugin is installed, restart the dashboard and go to *Configure global security*.
+
+- Here, under *Authorization*, you will see that logged-in users can do anything. Change it to *Role-based strategy*.
+
+- Once you apply and save this, you will see under the security section that a manage and assign roles option has appeared.
+
+- Click on it and you can manage and assign roles in it.
+
+- Click on the manage roles option to add a new role that is developer.
+
+- After saving it, if you logout and login to the second user, you will see the access denied message because this second user does not have any permission.
+
+- Again login to the first user, and inside the manage role, give the developer the overall read permission and save it.
+
+- Go to the assign role and type the user name that is **ali** and click on add button to add it and then assign the developer role to the user **ali**.
+
+- Apply and save the changes.
+
+- If you login to the second user again, you will see the empty dashboard with no job present in it.
+
+- Now go to the manage roles of the first user and give read and build permission. Apply and save it.
+
+- Log in to the second user and you will see that the jobs now appear, but if you open any of the jobs, there is no option to configure them.
+
+## Use of git plugin and clean workspace
+
+- First create a job and write `echo "Hello World"` in the execute shell.
+
+- After creating the job, build it and show console output.
+
+- Create a github repository with README file in it and add another file also by clicking on the *add file* option.
+
+- Make it an HTML file and give it a name like *index.html* and write the following data in it.
+
+    This is Bilal Khan
+
+- Go to the jenkins configuration of the newly created job and write the following data in it.
+
+ ls
+    git clone <repository-url>
+ cd repo-name
+ ls
+ cat README.md
+ cat index.html
+
+- After writing the data, apply and save it and then build it to get the console output.
+
+- If you build this job again, it will give you an error because the repository is already cloned.
+
+- To tackle this problem, go to the configuration of the job and scroll down to the *build environment* and check the box *Delete workspace before build starts* and then apply and save it.
+
+- If you build it again, it will delete the repository and then again clone it.
+
+- You can also let the cloning be handled elsewhere (by the Git plugin) and only write the `cat` commands in the execute shell.
+
+- Create a new job and then go to its configuration. Scroll down and, in the *source code management* section, check the *Git* option. Enter the repository URL and change the branch name from master to main.
+
+- Open the execute shell and write the following commands in it.
+
+ # clone
+ # cd jenkins
+ ls
+ cat README.md
+ cat index.html
+
+- The cloning and the change of directory are already handled. Now only the list and cat commands will be executed.
+
+- If you make some changes in the GitHub file and build the Jenkins job again, you will see the updated output.
+
+## Trigger Build Remotely
+
+- If you want to execute the Jenkins job from the terminal, you can do this by triggering the build remotely.
+
+- Go to the configuration option of the job and scroll down to click on the *Build triggers* option. Give the authentication token any name.
+
+- Once you give the token a name, there will be an example URL that you need to copy; after modifying it with the job name and the token, run it in the browser and it will create a build for you in Jenkins.
+
+- If you want the same functionality in the terminal, copy the link and pass it to the curl command, like `curl <URL>`, and execute it. You will see that a new build is created.
+
+- If it gives you a message to authenticate the user first, then you need to install a plugin for that.
+
+- Go to the manage plugins and on the available plugins section, write *build authorization token root* and install it. After installation, restart the jenkins dashboard.
+
+- Once the plugin is installed, take the link from the plugin example [here](https://plugins.jenkins.io/build-token-root/). The link looks like this: `http://jenkins.local:8080/buildByToken/build?job=<job-name>&token=<token>`.
+
+- Take this link and paste it in the browser and it will create another build for you.
+
+- Let's create a build from the terminal by typing the command `curl http://jenkins.local:8080/buildByToken/build?job=<job-name>\&token=<token>`. You need to add `\` before the `&` sign to run it successfully.
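+
+- Alternatively, quoting the whole URL avoids having to escape the `&`. A sketch with a hypothetical job name and token:
+
+    curl "http://jenkins.local:8080/buildByToken/build?job=demo-first&token=mytoken"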
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [50/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day51.md b/Days/day51.md
new file mode 100755
index 0000000..e0f0890
--- /dev/null
+++ b/Days/day51.md
@@ -0,0 +1,87 @@
+On the fifty first day, I learned the following things about CI/CD Pipeline.
+
+## Build after other projects are built (Jenkins upstream and downstream)
+
+- Now let's take a look at how to link two jobs so that the second one won't be executed until the first one has run.
+
+- If the first job(upstream) is executed then the second job(downstream) will start.
+
+- First create a job by the name of upstream and then go to the configure to write `echo "Hello World"` in the execute shell.
+
+- Then create the second job by the name of downstream and go to the configuration. Scroll down and, in the *build triggers* option, check the option *Build after other projects are built*.
+
+- It will give you an option to link the previous project (upstream) with the downstream project.
+
+- Write the `echo "Hello World"` and `sleep 5` commands in the execute shell.
+
+- Apply and then save the configuration.
+
+- Build the upstream job and wait; once the 5-second sleep is over, the second job (downstream) will automatically be executed.
+
+- You can design a CI/CD pipeline using upstreaming and downstreaming.
+
+## Build after other projects are built (failed, unstable job)
+
+- Open the newly created job (downstream job) configuration, scroll down to the *build trigger* option and check another option, *Trigger even if the build fails*.
+
+- Apply and save the configuration.
+
+- After applying, if you make changes in the previous job (upstream job) and write a nonexistent command in it, the next job will still be executed even though the first one fails.
+
+- Now let's apply the unstable functionality in the upstream job.
+
+- First open the newly created job (downstream job) configuration, scroll down to the *build trigger* option and check another option, *Trigger even if the build is unstable*.
+
+- Apply and save the configuration.
+
+- Once you apply the changes, go to the previous job(upstream job), scroll down and write `exit 10` in the *Execute shell*.
+
+- Go to the advanced option of the *Execute shell* and then write `10` in the *Exit code to set build unstable* option.
+
+- The upstream job will exit with code 10 and be marked unstable, and the downstream job will then start executing.
+
+## Build Periodically
+
+- If you want to execute a job and create a build every 2 minutes, or at any interval you want, you can do that.
+
+- To do this, first create a job and then go to the configuration. Scroll down and write `date` in the *Execute shell*.
+
+- After writing the data in the execute shell, scroll up and go to the *build triggers* option and check the *Build periodically* option.
+
+- Once it is checked then you can schedule the task by writing the cron job in it. You can get an example by clicking on the question mark.
+
+- Write `H/2 * * * *` to execute the job and build it every two minutes.
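+
+- For reference, a few more schedule expressions in the same five-field syntax (`H` spreads the load by picking a consistent pseudo-random value per job):
+
+    # MINUTE HOUR DOM MONTH DOW
+    H/2 * * * *      # roughly every two minutes
+    H * * * *        # once an hour, at a minute chosen by Jenkins
+    H 2 * * 1-5      # once between 02:00 and 02:59, Monday to Friday
+    @daily           # alias: once a day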
+
+## Poll SCM(Source code management)
+
+- If you want to monitor the changes in your github repository then you can also do this.
+
+- First create a job and then go to the configuration. Scroll down and go to the *build triggers* option and check the *Poll SCM* option.
+
+- Once it is checked then you can schedule the task by writing the cron job in it. You can get an example by clicking on the question mark.
+
+- Write `H/2 * * * *` to make the job poll for changes every 2 minutes.
+
+- The next is to create a github repository and take a link of that repository by clicking on the code button and getting the HTTPS link.
+
+- Once the link is copied, go to the jenkins dashboard and go to the source code management in the configuration.
+
+- Select the Git option, paste the link there and change the branch from master to main.
+
+- Go to the execute shell and write the following data into it.
+
+ ls
+ cat README.md
+ cat index.html
+
+- After making changes, apply and save the configuration.
+
+- Wait for the first build; after that, even if you wait 2 minutes, no new build will happen because no changes were made in GitHub.
+
+- Now make some changes in GitHub and wait for 2 minutes. After 2 minutes, you will see that a build is created.
+
+- If you open the console output, you will see that changes are now present.
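+
+- A quick way to make such a change from the terminal (assuming you have the repository cloned locally with push access) so that the next poll picks it up:
+
+    echo "updated on $(date)" >> README.md
+    git add README.md
+    git commit -m "small change to trigger Poll SCM"
+    git push origin main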
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [51/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day52.md b/Days/day52.md
new file mode 100755
index 0000000..2cb6df3
--- /dev/null
+++ b/Days/day52.md
@@ -0,0 +1,163 @@
+On the fifty second day, I learned the following things about Continuous Monitoring.
+
+# Continuous Monitoring Tool
+
+- Monitoring is required after deployment of an application.
+
+- Here you will monitor the bugs, errors or exploitation in an application after deployment.
+
+## Why do we need continuous monitoring?
+
+We need it to avoid or reduce
+
+- Application downtime.
+
+- Failure inside CI/CD pipeline.
+
+- Application failure.
+
+- Infrastructure failure.
+
+- Analysis of code failure.
+
+## Phases of continuous monitoring
+
+- Define -> Develop a monitoring strategy(monitoring the pipeline, github code, build code, etc).
+
+- Establish -> How frequently you're going to monitor it.
+
+- Implement.
+
+- Analyze data and report findings.
+
+- Respond based on errors/failures.
+
+- Review and update.
+
+## Monitoring tools
+
+- Nagios
+
+- Splunk
+
+- Prometheus
+
+- ELK
+
+- Librato
+
+- Amazon Cloudwatch
+
+- Sensu
+
+## Nagios
+
+- We are going to discuss Nagios because it is commonly used nowadays.
+
+- Nagios is open-source software for continuous monitoring of systems, networks, and infrastructure. It runs plugins stored on a server that connects to hosts or other servers on your network or the internet. In case of any failure, Nagios raises alerts so that the technical team can start the recovery process immediately.
+
+- It uses a client-server architecture.
+
+- Usually, on a network, the Nagios server runs on one host and the plugins run on all the remote hosts that you want to monitor.
+
+## History of Nagios
+
+- In 1999, Ethan Galstad developed it as part of the NetSaint distribution.
+
+- In 2002, Ethan renamed the project to "Nagios" because of trademark issues with the name "NetSaint".
+
+- In 2009, Nagios released its first commercial version, Nagios XI.
+
+- In 2012, Nagios was again renamed to Nagios Core.
+
+- It uses port numbers 5666, 5667, and 5668 to monitor its clients. These port numbers can be changed because they're logical ports; you can change them in the configuration file if you want to.
+
+## Why Nagios?
+
+We use Nagios because it
+
+- Detects all types of network or server issues.
+
+- Helps you find the root cause of a problem, which allows you to get a permanent solution to it.
+
+- Reduces downtime.
+
+- Monitors the entire infrastructure actively (the Nagios server itself polls the clients).
+
+- Monitors the entire infrastructure passively (the clients report to the Nagios server).
+
+- Allows you to monitor and troubleshoot server performance issues.
+
+- Can automatically fix problems.
+
+## Features of Nagios
+
+- One of the oldest monitoring tools, and still kept up to date.
+
+- Good log(reporting) and database system.
+
+- Informative and attractive web interface.
+
+- Automatically sends alerts if a condition changes.
+
+- Helps you to detect network errors or server crashes.
+
+- You can monitor the entire business process and IT infrastructure in one go (at a single place).
+
+- Monitors the network services like http, smtp, snmp, ftp, ssh, pop, DNS, LDAP, IPMI etc.
+
+## Nagios Architecture
+ ------------------
+ Web page dashboard <--------------- Configuration files ssh | -------------- |
+ ↑ ↑ ------------------> | | nrpe agent | |
+ -------↑-------------↑----------- ↑ | -------------- |
+ | ↑ ↑ | | ------------------
+ | ↑ -------↑------- | |
+ | ↑ | IP Add. | | | ------------------
+ | ↑ | User & Pass | | | ssh | -------------- |
+ | ↑ --------------- | | ---------------> | | nrpe agent | |
+ | ↑ ↓ | | ↑ | -------------- |
+ | --↑--- ---------- ---------> ---------- ------------------
+ | | DB | <--- | Daemon | | | NRPE |
+ | ------ ---------- <--------- ---------- ------------------
+ | | ↓ ssh | -------------- |
+ --------------------------------- ---------------> | | nrpe agent | |
+ Nagios - Server | -------------- |
+ ------------------
+ Client/Node
+
+- The Nagios server contains the configuration files in which everything, like the IP address, http, smtp, etc., and the username and password of all nodes, is present, and the nodes will be identified through this configuration data (IP, username, etc.).
+
+- The daemon will collect the data from the configuration files to perform its functions.
+
+- Once the data is received by the daemon, it will call a plugin called NRPE (Nagios Remote Plugin Executor), and that plugin will collect data from the nodes and store it in its own database.
+
+- The connection will be made through SSH.
+
+- On the receiving side of the node, there is another plugin, the NRPE agent, that will accept the connection, receive the requests and respond with the status of the node.
+
+- The status will go back to the NRPE and it will be given to the daemon.
+
+- Once the status is received in the daemon, it will be given to the database for storage.
+
+- The data will now be available on the webpage dashboard. You can access it via the internet.
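+
+- The same NRPE path can be exercised manually from the Nagios server; a sketch, assuming the NRPE addon is installed in the default location and a `check_load` command is defined on the client:
+
+    /usr/local/nagios/libexec/check_nrpe -H <client-ip> -c check_load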
+
+- The main configuration file path of Nagios is `/usr/local/nagios/etc/nagios.cfg`.
+
+- Everything being monitored is called a "service". For example: 5 nodes with 4 things running on each of them means you're monitoring 5x4 = 20 services.
+
+## Prerequisites
+
+- httpd
+
+- php
+
+- gcc & gd (compiler to convert raw data into binaries)
+
+- makefile (to build)
+
+- perl (script)
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [52/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day53.md b/Days/day53.md
new file mode 100755
index 0000000..4d0a704
--- /dev/null
+++ b/Days/day53.md
@@ -0,0 +1,150 @@
+On the fifty third day, I learned the following things about Continuous Monitoring.
+
+## Installation of Nagios
+
+**Step 1**
+
+- To start the Nagios Core installation, you must have your EC2 instance up and running and already configured, with SSH and HTTP access.
+
+- During the creation of AWS instance, auto-assign public-ip should be enabled.
+
+- Give the security group a name that is *nagios*.
+
+- Edit security group and allow all the traffic instead of allowing only HTTPS.
+
+- The number of instances should be only one.
+
+- Give the tag name as Nagios-server.
+
+- Allow all the traffic anywhere for testing purpose.
+
+- Create a key-pair by a name nagioskey and download it.
+
+- Click on the launch instance button and then click on view instances.
+
+- Click on the server option in AWS and copy the public IP address. Once the IP is copied, open the terminal and write `ssh ec2-user@<public-ip>`. It will ask whether to continue (yes/no). Type yes and it will give you a permission denied message.
+
+- Go to the directory where the key is present and use it by writing `ssh -i ansiblekey.pem ec2-user@<public-ip>`.
+
+- It will give another error like this: **Permissions 0664 for 'ansiblekey.pem' are too open.**
+
+- To counter this error, change the permission by writing `chmod 0400 ansiblekey.pem` and then again write `ssh -i ansiblekey.pem ec2-user@<public-ip>`.
+
+- You can exit by writing **exit** and run it again by writing `ssh -i ansiblekey.pem ec2-user@<public-ip>`.
+
+**Step 2**
+
+- Once the instance is created, write `sudo su` to go to the root user of the instance.
+
+- `yum install httpd php` will install httpd and php packages.
+
+- `yum install gcc glibc glibc-common` will install these libraries.
+
+- `yum install gd gd-devel` will install the gd graphics library and its development package.
+
+**Step 3**
+
+- To create the account information, you need to set up a nagios user. Run the following commands.
+
+ - `adduser -m nagios`
+ - `passwd nagios`
+
+- Now it will ask you the password. Give it any password.
+
+- Now add a group by typing `groupadd nagioscmd`.
+
+- Add users in a group by typing
+
+ - `usermod -a -G nagioscmd nagios`
+ - `usermod -a -G nagioscmd apache`
+
+**Step 4**
+
+- Create a downloads directory inside a home directory by writing `mkdir ~/Downloads` and go inside it by writing `cd ~/Downloads`.
+
+- Download the source code tarballs of both nagios and the nagios plugins.
+
+ - `wget http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-4.0.8.tar.gz`
+ - `wget http://nagios-plugins.org/download/nagios-plugins-2.0.3.tar.gz`
+
+**Step 5**
+
+- Compile and install nagios. Extract the nagios source code tarball.
+
+ - `tar zxvf nagios-4.0.8.tar.gz`
+ - `cd nagios-4.0.8`
+
+- Run the configuration script with the name of the group which you have created in the above step.
+
+ - `./configure --with-command-group=nagioscmd`
+
+- Compile the nagios source code by typing `make all`.
+
+- Install the binaries, the init script, the sample config files and set permissions on the external command directory by typing:
+
+ - `make install`
+ - `make install-init`
+ - `make install-config`
+ - `make install-commandmode`
+
+**Step 6**
+
+- Configure the web-interface by typing `make install-webconf`.
+
+**Step 7**
+
+- Create a nagiosadmin account for login into the nagios web-interface. Set password as well.
+
+ - `htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin`
+
+- It will ask for a password; set a new one. Then restart Apache by typing `service httpd restart`.
+
+**Step 8**
+
+- Compile and install the Nagios plugins. Extract the Nagios plugins source code tarball.
+
+ - `cd ~/Downloads`
+ - `tar zxvf nagios-plugins-2.0.3.tar.gz`
+ - `cd nagios-plugins-2.0.3`
+
+- Compile and install the plugins.
+
+ - `./configure --with-nagios-user=nagios --with-nagios-group=nagios`
+ - `make`
+ - `make install`
+
+**Step 9**
+
+- Start nagios. Add nagios to the list of system services and have it automatically start when the system boots.
+
+ - `chkconfig --add nagios`
+ - `chkconfig nagios on`
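+
+- On a distribution that manages services with systemd, the equivalent would roughly be the following (a sketch; it assumes a nagios service/init script was registered by `make install-init`):
+
+    systemctl enable nagios
+    systemctl start nagios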
+
+**Step 10**
+
+- Verify the sample nagios configuration files.
+
+ - `/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg`
+
+- If there are no errors, start Nagios.
+
+ - `service nagios start`
+ - `service httpd restart`
+
+**Step 11**
+
+- Copy the public IP address of the EC2 instance and open it in the browser in the following way.
+
+ - **For example:** `12.1.1.1/nagios/` in the browser
+ - Enter the username: nagiosadmin
+ - Enter the password: 12345
+
+- You will see the dashboard like this:
+
+
+
+
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [53/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day54.md b/Days/day54.md
new file mode 100755
index 0000000..d024410
--- /dev/null
+++ b/Days/day54.md
@@ -0,0 +1,132 @@
+On the fifty fourth day, I learned the following things about Cloud Computing.
+
+## Cloud Computing
+
+- Cloud computing is an on-demand delivery of computer power, database, storage, applications and other IT resources through a cloud service platform(aws, azure, gcp, etc) via the internet with pay-as-you-go pricing model.
+
+## Problems before cloud computing
+
+- Before cloud computing, when you were starting a company, you already needed the following items in advance.
+
+ - Let's say you wanted 4 servers.
+ - One AD server
+ - One DNS server
+ - Two Application servers
+ - Router
+ - Switch
+ - Cabling
+ - Gateway
+ - Firewall
+ - AC to cool the servers down
+ - 24x7 electricity
+ - Employees to maintain these servers in day and night
+
+- Operational expenditure (OpEx) was required. It means that spending on employees was needed to maintain the resources.
+
+- Installing all of them cost a great deal.
+
+- It required many days to install them.
+
+- If the business failed, then this investment was wasted.
+
+## Characteristics of Cloud computing
+
+- On demand self-service.
+
+- Broad network access(Access it from anywhere in the world through internet).
+
+- Scalability(Increase or decrease the number of servers).
+
+- Resource pooling(Get the required number of servers from a pool or a collection of servers).
+
+- Measured services(Analysis about the traffic that visited the servers).
+
+## Top players in Cloud
+
+- AWS (Amazon Web Services)
+
+- Microsoft Azure
+
+- GCP (Google Cloud Platform)
+
+- Alibaba Cloud
+
+- Oracle
+
+## History
+
+- AWS was launched in 2006.
+
+- AWS completely moved to Amazon.com in 2010.
+
+- AWS certification started in 2013.
+
+- The profit doubled in 2015 and 2016.
+
+- They discussed virtual reality, AI, etc. at the 2017 re:Invent conference.
+
+## Services in cloud
+
+- There are three services in a cloud that it provides.
+
+ - IAAS(Infrastructure as a service).
+ - PAAS(Platform as a service).
+ - SAAS(Software as a service).
+
+ --------------> -------------------
+ | | Application |
+ | |------------------
+ | | Data |
+ | |------------------ <--
+ | | Runtime | |
+ | |------------------ |
+ | | Middleware | |
+ | --> |------------------ |
+ SAAS | | | OS | |
+ | | |------------------ | PAAS
+ | | | Virtualization | |
+ | | |------------------ |
+ | IAAS | | Server | |
+ | | |------------------ |
+ | | | Storage | |
+ | | |------------------ |
+ | | | Network | |
+ ----------->--> ------------------- <--
+
+- The OS, virtualization, server, storage, and network services that you take from the cloud is called IAAS.
+
+- The above ones including the middleware and the runtime services that you take from the cloud is called PAAS.
+
+- All the above ones including the application, and the data services that you take from the cloud is called SAAS.
+
+## Deployment Model of Cloud
+
+There are three deployment models of cloud.
+
+- Public Cloud
+- Private Cloud
+- Hybrid Cloud
+
+**Public Cloud:** The public cloud is for general users; anyone can use it. E.g. AWS, Azure, GCP.
+
+**Private Cloud:** Private cloud is only used in an enterprise and in its different branches for its company usage. It is in your own hands, so it is secure.
+
+**Hybrid Cloud:** Big companies use both public cloud and private cloud. Azure provides a better facility for the hybrid cloud; you can merge public and private clouds in Azure.
+
+## Virtualization
+
+- The network, storage and server should be virtualized in order to run them.
+
+- You need a hypervisor to make the network, storage, and server virtualized. Hypervisor divides and allocates the resources of network, storage, and server.
+
+- There are different hypervisors for different companies.
+
+ - Microsoft has hyper-v.
+    - AWS has Xen (and, more recently, its own Nitro hypervisor).
+ - VMWare has ESXi.
+
+- Cloud services can only be provided if virtualization is in place.
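+
+- On a Linux host you can check whether the CPU exposes hardware virtualization extensions (Intel VT-x shows up as `vmx`, AMD-V as `svm`); a small sketch:
+
+    egrep -c '(vmx|svm)' /proc/cpuinfo    # a non-zero count means the extensions are present
+    lscpu | grep -i virtualization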
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [54/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day55.md b/Days/day55.md
new file mode 100755
index 0000000..e9a5605
--- /dev/null
+++ b/Days/day55.md
@@ -0,0 +1,158 @@
+On the fifty fifth day, I learned the following things about Cloud Computing.
+
+# Elastic Compute Cloud
+
+- Amazon EC2 or virtual machines provides scalable computing capacity in the AWS cloud.
+
+- It enables you to scale up or scale down the instances to handle changes in requirements. You don't need to buy another server. You can just increase or decrease the requirement on the same server.
+
+- You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure their security, and manage the networking and the storage.
+
+- Amazon EC2 has two storage options, i.e. EBS and the instance store. EBS is network-attached storage kept apart from the physical server, while the instance store is storage physically attached to the host. The instance store is fast compared to EBS.
+
+- Preconfigured templates are available, known as Amazon Machine Images (AMIs), like a Microsoft Windows image, an Ubuntu image, etc.
+
+- **By default, when you create an EC2 account with amazon, your account is limited to a maximum of 20 EC2 instances per region with two default High I/O instances.**
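+
+- For illustration, launching a single free-tier instance with the AWS CLI looks roughly like this (the AMI ID, key pair, and security group ID below are placeholders, not values from this guide):
+
+    aws ec2 run-instances \
+        --image-id ami-0123456789abcdef0 \
+        --instance-type t2.micro \
+        --count 1 \
+        --key-name mykey \
+        --security-group-ids sg-0123456789abcdef0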
+
+# Types of EC2 instances
+
+There are seven types of instances.
+
+1. General purpose -> Balanced Memory and CPU
+2. Compute Optimized -> More CPUs than RAM
+3. Memory Optimized -> More RAM
+4. Storage Optimized -> Low Latency in which More Storage is Required
+5. Accelerated Computing/GPU -> Graphics Optimized
+6. High Memory Optimized -> High RAM, Nitro System (specialized hypervisor to increase performance)
+7. Previous Version Instances
+
+## 1. General Purpose Instance
+
+- General purpose instances provide a balance of compute, memory and networking resources and can be used for a variety of workloads.
+
+- There are 3 series in a general purpose instance.
+
+ - A Series
+ - A1
+ - M Series
+ - M4
+ - M5
+ - M5a
+ - M5ad
+ - M5d
+ - T Series
+ - T2
+ - t2.micro (Free tier eligible)
+ - T3
+ - T3a
+
+- Instances are available in four sizes.
+
+ - Nano or Micro (contains 1 virtual CPU, approximately 2GB RAM)
+ - Small
+ - Medium
+ - Large (Higher configuration of virtual CPUs and RAM)
+
+- A Series contains medium and large sizes.
+
+- M Series contains only large size.
+
+- T Series contains all the sizes from nano to large.
+
+### A Series
+
+- **A1-instances:** are ideally suitable for scale-out workloads(suddenly adding more configurations and managing them) that are supported by the [Arm ecosystem](https://www.arm.com/company/news/2019/04/the-arm-ecosystem-more-than-just-an-ecosystem).
+
+- **Arm ecosystem:** The Arm ecosystem provides customers with a wide range of products to get to market faster than the competition.
+
+- **Microservice:** It is a distinct method of developing software systems that tries to focus on building single-function modules with well defined interfaces and operations.
+
+- These instances are well-suited for the following applications.
+
+ - WebServer
+ - Containerized microservices
+ - Caching fleets(Stores the subset of data)
+ - Distributed data stores
+ - Application that requires Arm instruction set
+
+### M Series
+
+- **M4 instances:** The new M4 instances use a custom Intel Xeon E5-2676 v3 Haswell processor and it is optimized specifically for EC2.
+
+ - vCPU (Virtual CPU) -> 2 to 40 (max)
+ - RAM -> 8 GB to 160 GB (max)
+ - Instance storage -> EBS only
+
+- **M5, M5a, M5ad and M5d instances:** These instances provide an ideal cloud infrastructure, offering a balance of compute, memory and networking resources for a broad range of applications.
+
+ - It is used in the gaming server, web server, small and medium databases.
+ - vCPU -> 2 to 96 (max)
+ - RAM -> 8 GB to 384 GB (max)
+ - Instance storage -> EBS and NVMe SSD (Nitro system SSD)
+
+### T Series
+
+- **T2, T3, and T3a instances:** These instances provide a baseline of CPU performance from 5% to 40% with the ability to burst to a higher level(may increase above 40% but not necessary) when required by your workload.
+
+    - Unlimited instances can sustain high CPU performance for any period of time whenever required.
+
+    - They are not used for real-time scenarios. Instead they are used for lighter purposes like:
+ - Websites
+ - Code repositories
+ - Development, build, test
+ - Microservices of small applications
+
+ - vCPU -> 2 to 8
+ - RAM -> 0.5 GB to 32 GB
+
+## 2. Compute Optimized Instance
+
+- Compute optimized instances are ideal for compute-bound applications(processing several requests at the same time) that benefit from high performance processors.
+
+- It is cheap and cost-effective.
+
+- There are three types of series in it.
+
+ - C3 (It was the previous instance. Updated version is C5)
+ - C4
+ - C5
+ - C5n
+
+- **C4 instances** are optimized for compute intensive workloads and deliver very cost effective high performance at a low price per compute ratio.
+
+ - vCPU -> 2 to 36
+ - RAM -> 3.75 to 60 GB
+ - Storage -> EBS only
+ - Network Bandwidth -> 10 Gbps
+
+- Its usecases are
+
+ - Web server
+ - Batch Processing
+ - MMO Gaming
+ - Video Encoding
+
+- **C5 instances** are optimized for compute intensive workloads and deliver very cost-effective high performance at a low price per compute ratio.
+
+- It is powered by AWS Nitro system.
+
+    - vCPU -> 2 to 72
+    - RAM -> 4 to 192 GB
+    - Instance storage -> EBS and NVMe SSD
+ - Network Bandwidth -> Upto 25 Gbps
+
+- Its usecases are
+
+ - High performance web server
+ - Gaming
+ - Video Encoding
+
+- **Note:**
+
+    - C5 supports a maximum of 25 EBS volumes
+    - C5 uses the Elastic Network Adapter (ENA)
+    - C5 uses the new EC2 hypervisor
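+
+- A quick way to compare such instance types yourself is the AWS CLI (a sketch, assuming the CLI is installed and configured):
+
+    aws ec2 describe-instance-types \
+        --instance-types c4.large c5.large \
+        --query 'InstanceTypes[].[InstanceType,VCpuInfo.DefaultVCpus,MemoryInfo.SizeInMiB]' \
+        --output table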
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [55/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day56.md b/Days/day56.md
new file mode 100755
index 0000000..bdff761
--- /dev/null
+++ b/Days/day56.md
@@ -0,0 +1,108 @@
+On the fifty sixth day, I learned the following things about Cloud Computing.
+
+## 3. Memory Optimized Instance
+
+- Memory optimized instances are designed to deliver fast performance for workloads that process large data sets in memory.
+
+- There are three types of series in it and each of them have subtypes also.
+
+ - R series
+ - R4
+ - R5
+ - R5a
+ - R5ad
+ - R5d
+
+ - X series
+ - X1
+ - X1e
+
+ - Z series
+ - Z1d
+
+- **R series:** It is used for high performance relational (MySQL) and NoSQL (MongoDB, Cassandra) databases.
+
+- It suits distributed web-scale cache stores that provide in-memory caching of key-value type data. The R series is used to process more data at run time.
+
+- It is used in financial services and in big-data tools like Hadoop.
+
+ - vCPU -> 2 to 96
+ - RAM -> 16 to 768 GB
+ - Instance Storage -> EBS and NVMe SSD
+
+- **X series:** It is well suited for high performance database, memory intensive enterprise applications, relational database workload, and SAP HANA applications.
+
+- It is used in electronic design automation.
+
+ - vCPU -> 4 to 128
+ - RAM -> 122 to 3904 GB
+ - Instance Storage -> SSD
+
+- **Z series:** The Z series delivers a sustained all-core frequency of up to 4.0 GHz, the fastest of any cloud instance.
+
+- It uses the AWS Nitro system, a Xeon processor, and up to 1.8 TB of instance storage.
+
+ - vCPU -> 2 to 48
+ - RAM -> 16 to 384 GB
+    - Instance Storage -> NVMe SSD
+
+- It is used in electronic design automation and certain database workloads with high per-core licensing costs.
+
+## 4. Storage optimized instance
+
+- Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets.
+
+- They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.
+
+- There are three types of series in it.
+
+ - D series
+ - D2
+ - H series
+ - H1
+ - I series
+ - I3
+ - I3en
+
+- **D series:** It is well suited for massively parallel processing (MPP) in data warehouses, MapReduce, Hadoop distributed computing, and log or data processing applications.
+
+ - vCPU -> 4 to 36
+ - RAM -> 30.5 to 244 GB
+ - Instance Storage -> SSD
+
+- **H series:** This series provides up to 16 TB of HDD-based local storage, high disk throughput, and a balance of compute and memory.
+
+- It is well suited for apps requiring sequential access to large amounts of data on direct-attached instance storage.
+
+- There is another kind of storage, network-attached storage, which is accessed over a network rather than directly. EBS is network-attached storage.
+
+- It also suits applications that require high-throughput access to large quantities of data.
+
+ - vCPU -> 8 to 64
+ - RAM -> 32 to 256 GB
+ - Instance Storage -> HDD
+
+- **I series:** It is well suited for high-frequency online transaction processing (OLTP) systems.
+
+- It is used in relational databases, as well as in
+
+    - NoSQL databases
+    - Distributed file systems
+    - Data warehousing applications
+
+- Specifications:
+
+ - vCPU -> 2 to 96
+ - RAM -> 16 to 768 GB
+ - Instance Storage -> NVMe SSD
+ - Network Performance -> 25 Gbps to 100 Gbps
+
+- Sequential throughput
+
+ - Read -> 16 GB/s
+    - Write -> 6.4 GB/s (I3), 8 GB/s (I3en)
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [56/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day57.md b/Days/day57.md
new file mode 100755
index 0000000..90d564e
--- /dev/null
+++ b/Days/day57.md
@@ -0,0 +1,157 @@
+On the fifty seventh day, I learned the following things about Cloud Computing.
+
+## 5. Accelerated Computing Instances
+
+- Accelerated computing instance families use hardware accelerators or co-processors to perform some functions such as floating point number calculation, graphics processing or data pattern matching more efficiently than it is possible in software running on CPUs.
+
+- This instance is used in AI, ML, and DL. It is also used for live streaming that requires fast processing.
+
+- There are three types of series in it.
+
+ - F series
+ - F1
+ - P series
+ - P2
+ - P3
+ - G series
+ - G2
+ - G3
+
+**F series:**
+
+- The F1 instance offers customizable hardware acceleration with Field Programmable Gate Arrays (FPGAs). An FPGA can process an image quickly and make changes to it.
+
+- Each FPGA contains 2.5 million logic elements (logic gates like AND, NOT gates, etc.) and 6800 digital signal processing (DSP) engines.
+
+- They are designed to accelerate computationally intensive algorithms such as data-flow or highly parallel operations. Games usually require parallel processing, e.g. a button is pressed and the action happens live.
+
+- F1 provides local NVMe SSD storage.
+
+ - vCPU -> 8 to 64
+ - FPGA -> 1 to 8
+ - RAM -> 122 to 976 GB
+ - Instance storage -> NVMe SSD
+
+- It is used in Genomics research, financial analytics, real time video recording, and big data search.
+
+**P series:**
+
+- It uses NVIDIA Tesla GPUs
+
+- It provides high bandwidth networking.
+
+- Up to 32 GB of memory per GPU, which makes them ideal for deep learning and computational fluid dynamics.
+
+- **P2 instance**
+
+ - vCPU -> 4 to 64
+ - GPU -> 1 to 16
+ - RAM -> 61 to 732 GB
+ - GPU RAM -> 12 to 192 GB
+ - Network Bandwidth -> 25 Gbps
+
+- **P3 instance**
+
+ - vCPU -> 8 to 96
+ - GPU -> 1 to 8
+ - RAM -> 61 to 768 GB
+ - Storage -> SSD and EBS
+
+- It is used in machine learning, databases, seismic analysis(earthquake study), genomics, molecular modeling(chemistry related), AI, Deep learning.
+
+- **Note:** P3 supports CUDA 9 and the OpenCL API. P2 supports CUDA 8 and OpenCL 1.2.
+
+**G series:**
+
+- It is optimized for graphics intensive applications.
+
+- It is well suited for apps like 3D visualization applications.
+
+- G3 instances use NVIDIA Tesla M60 GPU and provide a cost effective high performance platform for graphic applications.
+
+ - vCPU -> 6 to 64
+ - GPU -> 1 to 4
+ - RAM -> 30.5 to 488 GB
+ - GPU Memory -> 8 to 32 GB
+ - Network Performance -> 25 Gbps
+
+- It is used in video creation services, 3D visualization, streaming graphics-intensive application.
+
+## 6. High Memory Instances
+
+- High memory instances are purpose-built to run large in-memory databases, including production deployments of SAP HANA, in the cloud.
+
+- It has only one series, the U series. The U series has 3 subtypes: U6, U9 and U12.
+
+**Note**
+
+- High memory instances are bare metal instances and do not run on a hypervisor.
+
+- A dedicated host will be provided for running only your server on it.
+
+- The dedicated host is available in the purchasing category for a 3-year term.
+
+- OS is directly installed on hardware.
+
+**Features**
+
+- Latest-generation Intel Xeon Platinum 8176M processor.
+
+- 6, 9, 12 TB of instance memory, the largest of any EC2 instance.
+
+- Powered by the AWS nitro system, a combination of dedicated hardware and lightweight hypervisor.
+
+- Bare metal performance with direct access to host hardware.
+
+- EBS is optimized by default at no additional cost.
+
+- Network performance is 25 Gbps.
+
+- Dedicated EBS bandwidth is 14 Gbps. It will read and write data from storage quickly.
+
+- Each instance offers 448 logical processors.
+
+## 7. Previous Version Instances
+
+- These are the previous instances that you can still purchase and use them.
+
+- If you bought any one of them previously as a dedicated instance and they have now been moved to the previous-version list, the price of these instances won't change. They will stay the same.
+
+ - T1
+ - M1
+ - C1
+ - CC2
+ - M2
+ - CR1
+ - CG1
+ - i2
+ - HS1
+ - M3
+ - C3
+ - R3
+
+## Important Questions
+
+- **Q.** When does the EC2 bill start and stop, and on what basis do we have to pay?
+
+    **A.** Billing for an EC2 instance starts when it boots and stops when you terminate it.
+
+- **Q.** If I stop or shut down the server or instance but don't terminate it, do I have to pay the whole bill?
+
+    **A.** No, you only pay the storage fee for the instance, not the compute fee, because the server is not running.
+
+- **Q.** Is per-second billing available for Windows?
+
+    **A.** No, it is currently available for Linux servers/instances only. For Windows, billing is per hour.
+
+- **Q.** Is the bill based on seconds, minutes, or hours of usage?
+
+    **A.** The bill is issued every month and is based on hourly usage; minutes and seconds are rolled up into those hours.
+
+- **Q.** Does the bill include taxes?
+
+    **A.** No, taxes are excluded from the bill. Only the instance payment is fixed; taxes vary from country to country.
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [57/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day58.md b/Days/day58.md
new file mode 100755
index 0000000..75e02fb
--- /dev/null
+++ b/Days/day58.md
@@ -0,0 +1,35 @@
+On the fifty-eighth day, I learned the following things about Cloud Computing.
+
+# AWS Demo
+
+- Go to [AWS](https://aws.amazon.com/) and create an account. If you don't know then here is the [video](https://www.youtube.com/watch?v=FKCh9drnc5E) through which you can create it.
+
+- In the upper right corner, you can select a region. If your country's region is not present, choose the nearest one; it will help data get fetched quickly.
+
+- Instances launched in one region won't be visible in another region.
+
+- You can check your billing from the upper right corner: click on the account holder's name and then click on the billing dashboard option.
+
+- It will lead you to the billing dashboard in which you can check your balance and other options.
+
+- If, as a beginner, you receive a bill that you don't want to pay, you can message AWS and ask them to waive it.
+
+- To do this, click on the question mark in the upper right corner and then click on the Support Center.
+
+- It will take you to another page. Click on the create case option and select the account and billing option.
+
+- It will open a form for you. Select the billing service from the options.
+
+- The category should be "dispute a charge". Click on the additional information button and it will take you to another page.
+
+- Write a subject and a message asking them to clear and remove the bill, giving them a reason.
+
+- Once you have written the reason, click on the next button to provide your contact information; they will contact you and you can explain the same reason again.
+
+- After that, click on the submit button.
+
+- The next step is to create an EC2 instance. I have briefly explained it in my ansible [video]() or in the nagios [video]().
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [58/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day59.md b/Days/day59.md
new file mode 100755
index 0000000..1f2b435
--- /dev/null
+++ b/Days/day59.md
@@ -0,0 +1,197 @@
+On the fifty-ninth day, I learned the following things about Helm.
+
+# Helm
+
+
+
+
+
+- Before discussing Helm, let's understand some common functionality present in Linux.
+
+- Suppose you created a Linux machine and then did the following things.
+
+ **1. Package management:** It means to install the packages by using the commands like `apt`, `yum` etc.
+
+ **2. Automated installation:** It means that it will install the packages automatically without the files moving manually.
+
+ **3. Version management:** It means that it will update the packages.
+
+ **4. Dependency management:** It means that it will install the dependencies that are required for a package to be installed.
+
+ **5. Remove:** It means that you can remove the packages and all the dependencies will be removed with it.
+
+## What is Helm?
+
+- Helm is a package manager that handles package management in Kubernetes.
+
+- That means creating and managing the manifest (YAML) files and managing the packages defined in those YAML files.
+
+- Suppose you have a 2-tier architecture in Kubernetes: one tier is the frontend and the other is the backend.
+
+- For the frontend, you create a Deployment. Inside that Deployment there are replicas; inside each pod there is a container, and each container runs an application. You also need a ConfigMap that holds the configuration data, and a Service YAML file that tells through which pods the application is reachable (a minimal Service of this kind is sketched after this walkthrough).
+
+ --------------------
+ | ------------- |
+ | | Service | |
+ | ------------- |
+ | ------------- |
+ | | configmap | |
+ | ------------- |
+ | --------------- |
+ | | Deployment | |
+ | --------------- |
+ --------------------
+
+
+- For the backend, you need a StatefulSet that runs the database application. You also need a Secret YAML file to store confidential data, and a Service YAML file that defines accessibility between pods inside and outside the cluster.
+
+ --------------------
+ | ------------- |
+ | | Service | |
+ | ------------- |
+ | ------------- |
+ | | Secret | |
+ | ------------- |
+ | ------------- |
+ | | Stateful | |
+ | | Set | |
+ | ------------- |
+ --------------------
+
+- The backend is executed first, and the result is then shown to you on the frontend.
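+
+- Here is a minimal sketch of such a Service manifest; the name `frontend-service`, the label `app: frontend`, and the ports are placeholders rather than values from an actual chart.
+
+    apiVersion: v1
+    kind: Service
+    metadata:
+      name: frontend-service        # hypothetical name
+    spec:
+      selector:
+        app: frontend               # must match the labels on the frontend pods
+      ports:
+        - port: 80                  # port exposed by the Service
+          targetPort: 8080          # port the container listens on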
+
+**Problem**
+
+- If you have one or two files, it is easy to run `kubectl apply -f` on each file and send it to the Kubernetes master.
+
+- But if there are hundreds of files then applying every file is not an easy task.
+
+**Solution**
+
+- To solve this problem, Helm comes into the picture. It takes all the YAML files of an application and treats them as a single package.
+
+- Now there is no need to apply each file individually. Instead, the package that contains the files is installed with one command, as sketched below.
+
+- Helm is a utility tool, or package manager, that makes these Kubernetes workflows easy. Now, instead of writing `apt` or `yum`, you write `helm`.
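+
+- As a rough sketch of that difference (the file and chart names here are only placeholders), compare:
+
+    # Without Helm: apply every manifest one by one
+    kubectl apply -f frontend-deployment.yaml
+    kubectl apply -f frontend-configmap.yaml
+    kubectl apply -f frontend-service.yaml
+    # ...and so on for every remaining file
+
+    # With Helm: install the whole packaged application with one command
+    helm install myapp ./myapp-chart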
+
+## Difference between Helm and a Helm chart
+
+- Helm creates a package that contains all the manifest files; that package is called a Helm chart.
+
+- A Helm chart is a collection of manifest files bundled into a single package (a typical chart layout is sketched below).
+
+- Now you can easily deploy the Helm chart (package) into the Kubernetes cluster.
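+
+- For reference, a chart generated by `helm create helloworld` roughly follows this standard layout (only the main files are shown here):
+
+    helloworld/
+    ├── Chart.yaml          # chart name, version, and description
+    ├── values.yaml         # default configuration values
+    ├── charts/             # dependent charts (subcharts)
+    └── templates/          # templated Kubernetes manifests
+        ├── deployment.yaml
+        ├── service.yaml
+        └── _helpers.tpl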
+
+## Brief Intro
+
+- Helm was first introduced in 2015.
+
+- Helm 3 was released in Nov 2019.
+
+- Helm helps you manage k8s applications with Helm charts, which let you define, install, upgrade, and remove even the most complex Kubernetes applications.
+
+- `helm` is the equivalent of `yum` and `apt`.
+
+- Helm is now an official k8s project and is a part of the CNCF.
+
+- The main building blocks of Helm-based deployments are Helm charts, which describe a configurable set of dynamically generated k8s resources.
+
+- The chart can either be stored locally or fetched from remote chart repositories.
+
+- Just like GitHub or Docker Hub, Helm has a central hub (Artifact Hub, formerly Helm Hub) from which you can find and install packages.
+
+## Why use Helm?
+
+- Writing and maintaining Kubernetes YAML manifests for all the required Kubernetes objects can be a time-consuming and tedious task.
+
+- Even for a simple deployment, you would need at least 3 YAML manifest files with duplicated and hardcoded values.
+
+- Helm simplifies the process by creating a single package that can be deployed to your cluster.
+
+- Helm automatically maintains a database of all versions of your releases, so when something goes wrong during a deployment, rolling back to the previous version is just one command away.
+
+## Some keywords to understand Helm
+
+- **Chart:** Helm charts are simply k8s YAML manifests combined into a single package that can be deployed to your k8s cluster.
+
+- **Release:** A chart can be installed many times, and each time it is installed a new release is created. Those releases let you move forward or back between states of your Kubernetes resources if you want to. Consider a MySQL chart: if you install that chart twice, each installation is its own release, which in turn has its own release name (see the sketch after this list).
+
+- Helm keeps track of every chart operation, such as install, upgrade, remove, and rollback.
+
+- **Repository:** A location where packaged charts can be stored and shared. It can be a local or remote repository.
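+
+- A minimal sketch of the MySQL example above, assuming a repository named `my-repo` has already been added (the release names are only examples):
+
+    # install the same chart twice, producing two independent releases
+    helm install mysql-one my-repo/mysql
+    helm install mysql-two my-repo/mysql
+
+    # each release appears separately, with its own name and revision
+    helm list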
+
+## How Helm helped us in the CI/CD pipeline?
+
+- Suppose we have Dev, QA, Pre-Production, and Production environments. Each environment needs a different number of replicas; for example, Dev might run 2 replicas, QA a different number, and so on.
+
+- The question arises: do we need to hardcode the number of replicas every time in each manifest file?
+
+- To solve this problem, Helm provides a template that contains placeholders into which you just put the values, such as the replica count.
+
+- There is another file (a values file) in which you write the numbers to allocate to the replicas. In this way, you don't need to edit the YAML file every time you change the number of replicas.
+
+- Instead, you just reference the values file from the manifest template: the key is present in the manifest and the value lives in the separate file that is pulled in when the chart is rendered.
+
+- The template engine then renders the replica count you specified in the values file for each environment.
+
+**Example**
+
+    apiVersion: apps/v1
+    kind: Deployment
+    metadata:
+      name: release-name-springboot
+    spec:
+      replicas: {{ .Values.replicaCount }}
+      selector:
+        matchLabels:
+          app.kubernetes.io/name: springboot
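+
+- A values file that the template above could read from might look like the following minimal sketch; the file name `values.yaml` is the Helm default, and the number is only an example:
+
+    # values.yaml -- default values referenced by the template above
+    replicaCount: 2
+
+- For another environment you could keep a separate file, say a hypothetical `values-qa.yaml` containing `replicaCount: 4`, and render it with something like `helm install myapp ./mychart -f values-qa.yaml`.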
+
+## Helm Architecture
+
+- Helm has a single-service, client-only architecture. Only the client is required; a server component is not needed because Kubernetes now provides RBAC (role-based access control).
+
+- Helm 2 had a client-server architecture, but with RBAC available in Kubernetes, Helm 3 arrived and requires only the client.
+
+- The client is responsible for implementing Helm; there is no core processing logic distributed among separate components.
+
+- The implementation of Helm 3 is a single command-line tool that knows how to manage the Kubernetes cluster.
+
+- Helm and its library are both written in Go.
+
+- In the previous scenario, you had to run `kubectl apply` to install the package defined in the manifest files.
+
+- That request went to the API server, and the resources were then installed on the nodes, like this.
+
+ --------------------
+ | -------------- |
+ | | API-SERVER | |
+ | -------------- |
+ | ↓ |
+ | ------------ |
+ | | Node 1 | |
+ | ------------ |
+ | ------------ |
+ | | Node 2 | |
+ | ------------ |
+ --------------------
+
+- In the Helm case, you first write `helm install jenkins`, which goes to the Helm client. The Helm client finds that package in the Helm registry or repository and fetches it.
+
+- The Helm client then hands that package to the API server, and it is installed on the nodes, like this.
+
+ --------------------
+ ---------------- | -------------- |
+ | Helm-client | ----------> | | API-SERVER | |
+ ---------------- | -------------- |
+ ↓ ↑ | ↓ |
+ ↓ ↑ | ------------ |
+ ↓ ↑ | | Node 1 | |
+ ↓ ↑ | ------------ |
+ ----------------- | ------------ |
+ | Helm registry | | | Node 2 | |
+ ----------------- | ------------ |
+ --------------------
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [59/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day6.md b/Days/day6.md
new file mode 100755
index 0000000..8e69782
--- /dev/null
+++ b/Days/day6.md
@@ -0,0 +1,9 @@
+On the sixth day, I learned the following things about Networking.
+
+Click Here:
+
+- 🌐 [Day No. 6 of Learning Networking](../PDFs/Computer-Networking-3.pdf)
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [6/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day60.md b/Days/day60.md
new file mode 100755
index 0000000..1ddb2cf
--- /dev/null
+++ b/Days/day60.md
@@ -0,0 +1,220 @@
+On the sixtieth day, I learned the following things about Helm.
+
+## Helm Commands
+
+- `helm repo` commands interact with chart repositories from which you can download charts according to your needs.
+
+- `helm repo list` will show you the list of repositories that are added to Helm.
+
+- `helm repo add <repo-name> <repo-url>` will add a repository to Helm.
+
+- `helm repo remove <repo-name>` will remove a repository from Helm.
+
+- `helm search repo <keyword>` will search for charts in the added repositories.
+
+- `helm show <all|chart|values> <chart-name>` will show information about a chart before installation.
+
+- `helm install <release-name> <chart-name>` will install a package (chart) for you.
+
+- `helm install <release-name> <chart-name> --wait --timeout 10s` will wait up to 10 seconds for the resources to become ready and then show you the result of the installation.
+
+- `helm create <chart-name>` will create a new chart with the given name.
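+
+- Putting a few of these commands together, a short example session might look like the following; the repository URL is the Bitnami one used later in this document, and the release and chart names are only examples:
+
+    helm repo add my-repo https://charts.bitnami.com/bitnami
+    helm repo list
+    helm search repo nginx
+    helm show values my-repo/nginx
+    helm install my-nginx my-repo/nginx --wait --timeout 60s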
+
+## Helm Installation
+
+- Before installing Helm, you first need to install minikube, either on your local machine or on a remote server.
+
+- There are a few more commands involved when you install minikube on an AWS server. I will show you all of them.
+
+- First, open the AWS Management Console and create a t2.medium instance.
+
+- Click on launch instance, give it the tag name minikube, and select an Ubuntu 18.04 image.
+
+- Choose the t2.medium instance type; it is not free but it is required, and it won't cost you much.
+
+- Leave details such as the number of instances as they are. Create a security group, name it **minikube-sg**, and allow all traffic.
+
+- Create a key pair, download it for later use, and launch the instance.
+
+- Once the instance is launched and running, click on it and copy its public IP address.
+
+- Go to the directory where the key is present and connect to the machine by writing `ssh -i <key.pem> ubuntu@<public-ip>`.
+
+- If you use `ec2-user` as we did before, it will give you this error: `ec2-user@<public-ip>: Permission denied (publickey).` Check out this [website](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connection-prereqs.html) for setting different usernames.
+
+- It may also give you another error like this: **Permissions 0664 for 'key.pem' are too open.**
+
+- To fix this error, change the permission by writing `chmod 0400 <key.pem>` and then run `ssh -i <key.pem> ubuntu@<public-ip>` again.
+
+- Once you are connected to the instance, write `sudo su` to switch to the root user.
+
+- First of all, install Docker by writing `sudo apt update && apt -y install docker.io`
+
+- Then install kubectl by writing `curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x ./kubectl && sudo mv ./kubectl /usr/local/bin/kubectl`
+
+- Then install minikube by writing `curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin`
+
+- Once minikube is installed, first install a required dependency by writing `apt install conntrack`
+
+- After that, type `minikube start --vm-driver=docker` and it will give you the following error.
+
+
+
+
+
+- It is saying that Docker should not be used with root privileges. To solve this problem, press `CTRL+D` to exit the root shell.
+
+- After that, type `minikube start --vm-driver=docker` again and it will give you another error, shown below.
+
+
+
+
+
+- To solve this problem, type the following commands.
+
+ sudo groupadd docker
+ sudo usermod -aG docker $USER
+ newgrp docker
+
+- If it still isn't working, then visit this [website](https://linuxhandbook.com/docker-permission-denied/#:~:text=deal%20with%20it.-,Fix%201%3A%20Run%20all%20the%20docker%20commands%20with%20sudo,the%20Docker%20daemon%20socket%27%20anymore.) which shows you more ways to fix it.
+
+- Once the commands are executed successfully, you will get the following result.
+
+
+
+
+
+- If you type `minikube status`, it will show you running status.
+
+- Once minikube is started, download the Helm install script by writing `curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3`
+
+- After downloading it, write `chmod 700 get_helm.sh` to make it executable and then `./get_helm.sh` to run it.
+
+- If you type `which helm`, it will show you the location `/usr/local/bin/helm`
+
+- Check the Helm version by typing `helm version`
+
+## Initial commands
+
+- `helm repo add stable https://charts.helm.sh/stable` will add your first chart repository, named stable, to Helm.
+
+- Once the repo is added, type `helm repo list`. You will see that a new repo is added.
+
+- If you want to remove the stable repo, type `helm repo remove stable`. If you type `helm repo list` again, it will show you nothing.
+
+- Add the `stable` repo again and then type `helm search repo jenkins`. It will find the Jenkins chart under the name `stable/jenkins`.
+
+- `helm search repo tomcat` will find the Tomcat chart under the name `stable/tomcat`.
+
+- `helm search repo apache` will search for apache and return multiple charts.
+
+- `helm show values stable/tomcat` will show you the values of tomcat.
+
+- `helm show chart stable/tomcat` will show you the chart of tomcat.
+
+- `helm show all stable/tomcat` will show you all the information of the tomcat chart.
+
+- `sudo apt install tree` will install the tree package for you.
+
+- `helm create helloworld` will create the helloworld chart for you; type `ls` to see the **helloworld** directory.
+
+- Write `tree`. It will show you all the directories, subdirectories, and the files inside them.
+
+- To delete the chart, type `rm -rf helloworld`.
+
+- Type `kubectl get all`. It will give you all the details of the pods, deployments, services etc.
+
+- Type `helm install testjenkins stable/jenkins`. It will deploy your jenkins release.
+
+- Type `helm install testtomcat stable/tomcat`. It will deploy your tomcat release.
+
+- Type `kubectl get all` again and it will show you all the data like pods, deployments etc of the charts that you have created.
+
+- `helm list` will show you the information of that release.
+
+- `helm delete testjenkins` will uninstall the testjenkins release.
+
+- You can also do a dry run. It means the command won't actually be executed, but it will show you the output as if it had been.
+
+- To execute a dry run, type `helm install --dry-run testchart stable/tomcat`.
+
+- If you type `kubectl get all`, you will see that testchart was not installed; the dry run only showed the output without executing the command.
+
+- Now delete the testtomcat release by writing `helm delete testtomcat`.
+
+- Again type `kubectl get all`. It will show you nothing.
+
+- `helm list` will show you nothing because everything is uninstalled.
+
+- `helm install --wait --timeout 20s testtomcat stable/tomcat` will wait up to 20 seconds for the resources to become ready while the release is created.
+
+- Type `kubectl get all` and it will show you the resources of the new release that was created.
+
+- Now if you type `helm list`, it will also show you the chart version.
+
+- Install another Tomcat release with a different chart version by typing `helm install testchart stable/tomcat --version <chart-version>`.
+
+- If you type `helm list`, it will show you two Tomcat releases with different chart versions.
+
+## Change the configuration during installation
+
+- If you want to set parameters or change configuration data during installation, there are two ways: the first is `--set` and the second is `--values` (or `-f`).
+
+- `--set` specifies individual values directly on the command line during installation.
+
+- `--values` specifies a YAML file that contains the values to apply. You can use it when you first install your chart (a sketch is shown after the `--set` example below).
+
+- First delete all the releases by writing `helm delete <release-name>` and then write `helm list` to check whether any releases are still present.
+
+- Show the values of the chart before installation by writing `helm show values stable/tomcat`. If you scroll up, you will see that under service there is the type LoadBalancer, which you are going to change.
+
+- Write `helm install testtomcat stable/tomcat --set service.type=NodePort` and the service type LoadBalancer is now overridden to NodePort from the command line.
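+
+- The equivalent with `--values`/`-f` is roughly the following; the file name `custom-values.yaml` is only an example:
+
+    # custom-values.yaml
+    service:
+      type: NodePort
+
+    # install using the values file instead of --set
+    helm install testtomcat stable/tomcat -f custom-values.yaml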
+
+- `helm get values <release-name>` will show you the parameters (values) that were overridden when the chart was installed.
+
+- `helm list` will show you all the releases.
+
+- `helm status <release-name>` will show you the status of a particular release.
+
+- `helm history <release-name>` will show you the history of the release.
+
+- `helm install testchart stable/tomcat --version 0.4.0` will install an older chart version as a new release.
+
+- `helm list` will show you the number of releases.
+
+- `helm upgrade <release-name> <chart-name>` will upgrade the release with that chart, e.g. `helm upgrade testtomcat stable/tomcat`.
+
+- `helm rollback <release-name> <revision>` will roll the release back to a specific revision.
+
+## Some other commands
+
+- Go to the Artifact Hub [website](https://artifacthub.io/), where you will find many charts. Search for the MySQL chart and install it using the following commands.
+
+ helm repo add my-repo https://charts.bitnami.com/bitnami
+ helm install my-release my-repo/mysql
+
+- Once the MySQL chart is installed, type `helm list`. It will show you a new release by the name of my-release.
+
+- You can delete the chart using the following command.
+
+ helm delete my-release
+
+- `helm history <release-name>` will show you the history of the release.
+
+- Install MySQL again, but a previous chart version, under a new name: `helm install my-release2 my-repo/mysql --version <chart-version>`.
+
+- Type `helm list` to see the multiple releases.
+
+- `helm pull <chart-name>` will download the tar file of a chart from a repository, where the chart name is e.g. `stable/tomcat`.
+
+- `helm pull --untar <chart-name>` will download the tar file of a chart from a repository and also untar it.
+
+- Type `ls` to see the untarred directory and go inside it to check the files.
+
+- You can install a chart from a local chart archive, e.g. `helm install <release-name> tomcat-0.4.3.tgz`.
+
+- You can also install a release from an unpacked chart directory, e.g. `helm install <release-name> <untarred-directory-name>`. The release name can be anything.
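+
+- Putting the last few steps together, a rough end-to-end sketch (the release name is arbitrary) looks like this:
+
+    # download and unpack the chart locally
+    helm pull stable/tomcat --untar
+    ls tomcat/
+
+    # install from the unpacked directory under an arbitrary release name
+    helm install localtomcat ./tomcat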
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [60/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day7.md b/Days/day7.md
new file mode 100755
index 0000000..8077e21
--- /dev/null
+++ b/Days/day7.md
@@ -0,0 +1,9 @@
+On the seventh day, I learned the following things about Networking.
+
+Click Here:
+
+- 🌐 [Day No. 7 of Learning Networking](../PDFs/Computer-Networking-4.pdf)
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [7/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day8.md b/Days/day8.md
new file mode 100755
index 0000000..7e3e111
--- /dev/null
+++ b/Days/day8.md
@@ -0,0 +1,9 @@
+On the eighth day, I learned the following things about Networking.
+
+Click Here:
+
+- 🌐 [Day No. 8 of Learning Networking](../PDFs/Computer-Networking-5.pdf)
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [8/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Days/day9.md b/Days/day9.md
new file mode 100755
index 0000000..835ff5e
--- /dev/null
+++ b/Days/day9.md
@@ -0,0 +1,9 @@
+On the ninth day, I learned the following things about Networking.
+
+Click Here:
+
+- 🌐 [Day No. 9 of Learning Networking](../PDFs/Computer-Networking-6.pdf)
+
+## **Explaining it in a video**
+
+Here you can get an explanation in a video. [9/60 Day of DevOps Challenge]()
\ No newline at end of file
diff --git a/Images/catalog.png b/Images/catalog.png
new file mode 100644
index 0000000..e813926
Binary files /dev/null and b/Images/catalog.png differ
diff --git a/Images/choose-sub.jpg b/Images/choose-sub.jpg
new file mode 100644
index 0000000..0373921
Binary files /dev/null and b/Images/choose-sub.jpg differ
diff --git a/Images/cicd.png b/Images/cicd.png
new file mode 100644
index 0000000..2824802
Binary files /dev/null and b/Images/cicd.png differ
diff --git a/Images/dashboard.png b/Images/dashboard.png
new file mode 100644
index 0000000..dd18735
Binary files /dev/null and b/Images/dashboard.png differ
diff --git a/Images/datree_policies.png b/Images/datree_policies.png
new file mode 100644
index 0000000..ba4ee37
Binary files /dev/null and b/Images/datree_policies.png differ
diff --git a/Images/devops.png b/Images/devops.png
new file mode 100755
index 0000000..d057e9c
Binary files /dev/null and b/Images/devops.png differ
diff --git a/Images/devops2.png b/Images/devops2.png
new file mode 100755
index 0000000..04d6757
Binary files /dev/null and b/Images/devops2.png differ
diff --git a/Images/devops3.png b/Images/devops3.png
new file mode 100755
index 0000000..e5de987
Binary files /dev/null and b/Images/devops3.png differ
diff --git a/Images/devops4.png b/Images/devops4.png
new file mode 100755
index 0000000..f11f6ef
Binary files /dev/null and b/Images/devops4.png differ
diff --git a/Images/first.png b/Images/first.png
new file mode 100644
index 0000000..abafa61
Binary files /dev/null and b/Images/first.png differ
diff --git a/Images/helm.svg b/Images/helm.svg
new file mode 100644
index 0000000..1e2db8a
--- /dev/null
+++ b/Images/helm.svg
@@ -0,0 +1,28 @@
+
+
diff --git a/Images/infrastructer.png b/Images/infrastructer.png
new file mode 100644
index 0000000..47bfa9e
Binary files /dev/null and b/Images/infrastructer.png differ
diff --git a/Images/initial.png b/Images/initial.png
new file mode 100644
index 0000000..ec2297e
Binary files /dev/null and b/Images/initial.png differ
diff --git a/Images/jenkins.png b/Images/jenkins.png
new file mode 100644
index 0000000..8b0aecc
Binary files /dev/null and b/Images/jenkins.png differ
diff --git a/Images/jenkins2.png b/Images/jenkins2.png
new file mode 100644
index 0000000..03666ac
Binary files /dev/null and b/Images/jenkins2.png differ
diff --git a/Images/job.png b/Images/job.png
new file mode 100644
index 0000000..8959a44
Binary files /dev/null and b/Images/job.png differ
diff --git a/Images/jobfailure.png b/Images/jobfailure.png
new file mode 100644
index 0000000..cbe12c9
Binary files /dev/null and b/Images/jobfailure.png differ
diff --git a/Images/kubernetes-objects-1.png b/Images/kubernetes-objects-1.png
new file mode 100755
index 0000000..0bbe95d
Binary files /dev/null and b/Images/kubernetes-objects-1.png differ
diff --git a/Images/kubernetes-objects-2.png b/Images/kubernetes-objects-2.png
new file mode 100755
index 0000000..0dc01df
Binary files /dev/null and b/Images/kubernetes-objects-2.png differ
diff --git a/Images/kubescape.png b/Images/kubescape.png
new file mode 100644
index 0000000..541438d
Binary files /dev/null and b/Images/kubescape.png differ
diff --git a/Images/lens-id-signup.jpg b/Images/lens-id-signup.jpg
new file mode 100644
index 0000000..17d791c
Binary files /dev/null and b/Images/lens-id-signup.jpg differ
diff --git a/Images/lens-id.jpg b/Images/lens-id.jpg
new file mode 100644
index 0000000..ce1c7f0
Binary files /dev/null and b/Images/lens-id.jpg differ
diff --git a/Images/lens-login.jpg b/Images/lens-login.jpg
new file mode 100644
index 0000000..0c6bab1
Binary files /dev/null and b/Images/lens-login.jpg differ
diff --git a/Images/lifecycle.png b/Images/lifecycle.png
new file mode 100644
index 0000000..0c4c07f
Binary files /dev/null and b/Images/lifecycle.png differ
diff --git a/Images/monokle.png b/Images/monokle.png
new file mode 100644
index 0000000..990ea1d
Binary files /dev/null and b/Images/monokle.png differ
diff --git a/Images/path.png b/Images/path.png
new file mode 100755
index 0000000..7cbc6b6
Binary files /dev/null and b/Images/path.png differ
diff --git a/Images/policy.png b/Images/policy.png
new file mode 100644
index 0000000..b4adba4
Binary files /dev/null and b/Images/policy.png differ
diff --git a/Images/popup.png b/Images/popup.png
new file mode 100644
index 0000000..817300c
Binary files /dev/null and b/Images/popup.png differ
diff --git a/Images/ready.jpg b/Images/ready.jpg
new file mode 100644
index 0000000..551e390
Binary files /dev/null and b/Images/ready.jpg differ
diff --git a/Images/result.png b/Images/result.png
new file mode 100644
index 0000000..85a1b10
Binary files /dev/null and b/Images/result.png differ
diff --git a/Images/second.png b/Images/second.png
new file mode 100644
index 0000000..4d96386
Binary files /dev/null and b/Images/second.png differ
diff --git a/Images/terraform1.png b/Images/terraform1.png
new file mode 100644
index 0000000..e5e5f45
Binary files /dev/null and b/Images/terraform1.png differ
diff --git a/Images/terraform2.jpeg b/Images/terraform2.jpeg
new file mode 100644
index 0000000..7050a25
Binary files /dev/null and b/Images/terraform2.jpeg differ
diff --git a/Images/third.png b/Images/third.png
new file mode 100644
index 0000000..518f577
Binary files /dev/null and b/Images/third.png differ
diff --git a/Images/verify-email-then-subscribe.jpg b/Images/verify-email-then-subscribe.jpg
new file mode 100644
index 0000000..f25a6f1
Binary files /dev/null and b/Images/verify-email-then-subscribe.jpg differ
diff --git a/Images/welcome-screen.png b/Images/welcome-screen.png
new file mode 100644
index 0000000..340a400
Binary files /dev/null and b/Images/welcome-screen.png differ
diff --git a/PDFs/Computer-Networking-1.pdf b/PDFs/Computer-Networking-1.pdf
new file mode 100755
index 0000000..2f6f3bc
Binary files /dev/null and b/PDFs/Computer-Networking-1.pdf differ
diff --git a/PDFs/Computer-Networking-2.pdf b/PDFs/Computer-Networking-2.pdf
new file mode 100755
index 0000000..852fa77
Binary files /dev/null and b/PDFs/Computer-Networking-2.pdf differ
diff --git a/PDFs/Computer-Networking-3.pdf b/PDFs/Computer-Networking-3.pdf
new file mode 100755
index 0000000..ab09156
Binary files /dev/null and b/PDFs/Computer-Networking-3.pdf differ
diff --git a/PDFs/Computer-Networking-4.pdf b/PDFs/Computer-Networking-4.pdf
new file mode 100755
index 0000000..c2614e4
Binary files /dev/null and b/PDFs/Computer-Networking-4.pdf differ
diff --git a/PDFs/Computer-Networking-5.pdf b/PDFs/Computer-Networking-5.pdf
new file mode 100755
index 0000000..9ca923c
Binary files /dev/null and b/PDFs/Computer-Networking-5.pdf differ
diff --git a/PDFs/Computer-Networking-6.pdf b/PDFs/Computer-Networking-6.pdf
new file mode 100755
index 0000000..50ae020
Binary files /dev/null and b/PDFs/Computer-Networking-6.pdf differ
diff --git a/PDFs/Computer-Networking-7.pdf b/PDFs/Computer-Networking-7.pdf
new file mode 100755
index 0000000..6ff2a78
Binary files /dev/null and b/PDFs/Computer-Networking-7.pdf differ
diff --git a/PDFs/Computer-Networking-8.pdf b/PDFs/Computer-Networking-8.pdf
new file mode 100755
index 0000000..5819b37
Binary files /dev/null and b/PDFs/Computer-Networking-8.pdf differ
diff --git a/PDFs/Docker-1.pdf b/PDFs/Docker-1.pdf
new file mode 100755
index 0000000..d2348f6
Binary files /dev/null and b/PDFs/Docker-1.pdf differ
diff --git a/PDFs/Kubernetes-1.pdf b/PDFs/Kubernetes-1.pdf
new file mode 100755
index 0000000..5116713
Binary files /dev/null and b/PDFs/Kubernetes-1.pdf differ
diff --git a/PDFs/Kubernetes-2.pdf b/PDFs/Kubernetes-2.pdf
new file mode 100755
index 0000000..2f649a5
Binary files /dev/null and b/PDFs/Kubernetes-2.pdf differ
diff --git a/PDFs/YAML-1.pdf b/PDFs/YAML-1.pdf
new file mode 100755
index 0000000..4abf408
Binary files /dev/null and b/PDFs/YAML-1.pdf differ
diff --git a/README.md b/README.md
new file mode 100755
index 0000000..6f69279
--- /dev/null
+++ b/README.md
@@ -0,0 +1,135 @@
+# 60-Days-Of-DevOps
+
+
+
+
+
+This repository is used to document my journey through the **60 Days of DevOps** challenge. The reason for this documentation is to help others understand the things that are required for *DevOps*.
+
+This journey will not cover all things about "DevOps" but it will cover the areas that I feel will benefit my learning and understanding overall. I have created 60 videos for 60 days. So if you don't understand the documentation, you can watch the videos also.
+
+Let's write the DevOps definition here and then start the journey day by day. I hope you will enjoy this. Happy Learning!
+
+# What is DevOps?
+
+The word DevOps is a combination of the terms development and operations, meant to represent a collaborative or shared approach to the tasks performed by a company's application development and IT operations teams.
+
+
+
+
+
+It is an ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.
+
+## How DevOps Works?
+
+Under a DevOps model, development and operations teams are no longer “siloed.” Sometimes, these two teams are merged into a single team where the engineers work across the entire application lifecycle, from development and test to deployment to operations, and develop a range of skills not limited to a single function.
+
+In some DevOps models, quality assurance and security teams may also become more tightly integrated with development and operations and throughout the application lifecycle. When security is the focus of everyone on a DevOps team, this is sometimes referred to as DevSecOps.
+
+# Progress
+
+## **Learn Git and GitHub**
+- 📚 [**Day No. 1:** Git Init, Commit, Stash, etc](Days/day1.md)
+- 📚 [**Day No. 2:** Git Branch & Checkout](Days/day2.md)
+- 📚 [**Day No. 3:** GitHub Origin and Upstream repositories](Days/day3.md)
+
+## **Learn Networking**
+- 🌐 [**Day No. 4:** Computer Networking, Protocols, IP Address, etc](Days/day4.md)
+- 🌐 [**Day No. 5:** Identify an Application, Ways of Communication, etc](Days/day5.md)
+- 🌐 [**Day No. 6:** Network Topologies & OSI Model](Days/day6.md)
+- 🌐 [**Day No. 7:** TCP/IP Model, Networking Architecture, etc](Days/day7.md)
+- 🌐 [**Day No. 8:** HTTP Methods, DNS, etc](Days/day8.md)
+- 🌐 [**Day No. 9:** Transport Layer, TCP & UDP, etc](Days/day9.md)
+- 🌐 [**Day No. 10:** Network Layer, Internet protocols, etc](Days/day10.md)
+- 🌐 [**Day No. 11:** Data Link Layer, Firewall, etc](Days/day11.md)
+
+## **Learn Linux**
+- 🐧 [**Day No. 12:** Copying, Moving, & Removing Files, etc](Days/day12.md)
+- 🐧 [**Day No. 13:** Root Privilege, Searching Files & Finding Text in it, etc](Days/day13.md)
+- 🐧 [**Day No. 14:** Aliases, Sorting of Data, etc](Days/day14.md)
+
+## **Learn YAML**
+- ⌨️ [**Day No. 15:** Markup Language, Objects, etc](Days/day15.md)
+- ⌨️ [**Day No. 16:** YAML Syntax, Listing, Data Types, etc](Days/day16.md)
+- ⌨️ [**Day No. 17:** Sequence, Map, Pairs, etc](Days/day17.md)
+
+## **Learn Docker**
+- 🏗️ [**Day No. 18:** Virtual Machine, Container, & Docker](Days/day18.md)
+- 🏗️ [**Day No. 19:** Pull Image, Start & Stop Containers, etc](Days/day19.md)
+- 🏗️ [**Day No. 20:** Docker Build & Docker Engine](Days/day20.md)
+
+## **Learn Kubernetes**
+- ☸ [**Day No. 21:** Monolithic vs Microservices, Kubernetes & its history, etc](Days/day21.md)
+- ☸ [**Day No. 22:** Kubernetes Architecture, Master & Worker Nodes, etc](Days/day22.md)
+- ☸ [**Day No. 23:** Minikube Installation, & Executing YAML Files, etc](Days/day23.md)
+- ☸ [**Day No. 24:** Labels & Selectors and their Usage](Days/day24.md)
+- ☸ [**Day No. 25:** Deployment & Rollback](Days/day25.md)
+- ☸ [**Day No. 26:** Kubernetes Networking](Days/day26.md)
+- ☸ [**Day No. 27:** Jobs, Init containers & Pod Lifecycle](Days/day27.md)
+
+## **Learn Kubernetes Tools**
+
+- 📜 [**Day No. 28:** Learn Datree](Days/day28.md)
+- 📜 [**Day No. 29:** Learn Lens](Days/day29.md)
+- 📜 [**Day No. 30:** Learn Monokle](Days/day30.md)
+- 📜 [**Day No. 31:** Learn Kubescape](Days/day31.md)
+- 📜 [**Day No. 32:** Learn GitHub Actions](Days/day32.md)
+
+## **Learn Prometheus**
+
+- 📜 [**Day No. 33:** Learn Prometheus](Days/day33.md)
+- 📜 [**Day No. 34:** Prometheus installation & Node Exporter](Days/day34.md)
+
+## **Learn Terraform**
+
+- 📜 [**Day No. 35:** DevOps Tasks Before & After Automation, Terraform Intro](Days/day35.md)
+- 📜 [**Day No. 36:** Terraform Configurations, Write Multiple Blocks, etc](Days/day36.md)
+- 📜 [**Day No. 37:** Set a Default Value, Multiple Variables, etc](Days/day37.md)
+- 📜 [**Day No. 38:** Map Variable, TFVARS files, etc](Days/day38.md)
+- 📜 [**Day No. 39:** Terraform Core & Terraform Plugin](Days/day39.md)
+- 📜 [**Day No. 40:** Terraform .tfstate file & destroy Command](Days/day40.md)
+- 📜 [**Day No. 41:** Terraform Refresh, Output, etc](Days/day41.md)
+
+## **Learn Ansible**
+
+- 📜 [**Day No. 42:** System Administrator Problems & Solutions, etc](Days/day42.md)
+- 📜 [**Day No. 43:** Create a User & Make Changes in Nodes](Days/day43.md)
+- 📜 [**Day No. 44:** Ad-hoc Commands, Ansible Modules, etc](Days/day44.md)
+- 📜 [**Day No. 45:** Learn Ansible Playbook](Days/day45.md)
+- 📜 [**Day No. 46:** Ansible Conditions and Roles](Days/day46.md)
+
+## **Learn CI/CD Pipeline**
+
+- 📜 [**Day No. 47:** Before & After CI/CD Pipeline, & Jenkins Intro](Days/day47.md)
+- 📜 [**Day No. 48:** Jenkins Installation & First Hello-World, etc](Days/day48.md)
+- 📜 [**Day No. 49:** Search Panel, Installation of Plugins](Days/day49.md)
+- 📜 [**Day No. 50:** Jenkins Role Base Access Control](Days/day50.md)
+- 📜 [**Day No. 51:** Jenkins Upstream and Downstream](Days/day51.md)
+
+## **Learn Continuous Monitoring**
+
+- 📜 [**Day No. 52:** Continuous Monitoring & Nagios Intro](Days/day52.md)
+- 📜 [**Day No. 53:** Installation of Nagios & Dashboard Overview](Days/day53.md)
+
+## **Learn Cloud Computing**
+
+- 📜 [**Day No. 54:** Before & After Cloud, Services in Cloud, etc](Days/day54.md)
+- 📜 [**Day No. 55:** Elastic Compute Cloud, General Purpose & Compute Optimized Instances](Days/day55.md)
+- 📜 [**Day No. 56:** Memory & Storage Optimized Instances](Days/day56.md)
+- 📜 [**Day No. 57:** Accelerated Computing, High Memory Instances, etc](Days/day57.md)
+- 📜 [**Day No. 58:** AWS Demo](Days/day58.md)
+
+## **Learn Helm**
+
+- 📜 [**Day No. 59:** Intro of Helm and Its Usage](Days/day59.md)
+- 📜 [**Day No. 60:** Learn Helm Commands](Days/day60.md)
+
+
+## **Author Info**
+
+- YouTube -> [iBilalKayy](https://www.youtube.com/channel/UCBLTfRg0Rgm4FtXkvql7DRQ)
+- Hashnode -> [ibilalkayy](https://ibilalkayy.hashnode.dev/)
+- LinkedIn -> [ibilalkayy](https://www.linkedin.com/in/ibilalkayy/)
+- Twitter -> [ibilalkayy](https://twitter.com/ibilalkayy)
+
+[Back to Top](#60-Days-Of-DevOps)
\ No newline at end of file