Docker Revision

Docker is a tool that allows you to package your application along with everything it needs to run (like libraries, dependencies, and configurations) into a container. This ensures that the application runs the same way no matter where it's deployed, whether it's on your computer, a server, or in the cloud.

DOCKER ARCHITECTURE :

NOTE: All the containers are isolated from each other.

DOCKER vs VMs :

Working of Docker :

NOTE: A Docker registry is a central service for storing and distributing Docker images; Docker Hub is the default public registry provided by Docker. A Docker repository is a collection of related images within a registry (e.g., the nginx repository on Docker Hub), usually distinguished by tags.
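For instance, a fully qualified image reference spells out all three parts; the short form docker pull nginx is equivalent to the first command below, since the registry and tag default to Docker Hub and latest:

docker pull docker.io/library/nginx:latest   # registry/repository:tag
docker pull nginx:1.25                       # repository:tag (registry defaults to Docker Hub)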

Steps to dockerize a simple react app:-

  1. Create a simple react app using the command "npx create-react-app appname". Note: make sure the proper Node version is installed.

  2. Create a Dockerfile:

     FROM node:22           # desired node version (must be available on Docker Hub)
     WORKDIR /myapp         # all the contents will live inside the directory /myapp
     COPY . .               # copy all the files from the current directory of the system into /myapp
     RUN npm install        # node_modules was deleted from the app beforehand; this reinstalls it inside the image (keeping the build context lighter)
     EXPOSE 3000            # port at which the app is available
     CMD ["npm", "start"]   # command executed when a container starts from this image

    Note: Dockerfile comments use #, and CMD runs when a container starts from the image, not when the image is built.

  3. Create a docker image using the command "docker build .".

  4. Get details of that image using the command "docker image ls" // we will get important details like IMAGE ID, SIZE, etc.

  5. Note : "." with docker build confirms that the docker file is present in the current location.

  6. Next, we can run it with the command: docker run -p 3000:3000 IMAGE_ID. Then, to stop the container: first use "docker ps" to see the details of the container like CONTAINER ID, NAME, etc. Then use the command "docker stop CONTAINER_NAME".

docker run -p VS docker run -d VS docker run :

1. docker run -p 3000:3000 IMAGE_ID Purpose: This runs a Docker container and maps port 3000 from your local machine (host) to port 3000 inside the container. Example: If you're running a web server inside the container on port 3000, this command allows you to access it by going to localhost:3000 in your browser.

Example: docker run -p 3000:3000 my-image This means: Local port 3000 is forwarded to the container's port 3000. You can access the application on your local machine via http://localhost:3000.

2. docker run -d -p 3000:3000 IMAGE_ID Purpose: This runs the Docker container in detached mode (background) while mapping port 3000 on the host to port 3000 inside the container.

Explanation: The -d option makes sure the container runs in the background, so you don't see logs or output in your terminal.

Example: docker run -d -p 3000:3000 my-image This means: The container runs in the background. Local port 3000 is mapped to container port 3000. You can still access the web app via localhost:3000 but won't see logs in your terminal.

3. docker run IMAGE_ID Purpose: This simply runs the Docker container without any extra options.

Explanation: By default, this runs the container in the foreground (you’ll see logs/output in your terminal) and doesn't map any ports.

Example: docker run my-image This means: The container starts, but you won't be able to access it via the browser because no ports are mapped. The terminal will show the container's output.

In summary:

  • docker run -p 3000:3000 IMAGE_ID: Runs the container and maps local port 3000 to container port 3000.

  • docker run -d -p 3000:3000 IMAGE_ID: Runs the container in the background with port mapping.

  • docker run IMAGE_ID: Runs the container without port mapping, attached to the terminal.

NOTE: We can monitor everything related to this using Docker Desktop (since currently we are working on Windows with VS Code).

Points to be noted:

  1. If the docker run -d -p 3000:3000 IMAGE_ID command is used twice at a time, it will show an error because port 3000 is already allocated. So to run multiple containers we can use consecutive commands like "docker run -d -p 3001:3000 IMAGE_ID", "docker run -d -p 3002:3000 IMAGE_ID", etc. Now we can see localhost:3000, localhost:3001 and localhost:3002. Since containers remain isolated from each other, no conflict occurs here. Meaning of the ports in the command "docker run -d -p 3001:3000 IMAGE_ID": -p 3001:3000 maps a port between your local machine (host) and the Docker container.

  2. 3001 (on the left side): the port on your local machine (host), which you will use to access the service. 3000 (on the right side): the port inside the Docker container, where the application is running. Example: your Docker container runs an application on port 3000 inside the container, and you map port 3001 on your local machine to it. When you visit http://localhost:3001 in your browser or make requests to port 3001 on your machine, those requests are forwarded to port 3000 inside the container, where the application is actually running.

  3. docker ps shows only the containers which are running, but docker ps -a shows all the containers, even the ones which are not running. We can use the command "docker rm CONTAINER_NAME" to remove a container. We can also do these tasks manually from Docker Desktop.

  4. Use and Benefit of "--rm" option in the command "docker run -d --rm -p 3000:3000 IMGID" :- Purpose: The --rm option tells Docker to automatically remove the container once it stops.

    How it works: normally, when a container stops running, it remains on your system; it still takes up space, and you would need to clean it up manually using docker rm. With --rm, Docker automatically deletes the container as soon as it stops running. Benefits: no leftover containers (after the container finishes its job or crashes, it's removed automatically), saved disk space (old, stopped containers don't accumulate), and convenience (especially useful for short-running tasks or one-off jobs where you don't need the container to persist).

  5. The --name "NAME" option in the Docker command is used to assign a custom name to the container when it is run. This name is easier to reference and manage than using the automatically generated container Name. Example: docker run -d --name "my-container" -p 3000:3000 IMGID . Example Uses: Stopping the container: docker stop my-container (Instead of using the container's ID, you use my-container.)

    Viewing logs: docker logs my-container (Access the logs of the container using its custom name.)

    Removing the container : docker rm my-container

  6. If we execute three commands together ("docker build .", "docker build -t myapp:01 .", "docker build -t myapp:02 ."), then, when you run the docker image ls command, Docker will show a list of all images on your system. It will include:

    the image ID of the first image built (the one with no tag), the tag myapp:01 and its corresponding image ID, and the tag myapp:02 and its corresponding image ID. Here's how the output might look:

    REPOSITORY   TAG      IMAGE ID    CREATED         SIZE
    myapp        01       abc123xyz   2 minutes ago   150MB
    myapp        02       def456abc   1 minute ago    150MB
    <none>       <none>   ghi789xyz   3 minutes ago   150MB

  7. We can use the command docker rmi myapp:02 to remove the Docker image that is tagged as myapp:02

  8. NOTE: Using multiple versions is especially useful when we update our project. Suppose we make some modification in our codebase: we can create a new version of the image, run a container from it at a different port, and then see the changes at the new localhost port (see the sketch below).
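    For example, the flow might look like this (the tag 02 and host port 3001 are just illustrative values):

      docker build -t myapp:02 .                 # build the updated code as a new version
      docker run -d --rm -p 3001:3000 myapp:02   # the old version can keep running on port 3000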

  9. docker pull : This command is used to download (or "pull") a Docker image from a Docker registry. For example:- docker pull python is used to download the official Python image from the Docker Hub to your local machine.

Steps to dockerize a simple python app:-

  1. Created a directory named "demo". Inside it, created a myapp.py file.

  2. Code of myapp.py:

     num1 = int(input("Enter 1st no."))
     num2 = int(input("Enter 2nd no."))
     result = num1 + num2
     print(f"sum of two numbers = {result}")

  3. In the terminal, used the command "docker pull python".

  4. Created a Dockerfile inside the demo directory:

     FROM python
     WORKDIR /myapp
     COPY . .
     CMD ["python", "myapp.py"]

  5. Then in the terminal, used "docker build .". Then used "docker image ls" to see the details of the image.

  6. Then used the command "docker run -it IMAGEID" (-i keeps STDIN open and -t allocates a terminal, which this interactive program needs).

  7. Got the result in the terminal as:

     Enter 1st no.5
     Enter 2nd no.10
     sum of two numbers = 15

Push an Image to DockerHub:

What it means: Upload a Docker image from your computer to DockerHub so others can pull it.

Steps:

  1. Log in to DockerHub using the command docker login, then enter the username and password.

  2. Tag your image using the command docker tag <image_id> username/repository:tag.

  3. Push the image using the command docker push username/repository:tag.

For example: docker tag 34...c4 kishaloy01/test ; docker push kishaloy01/test

Let us see some frequently used Docker commands :-

  1. docker container rename
  • Purpose: Renames an existing container.

  • Example:

      docker container rename old_name new_name
    
    • Changes a container's name from old_name to new_name.

  2. docker container kill
  • Purpose: Immediately stops a running container (forces it to stop).

  • Example:

      docker container kill <container_name_or_id>
    
    • Stops a container abruptly without a graceful shutdown.


  3. docker container pause
  • Purpose: Pauses all processes running inside a container.

  • Example:

      docker container pause <container_name_or_id>
    
    • Freezes a container (like pausing a video).


  4. docker container unpause
  • Purpose: Resumes a paused container.

  • Example:

      docker container unpause <container_name_or_id>
    
    • Restarts the processes in a container that was paused.


  5. docker container prune
  • Purpose: Deletes all stopped containers.

  • Example:

      docker container prune
    
    • Useful for cleaning up stopped containers to save disk space.

DOCKER VOLUME:

Let us now understand the concept of Docker Volume with an example.

Suppose I have a python app which performs this way :-
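A minimal sketch of such an app (a hypothetical reconstruction: it appends each entered name to a file inside /myapp and then prints every name stored so far):

name = input("Enter a name: ")

# append the new name to the list file (created on first run)
with open("names.txt", "a") as f:
    f.write(name + "\n")

# print the final list of all names recorded so far
with open("names.txt") as f:
    print("Final list:", [line.rstrip() for line in f])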

Now, after dockerizing the above app, building its image and running it as "docker run -it --rm IMAGEID", we can execute the program; suppose we enter the name "Ram". After the program ends, if we re-run it (which recreates the container, as --rm was used) and this time enter the name "Raju", then the final list will only show the name Raju instead of both Ram and Raju, because the file lived inside the deleted container. This problem can be solved by the concept of "Docker Volume". So now we can use the command: docker run -it --rm -v myvolume:/myapp/ IMAGEID. Now if we repeat the above process, we will get the names of both Ram and Raju in the final list.

OBSERVATION :

So, a Docker volume is a storage space managed by Docker. It lets you save data outside of a container so that even if the container is deleted, the data remains.

Some commands related to docker volume:
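For instance (myvolume is the volume name used in the example above):

docker volume create myvolume     # create a named volume
docker volume ls                  # list all volumes
docker volume inspect myvolume    # show details such as driver and mount point
docker volume rm myvolume         # remove a volume (fails if a container still uses it)
docker volume prune               # remove all unused volumes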

Docker Bind Mounts :

A bind mount in Docker is a way to link a specific folder or file on your host machine (your computer) to a folder or file inside the container. This means the container can directly use and modify files on your computer.

Key points about bind mounts: the host folder or file must already exist; changes made to files in the container are immediately reflected on the host and vice versa; and they are best for development, where you want real-time updates (e.g., editing source code).

How bind mounts work: you specify a host path (e.g., /my-folder) and a container path (e.g., /app), and Docker connects these paths so the container can see and modify files in the host folder.

Let's see an example to understand this concept:-

We are getting the output Server1 Server2, since that is what servers.txt contains. Now, in our local machine, if we modify the servers.txt file and include Server3 in it, then on running the program the change will get reflected. If we want this reflection to happen even after dockerizing, we need to use the concept of bind mounts. So, we can use a command like: docker run -v /Users/.../Desktop/test/servers.txt:/myapp/servers.txt --rm IMAGEID.

Basically, docker run -v hostpath:containerpath --rm IMAGEID.

OBSERVATIONS :

The servers.txt file consists of :

And the code of servers.py file :

try:
    with open('servers.txt', 'r') as file:
        content = file.readlines()
except Exception as e:
    print(e, type(e))
else:
    for line in content:
        print(f'{line.rstrip()}')

Then we build the Docker image and run the container using the concept of Bind Mounts :

Now if we modify the servers.txt file from our system and include Server 4 ; then the change will get reflected inside the container file also .

Let’s deep dive to understand the concept of Dockerfile better :

1) We know that by default, Docker uses a file named Dockerfile in the current directory to build an image. However, you can name your Dockerfile something else (e.g., CustomDockerfile) and still use it to build your Docker image. To do this, you need to explicitly specify the file using the -f option with the docker build command.


Explanation with Example

Scenario 1: Default Dockerfile

If your file is named Dockerfile, you can simply run:

docker build -t my-image .

Here:

  • -t my-image: Tags the image as my-image.

  • .: Refers to the current directory containing the Dockerfile.

Scenario 2: Custom Dockerfile Name

Suppose you name your Dockerfile MyCustomFile. In this case, you need to specify it with the -f option:

docker build -t my-custom-image -f MyCustomFile .

Here:

  • -f MyCustomFile: Tells Docker to use MyCustomFile as the Dockerfile.

  • -t my-custom-image: Tags the image as my-custom-image.

  • .: Refers to the current directory containing MyCustomFile.

2) When working with a Dockerfile, try to add new instructions at the end of the file instead of inserting them in the middle. This approach saves time because Docker uses caching for previously executed lines. The lines at the top that haven't changed will be executed from the cache, while only the newly added lines will take extra time to build. If you insert instructions in the middle, Docker has to rebuild all the subsequent lines, which increases the build time.


Example:

Initial Dockerfile

FROM python:3.9-slim
RUN touch abc1
RUN touch abc2
RUN touch abc3

Scenario 1: Adding a New Line at the End

Suppose you need to include an additional line. Adding the instruction at the end will use the cache for the top lines:

FROM python:3.9-slim
RUN touch abc1
RUN touch abc2
RUN touch abc3
RUN touch abc4

When you build the image, Docker will:

  1. Use the cache for lines 1-4.

  2. Only execute the new RUN touch abc4 instruction.

This saves time.

Scenario 2: Inserting a Line in the Middle

If you insert the same line in the middle:

FROM python:3.9-slim
RUN touch abc1
RUN touch abc2
RUN touch abc5
RUN touch abc3
RUN touch abc4

Docker will have to rebuild the cache from the point of the new RUN instruction onward. This makes the build slower because all subsequent lines will be re-executed.

3) Difference Between ADD and COPY in Dockerfile

Both ADD and COPY are used to copy files or directories into a Docker image, but they have distinct features.


1. COPY

  • Purpose: For basic file or directory copying.

  • Use Case: When you want to copy files from your local system into the container without any extra processing.

  • Advantages:

    • Simple and straightforward.

    • Faster and less resource-intensive than ADD.

Example:

COPY app.py /app/app.py
  • Copies app.py from your local system to /app in the container.

2. ADD

  • Purpose: Copies files and has additional features like:

    • Unzipping tar files automatically.

    • Supporting remote URLs (downloads files directly).

  • Use Case: When you need the extra functionality, such as extracting tar files or fetching remote files.

  • Disadvantage: Slower than COPY and can lead to unintended behaviors if not used carefully.

Example 1: Copy and Extract Tar File

ADD app.tar.gz /app/
  • Automatically extracts app.tar.gz into the /app/ directory.

Example 2: Download from URL

ADD https://example.com/app.py /app/app.py
  • Downloads app.py from the URL to /app.

4) ENV (Environment Variables)

  • Purpose: Sets environment variables in the container.

  • Use Case:

    • To define configuration values that can be reused in multiple places.

    • To make the container more flexible and customizable at runtime.

Example:

ENV APP_PORT=8080
ENV APP_ENV=production

# Shell form is used here so the variables are expanded;
# exec-form CMD (a JSON array) does not substitute environment variables.
CMD python app.py --port $APP_PORT --env $APP_ENV

LABEL in Dockerfile

The LABEL command is used to add metadata to the Docker image. This metadata can provide information about the image like its version, description, author, and more. It's useful for organizing and managing Docker images.

Use Case:

  • Add useful metadata to the image for better documentation, versioning, and identification.

Example:

LABEL version="1.0"
LABEL description="This is a web application container"
LABEL author="Your Name"
  • Here, the LABEL command adds metadata to the Docker image. It includes the version, description, and author.

Viewing Labels:

  • You can view the labels of an image using:
docker inspect <image-name> | grep "Labels"
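Alternatively, docker inspect can query the labels field directly with a Go template:

docker inspect -f '{{json .Config.Labels}}' <image-name>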

DOCKER NETWORK INTRODUCTION:

Docker network is a virtual network created by Docker to enable communication between Docker containers.

Let us create a network named "my-net", which will consist of a Python container and a MySQL container. So first use the command "docker network create my-net".

Then, using the docker network ls command, we can see the presence of the "my-net" network, which by default will use the "bridge" driver.

Then we will create a MySQL container, using the command "docker run -d --env MYSQL_ROOT_PASSWORD="root" --env MYSQL_DATABASE="userinfo" --name mysqldb --network my-net mysql".

CODE of the app.py file:

import pymysql

# Function to create a connection to the MySQL database
def create_connection():
    return pymysql.connect(
        host="mysqldb",  # Your MySQL server host
        user="root",     # Your MySQL username
        password="root", # Your MySQL password
        database="userinfo"  # Your MySQL database name
    )

# Function to create a table to store usernames if it doesn't exist
def create_table(connection):
    cursor = connection.cursor()
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS usernames (
            id INT AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(255)
        )
    """)
    connection.commit()
    cursor.close()

# Function to insert a name into the database and save it to a file
def insert_name(connection, name):
    cursor = connection.cursor()
    cursor.execute("INSERT INTO usernames (name) VALUES (%s)", (name,))
    connection.commit()
    cursor.close()
    # Append the name to a file
    with open("servers.txt", "a") as file:
        file.write(name + "\n")

# Function to fetch all usernames from the database
def fetch_all_usernames(connection):
    cursor = connection.cursor()
    cursor.execute("SELECT name FROM usernames")
    usernames = [row[0] for row in cursor.fetchall()]
    cursor.close()
    return usernames

# Main function
def main():
    connection = create_connection()
    create_table(connection)

    while True:
        print("\n1. Add a name")
        print("2. Show all usernames")
        print("3. Quit")
        choice = input("Enter your choice: ")

        if choice == "1":
            name = input("Enter a name: ")
            insert_name(connection, name)
            print(f"Name '{name}' added to the database and servers.txt file.")
        elif choice == "2":
            usernames = fetch_all_usernames(connection)
            if usernames:
                print("\nUsernames in the database:")
                for name in usernames:
                    print(name)
            else:
                print("\nNo usernames found in the database.")
        elif choice == "3":
            print("Goodbye!")
            break
        else:
            print("Invalid choice. Please try again.")

    connection.close()

if __name__ == "__main__":
    main()

Code of the Dockerfile :

FROM python
WORKDIR /myapp
COPY . .
RUN pip install pymysql
RUN pip install cryptography
CMD ["python", "app.py"]

Next, we create its image using the docker build . command and then use the docker image ls command to retrieve its IMAGEID:

Then we run that container using the command "docker run -it --rm --network my-net IMAGEID":

So this way we can see the communication between Python and MySQL.

DOCKER COMPOSE:

Docker Compose is a tool that simplifies the process of defining and running multi-container Docker applications.

Let's take an example to understand the need for Docker Compose: in the previous example, we were using the command "docker run -d --env MYSQL_ROOT_PASSWORD="root" --env MYSQL_DATABASE="userinfo" --name mysqldb --network my-net mysql" to start a MySQL container, but it's quite an inefficient way of working. It's difficult to write such a long command every time, and the probability of making an error is high. In such cases, it's better to use Docker Compose.

Code of docker-compose.yml file:

services:
  mysqldb:
    image: 'mysql:latest'
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=userinfo
    container_name: mysqldb

Then, we need to use the docker-compose up command.

We can verify the execution of the above command by checking the presence of the image and container using the docker image ls and docker ps commands, respectively.

To stop this , we can use the docker-compose down command.

NOTE: To run it in detached mode , we can use the command docker-compose up -d

Let’s study about Docker Compose in detail :

First of all, let us DELETE ALL THE EXISTING DOCKER CONTAINERS at once:

We use the command : docker container rm -f $(docker container ls -aq)

RESULT :

Suppose we want to connect WordPress with a MySQL database; then we can use the commands:

Create the MySQL Database Container:

docker run --name db-mysql -e MYSQL_ROOT_PASSWORD=mypassword -d mysql:5.7
  • --name db-mysql: Names the container db-mysql.

  • -e MYSQL_ROOT_PASSWORD=mypassword: Sets the root password for MySQL.

  • -d: Runs the container in detached mode.

  • mysql:5.7: Uses the MySQL 5.7 image.

Then we inspect the container created above (using commands like docker ps and docker container inspect CONTAINER_ID) to get details like its port number (here, 3306) and ip-address (here, 172.17.0.2):

Create the WordPress Container:

docker run -d -p 8000:80 --name my-wordpress -e WORDPRESS_DB_HOST=172.17.0.2:3306 -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=mypassword wordpress

  • -p 8000:80: Maps port 8000 on your system to port 80 in the container.

  • --name my-wordpress: Names the container my-wordpress.

  • The WORDPRESS_DB_* environment variables configure WordPress to connect to the MySQL database.

So next, we inspect the WordPress container to get its ip-address, using docker container inspect e977ef3c0069 :

Then we use the elinks command to hit the ip-address of WordPress: 172.17.0.3

Then we get :

Now, we can do this whole process simply using the concept of docker compose.

So, first of all, we create a docker-compose.yaml file :

For instance :
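A sketch of what this file might look like, mirroring the manual commands above (Compose containers can reach each other by service name, so db-mysql can replace the hard-coded IP 172.17.0.2):

services:
  db-mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
  my-wordpress:
    image: wordpress
    ports:
      - "8000:80"
    environment:
      - WORDPRESS_DB_HOST=db-mysql:3306
      - WORDPRESS_DB_USER=root
      - WORDPRESS_DB_PASSWORD=mypassword
    depends_on:
      - db-mysql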

Then we use the docker-compose up -d command: we see that the containers get created

Then we inspect the wordpress container to get its ip-address : docker container inspect 6184616ec04f

Then we try that on elinks :

And get our result :

Now, let us understand in depth how to write a proper docker-compose file :

For “version”, we can refer:

So suppose the docker --version of our machine is 18.06.0+, then we can use any compose file format of 3.7 or below, but not above 3.7.

Now, let us first clear the content of the existing docker-compose.yaml file; we can do that using the command: sudo truncate -s 0 docker-compose.yaml

Now, let’s write the docker-compose.yaml file like this:
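A minimal sketch of such a file (two nginx services, as used in the following steps; the version field follows the compatibility rule above):

version: "3.7"
services:
  webapp1:
    image: nginx
  webapp2:
    image: nginx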

Then we run it in the detached mode :

Now, suppose we want to do a minor change like adding a port mapping to webapp1; we can simply do it by:
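For instance, only webapp1 gains a ports entry (8000:80, matching the verification in the next step):

version: "3.7"
services:
  webapp1:
    image: nginx
    ports:
      - "8000:80"
  webapp2:
    image: nginx

Running docker-compose up -d again applies the change, recreating only the affected service.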

We can then verify the successful execution of the compose by seeing the presence of port 8000 in the webapp1 container :

Then, suppose we change the port numbers (from 8000:80 to 9000:90); the old containers will get removed and new containers will get created and started:

Now suppose we want to have replicas of these containers; then we can use the scale command (see the sketch below):
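A sketch of the compose file matching the description below (the host port range 9001-9006 lets each of the six webapp1 replicas bind its own port; a scaled service must not set container_name, since each replica needs a unique name):

version: "3.7"
services:
  webapp1:
    image: nginx
    ports:
      - "9001-9006:95"
  webapp2:
    image: nginx

Then the replicas are created with:

docker-compose up -d --scale webapp1=6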

The above docker-compose.yaml file defines two services, webapp1 and webapp2, both using the nginx image.

  • For webapp1, it maps a range of ports (9001-9006) on the host to port 95 inside the container.

  • webapp2 does not have port mapping specified, so it will use the default configuration.

This is the final result :

If we want to stop all the containers which were created using docker-compose, we can use the command: docker-compose stop

The outcome :

We can again start these containers using the command : docker-compose start

We can also restart using the docker-compose restart command.

NOTE : When you want to use a different file name instead of the default docker-compose.yaml, you can specify the file using the -f option.

Example:

docker-compose -f custom-compose.yaml up -d

This command tells Docker Compose to use custom-compose.yaml instead of the default docker-compose.yaml file to start the services.

Let us study the commands related to docker compose (run the command docker-compose --help):

Just like .yaml, we can also use .json files. In that case, the syntax may be like:

docker-compose -f kishaloy.json up -d
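A minimal sketch of what kishaloy.json could contain (Compose accepts JSON because YAML is a superset of it; the content mirrors the earlier YAML examples):

{
  "services": {
    "webapp1": { "image": "nginx" },
    "webapp2": { "image": "nginx" }
  }
}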

Basic Docker Networking Theory Explained :

Docker networking allows containers to communicate with each other, with the host machine, and with external systems.


1. Docker's Default Bridge Network (docker0)

  • When Docker is installed on a host machine, it typically creates a default bridge network (docker0) unless explicitly configured to use only custom networks.

  • Think of docker0 as a switch that connects containers to the host machine and allows them to communicate with each other.

Example:

  • The host machine sends packets to the docker0 interface (suppose ip: 172.17.0.1).

  • When you create containers, they are automatically connected to docker0 and assigned IPs from this range:

    • Container 1: 172.17.0.2/16

    • Container 2: 172.17.0.3/16

  • Containers communicate through docker0. The bridge forwards the packets sent by the host machine to the appropriate container based on its IP (e.g., 172.17.0.2).
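On a Linux host with Docker installed, this can be seen directly:

ip addr show docker0            # the bridge interface, typically 172.17.0.1/16
docker network inspect bridge   # shows the subnet, gateway and connected containers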


2. Accessing Containers from the Host Machine

  • The host communicates directly with containers through the docker0 bridge interface.

  • The docker0 bridge acts as a virtual switch, routing traffic between the host and connected containers.


3. Accessing Containers from an External Machine

  • To access a container from another machine:

    1. Hit the Host Machine's Network Interface Card (NIC):
      First, communicate with the IP address of the host machine's NIC.

    2. Map Host Ports to Container Ports:
      Since there may be multiple containers running on the host, each container is mapped to a unique port on the host.

      • Example: If a container exposes port 80, it might be mapped to host port 8080.
        Access the container using host-machine-ip:8080.

4. Custom Bridge Networks

You can create custom bridge networks to isolate or organize containers into groups.

Example:

  • Create three networks:

    • Dev-team with IP range 10.50.0.0/24

    • Test-team with IP range 10.100.0.0/24

    • Prod-team with IP range 10.200.0.0/24

Process:

  1. By default, containers are connected to docker0.

  2. You can manually connect containers to specific networks:

    • Since the Dev-team network has the subnet 10.50.0.0/24, a container in Dev-team will get an IP like 10.50.0.2/24 and the gateway for this network will get the IP 10.50.0.1/24.

    • Similarly:

      • Test-team: IP 10.100.0.2/24

      • Prod-team: IP 10.200.0.2/24

Result:

  • Containers in one network (e.g., Dev-team) cannot communicate with containers in another network (e.g., Test-team).

  • This provides network isolation.


5. Enabling Cross-Network Communication

If a container in one network needs to communicate with a container in another network (suppose a Dev-team container wants to communicate with a Test-team container) , there are three solutions:

Solution 1: Create a Common Network

  • Create a new network (e.g., common) and connect both the above mentioned containers to this network.

  • They can now communicate through the common network.

Solution 2: Port Mapping

  • Expose specific ports on both containers and map them to the host machine.

  • Example:

    • Container A in Dev-team: Map port 5000 to host-port 9000.

    • Container B in Test-team: Map port 6000 to host-port 9001.

    • Communication happens via the host, e.g., host-ip:9000 and host-ip:9001.

Solution 3: Overlay Networking

  • Overlay networks allow communication between containers across multiple Docker hosts.

  • This is used in advanced scenarios like Docker Swarm. (Will study this later.)

PRACTICAL OF THE ABOVE THEORY:

We can verify the ip of docker0 below :

We can verify the ip of the bridge network from below :

OBSERVATION : The IP of the gateway for the default bridge network matches the IP of docker0, and both are 172.17.0.1 by default. It’s because docker0 is the virtual switch and its IP (172.17.0.1) serves as the gateway for all containers in the default bridge network.

Now, suppose we create a container. Then, we see that by default it becomes a part of the bridge network:

Let us now create a network named “dev-team” : docker network create dev-team

We see that by default, the network is of bridge type and it got the subnet 172.20.0.0/16.

NOTE: If we want, we can create a network with a subnet of our choice:

docker network create prod-team --subnet=10.200.0.0/24

Now, if we create a container on the prod-team network : docker run -d -it --network=prod-team python

OBSERVATION: Hence, we see that if the network has the subnet 10.200.0.0/24, then on creating a container on that network, the container IP becomes 10.200.0.2 and the gateway gets 10.200.0.1.
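This can be verified with a Go template query on the container (CONTAINER is a placeholder for the container name or ID):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} (gateway {{.Gateway}}){{end}}' CONTAINER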

SOME THEORY :

What is Subnet Masking?

  • An IP address consists of two parts:

    1. Network Portion: Identifies the network.

    2. Host Portion: Identifies individual devices (hosts) within the network.

  • The subnet mask determines how many bits are used for the network portion and how many are left for the host portion. The /16 or /24 specifies the number of bits reserved for the network.


Understanding /16 and /24:

  • /16 means the first 16 bits of the IP address are reserved for the network portion.

    • Example: 172.20.0.0/16

      • Network portion: 172.20 (fixed)

      • Host portion: 0.0 (variable)

      • Total host range: 172.20.0.1 to 172.20.255.254 (~65,000 hosts).

  • /24 means the first 24 bits of the IP address are reserved for the network portion.

    • Example: 10.200.0.0/24

      • Network portion: 10.200.0 (fixed)

      • Host portion: 0 (variable)

      • Total host range: 10.200.0.1 to 10.200.0.254 (254 hosts).

Practical of cross-network communication :

Setup: Create Two Networks and Two Containers

  1. Create Two Custom Networks:

    • Dev-team network: IP range 10.50.0.0/24.

    • Test-team network: IP range 10.100.0.0/24.

    docker network create --subnet=10.50.0.0/24 Dev-team
    docker network create --subnet=10.100.0.0/24 Test-team
  2. Run Containers in Separate Networks:

    • Container A in the Dev-team network.

    • Container B in the Test-team network.

    docker run -d --name dev-container --network=Dev-team python:3.9-slim sleep infinity
    docker run -d --name test-container --network=Test-team python:3.9-slim sleep infinity

At this stage, dev-container and test-container cannot communicate because they are in isolated networks.

NOTE: The sleep infinity command keeps the container running indefinitely.


Solution 1: Create a Common Network

  1. Create a Shared Network:

     docker network create common-network
    
  2. Connect Both Containers to the Common Network:

     docker network connect common-network dev-container
     docker network connect common-network test-container
    
  3. Verify Network Connectivity:

    • Use the docker exec command to ping one container from another via the common-network.
    docker exec -it dev-container ping test-container
    docker exec -it test-container ping dev-container

Both commands should succeed, showing that the containers can now communicate through the common-network. (If ping is missing in the slim image, install it inside the container first: apt-get update && apt-get install -y iputils-ping.)

Observation :


Solution 2: Port Mapping

  1. Expose Ports on Each Container (remove the containers from Solution 1 first, since the names dev-container and test-container are already taken):

    • Container A (Dev-team): Expose port 5000, map it to host port 9000.

    • Container B (Test-team): Expose port 6000, map it to host port 9001.

    docker run -d --name dev-container --network=Dev-team -p 9000:5000 python:3.9-slim python -m http.server 5000
    docker run -d --name test-container --network=Test-team -p 9001:6000 python:3.9-slim python -m http.server 6000
  2. Access Containers via the Host:

    • From the host machine, access dev-container on port 9000:

        curl http://localhost:9000
      
    • Access test-container on port 9001:

        curl http://localhost:9001
      

      Observation :

  3. Cross-Container Communication:

    • Container A (Dev-team) accesses Container B (Test-team) through the host:

        docker exec -it dev-container curl http://<host-ip>:9001
      
    • Container B accesses Container A:

        docker exec -it test-container curl http://<host-ip>:9000
      

Replace <host-ip> with your actual host machine IP address. (Like ping, curl is not preinstalled in the slim Python image; install it inside the container first if needed.)

Observations:

Key Takeaways:

  • Solution 1 (Common Network) is straightforward for containers on the same Docker host and avoids port conflicts.

  • Solution 2 (Port Mapping) allows access via the host and is useful when containers are in isolated networks or for external access.

Deleting Docker Networks :

  • You must disconnect the containers from the network before deleting it.

  • You may also simply delete the containers if they’re not that important.

  • If containers are actively using the network, the docker network rm command will fail.

  • Using the syntax docker network disconnect <network_name> <container_name_or_id> :

    Automatically Remove Unused Networks:

      docker network prune
    

    This will automatically remove all unused networks (those without active containers).

    PROOF :

DOCKER SWARM :

The below diagram gives us a good idea about the working of Docker Swarm :

What is Docker Swarm?

Docker Swarm is a tool that helps manage and scale applications across multiple machines. It ensures high availability, load balancing, and resilience in case of failures.


Why Docker Swarm?

  1. Traditional Setup Challenges:

    • Imagine you run an application on a single host machine with 100 containers. You use an ingress load balancer to distribute traffic among these containers.

    • This works well until the host machine fails. If the host machine is destroyed, your application goes down entirely.

  2. The Solution – Docker Swarm:

    • Docker Swarm creates a cluster of machines (nodes).

    • The cluster can include:

      • Manager nodes: These control and manage the cluster.

      • Worker nodes: These run the containers.

    • If one node fails, its containers are moved to other nodes in the cluster, keeping your application running.


How Does Docker Swarm Work?

  1. Nodes in the Cluster:

    • A node is an individual machine in the cluster (can be physical, virtual, or cloud-based like AWS EC2).

    • Each node can host multiple containers.

  2. Manager and Worker Nodes:

    • Manager nodes: Handle the cluster's overall management (e.g., deploying apps, monitoring health).

    • Worker nodes: Execute tasks (run the containers).

    • A manager node can also perform worker tasks.

  3. Leader Election:

    • When there are multiple manager nodes, one becomes the leader through an election process.

    • If the leader fails, a new leader is chosen automatically.


Features of Docker Swarm:

  1. Resilience:

    • If a node goes down, its containers are automatically transferred to other nodes in the cluster.
  2. Load Balancing:

    • Each node has an ingress load balancer to evenly distribute traffic among the containers.
  3. Geographic Security:

    • Nodes can be placed in different locations (e.g., 2 in India, 2 in Singapore) for redundancy and security.
  4. Replication:

    • If traffic increases, Swarm can create replicas of containers to handle the load.

How Traffic is Managed:

  1. Example Setup:

    • Your website: sanjaydahiya.com

    • Public IP: 54.10.100.102

    • Traffic hits a main load balancer at this IP.

    • The load balancer directs traffic to four systems (nodes) in the Swarm:

      • System 1: 20 containers

      • System 2: 30 containers

      • System 3: 20 containers

      • System 4: 30 containers

  2. Load Balancing Within Nodes:

    • Each system has its own load balancer to distribute traffic among its containers.
  3. Automatic Scaling:

    • If System 3’s containers are overloaded, Swarm can replicate more containers on any available node.

PRACTICAL :

Step 1: Set Up Docker Swarm

  1. Initialize Docker Swarm on a Manager Node: Run this on your main machine (I’m running on an ec2 instance named manager-node), which will act as the Manager Node:

     docker swarm init
    
    • The output will include a join token for adding worker nodes.

  2. Add Worker Nodes: On other machines (in my case, ec2 instances), run the join command from Step 1:

     docker swarm join --token <TOKEN> <MANAGER_NODE_IP>:2377
    
    • Replace <TOKEN> and <MANAGER_NODE_IP> with the values from the swarm init output.

    • BUT when I did the above step, I got an issue:

      The reason for the above issue is that the security group associated with the manager node isn't updated. So we need to manually change the settings of the manager-node ec2 instance and allow inbound traffic on port 2377 (TCP). We can do it like:

      After doing the above changes , we get our desired result :

Check Nodes: On the manager node, verify the nodes in the swarm:

    docker node ls

We see :

In the above picture, the first two are worker nodes and the third one is the manager node.

On the worker node, we can’t view or modify cluster state :

Step 2: Deploy a Service with Replicas

  1. Deploy a Simple Web Application: Let’s create a simple NGINX service with 5 replicas:

     docker service create --name web-service --replicas 5 -p 8080:80 nginx
    
    • --replicas 5: Creates 5 instances (replicas) of the service.

    • -p 8080:80: Maps port 80 in the container to port 8080 on the host.

  2. Verify the Service: Check the status of the service:

     docker service ls
    

    List all the tasks (replicas) of the service:

     docker service ps web-service
    

    From the above picture, we observe that the “web-service.3” replica is running on manager-node. The replicas “web-service.1” and “web-service.4” are running on worker-node1. The replicas “web-service.2” and “web-service.5” are running on worker-node2.

  3. Access the Application: Open elinks and go to http://<MANAGER_NODE_IP>:8080. You’ll see the default NGINX page:

Step 3: Simulate Node Failure

  1. Simulate Node Failure: Stop Docker on one of the worker nodes hosting the replicas:

     sudo systemctl stop docker

  2. Observe Replica Redistribution: Check the service status again on the manager node:

     docker service ps web-service

    You'll notice that replicas on the failed node are redistributed to other nodes.

  3. OBSERVATIONS:

Suppose we stop and disable docker at worker-node1; then we see:

The node status of worker-node1 in the cluster is Down. Also, we see that the web-service replicas which were earlier running at worker-node1 are now running at manager-node.

Now, if we want to force a redeployment of all tasks across the swarm, we can use the command:

sudo docker service update --force web-service

On doing that , we get:

Now, we see above that web-service.1 and .2 are running at worker-node2, and web-service.3, .4 and .5 are running at manager-node.

Suppose I create a new node in the cluster as worker-node3 and again force a redeployment; then we see:

Step 4: Scaling the Service

  1. Increase the Number of Replicas:

     docker service scale web-service=10
    

  2. Verify the Updated Replicas:

     docker service ps web-service
    

REMOVING NODES FROM THE CLUSTER:

  1. For a Worker Node: To remove a worker node from the swarm:

     sudo docker swarm leave
    

    This will remove the node from the cluster.

  2. For a Manager Node: If you want to remove a manager node, you must ensure it is not the leader of the swarm. If it is the leader, demote it first:

     sudo docker node demote <NODE_NAME>
    

    Then, remove it:

     sudo docker swarm leave
    
  3. Force Removal: If the node is stuck or unable to leave the swarm gracefully, you can force it:

     sudo docker swarm leave --force
    

    This forcibly removes the node from the cluster.