Install Nginx Docker






Overview

Docker, known for its simple and convenient way of deploying applications, is the go-to option for many developers and organisations. With its container-based technology and built-in features such as Docker Compose, images, volumes and Docker Hub, it streamlines the development process: you can define the infrastructure, along with volumes, networks and applications, in a single file. It also offers the flexibility to run multiple applications on a single host, each isolated in its own container.

In this article, we will create a basic working environment with Nginx as the web server and MySQL as the database, each running in its own Docker container. The entire stack, including the PHP, Nginx and MySQL configuration files, will be defined in a single docker-compose file.

Prerequisites

  • An Ubuntu 18.04 server with a sudo-enabled (non-root) user.
  • Docker and Docker Compose installed. If you haven’t installed them yet, see How to Install Docker on Ubuntu 18.04.

Step 1: Creating the project folders

You need to create the following files and directories to get started with the demo project.

Folder Overview:
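The original listing did not survive in this copy; a layout consistent with the steps below (the docker-demo project root is an assumed name) would be:

    docker-demo/
    ├── docker-compose.yml
    ├── php.ini
    ├── nginx/
    │   └── site.conf
    └── webcontent/
        └── index.php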

Step 2: Creating the required configuration files

i) Nginx site configuration

Create the nginx folder and move into it using the commands below.
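Assuming the docker-demo project root from Step 1:

    mkdir nginx
    cd nginx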

Next, open the site.conf file using the command below.
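    nano site.conf

The configuration block itself was lost in this copy. A minimal site.conf consistent with this setup might look like the following; the docker-demo.com server name comes from Step 6, the php:9000 address from the note below, and the web root path is an assumption matching the compose file sketched in Step 3:

    server {
        listen 80;
        server_name docker-demo.com;
        root /var/www/html;
        index index.php index.html;

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # Hand .php requests to the PHP-FPM container
        location ~ \.php$ {
            fastcgi_pass php:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }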


Note: It is the fastcgi_pass php:9000; directive that tells Nginx how to connect to the PHP container.

ii) Creating the index.php file:

To get started with the webcontent folder, type:
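Assuming you are still inside the nginx folder from the previous step:

    cd ..
    mkdir webcontent
    cd webcontent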

Next, create a file called index.php.
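A standard way to do this:

    touch index.php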

To open the created index.php file, type:
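    nano index.php

The file contents were lost in this copy; a minimal test page (phpinfo() is an assumed choice, any PHP code will do) is:

    <?php
    // Confirms that Nginx is passing .php requests to the PHP-FPM container
    phpinfo();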

iii) Creating the php.ini file:

Since the php.ini file is copied into the PHP container, we can make changes directly in the local php.ini file. Afterwards, just restart the container for the changes to be applied.
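The php.ini contents did not survive here; any standard directives will work. An illustrative example (the values are assumptions, not recommendations):

    ; Illustrative overrides - adjust to your needs
    upload_max_filesize = 16M
    post_max_size = 16M
    memory_limit = 256M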

If you make changes to the PHP code in the index.php file, however, you do not need to restart the container; the changes are applied as soon as the browser page reloads. Note that “depends_on” in the compose file prevents a container from starting before the containers it depends on.


Step 3: Creating the docker-compose.yml file

In the docker-compose file, we are going to define the following services:

  • Nginx
  • PHP
  • MySQL

Next, open the file using the command below.
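From the project root:

    nano docker-compose.yml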

Note: Stick closely to the syntax to avoid errors; YAML is case-sensitive and indentation matters.
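The compose file itself did not survive this copy. A minimal version consistent with the rest of the article might look like the following; the image tags, root password and mount paths are assumptions, and the db service name is reused in Step 4:

    version: '3'

    services:
      nginx:
        image: nginx:latest
        ports:
          - "80:80"
        volumes:
          - ./nginx/site.conf:/etc/nginx/conf.d/default.conf
          - ./webcontent:/var/www/html
        depends_on:
          - php

      php:
        image: php:7.2-fpm
        volumes:
          - ./webcontent:/var/www/html
          - ./php.ini:/usr/local/etc/php/php.ini

      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: example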

Step 4: Proceeding with data persistence

Docker comes with a number of flexible features that let most supporting tasks be handled within Docker itself. One powerful offering is Docker Volumes, which lets you persist databases, configuration files, applications and so on. In short, Docker Volumes provide backup and persistence beyond a container’s lifecycle.

To set up MySQL database persistence with a Docker volume, go to the docker-compose file and, under the db service, define a volume called dbdata.

Next, add dbdata to the docker-compose file as shown below:

Note: Include these lines at the end of the docker-compose file. Once included, the file will look like the following.
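Under the assumptions from the sketch in Step 3, the db service and the end of the file would then look like this (db stays indented under the services key; the top-level volumes key goes at the very end):

      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: example
        volumes:
          - dbdata:/var/lib/mysql

    volumes:
      dbdata: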

Step 5: Running the docker container

First, we will bring up the stack with docker-compose, then check the container status, ports and shell access.

The compose file can be run with either of two commands.

i) docker-compose up

Running docker-compose up starts the stack in the foreground and streams verbose logs to the terminal.

ii) docker-compose up -d

docker-compose up -d starts the stack in detached mode, without streaming logs to the terminal.

Here, we are proceeding with the latter command.
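That is:

    docker-compose up -d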

Note: If the images are already available locally, Docker uses them directly. If not, it downloads them from Docker Hub (you can also fetch them in advance with docker pull).

Output

After the images have been downloaded from Docker Hub, the containers will automatically be up and running.

Note: If the images are already available locally, you will see the following output directly.

Output

To check the docker container status, type:
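The command itself is missing in this copy; the standard way to list running containers is:

    docker ps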

Output


To check the docker image status, type:
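Again, the command was lost; the standard way to list local images is:

    docker images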

Output

Step 6: Adding the domain to /etc/hosts

You need to map the domain either to the container’s IP address or to localhost in /etc/hosts. Here, we map it to localhost.

Open /etc/hosts using the following command.
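For example (nano is an assumed editor choice):

    sudo nano /etc/hosts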

Note: Use the same domain, docker-demo.com, as in the Nginx site configuration.
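The entry to add, mapping the domain to localhost:

    127.0.0.1    docker-demo.com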

Step 7: Verification

To get a shell directly inside a Docker container, enter the command below.
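The command block is missing here; using the container name from your docker ps output (the name below is a placeholder):

    docker exec -it <nginx-container-name> bash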

Then hit the domain you added to /etc/hosts: http://docker-demo.com

Conclusion

To emphasise, it is Docker that plays the vital role in simplifying the development process here. Setting up a stack with Nginx as the web server, MySQL as the database and the required PHP configuration files is normally not a one-step process. With Docker Compose, which lets you create multiple containers, you can define the infrastructure along with the required configuration files in a single file and bring it all up with a single command. Want to read more about Docker and its many capabilities? Read our blog post on A Brief Introduction to Docker and Its Terminologies.

Testing Nginx Ingress Routing Locally with Docker Desktop

Overview

I was recently diagnosing an issue at work where a service was configured with multiple differing ingress resources. The team’s reasoning for this was entirely reasonable and, above all, everything was working as expected.

However, once we tried to abandon Azure Dev Spaces and switch to Bridge to Kubernetes (“B2K”) it was quickly discovered that this setup wasn’t going to work straight out of the box - B2K doesn’t support multiple ingresses configured with the same domain name. The envoy proxy reports the following error:


As a result, I decided the best course of action was to understand the routing that the team had enabled, and work out a more efficient way of handling the routing requirements using a single ingress resource.

To make this as simple as possible, I decided to get a sample service up and running locally so I could verify scenarios locally without having to deploy into a full cluster.

Docker Desktop

I’m using a Mac at the moment, but most (if not all) of the commands here will work on Windows too, especially under WSL2 rather than PowerShell.

The Docker Desktop version I have installed is 2.4.0.0 (stable) and is the latest stable version as of the time of writing.

I have the Kubernetes integration enabled already, but I had a version of Linkerd running there which I didn’t want to interfere with what I was doing. To get around this, I just used the Docker admin GUI to “reset the Kubernetes cluster”:


To install Docker Desktop, if you don’t have it installed already, go to https://docs.docker.com/desktop/ and follow the instructions for your OS.

Once installed, ensure that the Kubernetes integration is enabled.

Note: You don’t need to enable “Show system containers” for any of the following steps to work.

Now you should be able to verify that your cluster is up and running:

Note: I’ve aliased kubectl to k, simply out of laziness efficiency.
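The command block did not survive this copy; given the description that follows, it was presumably:

    k get pods --all-namespaces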

This will show all pods in all namespaces:

Install Nginx

Now that we have a simple one-node cluster running under Docker Desktop, we need to install the Nginx ingress controller:
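The exact command from the original was lost. The ingress-nginx project publishes versioned manifests of this general form; the pinned version below is an assumption from around the time of writing, so check the project’s documentation for the current one:

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.40.2/deploy/static/provider/cloud/deploy.yaml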

Tip: It’s not best practice to just blindly install Kubernetes resources by using yaml files taken straight from the internet. If you’re in any doubt, download the yaml file and save a copy of it locally. That way, you can inspect it and ensure that it’s always consistent when you apply it.

This will install an Nginx controller in the ingress-nginx namespace:

Routing

Now that you have installed the Nginx controller, you need to make sure that any deployments you make use a service of type NodePort rather than the default ClusterIP:
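A minimal sketch of such a service (all names and ports are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: sample
    spec:
      type: NodePort
      selector:
        app: sample
      ports:
        - port: 80
          targetPort: 80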

Domains

I’ve used a sample domain of chart-example.local in the Helm charts for this repo. In order for this to resolve locally you need to add an entry to your hosts file.

On a Mac, edit /private/etc/hosts. On Windows, edit C:\Windows\System32\drivers\etc\hosts. Add the following line at the end:
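The line itself is missing in this copy; it maps the sample domain to localhost:

    127.0.0.1    chart-example.local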

Now you can run a service on your local machine and make requests to it using the ingress routes you define in your deployment.

The rest of this article describes a really basic .NET Core application to prove that the routing works as expected. .NET Core is absolutely not required - this is just a simple example.

Sample Application

The concept for the sample application is a simple one.

  • There will be three different API endpoints in the app:
    • /foo/{guid} will return a new foo object in JSON
    • /bar/{guid} will return a new bar object in JSON
    • / will return a 200 OK response and will be used as a liveness and readiness check

The point we’re trying to prove is that API requests to /foo/{guid} resolve correctly to the /foo/* route, and requests to /bar/{guid} resolve correctly to the /bar/* route.

The following requests should return the expected results:
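The request block was lost; it would have been along these lines (the GUID is illustrative):

    curl http://chart-example.local/foo/6d3b0f3a-9c1e-4a9b-8f2d-1c2e3f4a5b6c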

This should return an object matching the following:
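The original response was not preserved; illustratively (the exact shape is an assumption), something like:

    {
      "id": "6d3b0f3a-9c1e-4a9b-8f2d-1c2e3f4a5b6c",
      "type": "foo"
    }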

Similarly, a request to the /bar/* endpoint:
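Again with an illustrative GUID:

    curl http://chart-example.local/bar/0f9e8d7c-6b5a-4c3d-9e1f-0a9b8c7d6e5f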

This should return an object matching the following:
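And, again assuming the shape, something like:

    {
      "id": "0f9e8d7c-6b5a-4c3d-9e1f-0a9b8c7d6e5f",
      "type": "bar"
    }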

The sample code for this application can be found at https://github.com/michaelrosedev/sample_api.

Dockerfile

The Dockerfile for this sample application is extremely simple - it simply uses the .NET Core SDK to restore dependencies and then build the application, then uses a second stage to copy the build artifacts into an alpine image with the .NET Core runtime.

Note: This image does not follow best practices - it simply takes the shortest path to get a running service. For production scenarios, you don’t want to be building containers that run as root and expose low ports like 80.
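A sketch of such a two-stage Dockerfile; the .NET Core version and the SampleApi.dll assembly name are assumptions, so see the linked repo for the real one:

    # Stage 1: restore and publish using the .NET Core SDK image
    FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
    WORKDIR /src
    COPY . .
    RUN dotnet restore
    RUN dotnet publish -c Release -o /app

    # Stage 2: copy the build artifacts into the Alpine runtime image
    FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine
    WORKDIR /app
    COPY --from=build /app .
    EXPOSE 80
    ENTRYPOINT ["dotnet", "SampleApi.dll"]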

Helm

The Helm chart in this repo was generated automatically with mkdir helm && cd helm && helm create sample. I then made the following changes:

  • Added a namespace value to the values.yaml file
  • Added usage of the namespace value in the various Kubernetes resource manifest files to make sure the application is deployed to a specific namespace
  • Changed the image.repository to mikrose/sample (my Docker Hub account) and the image.version to 1.0.1 (the latest version of my sample application)
  • Changed service.type to NodePort (because ClusterIP won’t work without a load balancer in front of the cluster)
  • Enabled the ingress resource (because that’s the whole point of this exercise)
  • Added the paths /foo and /bar to the ingress in values.yaml:
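The snippet itself was lost; in the scaffold generated by helm create, it would look roughly like this (the host comes from the Domains section, and the exact values.yaml schema depends on your chart version):

    ingress:
      enabled: true
      hosts:
        - host: chart-example.local
          paths:
            - /foo
            - /bar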

Namespace

All the resources in the service that has the issue (see Overview) are in a dedicated namespace, and I want to reflect the same behaviour here.

The first thing I need to do, then, is create the desired namespace (sample) in my local cluster:
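Using the k alias from earlier:

    k create namespace sample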

This will create a new namespace called sample in the cluster.

Now we can install the Helm chart. Make sure you’re in the ./helm directory, then run the following command:

  • helm install {name} ./{chart-dir} -n {namespace}, i.e. helm install sample ./sample -n sample

This will install the sample application into the sample namespace.

You can then verify that the pod is running:
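The command was lost here; checking the pods in the sample namespace:

    k get pods -n sample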

Tip: If you need to do any troubleshooting of 503 errors, first ensure you have changed your service to use a service.type of NodePort. Ask me how I know this…

Now you can make requests to your service and verify that your routes are working as expected:
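The request examples did not survive; they would be like the earlier ones (uuidgen just generates a fresh GUID):

    curl http://chart-example.local/foo/$(uuidgen)
    curl http://chart-example.local/bar/$(uuidgen)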

And that’s it - we now have a working ingress route that we can hit from our local machine.

That means that it should be straightforward to configure and experiment with routing changes without having to resort to deploying into a full cluster - you can speed up your own local feedback loop and keep it all self-contained.

I will now be using this technique to wrap up an existing service and optimise the routing.