How do I configure Nginx for a multi-part project?

Hello,
I have a microservice project (Node.js) where each part is in a separate directory. I want to run this website through Nginx. By default, Nginx uses the /usr/share/nginx/html directory. I have two questions:

1- Should I copy all the project files to the /usr/share/nginx/html directory?

2- Should I create a directory for each project in the /usr/share/nginx/html directory and then use the location block in the Nginx configuration file?

Thank you.

Have you read NGINX Tutorial: How to Deploy and Configure Microservices - NGINX?

Hi,
Maybe my question was a little vague. Consider a website called example.com. At example.com/part1, could part1 be a microservice?

I do not think so. But part1.example.com could be an independent microservice…


Hi,
Thank you so much for your reply.
You're right.
I have some questions:

1- Can I run microservices on my system for testing? What should I use instead of part1.example.com? Can I use something like part1.localhost.localhost?

2- I have several microservices running in containers as follows:

$ docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED          STATUS          PORTS                                       NAMES
cfd3dda07cfb   yaml-Part1           "docker-entrypoint.s…"   20 hours ago     Up 20 hours     0.0.0.0:3001->3000/tcp, :::3001->3000/tcp   Microservice-1
2b1f111e106d   yaml-Part2           "docker-entrypoint.s…"   20 hours ago     Up 20 hours     0.0.0.0:3000->3000/tcp, :::3000->3000/tcp   Microservice-2
a50ed19dc560   yaml-Part3           "docker-entrypoint.s…"   20 hours ago     Up 20 hours     0.0.0.0:3002->3000/tcp, :::3002->3000/tcp   Microservice-3

Is the following configuration correct?

worker_processes 1;

events { worker_connections 10000; }

http {

  sendfile on;

  gzip              on;
  gzip_http_version 1.0;
  gzip_proxied      any;
  gzip_min_length   999;
  gzip_disable      "MSIE [1-6]\.";
  gzip_types        text/plain text/xml text/css
                    text/comma-separated-values
                    text/javascript
                    application/x-javascript;

  # List of application servers
  upstream Part1-Micro {
    server yaml-Part1.localhost.localhost:3000;
  }

  upstream Part2-Micro {
    server yaml-Part2.localhost.localhost:3001;
  }

  upstream Part3-Micro {
    server yaml-Part3.localhost.localhost:3002;
  }

  server {
    listen 80;
    server_name default_server;
    error_log  /var/log/nginx/error.system-default.log;
    access_log /var/log/nginx/access.system-default.log;
    charset utf-8;

    location /Part1 {
      proxy_pass         http://Part1-Micro;
      proxy_redirect     off;
      proxy_set_header   Host $host;
      proxy_set_header   X-Real-IP $remote_addr;
      proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header   X-Forwarded-Host $server_name;
    }

    # Proxying the Communities API
    location /Part2 {
      proxy_pass         http://Part2-Micro;
      proxy_redirect     off;
      proxy_set_header   Host $host;
      proxy_set_header   X-Real-IP $remote_addr;
      proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header   X-Forwarded-Host $server_name;
    }

    # Proxying the Devices API
    location /Part3 {
      proxy_pass         http://Part3-Micro;
      proxy_redirect     off;
      proxy_set_header   Host $host;
      proxy_set_header   X-Real-IP $remote_addr;
      proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header   X-Forwarded-Host $server_name;
    }
  }
}

I am not familiar with interpreted microservices, but my guess is that to test locally you have to specify the port: http://127.0.0.1:3000 if the service is running on port 3000. The name does not matter if you are not running via DNS.
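
For example, assuming Nginx runs directly on the Docker host (not inside a container), the upstream blocks above could simply point at the ports your docker ps output shows as published - a rough sketch only, not something I have tested with your setup:

  upstream Part1-Micro {
    server 127.0.0.1:3001;   # Microservice-1 is published as 0.0.0.0:3001->3000
  }
  upstream Part2-Micro {
    server 127.0.0.1:3000;   # Microservice-2 is published as 0.0.0.0:3000->3000
  }
  upstream Part3-Micro {
    server 127.0.0.1:3002;   # Microservice-3 is published as 0.0.0.0:3002->3000
  }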


Hello,
Should each microservice have its own IP address? For example:

 1.2.3.4    foo.example.org        foo
 5.6.7.8    bar.example.org        bar

Should the host on which the microservice is running have several IP addresses?

Not for testing - only different ports. And for production you use different “subdomains”, like part2.example.com, part3.example.com.

Not necessary. You can host all microservices on ONE single VPS. Look at microservices as different web sites - sort of.

One IP address, and each site running on a different port. Nginx redirects the incoming requests to the correct port. Read more…

server {
    server_name go4webdev.org www.go4webdev.org;

    location / {
        proxy_pass http://localhost:9090;
    }
}

server {
    server_name static.go4webdev.org;

    location / {
        proxy_pass http://localhost:9091;
    }
}

server {
    server_name table.go4webdev.org;

    location / {
        proxy_pass http://localhost:9092;
    }
}

Hello,
Thanks again.
Other questions were raised to me:

1- Was my configuration wrong? Your configuration is different from mine.

2- So, if I can start all the microservices on the same IP address, then when I ping part1.example.com and part2.example.com they should have the same IP address. Right?

3- Is the /etc/hosts file below correct?

 1.2.3.4    part1.example.org        part1
 1.2.3.4    part2.example.org        part2

4- How do I browse those addresses? curl http://localhost:80/part1?

I use a compiled language with a built-in engine. I see you use Node.js, so I guess you can run several Node applications on different ports. Then it should not be that different from my Nginx settings.

Your VPS (or server) is like a telephone. You call the main number (the IP address), and to talk to the right person you ask the switchboard (Nginx) to connect you to the correct port (the direct number).

I have never used /etc/hosts in this way before. IDK.

Have you tried using http://serverip:portnumber in your browser instead of curl? Or you can try to confirm that Node is listening on a port on Debian (if you do not use Debian or Ubuntu, search for the exact command):

nc -zv 127.0.0.1 7004
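
If you do want to hit the name-based server blocks without real DNS, one option (not something I use myself) is to let curl send the Host header explicitly, using the names from your /etc/hosts example:

curl -H "Host: part1.example.org" http://127.0.0.1/
curl -H "Host: part2.example.org" http://127.0.0.1/

Nginx chooses the server block by that Host header, so each command should land on a different proxy_pass.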

Hello,
I changed the configuration as below:

server {
    server_name server1;
    listen 80;
    error_log  /var/log/nginx/error.system-default.log;
    access_log /var/log/nginx/access.system-default.log;
    charset utf-8;
                   
    location / {
     proxy_pass http://localhost:3000;
     proxy_http_version 1.1;
     proxy_set_header Upgrade $http_upgrade;
     proxy_set_header Connection 'upgrade';
     proxy_set_header Host $host;
     proxy_cache_bypass $http_upgrade;
    }
}

server {
    server_name server2;
    listen 80;
    error_log  /var/log/nginx/error.system-default.log;
    access_log /var/log/nginx/access.system-default.log;
    charset utf-8;
                      
    location / {
     proxy_pass http://localhost:3001;
     proxy_http_version 1.1;
     proxy_set_header Upgrade $http_upgrade;
     proxy_set_header Connection 'upgrade';
     proxy_set_header Host $host;
     proxy_cache_bypass $http_upgrade;
    }
}

I tried to see the microservices:

# curl localhost
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.25.4</center>
</body>
</html>

What is wrong? If I use curl localhost:3000, then the request is handled by Node.js directly and never goes through Nginx!

If you use a terminal on the server and send curl localhost:3000, it calls Node directly without using the internet. If you call it from your computer's browser, you use the internet to contact Nginx, which passes the request to Node on port 3000.

localhost never touches the internet, and hence never touches Nginx.

I am playing with microservices that communicate on the same server using only localhost. Faster and less traffic (no internet or Nginx involved). And way safer.
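
Roughly, the two paths look like this (<server-ip> stands for whatever address your server has on the network):

# run on the server itself: goes straight to the Node app, Nginx is never involved
curl http://localhost:3000

# run from your own computer: hits Nginx on port 80, which then proxies to the app on port 3000
curl http://<server-ip>/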

Hi,
Thanks again.
The container configuration is as follows:

services:
  nodejs:
    container_name: Node-Micro
    build:
      context: .
      dockerfile: Dockerfile
    command: npm start
    volumes:
      - ./www:/usr/src/app
    expose:
      - "3000"
    ports:
      - '3000:3000'
  nginx:
    image: nginx:latest
    container_name: Nginx-Micro
    ports:
      - '80:80'
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
      - ./www:/usr/share/nginx/html
    depends_on:
      - nodejs
    links:
      - nodejs

If I use the server address along with the port, the request still goes to Node.js.

The Nginx log is:

[error] 31#31: *7 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: server1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "localhost"

I edited /etc/hosts and used 127.0.0.1 server1.

Doesn't Docker complicate things before you even get it to work without containers? What is the purpose of containers in this phase? Would it not be simpler to add containers later? Just wondering… :slight_smile:

Do you think this problem could be caused by the container? If so, I will install Nginx on the host and test with the same configuration.

Each layer of complexity makes most things harder to grasp. I tend to make things extremely simple at the beginning in order to understand, and then I add complexity… :slight_smile:


Hi,
I installed Nginx on my host. I created an /etc/nginx/conf.d/default.conf file with the following contents:

server {
    server_name server1;
    listen 80;
    error_log  /var/log/nginx/error.system-default.log;
    access_log /var/log/nginx/access.system-default.log;
    charset utf-8;
                   
    location /s1 {
     proxy_pass http://127.0.0.1:3000;
     proxy_http_version 1.1;
     proxy_set_header Upgrade $http_upgrade;
     proxy_set_header Connection 'upgrade';
     proxy_set_header Host $host;
     proxy_cache_bypass $http_upgrade;
    }
}

server {
    server_name server2;
    listen 80;
    error_log  /var/log/nginx/error.system-default.log;
    access_log /var/log/nginx/access.system-default.log;
    charset utf-8;
                      
    location /s2 {
     proxy_pass http://127.0.0.1:3001;
     proxy_http_version 1.1;
     proxy_set_header Upgrade $http_upgrade;
     proxy_set_header Connection 'upgrade';
     proxy_set_header Host $host;
     proxy_cache_bypass $http_upgrade;
    }
}

Then I restarted the Nginx service:

# systemctl restart nginx

Then I checked the configuration file:

# /sbin/nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

The index.js file is as follows:

const http = require('http')

const server = http.createServer((req, res) => {
	// Note: the status is always 200 here, even for the "404" branch below
	res.writeHead(200, { 'content-type': 'text/html' })

	if (req.url === '/index.js') {
		res.write('<h1>Node and Nginx on Docker is Working</h1>')
		res.end()
	} else {
		res.write('<h1>404 Nout Found</h1>')
		res.end()
	}
})

// Backticks are needed so the template literal actually interpolates the port
server.listen(process.env.PORT || 3000, () => console.log(`server running on ${server.address().port}`))
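
The PORT fallback on the last line also makes it possible to start a second copy for the 3001 upstream, for example:

node index.js              # first instance, defaults to port 3000
PORT=3001 node index.js    # second instance, for the proxy_pass http://127.0.0.1:3001 block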

I tried to see the Node.js applications:

# curl http://localhost
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.22.1</center>
</body>
</html>
#
# curl http://server1/s1
<h1>404 Nout Found</h1>
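
One thing worth noting: with location /s1 { proxy_pass http://127.0.0.1:3000; } the upstream still receives the original /s1 URI, so the req.url === '/index.js' check never matches and the 404 page above actually comes from Node, not Nginx. If the intent is for /s1/index.js to arrive at Node as /index.js, a trailing slash on both the location and the proxy_pass strips the prefix - an untested sketch:

    location /s1/ {
     proxy_pass http://127.0.0.1:3000/;   # the matched /s1/ prefix is replaced by /
    }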

It seems to work, but…

but…?

Make a hello world 1 and a hello world 2 and try to contact them:

  1. Same computer (localhost:port).
  2. Then remote (http://externalip:port).
  3. Finally you call via DNS (if set up) http://hello1.example.com
  4. If you add SSL support, you use https://hello1.example.com (a rough server block for this is sketched right after this list).
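
For step 4, a minimal HTTPS server block could look roughly like this (assuming certificates from certbot/Let's Encrypt; the name and the paths are only illustrative):

server {
    listen 443 ssl;
    server_name hello1.example.com;

    ssl_certificate     /etc/letsencrypt/live/hello1.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/hello1.example.com/privkey.pem;

    location / {
        proxy_pass http://localhost:9090;   # same idea as the port-based examples above
    }
}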

And after you have configured the DNS, modify
server_name server1
to
server_name server1.example.com etc

Some microservices you may want to reach via the internet (DNS). Other microservices that are only called internally need neither Nginx nor DNS. Just call them internally via

fetch('http://127.0.0.1:3000')

Or whatever you are used to for GET requests…


Hi,
I’m really thankful for your time and help.
The problem with the containers was as follows:

1- I had to define a name for each container using hostname. For example:

hostname: App1

2- I changed the Nginx configuration as follows:

proxy_pass http://App1:3000;

Problem solved.
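
Combined, the fix looks roughly like this (an excerpt based on the compose file earlier in the thread - inside the Nginx container, localhost is the Nginx container itself, so the proxy has to target the Node container by its hostname on the compose network):

services:
  nodejs:
    container_name: Node-Micro
    hostname: App1            # resolvable from the Nginx container
    build:
      context: .
      dockerfile: Dockerfile
    command: npm start
    expose:
      - "3000"
  nginx:
    image: nginx:latest
    container_name: Nginx-Micro
    ports:
      - '80:80'
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - nodejs

And in default.conf:

location / {
    proxy_pass http://App1:3000;   # not localhost - that would point back at the Nginx container itself
}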
