How to Set Up a Reverse NGINX Proxy on Alibaba Cloud

    Lucero del Alba

    This article was created in partnership with Alibaba Cloud. Thank you for supporting the partners who make SitePoint possible.


    Need to serve many websites from a single Linux box, while optimizing resources and automating the site launch process? Let’s get serious then, and set up a production-ready environment using Ubuntu, NGINX, and Docker — all of it on Alibaba Cloud.

    This is a somewhat advanced tutorial, and we’ll assume some knowledge of networking, server administration, and software containers.

    Understanding the Scenario

    If you’re looking at this guide, chances are that you need to manage a cluster of servers or a growing number of websites — if not both — and are wondering what your options are for a secure, performant, and flexible environment. Well then, you’ve come to the right place!

    Why a Reverse Proxy

    In a nutshell, a reverse proxy takes a request from a client (normally from the Internet), forwards it to a server that can fulfill it (normally on an Intranet), and finally returns the server’s response back to the client.

    Those making requests to the proxy may not be aware of the internal network.

    So the reverse proxy is the “public face” sitting at the edge of the app’s network, handling all of the requests; in that sense, it’s similar to a load balancer. But while implementing a load balancer only makes sense when you have multiple servers, you can deploy a reverse proxy with just one web server hosting multiple sites, and this is particularly useful when those sites have different configuration requirements.

    There are some benefits to this approach:

    • Performance. A number of web acceleration techniques can be implemented, including:
      • Compression: server responses can be compressed before returning them to the client to reduce bandwidth.
      • SSL termination: decrypting requests and encrypting responses can free up resources on the back-end, while securing the connection.
      • Caching: returning stored copies of content when the same request is placed by another client can decrease response time and load on the back-end server.
    • Security. Malicious clients cannot directly access your web servers, with the proxy effectively acting as an additional defense; and the number of connections can be limited, minimizing the impact of distributed denial-of-service (DDoS) attacks.
    • Flexibility. A single URL can be the access point to multiple servers, regardless of the structure of the network behind them. This also allows requests to be distributed, maximizing speed and preventing overload. Clients also only get to know the reverse proxy’s IP address, so you can transparently change the configuration for your back-end as it better suits your traffic or architecture needs.

    Why NGINX

    NGINX logo

    NGINX Plus and NGINX are the best-in-class reverse-proxy solutions used by high-traffic websites such as Dropbox, Netflix, and Zynga. More than 287 million websites worldwide, including the majority of the 100,000 busiest websites, rely on NGINX Plus and NGINX to deliver their content quickly, reliably, and securely.

    What Is a Reverse Proxy Server? by NGINX.

    Apache is great and probably best at what it’s for — a multi-purpose web server with all the batteries included. But for that very reason, it can also be more resource hungry. Apache is also multi-threaded even for single websites, which is not a bad thing in and of itself, especially on multi-core systems, but it can add a lot of CPU and memory overhead when hosting multiple sites.

    Tweaking Apache for performance is possible, but it takes savvy and time. NGINX takes the opposite approach in its design — a minimalist web server that you need to tweak in order to add more features, which, to be fair, also takes some savvy.

    In short, NGINX beats Apache big time out of the box, both in performance and in resource consumption. For a single site you can choose not to even care; on a cluster, or when hosting many sites, NGINX will surely make a difference.

    Why Alibaba Cloud

    Alibaba Cloud logo

    Part of the Alibaba Group (Alibaba.com, AliExpress), Alibaba Cloud has been around for nearly a decade at the time of this writing. It is China’s largest public cloud service provider, and the third largest in the world; so it isn’t exactly a “new player” in the cloud services arena.

    However, it’s only fairly recently that Alibaba has decidedly stepped out of the Chinese and Asian markets and dived into the “Western world”. It’s a fully featured offering: elastic computing, database services, storage and CDN, application services, domains and websites, security, networking, analytics; Alibaba Cloud covers it all.

    Deploying to Alibaba Cloud

    You’ll need an Alibaba Cloud account before you can set up your Linux box. And the good news is that you can get one for free! For the full details see How to Sign Up and Get Started.

    For this guide we’ll use Ubuntu Linux, so you can follow the How to Set Up Your First Ubuntu 16.04 Server on Alibaba Cloud guide. Mind you, you could also use Debian or CentOS; in fact, you can go ahead and check 3 Ways to Set Up a Linux Server on Alibaba Cloud.

    Once you get your Alibaba Cloud account and your Linux box is up and running, you’re good to go.

    Hands On!

    Installing NGINX

    If we wanted to handle the whole process ourselves, we would first need to install NGINX.

    On Ubuntu we’d use the following commands:

    $ sudo apt-get update
    $ sudo apt-get install nginx
    

    And you can check the status of the web server with systemctl:

    $ systemctl status nginx    
    

    With systemctl you can also stop/start/restart the server, and enable/disable the launch of NGINX at boot time.
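
    For example, these are the standard systemd commands on Ubuntu:

    $ sudo systemctl stop nginx       # stop the web server
    $ sudo systemctl start nginx      # start it again
    $ sudo systemctl restart nginx    # stop and start in one go
    $ sudo systemctl enable nginx     # launch NGINX automatically at boot
    $ sudo systemctl disable nginx    # don't launch NGINX at boot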

    These are the two main directories of interest for us:

    • /var/www/html: NGINX’s default website location.
    • /etc/nginx: NGINX’s configuration directory.

    Now, setting up a reverse proxy can be a somewhat cumbersome enterprise, as there are a number of network settings we need to go through, and files we need to create and update as we add sites/nodes behind our proxy.
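
    To give you an idea of the manual route, here’s a rough, hypothetical example of the kind of per-site file you’d have to create (and keep in sync) for every site behind the proxy; the domain and the upstream address are just placeholders:

    $ sudo tee /etc/nginx/sites-available/subdomain.yourdomain.com > /dev/null <<'EOF'
    server {
        listen 80;
        server_name subdomain.yourdomain.com;
        location / {
            # forward requests for this site to its back-end server
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
    EOF
    $ sudo ln -s /etc/nginx/sites-available/subdomain.yourdomain.com /etc/nginx/sites-enabled/
    $ sudo nginx -t && sudo systemctl reload nginx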

    That is, of course, unless we automate the whole thing using software containers…

    Docker to the Rescue

    Before we can start using software containers to automate our workflow, we first need to install Docker, which on Ubuntu is a fairly straightforward process.

    Uninstall any old version:

    $ sudo apt-get remove docker docker-engine docker.io
    

    Install the latest Docker CE version:

    $ sudo apt-get update
    $ sudo apt-get install docker-ce
    

    Note that the docker-ce package comes from Docker’s own repository rather than Ubuntu’s, so you’ll need that repository set up before the command above will work. For this step, or to install a specific Docker version, see Get Docker CE for Ubuntu.
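
    At the time of writing, the repository setup looked roughly like this; treat it as a sketch and defer to the official instructions if they’ve changed:

    $ sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
    $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    $ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    $ sudo apt-get update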

    Setting the Network

    Part of setting up a reverse proxy infrastructure is properly setting the networking rules.

    So let’s create a network with Docker:

    $ docker network create nginx-proxy
    

    And believe it or not, the network is set!
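
    You can double-check it at any time:

    $ docker network ls                   # the new nginx-proxy network should be listed
    $ docker network inspect nginx-proxy  # details, including any attached containers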

    NGINX-Proxy!

    Now that we have Docker running on our Ubuntu server, we can streamline the process of installing, setting up the reverse proxy, and launching new sites.

    Jason Wilder did an awesome job putting together a Docker image that does exactly that: jwilder/nginx-proxy, an automated NGINX proxy for Docker containers using docker-gen, which works perfectly out of the box.

    Here’s how you can run the proxy:

    $ docker run -d -p 80:80 -p 443:443 --name nginx-proxy --net nginx-proxy -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
    

    Essentially:

    • we told Docker to run NGINX as a daemon/service (-d),
    • mapped the host’s HTTP and HTTPS ports (80 and 443) to the proxy, so it receives all the traffic headed for the web server(s) behind it (-p 80:80 -p 443:443),
    • named the NGINX proxy for future reference (--name nginx-proxy),
    • used the network we previously set up (--net nginx-proxy),
    • mounted, read-only, the UNIX socket the Docker daemon listens on, so the proxy can detect containers as they come and go (-v /var/run/docker.sock:/tmp/docker.sock:ro).

    And believe it or not, the NGINX reverse proxy is up and running!
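
    For a quick sanity check (assuming nothing else on the host, such as the natively installed NGINX from earlier, is still holding port 80): the container should show as running, and with no sites behind it yet the proxy will typically answer with a 503:

    $ docker ps --filter name=nginx-proxy   # status should read "Up ..."
    $ curl -I http://localhost              # a 503 is expected until a site is proxied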

    Launching Sites, Lots of Sites

    Normally when using Docker you launch a “containerized” application, be it a standard WordPress site, a specific Moodle configuration, or one of your own images running a custom app.

    Launching a proxied container is now as easy as specifying your virtual domain with the VIRTUAL_HOST environment variable, as in VIRTUAL_HOST=subdomain.yourdomain.com:

    $ docker run -d --net nginx-proxy -e VIRTUAL_HOST=subdomain.yourdomain.com --name site_name your_docker_image
    

    Where your_docker_image may be a WordPress site, your own web app, etc.

    And believe it or not, your web app is online!

    … but okay, let’s explain what just happened. jwilder/nginx-proxy transparently took care of creating all of the NGINX configuration files using the host name you provided, and of doing all of the necessary network routing to the software container running your app; all of it with a single bash line. Isn’t that crazy?

    Notice that the IP address you’ll use to access your web apps will always be the same; the internal routing and settings for your sites have already been taken care of. You just need to make sure that your domains and subdomains are mapped appropriately to the proxy.
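
    To see the “lots of sites” part in action, you could spin up a couple of throwaway containers. In this sketch the official nginx image stands in for real apps, and the domains are placeholders you’d point at the proxy’s IP address:

    $ docker run -d --net nginx-proxy -e VIRTUAL_HOST=site1.yourdomain.com --name site1 nginx
    $ docker run -d --net nginx-proxy -e VIRTUAL_HOST=site2.yourdomain.com --name site2 nginx
    $ curl -H "Host: site1.yourdomain.com" http://localhost   # answered by the site1 container

    As the last line shows, you don’t even need DNS in place to test the routing: the proxy decides where each request goes based on its Host header.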

    Extra Tip: Using Docker Compose

    For those of you who are Docker power users, we can take the automation a little further with Docker Compose.

    Putting everything together in a docker-compose.yml file:

    version: "2"
    
    services:
      nginx-proxy:
        image: jwilder/nginx-proxy
        networks:
          - nginx-proxy
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - /var/run/docker.sock:/tmp/docker.sock:ro
    
      app_name:
        container_name: your_app_name
        image: your_docker_image
        environment:
          - VIRTUAL_HOST=subdomain.yourdomain.com
        networks:
          - nginx-proxy
    
    networks:
      nginx-proxy:
        external:
          name: nginx-proxy
      back:
        driver: bridge
    

    Additionally, with Docker Compose you can also set up pretty much all of the infrastructure that you need — databases, all of your apps each with their own Apache/NGINX configuration, and so on. That’s out of the scope of this article, but you can find more info in Overview of Docker Compose.
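
    Once the docker-compose.yml file is in place (and given that the external nginx-proxy network already exists, since we created it earlier with docker network create), bringing everything up is a single command:

    $ docker-compose up -d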


    Wrapping It Up

    We just accomplished something quite elaborate: an infrastructure capable of managing hundreds of sites from a single entry point, with a strong focus on resource optimization and an automated pipeline. Kudos!

    As we mentioned, reverse proxies can be just the starting point when implementing server load balancing (SLB), a web application firewall (WAF), and even a content delivery network (CDN). We’ll get into more of that in the future.