How to Get Started with Envoy Proxy as a Load Balancer & HTTP Router

Creating a load balancer and HTTP router using Envoy


In this article we're going to configure Envoy Proxy to do straightforward tasks like TCP load balancing and HTTP path-based routing.

Personally, I think Envoy is a really important tool that is somewhat difficult to get started with. This blog post should help with that.

Many service meshes and API gateways are, at their core, just configurations of Envoy.

What is Envoy Proxy?


Envoy Proxy is a high-performance C++ distributed proxy designed for single services and applications. It's also used heavily in service mesh architectures, connecting multiple microservices, and it serves as the data plane for Istio's ingress in Kubernetes. It supports HTTP/2, gRPC, WebSockets, and more.

Its adoption is increasing rapidly, with companies such as Dropbox, Niantic, and Tinder adopting Envoy.

It's the backbone of services like Ambassador, Gloo, and service meshes like Istio.

It's become popular due to its complete, open-source nature, its speed, and its advanced features. For example, Envoy uses advanced load balancing techniques that are otherwise only found in commercial offerings like NGINX Plus.

One of Envoy's goals is to provide cutting-edge performance with ease of operation in a production setting. We see this in its best-in-class speed and advanced load balancing features, along with options such as hot restarting on configuration changes.

I think it's a great tool, and one that will see more and more use in the future.

It also exposes a huge number of statistics for logging and monitoring.

The Envoy Architecture


Envoy is written in C++ with a single-process, multi-threaded architecture. It uses a threading model with a primary thread and several worker threads. The primary thread acts as a coordinator of the platform's various services, while the worker threads perform the tasks of listening, forwarding, and filtering.

Every inbound request to Envoy hits a listener, such as a TCP or UDP listener. A filter chain follows, which allows you to do things like rate limiting, JWT validation, proxying, HTTP connection management, and more.

The outbound request goes to your cluster (a group of servers or nodes), which then leads to a sidecar proxy or a static endpoint.

Envoy configs are defined in YAML files using this same terminology, as we'll see below.
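Putting the pieces together, that terminology maps onto a config shaped roughly like this (a minimal sketch; the listener and cluster names here are illustrative, not from a real deployment):

```yaml
static_resources:
  listeners:            # where inbound requests arrive (TCP/UDP)
  - name: main_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains: []   # rate limiting, JWT validation, proxying, ...
  clusters:             # groups of upstream servers or nodes
  - name: my_service
    load_assignment:
      cluster_name: my_service
      endpoints: []     # sidecar proxies or static endpoints
```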

The Setup We're Building

(Diagram: Envoy as a load balancer in front of three sidecar proxies)

We'll be building a simple Docker Compose setup with Envoy as a load balancer, receiving requests and routing them to sidecar proxies in front of a basic Node Express web app.
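As a sketch, the Compose file could look something like this (the service names, image tag, and ports here are assumptions, not taken from the article):

```yaml
# docker-compose.yml (illustrative sketch)
version: "3"
services:
  envoy-lb:               # the front-facing Envoy load balancer
    image: envoyproxy/envoy:v1.28-latest
    volumes:
      - ./envoy.yaml:/etc/envoy/envoy.yaml
    ports:
      - "80:80"           # listener port
      - "9901:9901"       # admin interface, if enabled
  nodeapp:                # Node Express web app behind its own sidecar Envoy
    build: ./nodeapp
```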

Configuring the Load Balancer

Envoy configuration is all done via a YAML file (as in Kubernetes) that defines your proxy configuration. You can edit some of the settings on the fly without needing a full restart, making it extremely powerful.

Every Envoy configuration consists of defining your:

  • static resources: your listeners (ports and addresses) and filter chains (HTTP, gRPC, etc.).

    • This is also where we define our clusters, and the individual load balancing settings we want to use for them.
  • admin: how the admin panel is exposed, and logging.

  • layered runtime: configuration settings that can be altered without needing to restart Envoy or change the primary configuration.

    • Things like request settings, HTTP keepalive, active connection limits, etc.
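A skeleton envoy.yaml showing those three top-level sections might look like this (the admin port and the runtime key are illustrative assumptions):

```yaml
static_resources:       # listeners, filter chains, clusters
  listeners: []
  clusters: []
admin:                  # admin panel and logging
  access_log_path: /dev/null
  address:
    socket_address: { address: 127.0.0.1, port_value: 9901 }
layered_runtime:        # settings changeable without a restart
  layers:
  - name: static_layer
    static_layer:
      overload:
        global_downstream_max_connections: 50000
```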

For our front-facing load balancer, we'll want to set up our listener to accept incoming connections on port 80:

  - address:
      socket_address:
        address: 0.0.0.0 # use 127.0.0.1 to listen on localhost only
        port_value: 80
    # ...

Next we need to configure the listener's filter chains. These tell Envoy what to do for things like HTTP, gRPC, and Kafka filters, database proxies, Redis, external auth callouts, and more.

To do network load balancing, we need to define the network filter we'll be using, and the clusters that the load balancer will point to (in this case, our Node.js app).

Let's start with the network filter:

    # ...see above
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config: # points to the protocol buffer type to use for this filter
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: tcp_lb
          cluster: nodeapp

You can view a full list of network filters in the Envoy documentation.

Let's specify the clusters that the load balancer will point to:

      - name: nodeapp
        load_assignment:
          endpoints:
          - lb_endpoints:
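Filled out, the cluster definition might look something like this sketch (the endpoint hostname and port are assumptions based on a Docker Compose service name, not taken from the article):

```yaml
  clusters:
  - name: nodeapp
    connect_timeout: 0.25s
    type: STRICT_DNS            # resolve the service name via DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: nodeapp
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: nodeapp    # assumed Compose service name
                port_value: 8080    # assumed app port
```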