Set up vanity go get aliases

This guide is based on examples provided by the Kubernetes project.

A little-known fact of Go is that you can use meta tags to redirect go get requests. This allows you to use a custom alias domain while hosting on a public git provider (e.g. GitHub, GitLab), which makes it easier to maintain a stable identity for your package, eases the process of moving your repositories in future, and makes hosting repository mirrors possible.

When go get makes a request, it appends the go-get=1 query parameter to the URL you specify, so go get abc.xyz/example will make a request to http(s)://abc.xyz/example?go-get=1. This page can specify meta tags that locate the repository to be retrieved.
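You can make the same request yourself with curl to see exactly what go get sees:

    curl 'https://abc.xyz/example?go-get=1'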

An example response for abc.xyz/example might look like the following (the GitHub repository URL here is illustrative):
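    <html>
      <head>
        <meta name="go-import" content="abc.xyz/example git https://github.com/someuser/example">
        <meta name="go-source" content="abc.xyz/example https://github.com/someuser/example https://github.com/someuser/example/tree/master{/dir} https://github.com/someuser/example/blob/master{/dir}/{file}#L{line}">
      </head>
    </html>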

The go-import tag is used to locate the repository for go get. We specify the go-source information for godoc support; more information on go-source is available in the godoc project's documentation.

If you're running on Kubernetes, a small manifest is all you need to get started.
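As an illustrative sketch (using current Kubernetes API versions; the names, labels, and repository URL are all placeholders), such a manifest could pair an nginx deployment with a ConfigMap holding the meta-tag pages:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: goget-vanity
    data:
      # served for abc.xyz/example?go-get=1; you may need extra nginx
      # configuration to serve this extension-less file as text/html
      example: |
        <html><head>
        <meta name="go-import" content="abc.xyz/example git https://github.com/someuser/example">
        </head></html>
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: goget-vanity
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: goget-vanity
      template:
        metadata:
          labels:
            app: goget-vanity
        spec:
          containers:
          - name: nginx
            image: nginx:stable
            ports:
            - containerPort: 80
            volumeMounts:
            # each ConfigMap key appears as a file under the web root,
            # so the page above is served at /example
            - name: pages
              mountPath: /usr/share/nginx/html
          volumes:
          - name: pages
            configMap:
              name: goget-vanity
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: goget-vanity
    spec:
      selector:
        app: goget-vanity
      ports:
      - port: 80
        targetPort: 80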

nginx as a reverse proxy with letsencrypt SSL

Let's Encrypt has just entered public beta, and so is now generally available for the public to use. Whilst in public beta, there are a few access restrictions (5 certificates per domain per week).

Whilst official support for nginx is still in development, it is already possible to use nginx with automatically renewing certificates from Let's Encrypt (including automatic verification!). Here's how I've achieved it; all credit to 'renchap' on the Let's Encrypt community forums for the original guide.


1. Configuring nginx

The nginx configuration is quite straightforward. We tell nginx to serve, on port 80 for all domains, a path named /.well-known/acme-challenge. Let's Encrypt uses this path to perform automatic verification of domain name ownership. We tell nginx to serve this path from the /tmp/letsencrypt-auto directory, which we will configure letsencrypt to use when generating a certificate.

    server {
        listen      80;
        listen      [::]:80;
        server_name ~(.*)$;

        # serve Let's Encrypt's ACME challenge files from the webroot
        location '/.well-known/acme-challenge' {
            default_type "text/plain";
            root        /tmp/letsencrypt-auto;
        }

        # redirect everything else to HTTPS
        location / {
            return 302 https://$host$request_uri;
        }
    }

I place this snippet in the http block directly in my nginx.conf. It will match any domains that are not already configured to listen on port 80 (hence server_name being set to ~(.*)$).

We also set up a 302 redirect to the HTTPS site - this enforces SSL on a domain by default (although it’s still possible to override this for individual domains).
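Before requesting a certificate, it's worth sanity-checking that the challenge path is actually reachable (the test file name here is arbitrary):

    mkdir -p /tmp/letsencrypt-auto/.well-known/acme-challenge
    echo ok > /tmp/letsencrypt-auto/.well-known/acme-challenge/test
    curl http://example.org/.well-known/acme-challenge/test   # should print "ok"
    rm /tmp/letsencrypt-auto/.well-known/acme-challenge/test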

2. Request a certificate

After following the letsencrypt installation guide, we need to request our first certificate. With nginx set up and running from the step above, we run the following command:

letsencrypt-auto certonly --server https://acme-v01.api.letsencrypt.org/directory -a webroot --webroot-path=/tmp/letsencrypt-auto -d example.org -d www.example.org

This will generate a single certificate that covers both example.org and www.example.org. Please note that the DNS records for both example.org and www.example.org must resolve to the nginx instance you've just configured. This is because letsencrypt will make a request to your server for a file that is generated by the letsencrypt client, in order to verify you control the domain name.
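You can quickly confirm this with dig; both commands should print the public IP of your nginx server:

    dig +short example.org
    dig +short www.example.org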

Once this process is successful, a folder containing your certificate and private key (as well as the certificate chain) will be generated. As of the time of writing, letsencrypt stores this folder at /etc/letsencrypt/live/.
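For our example domains, the folder should look something like this (the files are symlinks that always point at the latest versions, so these paths remain stable across renewals):

    $ ls /etc/letsencrypt/live/example.org/
    cert.pem  chain.pem  fullchain.pem  privkey.pem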

3. Set up autorenewal

letsencrypt certificates expire after 90 days, so it's recommended you renew them every 60 days to be safe. To do this, we'll configure cron to run a renewal command every 2 months. This can be done with the following line in your crontab:

0 0 1 */2 * /letsencrypt/letsencrypt-auto --renew certonly --server https://acme-v01.api.letsencrypt.org/directory -a webroot --webroot-path=/tmp/letsencrypt-auto -d example.org -d www.example.org && service nginx reload

This is the same command we ran in the previous step, with the addition of the --renew flag. We also reload nginx after renewing, so that it begins using the new certificate immediately.

4. Configure nginx to use your certificate

Now that we have our new certificate, and cron configured to automatically renew it every 2 months, it's time to configure nginx to actually use it.

This step should be trivial for anyone who's configured nginx to use SSL before. Simply create a new file in your /etc/nginx/sites-enabled folder, containing something similar to the following:

upstream backend {
    server server1;
}

server {
    listen 443;
    server_name example.org www.example.org;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;

    location / {
        proxy_pass                          http://backend;
        proxy_set_header  Host              $http_host;   # preserve the original Host header
        proxy_set_header  X-Real-IP         $remote_addr; # pass on real client's IP
        proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header  X-Forwarded-Proto $scheme;
        proxy_read_timeout                  900;
    }
}

Conclusion

Whilst not the easiest setup procedure, this is a nice and quick way to get up and running with letsencrypt before official nginx support is added to the letsencrypt client. If you notice any issues as you follow this guide, please let me know!

Running HAProxy in front of Kubernetes

In the words of the HAProxy website: "HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for very high traffic web sites and powers quite a number of the world's most visited ones."

We’re going to set up and use HAProxy to act as a load balancer in front of a Kubernetes cluster. HAProxy can proxy TCP and HTTP (although unfortunately not UDP) and optionally also provide SSL/TLS encryption for your HTTP backends. HAProxy can also use SNI (Server Name Indication) to load balance multiple (encrypted) sites, on a single IP address - useful in the increasingly scarce IPv4 world.


Setup

We’re going to be using HAProxy >= 1.5 in this post, as it includes support for SSL/TLS and SNI out of the box. The Debian 8 repositories as of today publish version 1.5.8 of HAProxy, so we’ll be using that.

apt-get update
apt-get install haproxy

The next sections explain how to configure HAProxy through its /etc/haproxy/haproxy.cfg file.


Backends

In HAProxy we must define 'backends', which tell HAProxy where it can connect to your internal Kubernetes services. Once you've exposed a Kubernetes service (through a service definition file), every Kubernetes node exposes a port that itself proxies and balances across your cluster. So, HAProxy can connect to any of the nodes in your cluster on the port assigned to that particular service, and Kubernetes itself will route the traffic through the cluster to your pods.

So, say our service has been exposed on port 32000 of each node in the cluster - we must now instruct HAProxy to connect to any of the nodes in the cluster on port 32000.
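For reference, here's a sketch of what such a service definition might look like (the service name, selector, and ports are illustrative; nodePort must fall within your cluster's allowed NodePort range):

    apiVersion: v1
    kind: Service
    metadata:
      name: james-munnelly-eu
    spec:
      type: NodePort
      selector:
        app: james-munnelly-eu
      ports:
      - port: 80          # port other pods use inside the cluster
        targetPort: 8080  # port the backing pods listen on
        nodePort: 32000   # port exposed on every node

With the service exposed, the corresponding HAProxy backend looks like this: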

backend james-munnelly-eu
    mode http
    balance leastconn
    option forwardfor
    cookie SRV_ID prefix

    server node1 10.20.40.60:32000 cookie node1 check
    server node2 10.20.40.61:32000 cookie node2 check
    server node3 10.20.40.62:32000 cookie node3 check

Here, you can see we've defined 3 nodes that are each serving on port 32000. We also define a SRV_ID cookie in order to stick sessions from a particular client to a particular backend server (sticky sessions); each server line sets the cookie value that identifies it.


Frontends

Below is an example frontend for a basic HTTP virtual host configuration. A switch is performed on the server name and the appropriate backend is then chosen.

frontend http-in
    bind :80
    mode http

    reqadd X-Forwarded-Proto:\ http

    use_backend marley-landing if { hdr(host) -i marley.xyz }
    use_backend james-munnelly-eu if { hdr(host) -i james.munnelly.eu }
    use_backend kube-ui if { hdr(host) -i manage.marley.xyz }

    default_backend marley-landing

First, we name our frontend http-in. This is just a familiar name for us to remember what the frontend is for. We bind to port 80 on the load balancer with bind :80 and set the mode to http. This allows HAProxy to set HTTP headers, including the X-Forwarded-Proto header (and any others you may want to add).

We then do a check on the field hdr(host), which is the hostname the client is accessing this frontend on, taken from the HTTP Host header. This field dictates which backend service is chosen. If no backend matches, the default marley-landing backend (not shown here) is chosen.


SSL with Server Name Indication

As an added bonus, HAProxy can also provide SSL termination for your services and encrypt traffic to the user. Here’s an example frontend configuration for this:

frontend https-in
    bind :443 ssl crt /var/lib/haproxy/ssl/certs.d
    mode http

    reqadd X-Forwarded-Proto:\ https

    use_backend marley-landing if { hdr(host) -i marley.xyz }
    use_backend james-munnelly-eu if { hdr(host) -i james.munnelly.eu }
    use_backend kube-ui if { hdr(host) -i manage.marley.xyz }

    default_backend marley-landing

There are only a few small differences in this configuration. We (obviously) must change the name of our frontend; here I've chosen https-in to be descriptive as to what this frontend provides. We bind to port 443, the default HTTPS port, and also provide some extra parameters: ssl crt /var/lib/haproxy/ssl/certs.d. This instructs HAProxy to enable SSL and load certificates from the /var/lib/haproxy/ssl/certs.d folder, which should contain a set of PEM-encoded files named like www.example.com.pem, each containing a certificate (plus any chain) and its private key concatenated together.
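If your certificates come from Let's Encrypt, as in the previous post, a file in this format can be produced by concatenating the full chain and private key (the paths assume letsencrypt's default layout):

    mkdir -p /var/lib/haproxy/ssl/certs.d
    cat /etc/letsencrypt/live/www.example.com/fullchain.pem \
        /etc/letsencrypt/live/www.example.com/privkey.pem \
        > /var/lib/haproxy/ssl/certs.d/www.example.com.pem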

We also adjust the X-Forwarded-Proto header to reflect the HTTP scheme by setting it to https.


Conclusion

This should get you up and running with HAProxy, albeit with a manual configuration. In future I may look at writing an auto-configuration tool for HAProxy that would monitor the Kubernetes API for new services and automatically generate an HAProxy configuration to load balance them.