Works with JIRA too!

Proxies for all

It’s no secret that Atlassian applications run inside the Tomcat servlet container, and that you should definitely put a reverse proxy in front of Tomcat. We won’t spend much time on this, but here’s why a reverse proxy should be used:

  1. Direct access by clients to Tomcat is generally considered insecure (SSL/TLS aside)
  2. By default, Tomcat binds to port 8080 and cannot bind to port 80 or 443 unless it runs as root, which the Tomcat documentation recommends against
  3. SSL termination in Tomcat is much harder than in other web servers
  4. A reverse proxy allows you to serve up helpful error pages or serve additional content at specific locations if your application is down

Historically, people have used the Apache web server for this purpose.

Apache, Jump On It

We used to use the Apache web server, but noticed its performance was lacking. Confluence pages sometimes loaded more slowly behind Apache than with no reverse proxy at all. Slow is bad.

An immediate speed boost in Apache performance can be gained by enabling mod_deflate to gzip as much content as possible. My favourite tool for checking if gzip is working is gzipwtf.com – it is browser-based so all you have to do is enter the URL you want to test.
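As a rough sketch (module availability and the exact MIME-type list will vary with your setup), enabling mod_deflate for the common text-based content types looks like this in an Apache config:

```apache
# Enable compression for text-based content. On Debian/Ubuntu, first run
# "a2enmod deflate"; on CentOS the module usually ships enabled.
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css
    AddOutputFilterByType DEFLATE application/javascript application/json
</IfModule>
```

Binary formats like images are already compressed, so there’s no point gzipping them again.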

By default, Apache and Tomcat talk to each other over Tomcat’s Coyote HTTP/1.1 connector. We got a measurable boost in speed by switching to the AJP connector. Sweet! However, this came with its own tradeoffs – with the version of Apache available on CentOS, we had some trouble getting HTTPS configured. And when we eventually did get it set up, it didn’t support TLS 1.2! (doh)
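For reference, switching to AJP involves two pieces: an AJP connector in Tomcat’s server.xml and mod_proxy_ajp on the Apache side. A minimal sketch, assuming the default AJP port and a Tomcat on the same host:

```xml
<!-- conf/server.xml: enable Tomcat's AJP/1.3 connector (default port 8009) -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
```

```apache
# Apache virtual host: forward all requests to Tomcat over AJP
# (requires mod_proxy and mod_proxy_ajp to be loaded)
ProxyPass        / ajp://localhost:8009/
ProxyPassReverse / ajp://localhost:8009/
```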

Security First

In the wake of several large vulnerabilities in SSL/TLS implementations – POODLE, FREAK, Heartbleed, etc. – we wanted to do as much as possible to protect our data (especially people’s passwords!) during transmission across the internet. We went to great pains to get Apache 2.4 available on the CentOS instances we run our Atlassian applications on, and made sure that OpenSSL was properly updated and patched against known vulnerabilities. But then we discovered something disturbing – the OpenSSL Apache uses is baked in when Apache is compiled. Even though we’d updated OpenSSL on our server, Apache wasn’t using it! Apache was using a much older version, one that had many of the holes we were trying to patch and didn’t support TLS 1.2. Pushing this out on our internet-facing Production servers felt like building a car with a fancy biometric security system, but keeping those old interior locks you can pop with a coathanger – and then always leaving the windows halfway down when you park.

Needless to say, I started looking for ways to ensure the OpenSSL we had on the box was actually the one being used for TLS.

nginx to the rescue! First released in 2004, nginx is an open source web server that, according to Netcraft, currently serves 13.23% of the billion-plus sites they survey. It’s a small package, and it uses the OpenSSL you have installed on your system. Aww yeah.

Why else nginx rocks your socks

Ok, the big kicker for us was that we needed to properly secure our internet-facing instances. But what if everything you have is internal? Why should you still use nginx over Apache, or over no proxy at all?

  1. nginx is fast. Stupid fast. Using default settings (nginx has gzip enabled by default) and the Coyote HTTP/1.1 connector between nginx and Tomcat, we saw page-load speed improvements of up to 2 seconds in some cases. In other words, out-of-the-box nginx serves pages significantly faster than a tuned Apache 2.4.
  2. The configuration files are easier to understand. Unless you’re a sysadmin with years of in-depth Apache experience, you’ll have a much better time working with the simplified nginx configuration files.
  3. HTTP/2 support. Just enabling this recently allowed us to shave another 0.5 seconds off our already-fast load times. On top of a lower total load time, content rendering on complex pages was visibly faster – important elements load quickly and let you start reading before the page load is even complete.
  4. The resources nginx requires to serve lots of connections are way lower than what Apache needs: nginx’s RAM usage stays essentially flat as the number of connections increases, while Apache’s hunger for RAM grows and grows.
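For what it’s worth, enabling HTTP/2 (item 3) is typically a one-word change to the listen directive in your TLS server block, available in nginx 1.9.5 and later (the hostname below is a placeholder):

```nginx
server {
    # adding "http2" here enables HTTP/2 for TLS clients (nginx 1.9.5+)
    listen 443 ssl http2;
    server_name confluence.example.com;   # placeholder: your base URL
    # ...the rest of your ssl.conf stays the same
}
```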

Install and Configure nginx

The nginx developers maintain builds for most major distributions. How you add the repositories for these builds depends on which distribution you’re using. For simplicity’s sake, we’ll just outline the steps for Ubuntu and CentOS. If you pull the packages yourself directly from nginx’s own install instructions, just note that we’re using the mainline branch.

CentOS

  1. Create the repo file for nginx
  2. Install nginx and update openssl
  3. Instruct CentOS to start nginx at boot
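Sketching those three steps as commands – the repo definition follows nginx’s published install instructions for the mainline branch, and yum resolves `$releasever`/`$basearch` itself:

```shell
# 1. Create the repo file for nginx (mainline branch)
sudo tee /etc/yum.repos.d/nginx.repo <<'EOF'
[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
EOF

# 2. Install nginx and update openssl
sudo yum install -y nginx
sudo yum update -y openssl

# 3. Start nginx at boot (CentOS 7+; on CentOS 6 use: chkconfig nginx on)
sudo systemctl enable nginx
```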

Ubuntu

  1. Add nginx’s repo signing key
  2. Create the repo file for nginx
  3. Update your repositories, install nginx, update openssl
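The Ubuntu equivalent, again per nginx’s published instructions for the mainline branch (`lsb_release -cs` fills in your release codename):

```shell
# 1. Add nginx's repo signing key
curl -fsSL https://nginx.org/keys/nginx_signing.key | sudo apt-key add -

# 2. Create the repo file for nginx (mainline branch)
echo "deb http://nginx.org/packages/mainline/ubuntu/ $(lsb_release -cs) nginx" \
  | sudo tee /etc/apt/sources.list.d/nginx.list

# 3. Update your repositories, install nginx, update openssl
sudo apt-get update
sudo apt-get install -y nginx
sudo apt-get install -y --only-upgrade openssl
```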

I say, my good fellow, watch this steam locomotive fly!

Now that you’ve got yourself a fancy, state-of-the-art nginx install, let’s set it up for EXTRA SPEED!

I’ve helpfully provided all the files referenced below for your convenience – grab them here from this repo I maintain just for you. These files live in /etc/nginx/ on your machine.
A brief overview of the files:

nginx.conf
  • Basic configuration for nginx. Just copy and paste; no changes needed.

dhparam.pem
  • Contains sample Diffie-Hellman primes to get you up and running on internal deployments.
  • If your application is internet-facing, you should generate your own primes.

conf.d/http.conf
  • Listens on port 80, redirects all traffic to HTTPS on port 443.
  • Modify line 4 to your specific application’s URL.

conf.d/ssl.conf
  • Listens on port 443; contains all the information to reverse proxy your application.
  • Update line 8 (proxy_pass) if your Confluence isn’t listening on 8090 or if you’re using this for JIRA.
  • Modify lines 33-34 with your Public Certificate and Private key.
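If you’d rather not pull the repo, here’s the gist of those pieces as a heavily trimmed sketch – the hostnames, ports, and certificate paths are placeholders, and the repo versions carry additional hardening. Generating your own Diffie-Hellman primes is a single openssl command:

```shell
# Generate 2048-bit DH parameters for nginx (4096 is stronger but much slower)
openssl dhparam -out /etc/nginx/dhparam.pem 2048
```

And the essence of the two conf.d files:

```nginx
# conf.d/http.conf — redirect all plain-HTTP traffic to HTTPS
server {
    listen 80;
    server_name confluence.example.com;           # placeholder: your base URL
    return 301 https://$host$request_uri;
}

# conf.d/ssl.conf — terminate TLS here and reverse proxy to Tomcat
server {
    listen 443 ssl;
    server_name confluence.example.com;           # placeholder: your base URL

    ssl_certificate     /etc/nginx/certs/example.crt;  # your public certificate
    ssl_certificate_key /etc/nginx/certs/example.key;  # your private key
    ssl_dhparam         /etc/nginx/dhparam.pem;

    location / {
        proxy_pass http://127.0.0.1:8090;         # 8090 Confluence, 8080 JIRA
        proxy_set_header Host              $host;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```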

Configuring your Atlassian Application

Almost done! Just a couple modifications to your Atlassian application and your users will start getting content so fast it’ll make their eyeballs hurt.

Update server.xml

Open up <your_application_install_dir>/conf/server.xml and find the existing connector. Contrary to the notes in the server.xml file, do not uncomment the HTTPS section – you want nginx doing the SSL termination, not Tomcat. Instead, modify your existing connector:
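The modified connector ends up looking something like this (Confluence shown; the hostname is a placeholder, and scheme/secure/proxyName/proxyPort are standard Tomcat connector attributes):

```xml
<!-- Existing HTTP connector, with proxy attributes added so Tomcat
     generates https:// links even though nginx terminates TLS -->
<Connector port="8090" connectionTimeout="20000" protocol="HTTP/1.1"
           maxThreads="48" minSpareThreads="10"
           enableLookups="false" acceptCount="10"
           scheme="https" secure="true"
           proxyName="confluence.example.com" proxyPort="443" />
```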

Take note of the options above:

  • port – 8090 for Confluence, 8080 for JIRA (using application defaults). Make sure this matches what you set in /etc/nginx/conf.d/ssl.conf
  • proxyName – Set this to match the base URL for your application, which you also set in /etc/nginx/conf.d/http.conf

Once you’ve finished modifying server.xml, restart your application.

Set your Base URL

Now set the base URL in your application. This is used when writing links, so it should start with https:// instead of http://.

  1. For Confluence – click on the configuration cog and go to General Configuration (or navigate to <your.application.com>/admin/editgeneralconfig.action )
  2. For JIRA – click on the configuration cog and go to System (or navigate to <your.application.com>/secure/admin/EditApplicationProperties!default.jspa )

Update the Base URL so it has https:// in the front, save, and you’re all set!