Single Sign-On (SSO) is nothing new. For years, people have been able to use their Google or Facebook IDs to sign into various sites around the web. OpenID started in 2005 and was not a new idea even then. Even SharePoint and IIS have allowed a sort of federated access through the use of NTLM, which itself has been around since Windows NT 4.0 (last millennium!).
It does seem somewhat unfortunate that after all this time, many of us are still logging in to each application we use. Separately. Many times a day. Most of us reuse the same username and password for all of them (and we’re still allowed to use insecure passwords at that). But as we move toward using more cloud services, you may find yourself using services that don’t necessarily share the same password – this can get confusing quickly! You need some way to enable those cloud services to use your “internal” (that is, what your organization has provided you) username and password to keep everything manageable. Luckily enough, such a service will also allow you to leverage single sign-on!
As we take a look at how this works, the assumption is that your organization has Active Directory and can use Microsoft’s Active Directory Federation Services (ADFS). This post assumes you have ADFS 3.0. The setup is still pretty similar if you’ve got Apache Directory Server or another directory service front-ended by Shibboleth. ADFS (or Shibboleth, if you have that) acts both as a proxy so those cloud applications can access your internal credentials, and as the One Ring that binds all your SSO applications together. Setting up an ADFS (or Shibboleth) server is outside the scope of this post, so if you need that information, find it elsewhere – here, we’ll focus on leveraging the excellent benefits of having SSO.
In a nutshell, your ADFS proxy sits between all your applications and where your credentials reside:
The magic glue that allows your applications to all talk to your ADFS proxy is SAML – Security Assertion Markup Language – and one of its requirements is that both ends of the connection are encrypted. If you’ve been meaning to put an application like JIRA behind a reverse proxy like nginx, here’s your excuse.
Now that the basics are out of the way, let’s move on.
Ain’t nobody got time
In pre-ADFS days, each application determined how long your session would last. For some applications, this would be a set amount of time like 8 hours. Other applications (think your bank website) boot you off after a certain amount of inactivity. Each application had its own system, and each system stood alone.
But now that ADFS is in the picture, it brings its own values to the party. At a high level, a SAML-enabled application now has 3 timeout values to contend with.
The ADFS side contributes two flavors of timeout: the Global Timeout, which affects all SAML-enabled applications (henceforth known as Relying Parties), and the Relying Party Timeouts, which can be – but are not necessarily – different for each relying party.
Since this difference has caused confusion for many, consider this: each relying party is a balloon. What’s a party without balloons, eh? Any one of these balloons could pop sooner than their neighbors. You never know – maybe one has a hole. Or the guy across the street is testing out his new high-powered laser pointer. Point is, there could be a lot of balloons. And a lot of Relying Party timeouts.
Global Timeout
There’s just one* of these. In our illustration, it’s the woman’s hand. If she lets go, the whole party is done. All the balloons float away, and nobody gets the benefit of already being signed in when switching applications.
In ADFS 3.0, there are 3 values that control this timeout. Only one of these values applies each time a user logs in.
- Regular ‘ole logins are controlled by the SsoLifetime property. By default, this is 480 minutes (8 hours). This is a session cookie, so if a user completely quits their browser, they are logged out.
- If a user ticks the Keep Me Signed In box on the ADFS login screen, they will be subject to the KmsiLifetimeMins property. By default, this is 1440 minutes (24 hours) and the cookie is persistent, so even if a user quits their browser they will remain logged in. Note that by default, the Keep Me Signed In option in ADFS is disabled and has to be enabled before someone can use it to log in.
Set-AdfsProperties -EnableKmsi $true
- If the user is logging in from a Registered Device (i.e. Organization-owned machine that’s been specially configured in AD), the PersistentSsoLifetimeMins timeout applies, which is 10080 minutes (1 week) by default. This uses a persistent cookie, so a user remains signed in even if they quit their browser.
If you are really paranoid, you can disable the persistent cookies so that every time the user closes their browser, regardless of login type, they are forced to log in again. This is accomplished with “Set-AdfsProperties -EnablePersistentSso $false” in PowerShell, but take a good long look in the mirror before running this command.
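If you want to review or adjust these lifetimes in one place, something like the following should work when run on the ADFS server. This is a sketch using the property names described above; verify the parameter names against your ADFS version, and the 600-minute value is just an example:

```powershell
# Inspect the three SSO lifetimes (values are in minutes)
Get-AdfsProperties | Select-Object SsoLifetime, KmsiLifetimeMins, PersistentSsoLifetimeMins

# Example: stretch regular (non-KMSI) sessions to 10 hours
Set-AdfsProperties -SsoLifetime 600
```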
*There are methods for using different timeout values for people inside your network and people outside your network. If you require this level of control, please read this lengthy blog post on the subject. And good luck to you; you’ll need it.
Relying Party Timeouts
Each relying party gets its own timeout value. Many balloons, many relying parties. Using default values, newly created relying parties will show their TokenLifetime value as 0, which turns out to mean 10 hours (doesn’t make sense to me either). ADFS 3.0 is unfortunately poorly documented, so there doesn’t seem to be a rationale for why this is. Anyway, it’s pretty simple to change the value for each relying party individually if you want to; replace “relying_party” below with the exact name used when that particular relying party was enabled.
Set-AdfsRelyingPartyTrust -TargetName "relying_party" -TokenLifetime 480
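To see what each relying party is currently set to, a companion query like this should work on the ADFS server (hypothetical sketch; verify against your ADFS version):

```powershell
# List every relying party trust and its current token lifetime
# (0 means "use the default", which behaves like 10 hours as noted above)
Get-AdfsRelyingPartyTrust | Select-Object Name, TokenLifetime
```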
This value controls how long a particular relying party can be authenticated for before it needs to check the global timeout. This value can be set lower than the global timeout. For example, if it’s set to 30 minutes and the global timeout is set to 2 hours, the following timeline would occur:
- 0 minutes – user logs in to the application by entering their credentials (global timeout is 2 hours, relying party timeout is 30 minutes)
- 30 minutes – relying party timeout expires. Relying party makes an authentication call; since there are 90 minutes left on the global timeout, the user is sent back to the relying party without having to enter their credentials
- 60 minutes – relying party timeout expires. Relying party makes an authentication call; since there are 60 minutes left on the global timeout, the user is sent back to the relying party without having to enter their credentials
- 90 minutes – relying party timeout expires. Relying party makes an authentication call; since there are 30 minutes left on the global timeout, the user is sent back to the relying party without having to enter their credentials
- 120 minutes – relying party timeout expires. Relying party makes an authentication call. Global timeout has expired. User must enter their credentials
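The timeline above can be condensed into a toy loop (purely illustrative – ADFS does this internally, not via any script):

```shell
# Toy sketch: each time a relying-party token expires, ADFS silently
# re-issues it as long as the global SSO session is still alive.
global_timeout=120   # minutes
rp_timeout=30        # minutes
elapsed=0
while [ "$elapsed" -lt "$global_timeout" ]; do
  elapsed=$((elapsed + rp_timeout))
  if [ "$elapsed" -lt "$global_timeout" ]; then
    echo "${elapsed}m: RP token expired; global session alive -> silent re-auth"
  else
    echo "${elapsed}m: RP token expired; global session expired -> user must log in"
  fi
done
```

At 30, 60, and 90 minutes the user is bounced through ADFS without noticing; at 120 minutes they see the login page again.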
Each relying party (or application) that you enable SAML logins for will already have its own timeout values. Many SaaS applications like NewRelic will allow you to modify these timeouts, but some do not. Since we are heavy users of Atlassian products here at ITHAKA, we’ll examine the application timeouts more closely for JIRA and Confluence.
JIRA and Confluence have two separate timeout values, only one of which is used for any given login:
- Regular ‘ole login – by default, the session expires after 5 hours for JIRA or 1 hour for Confluence. This is a session cookie, so if a user completely quits their browser they are also logged out. The value for this is stored in <application-install>/atlassian-jira/WEB-INF/web.xml for JIRA and <application-install>/confluence/WEB-INF/web.xml for Confluence. Search for the session-timeout property; the value is in minutes.
- Remember Me – users get this simply by checking the “Remember Me” box during login. By default, this does not expire for 14 days. It is a persistent cookie, so users remain logged in even if they quit their browser. To change the default value of this timeout, you would need to modify <application-install>/atlassian-jira/WEB-INF/classes/seraph-config.xml for JIRA or <application-install>/confluence/WEB-INF/classes/seraph-config.xml for Confluence and insert the following (values are stored in seconds):
<init-param>
    <param-name>autologin.cookie.age</param-name>
    <param-value>2592000</param-value> <!-- The value of 30 days in seconds -->
</init-param>
We use the SAML SingleSignOn family of plugins for JIRA and Confluence to enable SAML in those applications. With SAML-enabled logins, JIRA and Confluence use the standard session timeout defined in <application-install>/atlassian-jira/WEB-INF/web.xml for JIRA and <application-install>/confluence/WEB-INF/web.xml for Confluence. Search for the session-timeout property; the value is in minutes.
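For reference, the session-timeout property in web.xml looks like this (a sketch – the 60 minutes shown is an assumed example value, not an Atlassian default):

```xml
<session-config>
    <!-- Session lifetime in minutes; assumed example value -->
    <session-timeout>60</session-timeout>
</session-config>
```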
After the application session has timed out, any pageloads will result in an authentication request that gets redirected to ADFS.
Tying It All Together
Once you have an understanding of all the timeouts, you can adjust as necessary to get sessions as long as you need for particular applications. Here’s a flowchart to help you understand where to start looking for values to adjust. At the start of the chart, the user has just signed into an application.
Works with JIRA too!
Proxies for all
It’s no secret that Atlassian applications run in the Tomcat servlet container, and that you should definitely use a reverse proxy in front of Tomcat. We won’t spend much time on this, but here’s why a reverse proxy should be used:
- Direct access by clients to Tomcat is generally considered insecure (SSL/TLS aside)
- By default, Tomcat binds to port 8080 and cannot bind to port 80 or 443 unless it runs as root, which the Apache Tomcat project recommends against
- SSL termination in Tomcat is much harder than in other web servers
- A reverse proxy allows you to serve up helpful error pages or serve additional content at specific locations if your application is down
Historically people have used Apache webserver for this purpose.
Apache, Jump On It
We used to use the Apache web server, but noticed its performance was lacking. Confluence pages sometimes loaded more slowly behind Apache than with no reverse proxy at all. Slow is bad.
An immediate speed boost in Apache performance can be gained by enabling mod_deflate to gzip as much content as possible. My favourite tool for checking if gzip is working is gzipwtf.com – it is browser-based so all you have to do is enter the URL you want to test.
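If you’re sticking with Apache for now, a minimal mod_deflate setup looks something like this (a sketch; adjust the MIME types to taste):

```apache
# Compress common text-based responses (assumes mod_deflate is loaded)
AddOutputFilterByType DEFLATE text/html text/plain text/css text/xml application/javascript application/json
```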
By default, Apache talks to Tomcat over Tomcat’s Coyote HTTP/1.1 connector. We got a measurable boost in speed by switching to the AJP connector. Sweet! However, this came with its own tradeoffs – with the version of Apache that was available on CentOS, we had some trouble getting HTTPS configured. And when we eventually did get it set up, it didn’t support TLS 1.2! (doh)
In the wake of several large vulnerabilities in SSL/TLS implementations including POODLE, FREAK, Heartbleed, etc., we wanted to do as much as possible to protect our data (especially people’s passwords!) during transmission across the internet. We went through pains to get Apache 2.4 available on the CentOS instances we run our Atlassian applications on, and made sure that openssl was properly updated and patched against known vulnerabilities. But then we discovered something disturbing – the version of openssl Apache uses is baked in when Apache (specifically mod_ssl) is compiled. Even though we’d updated openssl on our server, Apache wasn’t using it! Apache used a much older version, one that had many of the holes we were trying to patch and didn’t support TLS 1.2. Pushing this out on our internet-facing Production servers felt like building a car with a fancy biometric security system, but then still using those old interior locks that you can grab with a coathanger and then always leaving the windows halfway down when you park.
Needless to say, I started looking for ways to ensure the openssl we had on the box was actually the one that got used for TLS.
nginx to the rescue! First released in 2004, nginx is an open source web server that according to netcraft currently serves 13.23% of the billion-plus sites they survey. It is a small package, and uses the openssl you have installed on your system. Aww yeah.
Why else nginx rocks your socks
Ok, the big kicker for us was that we needed to properly secure our internet-facing instances. But what if everything you have is internal? Why should you still use nginx over Apache, or over no proxy at all?
- nginx is fast. Stupid fast. Using default settings (nginx has gzip enabled by default) and the Coyote HTTP/1.1 connector between nginx and Tomcat, we saw pageload speed increases of up to 2 seconds in some cases. In other words, out-of-the-box nginx serves pages significantly faster than a tuned Apache 2.4. For an average idea of our pageloads:
- The configuration files are easier to understand. Unless you’re a sysadmin with years of in-depth Apache experience, you’ll have a much better time working with the simplified nginx configuration files.
- HTTP/2 support. Just enabling this recently allowed us to shave another 0.5 seconds off our already-fast load times. On top of lower time overall, content rendering on complex pages was visibly faster – important elements load quickly and allow you to start reading before the pageload is even complete.
- The resources nginx requires to serve lots of connections are way lower than what Apache needs. Check out nginx’s flat RAM usage as the number of connections increases, vs Apache’s increasing RAM hunger:
Install and Configure nginx
The nginx developers maintain builds for most major distributions. How you add the repositories for these builds depends on which distribution you’re using. For simplicity’s sake, we’ll just outline the steps for Ubuntu and CentOS. If you pull the packages yourself directly from nginx’s own install instructions, just note that we’re using the mainline branch.
For CentOS:
- Create the repo file for nginx
echo -e "[nginx]\nname=nginx repo\nbaseurl=http://nginx.org/packages/mainline/centos/\$releasever/\$basearch/\ngpgcheck=0\nenabled=1" | sudo tee /etc/yum.repos.d/nginx.repo
- Install nginx and update openssl
sudo yum -y install nginx openssl
- Instruct CentOS to start nginx at boot
sudo chkconfig nginx on
For Ubuntu:
- Add nginx’s repo signing key
curl http://nginx.org/keys/nginx_signing.key | sudo apt-key add -
- Create the repo file for nginx
echo -e "deb http://nginx.org/packages/mainline/ubuntu/ `lsb_release -cs` nginx\ndeb-src http://nginx.org/packages/mainline/ubuntu/ `lsb_release -cs` nginx" | sudo tee /etc/apt/sources.list.d/nginx.list
- Update your repositories, install nginx, update openssl
sudo apt-get update && sudo apt-get -y install nginx openssl
Now that you’ve got yourself a fancy state-of-the-art nginx install, let’s set it up for EXTRA SPEED!
I’ve helpfully provided all the files referenced below for your convenience – grab them here from this repo I maintain just for you. These files live in /etc/nginx/ on your machine.
A brief overview of the files:
| File | Purpose |
|---|---|
| nginx.conf | Basic configuration for nginx. Just copy and paste; no changes needed |
| dhparam.pem | Contains sample Diffie-Hellman primes to get you up and running on internal deployments. Consider generating your own |
| conf.d/http.conf | Listens on port 80, redirects all traffic to HTTPS on port 443 |
| conf.d/ssl.conf | Listens on port 443; contains all the information to reverse proxy your application |
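To give you an idea of what the reverse-proxy piece looks like, here is a minimal sketch of a conf.d/ssl.conf. Everything here is illustrative: the hostname, certificate paths, and backend port (8090, the Confluence default) are assumptions you’ll need to replace; the repo linked above has the full versions.

```nginx
server {
    listen 443 ssl;
    server_name confluence.example.com;          # assumed hostname

    ssl_certificate     /etc/nginx/ssl/cert.pem; # assumed certificate paths
    ssl_certificate_key /etc/nginx/ssl/key.pem;
    ssl_dhparam         /etc/nginx/dhparam.pem;

    location / {
        proxy_pass http://127.0.0.1:8090;        # Confluence default port
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```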
Configuring your Atlassian Application
Almost done! Just a couple modifications to your Atlassian application and your users will start getting content so fast it’ll make their eyeballs hurt.
Open up <your_application_install_dir>/conf/server.xml and find the existing connector. Contrary to the notes in the server.xml file, do not uncomment the HTTPS section – you want nginx doing the SSL termination, not Tomcat. Instead, modify your existing connector:
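As an illustration (the exact attributes on your existing connector may differ), a Confluence connector modified for HTTPS termination at the proxy might look like this; the proxyName shown is an assumed example:

```xml
<!-- proxyName/proxyPort/scheme tell Tomcat it sits behind an HTTPS proxy -->
<Connector port="8090" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           proxyName="confluence.example.com" proxyPort="443" scheme="https"/>
```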
Take note of the options above:
- port – 8090 for Confluence, 8080 for JIRA (using application defaults). Make sure this matches what you set in /etc/nginx/conf.d/ssl.conf
- proxyName – Set this to match the base URL for your application, which you also set in /etc/nginx/conf.d/http.conf
Once you’ve finished modifying server.xml, restart your application.
Set your Base URL
Now set the base URL in your application. This is used for writing links, so it should start with https:// instead of http://.
- For Confluence – click on the configuration cog and go to General Configuration (or navigate to <your.application.com>/admin/editgeneralconfig.action )
- For JIRA – click on the configuration cog and go to System (or navigate to <your.application.com>/secure/admin/EditApplicationProperties!default.jspa )
Update the Base URL so it has https:// in the front, save, and you’re all set!