SSH: jump servers, MFA, Salt, and advanced configuration

Let’s take a short break from our discussion of Vagrant to talk about how we use SSH in production at Fictive Kin.

Recently, I went on a working vacation to visit my family in New Brunswick (think: east of the eastern time zone in Canada). While there, I needed to log in to a few servers to check on a few processes. I’ve done this in past years, and am frequently away from my sort-of-static home IP address. Usually, this required wrangling of AWS EC2 Security Groups to temporarily allow access from my tethered connection (whose IP changes at least a few times a day), but not this time. This time things were different.

Over the past year or so, we’ve been reworking most of our production architecture. We’ve moved everything into VPC, reworked tests, made pools work within auto scale groups, and generally made things better. And one of the better things we’ve done is set up SSH to work through a jump host.

This is certainly not a new idea. I’ve used hosts like this for many years. Even the way we’ve set it up is far from groundbreaking, but I thought it was worth sharing, since I’ve had people ask me about it, and it’s much more secure than opening SSH on each server individually.

The short version is that we’ve set up an SSH “jump” host to allow global SSH access on a non-standard port, and that host — in turn — allows us to access our AWS servers, including QA and production if access has been granted. There is no direct SSH access to any servers except the jump host(s), and they are set up to require multi-factor authentication (“MFA”) with Google’s Authenticator PAM module.

This is more secure because almost none of our servers listen on the public Internet for SSH connections, and our jump host(s) listen on a non-standard port. This helps prevent compromise from non-targeted attacks such as worms, script kiddies, and Internet background radiation (IBR). Additionally, the jump host is configured with a minimal set of services, contains no secrets, requires public keys (no passwords) to log in, has a limited set of accounts, harshly rate-limits failed connections, and requires the aforementioned MFA module, which each jump host user must set up.
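To make that hardening concrete, here’s a minimal sshd_config sketch for a host like this. It’s illustrative, not our actual config — the group name is a stand-in, and the harsh rate limiting of failed connections happens outside sshd (e.g. in the firewall):

```
# /etc/ssh/sshd_config (illustrative sketch)
Port 11122
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
# limited set of accounts ("jumpusers" is a stand-in group name)
AllowGroups jumpusers
# cap auth attempts per connection; connection-rate limiting lives in the firewall
MaxAuthTries 3
```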

In practice, this is pretty easy to set up and use, both from the server side and for our users.

From a user’s standpoint, we provision the account, including their public key, through configuration management (we use Salt). They then need to SSH directly to the jump host one time to configure google-authenticator, which asks a few questions, generates a TOTP seed/key, and gives the user a QR code (or seed codes) that they can scan into their MFA app of choice. We have users on the Google Authenticator app (both Android and iOS), as well as 1Password (which we acknowledge is not actually MFA, but it’s still better than single-factor).
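Under the hood, those rotating codes are plain TOTP (RFC 6238): an HMAC-SHA1 over a time-step counter, dynamically truncated to six digits. Here’s a minimal sketch of the math in Python — the secret below is RFC 6238’s published test key, not a real seed:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 TOTP code from a base32 seed (the kind
    google-authenticator prints alongside its QR code)."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238's published SHA-1 test secret ("12345678901234567890"), base32-encoded
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))  # → 287082
```

The server and the app each compute this independently from the shared seed; only the current 30-second window (plus a little tolerated clock skew) is accepted.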

Then, when they want to connect to a server in AWS, they connect via ssh — using their SSH private key — through the jump host, which asks for their current rotating TOTP/MFA code and, if it’s valid, proxies them through to their desired server (which also requires their private key, though that step is usually transparent to users).

To illustrate, let’s say a user (sean) wants to connect to their app’s QA server, which is in a VPC with a CIDR of 10.77/16 (that is, IP addresses in the 10.77.* range). If they have their SSH configuration file set up properly, they can issue a command that looks like it’s connecting directly:

~$ ssh
Authenticated with partial success.
Verification code: XXXXXX


This magic is possible through SSH’s ProxyCommand configuration directive. Here’s a sample configuration:

# jump host ; used for connecting directly to the jump host
Host jumphost01
  ForwardAgent yes
  # non-standard port
  Port 11122

# for hosts such as, through jumphost01
Host *
  ForwardAgent yes
  ProxyCommand nohup ssh -p 11122 jumphost01 nc -w1 %h %p

# internal IP addresses for
Host 10.77.*
  ForwardAgent yes
  ProxyCommand nohup ssh -p 11122 jumphost01 nc -w1 %h %p

SSH transparently connects (via ssh) to the non-standard port (11122) on jumphost01 and invokes nc (netcat — look it up if you’re unfamiliar, and you’re welcome! (-: ) to proxy the connection’s stream over to the actual host (%h) specified on the command line.
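As an aside: OpenSSH 5.4 and newer can do this stream forwarding natively with ssh’s -W flag, so netcat doesn’t even need to be installed on the jump host. Something like this should be equivalent:

```
# internal IP addresses, proxied via ssh -W instead of netcat
Host 10.77.*
  ForwardAgent yes
  ProxyCommand ssh -p 11122 -W %h:%p jumphost01
```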

Hope that all made sense. Please hit me up on Twitter (or email) if not.

Here are a couple bonus scenes for reading this far. Our Salt state for installing Google Authenticator’s PAM module looks like this, on Debian:

include:
    - apt  # for backports
    - sshd-mfa.openssh  # for an updated version of sshd


libpam-google-authenticator:
    pkg.installed:
        # from
        - name: libpam-google-authenticator
        - require:
            - pkg: libqrencode3

# see:
# nullok means that users without a ~/.google_authenticator will be
# allowed in without MFA; it's opt-in
# additionally, the user needs to log in to run `google-authenticator`
# before they'd have a configured MFA app/token anyway
/etc/pam.d/sshd:
    file.replace:
        - pattern: '^@include common-auth$'
        - repl: |
            auth [success=done new_authtok_reqd=done default=die] pam_google_authenticator.so nullok
            @include common-auth # modified
        - require:
            - pkg: libpam-google-authenticator
        - watch_in:
            - service: openssh6.7

/etc/ssh/sshd_config:
    file.replace:
        - pattern: 'ChallengeResponseAuthentication no'
        - repl: |
            ChallengeResponseAuthentication yes
            AuthenticationMethods publickey,keyboard-interactive:pam
        - append_if_not_found: True
        - watch_in:
            - service: openssh6.7

Finally, on this topic: I’ve been playing with assh to help manage my ssh config file, and it’s been working out pretty well. I suggest you give it a look.