tech

Keyboard.io’s Atreus

There’s something about computer keyboards…even in this age of tablets and phones they remain the primary way we get significant amounts of text and code into computers. I’ve spent time with the original IBM Model M “buckling spring”, the first-generation Kinesis Advantage, ThinkPad keyboards (still one of the absolute best laptop keyboards ever), the abysmal Dell Latitude D610 (stiff as a board, requiring significant force just to press the keys), the Apple Extended Keyboard, various MacBook Pro keyboards, as well as lots of terrible rubber-dome and membrane keyboards.

I’m in the minority in that I actually like the 3rd generation butterfly keyboard on the 2019 and later MacBook Pros. I like its tactile feel and the short travel of the keys.

When I started at SaltStack I was introduced to the mechanical keyboard community by some co-workers. Some of the above might qualify as “mechanical” keyboards, but I had no idea that there was such a cult surrounding them.

I was intrigued by the ErgoDox and participated in the MassDrop for the unassembled keyboard, bought a soldering iron, and like a Jedi padawan, I constructed my own keyboard.

I used it for about 8 or 9 months, and sold it in favor of a Filco Majestouch 2 with Cherry MX Brown switches. That was a great keyboard, and I still have it.

When the Keyboard.io folks initiated their Kickstarter for the Model 01, I was completely hooked. I ended up getting two of those (one for work, one for home). The learning curve was surprisingly steep, but I grew to really appreciate the palm buttons.

Through a fortunate happenstance, another co-worker participated in the Kickstarter for Keyboard.io’s latest creation, the Atreus. He had irreconcilable differences with it and was willing to sell it to me. I’ve spent the last few days with it, tweaking the layout and getting used to it, and I think it might be the best keyboard I have ever owned.

This unit came with Kailh BOX Brown switches (here’s a comparison article on Kailh switches). They’re tactile but non-clicky, like the Cherry MX Brown switches I had on my Majestouch and ErgoDox. I like these much more, however; they seem “tighter” somehow, with no discernible wiggle in the keycaps, and the actuation force is slightly lower than on my other keyboards. My typing speed on the alpha characters actually increased over my Model 01.

I had been eyeing the Atreus for a while but was highly concerned that I would not be able to get used to the lack of a number row at the top of the keyboard. Losing real function and Escape keys to the Apple TouchBar has been a sore spot. I took a closer look at the layout and realized there were plenty of keys available to use for layer shifting.

Paradoxically, I’ve noticed that fewer keys enable more accuracy for me. I think it’s because the keyboard is so small that my fingers don’t get “lost” as easily.

It will take a while to get used to my custom layout. I include an image below for anyone who finds this article and is curious about how others have set up their Atreus.

The above is from this Google Sheet. Feel free to copy it if you want to make your own layout. Many thanks to GitHub user mattmc3, who created that sheet and posted it in this PR discussion on adding Atreus layouts that are similar to the Model 01’s.

tech

LXC on OpenSUSE Tumbleweed

Author’s Note, 2018-09-28: This article is quite outdated. Docker and LXC have both matured significantly since I wrote it.

I’ve been enjoying OpenSUSE’s Tumbleweed distribution. It has all of the benefits of a rolling release like Arch without some of the instability. Unfortunately, my standby for lots of testing, LXC, doesn’t quite work out of the box. You can retrieve images with lxc-create -n name -t download but the images won’t start.

Extensive Googling did not reveal the specific reason for this, but I finally figured it out and decided to document it here.

SUSE has excellent support for libvirt, and libvirt has rapidly improving support for LXC. So, we’ll install the libvirt suite alongside LXC. A huge advantage here is that we’re going to get a single bridge (br0) that will work for libvirt and lxc. One frustration point I’ve had with LXC on other platforms is I’d often end up with an lxcbr0 alongside other bridges for other container/virtualization options.

To install the tools you need, it’s quickest to start with Yast. Start Yast as root, select Virtualization in the left pane, then Install Hypervisor and Tools. In the next dialog, pick just KVM Tools and libvirt LXC daemon — that’s all you need.

│ ┌Choose Hypervisor(s) to install
│ │Server: Minimal system to get a running Hypervisor
│ │Tools: Configure, manage and monitor virtual machines
│ └
│ ┌Xen Hypervisor
│ │[ ] Xen server [ ] Xen tools
│ └
│ ┌KVM Hypervisor
│ │[ ] KVM server [x] KVM tools
│ └
│ ┌libvirt LXC containers
│ │[x] libvirt LXC daemon
│ └
│ [Accept] [Cancel]
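After Yast finishes, it’s worth confirming that the shared bridge described above actually exists. Assuming Yast set it up during the hypervisor install, something like this should show br0 carrying the host’s address:

# ip addr show br0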

Then make sure you have lxc and the AppArmor tooling installed with zypper:

# zypper in lxc apparmor apparmor-utils apparmor-abstractions

Next, we need to make sure that the AppArmor profile for LXC containers is loaded:

# apparmor_parser /etc/apparmor.d/lxc-containers
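To double-check that the profile actually loaded, aa-status (part of apparmor-utils, installed above) will list it; the LXC profiles typically show up with names like lxc-container-default:

# aa-status | grep lxc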

If you look in /etc/lxc/default.conf, you’ll see that there is no network type established. Things will work better if we add a more sane configuration there:

# Network configuration
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up

Now pull an image — let’s use Ubuntu 14.04:

lxc-create -B btrfs -n ubuntu -t download

Setting up the GPG keyring
Downloading the image index

<list of distros omitted>

Distribution: ubuntu
Release: trusty
Architecture: amd64

Using image from local cache
Unpacking the rootfs

You just created an Ubuntu container (release=trusty, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password or create user accounts.

Let’s try to start and attach to it.

lxc-start -n ubuntu -F
lxc-start: utils.c: open_without_symlink: 1626 No such file or directory - Error examining fuse in /usr/lib64/lxc/rootfs/sys/fs/fuse/connections
lxc-start: lsm/apparmor.c: apparmor_process_label_set: 169 If you really want to start this container, set
lxc-start: lsm/apparmor.c: apparmor_process_label_set: 170 lxc.aa_allow_incomplete = 1
lxc-start: lsm/apparmor.c: apparmor_process_label_set: 171 in your container configuration file
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 4
lxc-start: start.c: __lxc_start: 1192 failed to spawn 'ubuntu'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

Ooh. Ouch. What is aa_allow_incomplete?

man 5 lxc.container.conf

[…]

lxc.aa_allow_incomplete

Apparmor profiles are pathname based. Therefore many file restrictions require mount restrictions to be effective against a determined attacker. However, these mount restrictions are not yet implemented in the upstream kernel. Without the mount restrictions, the apparmor profiles still protect against accidental damage.

If this flag is 0 (default), then the container will not be started if the kernel lacks the apparmor mount features, so that a regression after a kernel upgrade will be detected. To start the container under partial apparmor protection, set this flag to 1.

[…]

Well, I’m OK with that, since I use my containers basically for testing. You may not be, if you need more security inside your containers. So let’s add that to /etc/lxc/default.conf and try again.
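Putting the pieces together, /etc/lxc/default.conf should now contain something like the following (your copy may carry additional distro defaults beyond these lines):

# Network configuration
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up

# Allow containers to start without the kernel's apparmor mount features
lxc.aa_allow_incomplete = 1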

# lxc-start -n ubuntu -F
…
Ubuntu 14.04.3 LTS ubuntu console

ubuntu login: _

QED.

Note that this setup attaches the machine’s primary ethernet adapter to the bridge, and adapters inside subsequent containers to the same bridge. This means the container will get an IP address via DHCP on the same network as the host. Also if you run VMware Workstation or Fusion, VMware will complain that a VM is placing a network adapter in promiscuous mode and will ask for administrator credentials.
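To confirm the container really did pick up a DHCP lease on the host’s network, lxc-info will report its address while it’s running:

# lxc-info -n ubuntu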

EDIT: regarding admin credentials when Fusion VMs try to set network adapters into promiscuous mode, I had forgotten there is a checkbox in later Fusion versions (I’m on 8.1.0). Go to the Preferences dialog in Fusion, select the Network pref sheet, and in the bottom left corner there is a checkbox to turn off the credentials requirement. Note this does introduce the possibility that a malicious VM could monitor all network traffic to and from your host machine.


tech

Mitigating GHOST with Salt

Using SaltStack to recover from CVE-2015-0235 (Qualys Security Advisory, GHOST: glibc gethostbyname buffer overflow)

Most of us sysadmin types were pounded with this announcement this morning. The GHOST vulnerability is worth patching against—most Linux distros have already released patches—but it’s useful to know if your machines are vulnerable, or if after patching, the patch was successful.

The canonical way to test for the vulnerability is with a short C program:

/* ghost.c */
/* Code taken from CVE announcement */
/* See
http://www.openwall.com/lists/oss-security/2015/01/27/9
*/
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

#define CANARY "in_the_coal_mine"

struct {
    char buffer[1024];
    char canary[sizeof(CANARY)];
} temp = { "buffer", CANARY };

int main(void) {
    struct hostent resbuf;
    struct hostent *result;
    int herrno;
    int retval;

    /*** strlen (name) = size_needed - sizeof (*host_addr) - sizeof (*h_addr_ptrs) - 1; ***/

    size_t len = sizeof(temp.buffer)
                 - 16*sizeof(unsigned char)
                 - 2*sizeof(char *) - 1;
    char name[sizeof(temp.buffer)];

    memset(name, '0', len);
    name[len] = '\0';
    retval = gethostbyname_r(name, &resbuf, temp.buffer,
                 sizeof(temp.buffer), &result, &herrno);
    if (strcmp(temp.canary, CANARY) != 0) {
        puts("vulnerable");
        exit(EXIT_FAILURE);
    }

    if (retval == ERANGE) {
        puts("not vulnerable");
        exit(EXIT_SUCCESS);
    }
    puts("test aborted: should not happen");
    exit(EXIT_FAILURE);
}

This can be saved to a file named ghost.c and compiled on most Linux machines with

gcc ghost.c -o ghost

Running it with ./ghost should produce either “not vulnerable” with an exit code of 0, or “vulnerable” with an exit code of 1.
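For example, on a patched box the session looks like this (output per the behavior described above):

$ ./ghost
not vulnerable
$ echo $?
0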

But let’s say you have 1000 machines, all with running salt-minions. How can we test for this on all of them?

We’ll assume first that they are all the same distro as your Salt master. Yes, I know that’s a degenerate case, but to start with let’s just consider the easy route.

First, save ghost.c to a directory on your master and compile it as describe above. Then put the executable in your /srv/salt directory (or wherever your file_roots points). Put this sls file in the same directory:

# /srv/salt/ghosttest.sls

/tmp/ghost:
  file.managed:
    - source: salt://ghost
    - owner: root
    - mode: '0755'

runghost:
  cmd.run:
    - name: /tmp/ghost
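(If you’re not sure where your file_roots points, it’s defined in the master config; the stock default that ships with Salt looks like this:)

# /etc/salt/master
file_roots:
  base:
    - /srv/salt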

Now you can fire off this on all your minions with

salt \* state.sls ghosttest

Because Salt treats the result of cmd.run as a failure when the executed command returns a non-zero exit status, all vulnerable minions will report a failed state run, while successfully patched minions will report success.

Note that all vulnerable services will need to be restarted after a patch (or the affected system will need to be rebooted). Salt can help with this if, in fact, you need to restart individual services rather than restart an entire box.

There are a couple of odd results you can get back from this. First, on one of my machines I got

w01:
----------
       ID: /tmp/ghost
 Function: file.managed
   Result: True
  Comment: File /tmp/ghost is in the correct state
  Started: 11:06:30.632664
 Duration: 779.398 ms
  Changes:
----------
       ID: runghost
 Function: cmd.run
     Name: /tmp/ghost
   Result: False
  Comment: Command "/tmp/ghost" run
  Started: 11:06:31.412444
 Duration: 60.247 ms
  Changes:
            ----------
           pid:
               28508
           retcode:
               127
           stderr:
               /bin/bash: /tmp/ghost: No such file or directory
           stdout:

Salt told me the file was present and in the correct state, but bash said “No such file or directory.” Bug in Salt, right? I mean, that’s happened before.

No, not today! If I logged into the machine and ran the executable by hand I got the same message. In this case it was because all my other machines are 64-bit, but this one is 32-bit, and the test executable was linked against the 64-bit glibc. So the message was correct, but confusing since the missing file is not the executable but the library.

Let’s fix this. I happen to have development tools installed on that box, so let’s build a 32-bit version there, put it back on the master, and also modify the sls file so the correct executable gets copied to 64-bit or 32-bit machines. Save the following as /srv/salt/ghostbuild.sls:

/tmp/ghost.c:
  file.managed:
    - source: salt://ghost.c

gcc ghost.c -o ghost:
  cmd.run:
    - user: root
    - cwd: /tmp

# Note this will not work unless file_recv is 'True' in the
# salt-master config
cp.push:
  module.run:
    - path: /tmp/ghost

Then, run this sls and copy the file out of the cache directory (see cp.push documentation)

# salt <32bitminion> state.sls ghostbuild
# cp /var/cache/salt/master/minions/<32bitminion>/files/tmp/ghost \
     /srv/salt/ghost32

(replace 32bitminion with the minion_id where you did the build)

Now change your ghosttest.sls to look like this:

/tmp/ghost:
  file.managed:
{% if grains['osarch'] == 'i386' %}
    - source: salt://ghost32
{% else %}
    - source: salt://ghost
{% endif %}
    - owner: root
    - mode: '0700'

runghost:
  cmd.run:
    - name: /tmp/ghost
    - cwd: /tmp
    - user: root
    - require:
      - file: /tmp/ghost
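If you want to see what each minion reports for that grain before trusting the Jinja conditional, a quick grains query does the trick:

salt \* grains.item osarch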

Now I get accurate results from all my minions, 32-bit or 64-bit.

Obviously the simpler way to do this would be to build and run ghost.c on all minions, but many folks don’t keep gcc and friends on things like webservers.

Finally, if you don’t want to reboot all your machines and just want to restart the affected services, you can do the following (props to the Hacker News discussion for this snippet)

salt \* cmd.run 'netstat -lnp | grep -e "\(tcp.*LISTEN\|udp\)" | cut -d / -f 2- | sort -u'

which will tell you which services on which machines need to be restarted. Then for each of these services and machines you can say

salt <affectedminion> service.restart <affectedservice>

And now a shameless plug for the awesome company I work for: if you want to learn more about Salt, SaltConf would be a great place to do it! March 3–5, 2015, Grand America Hotel, Salt Lake City.

tech

Installing OS X 10.9.2 with Salt

Several weeks ago I installed Salt on all my Macs. I have 7 currently, two of which cannot run Mavericks and are stuck at Lion (10.7). I know you can configure them to install updates automatically, but a couple of these are development machines and one is a server, and I just don’t like the idea of having them install updates and reboot whenever they feel like it.

Furthermore, the 10.9.2 release contains an important fix—the so-called ‘gotofail’ security vulnerability, fully documented here: https://www.imperialviolet.org/2014/02/22/applebug.html. You can check whether you are vulnerable at http://gotofail.com.

I was dreading manually going to each of these machines and running Software Update, waiting for it to figure out if there were really packages to install (why does that take so long, anyway?), and doing the click dance to get it installed.

Enter Salt.

(full disclaimer—I do work for SaltStack, the company behind open source Salt)

Using Salt turned probably an hour of updating into 3 commands executed at my leisure. Note, I run my salt-master on Ubuntu in a Fusion VM on my Mac Mini server. After downloading the combo updater from Apple’s support site, I mounted it and extracted the .pkg file from it, then copied that file to my Salt master’s /srv directory (/srv/salt/OSXUpd10.9.2.pkg).
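For the curious, the prep on the Mac side amounted to something like this; the .dmg filename, volume name, and master hostname are illustrative, so adjust them to what you actually have:

hdiutil attach ~/Downloads/OSXUpd10.9.2.dmg          # mount the combo updater
scp "/Volumes/OS X Update/OSXUpd10.9.2.pkg" \
    salt-master:/srv/salt/OSXUpd10.9.2.pkg           # copy the .pkg into file_roots
hdiutil detach "/Volumes/OS X Update"                # unmount when done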

Then:

salt-master# salt -C 'G@os:MacOS and G@osrelease:10.9.1' cp.get_file \
    salt://OSXUpd10.9.2.pkg /tmp/OSXUpd10.9.2.pkg
salt-master# salt -C 'G@os:MacOS and G@osrelease:10.9.1' cmd.run \
    'installer -pkg /tmp/OSXUpd10.9.2.pkg -target /'
salt-master# salt -C 'G@os:MacOS and G@osrelease:10.9.1' cmd.run \
    'shutdown -r now'

So what the above says is

  1. For all MacOS machines that are on 10.9.1, copy the package file to the /tmp directory on the machine (thus avoiding my Lion machines). The -C says this is a compound target, and the command will match against both the os grain (which must be “MacOS”) and the osrelease grain (which must be “10.9.1”); see the quick match check after this list.
  2. For those same machines, run Apple’s package utility in unattended mode on the package file, and install that to the boot volume.
  3. Finally, reboot the machine.
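Before pushing anything anywhere, it’s also easy to preview which minions that compound target will actually match:

salt-master# salt -C 'G@os:MacOS and G@osrelease:10.9.1' test.ping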

The response I got back from the installer command was identical for each machine and looked like this:

mini-server:
   installer: Package name is OS X Update
   installer: Installing at base path /
   installer: The install was successful.
   installer: The install requires restarting now.

So, did it work? After waiting for the machines to come back up (use salt-run manage.status on the Salt master to see when they are all online again), the following will show the OS release number for all my Macs.

salt-master# salt -C 'G@os:MacOS' grains.item osrelease

mini-server:
    osrelease:
        10.9.2
imac-01:
    osrelease:
        10.9.2
air-01:
    osrelease:
        10.9.2
mini-01:
    osrelease:
        10.7.5
macbookpro-01:
    osrelease:
        10.9.2
macbookpro-02:
    osrelease:
        10.9.2
white-macbook:
    osrelease:
        10.7.5

(Just to be clear, names sanitized)

Voila!

tech

Removing WireLurker with Salt

Claud Xiao from Palo Alto Networks has been in touch with me and I updated this script with his recommendations.

Please note I don’t plan to add Windows support; the anti-malware vendors do a great job of maintaining signatures and removing stuff like this.

The news hit the fan early yesterday morning—lots of Apple haters were giddy with excitement at the revelation of the WireLurker trojan that infects iOS devices via their host Macintosh when the devices are plugged in via USB.

Publicized by Palo Alto Networks, details on WireLurker can be found at their website. Helpfully, Palo Alto also published a Python script that can detect the infection. Removing the infection from an iOS device is a matter of backing up the device, erasing it completely by restoring it to factory defaults, and then restoring the backup. Props to Topher Kessler of MacIssues for documenting this process.

I took Palo Alto’s script and modified it so it can either be run from the command line or as a Salt execution module. From the command line:

python wireunlurk.py

will scan your Mac for signs of WireLurker. Pass -h for help (not much there) or -c for “clean”.

wireunlurk.py will move any infected files to a dynamically-created directory in /tmp that starts with wireunlurk_bk.

If you want to run this in your Salt infrastructure, put wireunlurk.py in /srv/salt/_modules (or equivalent directory if you have customized it) and run the following on your Salt master:

salt -G 'os:MacOS' saltutil.sync_modules
salt -G 'os:MacOS' wireunlurk.scan

Add clean=True if you want to clean up the infection as well.
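That is, something along the lines of:

salt -G 'os:MacOS' wireunlurk.scan clean=True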

This saved me a significant amount of time just scanning my Macs at home. We have 7 Macs on my home network, and rather than ssh’ing to each one or using a tool like csshX, as soon as I got the script running and ‘saltified’ I executed the above command and could sleep with peace of mind, knowing none of our devices was infected.

You can find my modified script here: https://github.com/saltstack/salt-contrib/tree/master/modules/wireunlurk