Flashing the QNAP QNA-UC5G1T on a Mac

Since I managed to brick one adapter trying to do this, I disclaim responsibility if you, dear reader, try it and fail. On the other hand, I searched the Internet high and low for anyone else who had successfully flashed these adapters without a physical Windows machine. Finding nothing, it seemed wise to write up my success.


  1. Start Fusion 12 and a Windows 10 VM.
  2. Download the drivers for the adapter and the updated firmware onto the Windows VM.
  3. Install the drivers for the adapter in Windows before plugging it in.
  4. Plug in the adapter. MacOS will claim it.
  5. Unload the driver and the ECM kext bundle from a Terminal prompt:
sudo kextunload /System/Library/DriverExtensions/AppleUserECM.dext
sudo kextunload -v -c

Finally, attach the adapter to the VM with the Virtual Machine->Bluetooth and USB menu option in Fusion. Run the autorun.bat file as an Administrator and it should flash successfully.

The long story.

I am in the process of reconstructing my home lab and have several NUC-form-factor machines that don’t have any free PCIe slots, but do have USB 3.1. As I just upgraded to a Mikrotik 10 GbE switch, I would like to have faster than 1 Gb ethernet to them, so I purchased 4 QNA-UC5G1T adapters. William Lam’s excellent website had an article on them indicating they should have their firmware flashed to 3.1.6 (available here) to get the best performance. Unfortunately the flash utility only works under Windows.

My first attempt at flashing the adapter under VMware Fusion 11 was a failure. I bricked the device, and QNAP was kind enough to RMA it for me and send me a new one.

I’ve since upgraded to Fusion 12. Starting my Windows 10 VM, I plugged the adapter in and told Fusion to attach it to the VM instead of the Mac. I was greeted with a message indicating the host machine had already claimed the device.

I found the device in the Network PrefPane and removed it, but that did not help. I realized I would have to go a little deeper.

MacOS’s System Information tool showed me that the driver being used for the adapter was /System/Library/DriverExtensions/AppleUserECM.dext. I unloaded this driver extension from the Terminal

sudo kextunload /System/Library/DriverExtensions/AppleUserECM.dext

and also unloaded the ECM bundle

sudo kextunload

At that point I was able to attach the adapter to the Fusion VM. Following the instructions in the firmware zip file, I successfully flashed the adapter.

Keyboardio’s Atreus

There’s something about computer keyboards…even in this age of tablets and phones they remain the primary way we get significant amounts of text and code into computers. I’ve spent time with the original IBM Model M “buckling spring”, the first generation Kinesis Advantage, ThinkPad keyboards (still one of the absolute best laptop keyboards ever), the abysmal Dell Latitude d610 (stiff as a board, required significant force just to press the keys), the Apple Extended Keyboard, various MacBook Pro keyboards, as well as lots of terrible rubber-dome and membrane keyboards.

I’m in the minority in that I actually like the 3rd generation butterfly keyboard on the 2019 and later MacBook Pros. I like its tactile feel and the short travel of the keys.

When I started at SaltStack I was introduced to the mechanical keyboard community by some co-workers. Some of the above might qualify as “mechanical” keyboards, but I had no idea that there was such a cult surrounding them.

I was intrigued by the ErgoDox and participated in the MassDrop for the unassembled keyboard, bought a soldering iron, and like a Jedi padawan, I constructed my own keyboard.

I used it for about 8 or 9 months, and sold it in favor of a Filco Majestouch 2 with Cherry MX Brown switches. That was a great keyboard, and I still have it.

When the folks at Keyboardio launched their Kickstarter for the Model 01, I was completely hooked. I ended up getting two of those (one for work, one for home). The learning curve was surprisingly steep, but I grew to really appreciate the palm buttons.

Through a fortunate happenstance, another co-worker participated in the Kickstarter for Keyboardio’s latest creation, the Atreus. He had irreconcilable differences with it and was willing to sell it to me. I’ve spent the last few days with it, tweaking the layout and getting used to it, and I think it might be the best keyboard I have ever owned.

This unit came with Kailh BOX Brown switches (here’s a comparison article on Kailh switches). These are tactile but non-clicky, like the Cherry MX Brown switches I had on my Majestouch and ErgoDox. I like these much more, however. They seem “tighter” somehow; there’s no discernible wiggle in the keycaps. Actuation force is slightly less than my other keyboards. My typing speed on the alpha characters actually increased over my Model 01.

I had been eyeing the Atreus for a while but was highly concerned that I would not be able to get used to the lack of a number row at the top of the keyboard. Losing real function and Escape keys to the Apple TouchBar has been a sore spot. I took a closer look at the layout and realized there were plenty of keys available to use for layer shifting.
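Layer shifting is easy to picture: holding a layer key changes which logical layout a physical key reads from, so a keyboard without a number row can still type numbers. A toy model (the key names and layers here are hypothetical, not my actual Atreus layout):

```python
# Toy model of keyboard layers: the same physical key emits a different
# character depending on which layer is active while it is pressed.
LAYERS = {
    "base": {"q_key": "q", "w_key": "w"},
    "num":  {"q_key": "1", "w_key": "2"},  # active while the fn key is held
}

def keypress(key: str, fn_held: bool) -> str:
    layer = "num" if fn_held else "base"
    return LAYERS[layer][key]

print(keypress("q_key", fn_held=False))  # q
print(keypress("q_key", fn_held=True))   # 1
```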

Paradoxically I’ve noticed that fewer keys enables more accuracy for me. I think it’s because the keyboard is so small my fingers don’t get “lost” as easily.

It will take a while to get used to my custom layout. I include an image below for anyone who finds this article and is curious about how others have set up their Atreus.

The above is from this Google Sheet. Feel free to copy it if you want to make your own layout. Many thanks to GitHub user mattmc3, who created that sheet and posted it in this PR discussion on adding Atreus layouts similar to the Model 01’s.


Beware Amazon Teen Logins

If you pay for the privilege of Amazon Prime, Amazon has a new feature they have quietly rolled out that changes the way your family members interact with your Prime Membership.

The tl;dr is that if you are grandfathered into the older method of family management, which let you share your Prime benefits with three adults, do not touch your Prime settings. If you change anything you will be migrated to the new Households feature, and you will not be able to go back.

I’m not sure who at Amazon designed the new Teen Logins feature, but it is a wreck. If you read the basic information on the Teen Logins Parent’s Page it sounds like a great deal, seeming similar to the way Family Sharing works with Apple’s iOS ecosystem. Teens can pay with a parent’s card, and the parent gets a notification to approve or deny the request. They get Prime shipping benefits, and parents can restrict where a teen can ship. Teens can “Shop, stream and explore Amazon from your own login.” But here’s what the page does NOT say:

  • Teens can only access their Teen Login account from the Amazon App on a smartphone.
  • They cannot stream digital video to anything other than a smartphone.
  • If a teen already has an Amazon login, to link their existing account to a new Teen Login account, the existing login cannot have any payment methods, nor any digital content.
  • Teen Logins cannot access shared family content.

So for those of us that already had accounts for our kids and had other safeguards in place to protect them, we basically cannot move them to the new Teen Logins feature and at least one member of our family loses access to Prime Shipping.

I was on Chat with Amazon representatives for 45+ minutes this morning. I only found out about these limitations from them; I could not find anything on the Amazon site about it. And the 5 representatives I talked to were either unable or unwilling to switch my account back to the way it was before. They were, however, perfectly willing to suggest that I spend an additional $59.50 per year for a “Prime Student” membership for one of my kids.

I know that people abuse the Prime Membership features, but ironically I was trying to do the right thing when I started: I was removing my oldest from our Amazon account because he was moving out.

My recommendation to Amazon would be to scrap the Teen Login program altogether and switch to simply allowing up to 5 or 6 household members to share a Prime subscription. Maybe only allow the first two members to ship to any address, and the remaining ones to only ship to the primary address. Or, at the very least, remove the draconian restrictions on the Teen program (can’t use a web browser? WHA?!?).

Amazon Teen Logins. Just say “no”.


LXC on OpenSUSE Tumbleweed

Author’s Note, 2018-09-28: This article is quite outdated. Docker and LXC have both matured significantly since I wrote it.

I’ve been enjoying OpenSUSE’s Tumbleweed distribution. It has all of the benefits of a rolling release like Arch without some of the instability. Unfortunately, my standby for lots of testing, LXC, doesn’t quite work out of the box. You can retrieve images with lxc-create -n name -t download but the images won’t start.

Extensive Googling did not reveal the specific reason for this, but I finally figured it out and decided to document it here.

SUSE has excellent support for libvirt, and libvirt has rapidly improving support for LXC. So, we’ll install the libvirt suite alongside LXC. A huge advantage here is that we’re going to get a single bridge (br0) that will work for libvirt and lxc. One frustration point I’ve had with LXC on other platforms is I’d often end up with an lxcbr0 alongside other bridges for other container/virtualization options.

To install the tools you need, it’s quickest to start with Yast. Start Yast as root, select Virtualization in the left pane, then Install Hypervisor and Tools. In the next dialog, pick just KVM Tools and libvirt LXC daemon — that’s all you need.

│ ┌Choose Hypervisor(s) to install
│ │Server: Minimal system to get a running Hypervisor
│ │Tools: Configure, manage and monitor virtual machines
│ └
│ ┌Xen Hypervisor
│ │[ ] Xen server [ ] Xen tools
│ └
│ ┌KVM Hypervisor
│ │[ ] KVM server [x] KVM tools
│ └
│ ┌libvirt LXC containers
│ │[x] libvirt LXC daemon
│ └
│ [Accept] [Cancel]

Then make sure you have lxc and the AppArmor packages installed with zypper:

# zypper in lxc apparmor apparmor-utils apparmor-abstractions

Next, we need to make sure that the AppArmor profile for LXC containers is loaded:

# apparmor_parser /etc/apparmor.d/lxc-containers

If you look in /etc/lxc/default.conf, you’ll see that there is no network type established. Things will work better if we add a more sane configuration there:

# Network configuration
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up

Now pull an image — let’s use Ubuntu 14.04:

lxc-create -B btrfs -n ubuntu -t download

Setting up the GPG keyring
Downloading the image index

<list of distros omitted>

Distribution: ubuntu
Release: trusty
Architecture: amd64

Using image from local cache
Unpacking the rootfs

You just created an Ubuntu container (release=trusty, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password or create user accounts.

Let’s try to start and attach to it.

lxc-start -n ubuntu -F
lxc-start: utils.c: open_without_symlink: 1626 No such file or directory - Error examining fuse in /usr/lib64/lxc/rootfs/sys/fs/fuse/connections
lxc-start: lsm/apparmor.c: apparmor_process_label_set: 169 If you really want to start this container, set
lxc-start: lsm/apparmor.c: apparmor_process_label_set: 170 lxc.aa_allow_incomplete = 1
lxc-start: lsm/apparmor.c: apparmor_process_label_set: 171 in your container configuration file
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 4
lxc-start: start.c: __lxc_start: 1192 failed to spawn ‘ubuntu’
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

Ooh. Ouch. What is aa_allow_incomplete?

man 5 lxc.container.conf



Apparmor profiles are pathname based. Therefore many file restrictions require mount restrictions to be effective against a determined attacker. However, these mount restrictions are not yet implemented in the upstream kernel. Without the mount restrictions, the apparmor profiles still protect against accidental damage.

If this flag is 0 (default), then the container will not be started if the kernel lacks the apparmor mount features, so that a regression after a kernel upgrade will be detected. To start the container under partial apparmor protection, set this flag to 1.


Well, I’m OK with that, since I use my containers basically for testing. You may not be, if you need more security inside your containers. So let’s add lxc.aa_allow_incomplete = 1 to /etc/lxc/default.conf and try again.
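Putting the pieces together, my /etc/lxc/default.conf ends up looking like this (the br0 bridge name comes from the Yast/libvirt setup above):

```
# /etc/lxc/default.conf
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.aa_allow_incomplete = 1
```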

# lxc-start -n ubuntu -F
Ubuntu 14.04.3 LTS ubuntu console

ubuntu login: _


Note that this setup attaches the machine’s primary ethernet adapter to the bridge, and adapters inside subsequent containers to the same bridge. This means the container will get an IP address via DHCP on the same network as the host. Also if you run VMware Workstation or Fusion, VMware will complain that a VM is placing a network adapter in promiscuous mode and will ask for administrator credentials.

EDIT: regarding admin credentials when Fusion VMs try to set network adapters into promiscuous mode, I had forgotten there is a checkbox in later Fusion versions (I’m on 8.1.0). Go to the Preferences dialog in Fusion, select the Network pref sheet, and in the bottom left corner there is a checkbox to turn off the credentials requirement. Note this does introduce the possibility that a malicious VM could monitor all network traffic to and from your host machine.



Was This Information Helpful?

After helping a friend troubleshoot issues stemming from the difference between Microsoft’s MSI and Click-to-Run installers, I am now convinced that all knowledge base articles that ask for feedback need an additional checkbox.

(background, friend had legal Office 2013 license installed via MSI. Bought Project as a downloadable online. Project uses Click-to-Run installer. This article says that MSI and CTR versions cannot co-exist, and to fix the problem he needs to uninstall Office.)

No wonder Google Docs is taking over the world.


Mitigating GHOST with Salt

Using SaltStack to recover from CVE-2015-0235 (Qualys Security Advisory, GHOST: glibc gethostbyname buffer overflow)

Most of us sysadmin types were pounded with this announcement this morning. The GHOST vulnerability is worth patching against—most Linux distros have already released patches—but it’s useful to know if your machines are vulnerable, or if after patching, the patch was successful.

The canonical way to test for the vulnerability is with a short C program:

/* ghost.c */
/* Code taken from the CVE announcement; see the Qualys advisory for details */
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

#define CANARY "in_the_coal_mine"

struct {
    char buffer[1024];
    char canary[sizeof(CANARY)];
} temp = { "buffer", CANARY };

int main(void) {
    struct hostent resbuf;
    struct hostent *result;
    int herrno;
    int retval;

/*** strlen (name) = size_needed - sizeof (*host_addr) - sizeof (*h_addr_ptrs) - 1; ***/

    size_t len = sizeof(temp.buffer)
                 - 16*sizeof(unsigned char)
                 - 2*sizeof(char *) - 1;
    char name[sizeof(temp.buffer)];

    memset(name, '0', len);
    name[len] = '\0';
    retval = gethostbyname_r(name, &resbuf, temp.buffer,
                 sizeof(temp.buffer), &result, &herrno);
    if (strcmp(temp.canary, CANARY) != 0) {
        puts("vulnerable");
        exit(EXIT_FAILURE);
    }
    if (retval == ERANGE) {
        puts("not vulnerable");
        exit(EXIT_SUCCESS);
    }
    puts("test aborted: should not happen");
    exit(EXIT_FAILURE);
}

This can then be saved to a file “ghost.c” and compiled on most Linux machines with

gcc ghost.c -o ghost

Running it with ./ghost should produce either “not vulnerable” with an exit code of 0, or “vulnerable” with an exit code of 1.
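Before fanning the test out, a rough pre-check can be done on the reported glibc version itself. The upstream fix landed in glibc 2.18, but distros backport patches to older versions, so treat this only as a hint; the canary test above is the authoritative check. A sketch of that comparison (the version strings are illustrative):

```python
# Rough pre-check: glibc 2.2 through 2.17 shipped the vulnerable
# __nss_hostname_digits_dots(); the fix landed upstream in 2.18.
# Distro backports mean this is a hint, not a verdict.
def glibc_maybe_vulnerable(version: str) -> bool:
    major, minor = (int(x) for x in version.split(".")[:2])
    return (2, 2) <= (major, minor) < (2, 18)

print(glibc_maybe_vulnerable("2.12"))  # True (RHEL 6 era, unless patched)
print(glibc_maybe_vulnerable("2.19"))  # False
```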

But let’s say you have 1000 machines, all with running salt-minions. How can we test for this on all of them?

We’ll assume first that they are all the same distro as your Salt master. Yes, I know that’s a degenerate case, but to start with let’s just consider the easy route.

First, save ghost.c to a directory on your master and compile it as described above. Then put the executable in your /srv/salt directory (or wherever your file_roots points). Put this sls file in the same directory:

# /srv/salt/ghosttest.sls
/tmp/ghost:
  file.managed:
    - source: salt://ghost
    - owner: root
    - mode: '0700'

runghost:
  cmd.run:
    - name: /tmp/ghost
    - require:
      - file: /tmp/ghost

Now you can fire off this on all your minions with

salt \* state.sls ghosttest

Because Salt will treat the result of cmd.run as a failure if the executed command returns a non-zero exit status, all vulnerable minions will show “FAILED”. Successfully patched minions will show “SUCCESS.”

Note that all vulnerable services will need to be restarted after a patch (or the affected system will need to be rebooted). Salt can help with this if, in fact, you need to restart individual services rather than restart an entire box.

There are a couple of odd results you can get back from this. First, on one of my machines I got

       ID: /tmp/ghost
 Function: file.managed
   Result: True
  Comment: File /tmp/ghost is in the correct state
  Started: 11:06:30.632664
 Duration: 779.398 ms
       ID: runghost
     Name: /tmp/ghost
   Result: False
  Comment: Command “/tmp/ghost” run
  Started: 11:06:31.412444
 Duration: 60.247 ms
               /bin/bash: /tmp/ghost: No such file or directory

Salt told me the file was present and in the correct state, but bash said “No such file or directory.” Bug in Salt, right? I mean, that’s happened before.

No, not today! If I logged into the machine and ran the executable by hand I got the same message. In this case it was because all my other machines are 64-bit, but this one is 32-bit, and the test executable was linked against the 64-bit glibc. So the message was correct, but confusing since the missing file is not the executable but the library.

Let’s fix this. I happen to have development tools installed on that box, so let’s build a 32-bit compiled version there, put it back on the master, and also modify the sls file so the correct executable will get copied to 64-bit or 32-bit machines.

# /srv/salt/ghostbuild.sls
/tmp/ghost.c:
  file.managed:
    - source: salt://ghost.c

gcc ghost.c -o ghost:
  cmd.run:
    - user: root
    - cwd: /tmp

# Note this will not work unless file_recv is 'True' in the
# salt-master config
push_ghost:
  module.run:
    - name: cp.push
    - path: /tmp/ghost

Then, run this sls and copy the file out of the cache directory (see cp.push documentation)

# salt <32bitminion> state.sls ghostbuild
# cp /var/cache/salt/master/minions/<32bitminion>/files/tmp/ghost \
    /srv/salt/ghost32

(replace 32bitminion with the minion_id where you did the build)

Now change your ghosttest.sls to look like this:

# /srv/salt/ghosttest.sls
/tmp/ghost:
  file.managed:
{% if grains['osarch'] == 'i386' %}
    - source: salt://ghost32
{% else %}
    - source: salt://ghost
{% endif %}
    - owner: root
    - mode: '0700'

runghost:
  cmd.run:
    - name: /tmp/ghost
    - cwd: /tmp
    - user: root
    - require:
      - file: /tmp/ghost

Now I get accurate results from all my minions, 32-bit or 64-bit.

Obviously the simpler way to do this would be to build and run ghost.c on all minions, but many folks don’t keep gcc and friends on things like webservers.

Finally, if you don’t want to reboot all your machines and just want to restart affected services, you can do the following (props to the Hacker News discussion for this snippet):

salt \* cmd.run 'netstat -lnp | grep -e "\(tcp.*LISTEN\|udp\)" | cut -d / -f 2- | sort -u'

which will tell you which services on which machines need to be restarted. Then for each of these services and machines you can say

salt <affectedminion> service.restart <affectedservice>
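To see what that netstat pipeline is actually extracting, here is a small sketch that does the same thing to canned output: keep listening TCP sockets and all UDP sockets, then pull the program name out of the PID/Program column. The sample lines are made up for illustration.

```python
import re

# Two fake netstat -lnp lines: a listening TCP socket and a UDP socket.
sample = """\
tcp        0      0 0.0.0.0:22      0.0.0.0:*   LISTEN   1234/sshd
udp        0      0 0.0.0.0:123     0.0.0.0:*            5678/ntpd
"""

def listening_programs(netstat_output: str) -> set:
    """Mimic: grep -e "\\(tcp.*LISTEN\\|udp\\)" | cut -d / -f 2- | sort -u"""
    progs = set()
    for line in netstat_output.splitlines():
        if re.search(r"(tcp.*LISTEN|udp)", line):
            pid_prog = line.split()[-1]           # e.g. "1234/sshd"
            progs.add(pid_prog.split("/", 1)[1])  # keep the program name
    return progs

print(sorted(listening_programs(sample)))  # ['ntpd', 'sshd']
```

Each program name that comes back is a candidate for service.restart on that minion.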

Finally, shameless plug for the awesome company I work for—if you want to learn more about Salt, SaltConf would be a great place to do it! March 3–5, 2015, Grand America Hotel, Salt Lake City.


Installing OS X 10.9.2 with Salt

Several weeks ago I installed Salt on all my Macs. I have 7 currently, two of which cannot run Mavericks and are stuck at Lion (10.7). I know you can configure them to install updates automatically, but a couple of these are development machines and one is a server, and I just don’t like the idea of having them install updates and reboot whenever they feel like it.

Furthermore, the 10.9.2 release contains an important fix for the so-called ‘gotofail’ security vulnerability. You can check whether your browser is vulnerable with an online test.

I was dreading manually going to each of these machines and running Software Update, waiting for it to figure out if there were really packages to install (why does that take so long, anyway?), and doing the click dance to get it installed.

Enter Salt.

(full disclaimer—I do work for SaltStack, the company behind open source Salt)

Using Salt turned probably an hour of updating into 3 commands executed at my leisure. Note, I run my salt-master on Ubuntu in a Fusion VM on my Mac Mini server. After downloading the combo updater from Apple’s support site, I mounted it and extracted the .pkg file from it, then copied that file to my Salt master’s /srv directory (/srv/salt/OSXUpd10.9.2.pkg).


salt-master# salt -C 'G@os:MacOS and G@osrelease:10.9.1' cp.get_file \
    salt://OSXUpd10.9.2.pkg /tmp/OSXUpd10.9.2.pkg
salt-master# salt -C 'G@os:MacOS and G@osrelease:10.9.1' cmd.run \
    'installer -pkg /tmp/OSXUpd10.9.2.pkg -target /'
salt-master# salt -C 'G@os:MacOS and G@osrelease:10.9.1' cmd.run \
    'shutdown -r now'

So what the above says is

  1. For all MacOS machines that are on 10.9.1, copy the package file to the /tmp directory on the machine (thus avoiding my Lion machines). The -C says this is a compound target, and the command will match against both the os grain (to be “MacOS”) and the osrelease grain (to be “10.9.1”).
  2. For those same machines, run Apple’s package utility in unattended mode on the package file, and install that to the boot volume.
  3. Finally, reboot the machine.
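The compound-target narrowing in step 1 can be pictured with a toy model. The minion names and grain values here are hypothetical; only machines matching both grains get the update.

```python
# Toy model of Salt's -C 'G@os:MacOS and G@osrelease:10.9.1' targeting:
# keep only minions whose grains match both conditions.
minions = {
    "mini":   {"os": "MacOS", "osrelease": "10.9.1"},
    "laptop": {"os": "MacOS", "osrelease": "10.9.2"},   # already updated
    "lion":   {"os": "MacOS", "osrelease": "10.7.5"},   # can't run Mavericks
}

targets = [
    name for name, grains in minions.items()
    if grains["os"] == "MacOS" and grains["osrelease"] == "10.9.1"
]
print(targets)  # ['mini']
```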

The response I got back was identical for each machine, and looks like

   installer: Package name is OS X Update
   installer: Installing at base path /
   installer: The install was successful.
   installer: The install requires restarting now.

So, did it work? After waiting for the machines to come back up (use salt-run manage.status on the Salt master to see when they are all online again), the following will show the OS release number for all my Macs.

salt-master# salt -C 'G@os:MacOS' grains.item osrelease


(Just to be clear, names sanitized)



Removing WireLurker with Salt

Claud Xiao from Palo Alto Networks has been in touch with me and I updated this script with his recommendations.

Please note I don’t plan to add Windows support; the anti-malware vendors do a great job maintaining signatures and removing stuff like this.

The news hit the fan early yesterday morning—lots of Apple haters were giddy with excitement at the revelation of the WireLurker trojan that infects iOS devices via their host Macintosh when the devices are plugged in via USB.

Publicized by Palo Alto Networks, details on WireLurker can be found at their website. Helpfully, Palo Alto also published a Python script that can detect the infection. Removing the infection from an iOS device is a matter of backing up the device, erasing it completely by restoring it to factory defaults, and then restoring the backup. Props to Topher Kessler of MacIssues for documenting this process.

I took Palo Alto’s script and modified it so it can either be run from the command line or as a Salt execution module. From the command line,

python wireunlurk.py

will scan your Mac for signs of WireLurker. Use -h for help (not much there) or -c for “clean”, which will move any infected files to a dynamically-created directory in /tmp whose name starts with wireunlurk_bk.

If you want to run this in your Salt infrastructure, put wireunlurk.py in /srv/salt/_modules (or the equivalent directory if you have customized it) and run the following on your Salt master:

salt -G 'os:MacOS' saltutil.sync_modules
salt -G 'os:MacOS' wireunlurk.scan

Add clean=True if you want to clean up the infection as well.
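The clean step quarantines flagged files rather than deleting them. A simplified sketch of that logic (the real script does more; the file here is a scratch file, not an actual WireLurker artifact):

```python
# Sketch: move flagged files into a fresh /tmp/wireunlurk_bk* directory
# so they can be inspected later instead of being destroyed outright.
import shutil
import tempfile
from pathlib import Path

def quarantine(paths):
    bkdir = Path(tempfile.mkdtemp(prefix="wireunlurk_bk", dir="/tmp"))
    for p in paths:
        p = Path(p)
        if p.exists():
            shutil.move(str(p), str(bkdir / p.name))
    return bkdir

# Demo with a scratch file standing in for an infected one.
demo = Path(tempfile.mkdtemp()) / "infected.dylib"
demo.write_text("x")
bk = quarantine([demo])
print((bk / "infected.dylib").exists())  # True
```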

This saved me a significant amount of time scanning my Macs just at home—we have 7 Macs on my home network and rather than ssh’ing to each one, or using a tool like csshX, as soon as I got the script running and ‘saltified’ I executed the above command and could sleep with peace of mind knowing none of our devices were infected.

You can find my modified script here: