Tag Archives: Linux

Configuring Multiple Networked OR USB APC UPS devices

As any proper IT-nerd would agree, UPS devices are critical pieces of equipment not only in the data center, but also at home. However, most home users are not in a position to acquire a massive 10+ kilovolt-amp UPS capable of protecting every circuit that powers our various “personal devices”; rather, most home/small-office UPS installations are small desktop units, usually under 3kVA. In this scenario, these smaller units are typically allocated to individual devices/areas, and each is usually responsible for signaling the status of the incoming utility power to only one device.

What about using multiple UPS devices for different components of the same “workspace”? Or home networks with access points and switches in more than one location (therefore each having its own battery backup)? How would one monitor multiple distributed battery backup units (presuming each UPS unit has only USB connectivity)?


Enter apcupsd: “A daemon for controlling APC UPSes.” Unfortunately, the plurality of this utility’s tagline indicates a wide range of supported devices rather than multiple concurrently connected devices. To date, I’ve found one article describing how to configure apcupsd to support multiple USB-attached UPS devices, and it’s not really pretty. The gist of the process is as follows:

  1. Configure udev rules to ensure a consistent mapping (by UPS serial number) to a named mount point
  2. Create multiple apcupsd configuration files for each connected UPS
  3. Create new “action” and “control” scripts for each UPS
  4. Re-configure the apcupsd init.d/systemd scripts to launch multiple instances of the daemon (one for each UPS)

I’m generally not a fan of creating large “custom” configuration files in obscure locations with great variance from the distributed package, so this process seemed a little “hacky” to me, especially since the end result of all of these configuration files was simply to have an isolated process monitoring each UPS.

Dockerizing APCUPSD

At this point, I decided to take a different approach to isolating each apcupsd process: an approach with far greater discoverability, version-control potential, and scalability. Docker.

I decided to use the first step outlined in the apcupsd guide on the Debian Wiki (creating udev rules to ensure physical devices are given a persistent path on boot/attach). UPS devices are generally mounted at /dev/usb/hiddev*, so we should confirm that we have a few present:

# ls /dev/usb
hiddev0  hiddev1

# lsusb
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 002: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 002: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

Great! We’ve got things that look like UPS devices on /dev/usb/hiddev0 and /dev/usb/hiddev1. Now to get the serial numbers:

# udevadm info --attribute-walk --name=/dev/usb/hiddev0 | egrep 'manufacturer|product|serial'
    ATTRS{manufacturer}=="American Power Conversion"
    ATTRS{product}=="Back-UPS BX1500G FW:866.L5 .D USB FW:L5 "
    ATTRS{serial}=="8975309"
    ATTRS{manufacturer}=="Linux 4.4.0-170-generic ohci_hcd"
    ATTRS{product}=="OHCI PCI host controller"
# udevadm info --attribute-walk --name=/dev/usb/hiddev1 | egrep 'manufacturer|product|serial'
    ATTRS{manufacturer}=="American Power Conversion"
    ATTRS{product}=="Back-UPS NS 1100M2 FW:953.e3 .D USB FW:e3     "
    ATTRS{serial}=="8675310"
    ATTRS{manufacturer}=="Linux 4.4.0-170-generic ohci_hcd"
    ATTRS{product}=="OHCI PCI host controller"

With the now known serial numbers, we create udev rules to persist these devices to known map points:

## FILE AT /lib/udev/rules.d/ups.rules

# Screen UPS
KERNEL=="hiddev*", ATTRS{manufacturer}=="American Power Conversion", ATTRS{serial}=="8975309", OWNER="root", SYMLINK+="usb/ups-screen"

# ComputeAndNetwork UPS
KERNEL=="hiddev*", ATTRS{manufacturer}=="American Power Conversion", ATTRS{serial}=="8675310", OWNER="root", SYMLINK+="usb/ups-compute-and-network"

And now to re-run the udev rules:

udevadm trigger --verbose --sysname-match=hiddev*

Now, we should have some “nicely named” UPS USB devices:

# ls -la /dev/usb
total 0
drwxr-xr-x  2 root root    120 Dec 18 19:55 .
drwxr-xr-x 22 root root   4280 Dec 18 19:55 ..
crwxrwxrwx  1 root root 180, 0 Dec 18 19:55 hiddev0
crwxrwxrwx  1 root root 180, 1 Dec 18 19:55 hiddev1
lrwxrwxrwx  1 root root      7 Dec 18 19:55 ups-compute-and-network -> hiddev1
lrwxrwxrwx  1 root root      7 Dec 18 19:55 ups-screen -> hiddev0

Excellent! Now, anytime these devices are plugged in or unplugged, we shouldn’t have to guess which is hiddev0 and which is hiddev1: udev will automagically provide named mount points for these USB devices, which will be critical in the next steps.

Next, I created a docker-compose file with the three “services” I decided I’d like for this setup:

  • APCUPSD for the “screens” UPS
  • APCUPSD for the “Compute and Network” UPS
  • Apache/Multimon to provide an HTTP based interface

This docker-compose file also contained pointers to specific Dockerfiles to actually build an image for each service (hint: the two apcupsd services use the same container with different configurations).

The apcupsd container is nothing more than the latest Alpine Linux image, apcupsd from the apk repository, and a very lightweight apcupsd configuration file (configured to watch only the UPS at /dev/ups – more on this later).
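For reference, that container-side configuration can be as small as a handful of directives. This is a minimal sketch, not the repo's exact file; the NIS settings are an assumption based on multimon needing to reach each apcupsd instance over the network:

```
## apcupsd.conf (minimal sketch) – single USB UPS mapped into the container
UPSCABLE usb
UPSTYPE usb
DEVICE /dev/ups
## Expose the Network Information Server so the multimon container can poll us
NETSERVER on
NISIP 0.0.0.0
NISPORT 3551
```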

The multimon container uses the latest Apache/alpine image, and adds apcupsd-webif from the apk repository along with a few configuration files for multimon. Additionally, I wrote an entrypoint.sh script to parse environment variables and generate a configuration file for multimon so that the UPS(s) displayed on the web interface can be set from the docker-compose file.
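The environment-variable parsing in entrypoint.sh boils down to something like the following sketch. The variable names (UPS_LIST, HOSTS_CONF), output path, and MONITOR line format here are illustrative assumptions – see the repo for the real script:

```shell
#!/bin/sh
# Sketch: turn a comma-separated UPS_LIST env var into a multimon hosts.conf,
# one MONITOR line per UPS. Each apcupsd instance is addressed by a
# per-UPS hostname (e.g. its docker-compose service name).
: "${UPS_LIST:=screen,compute-and-network}"
: "${HOSTS_CONF:=/tmp/hosts.conf}"

: > "$HOSTS_CONF"                 # truncate/create the output file
IFS=','                           # split UPS_LIST on commas
for ups in $UPS_LIST; do
  echo "MONITOR apcupsd-$ups \"$ups UPS\"" >> "$HOSTS_CONF"
done
cat "$HOSTS_CONF"
```

Because the list comes from the environment, the set of UPSes shown on the web interface is controlled entirely from the docker-compose file.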

Having now covered the build process, let’s put together the docker-compose services:
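The shape of the compose file is roughly the following. Service names, build paths, the published port, and the UPS_LIST environment variable are illustrative assumptions – the authoritative file lives in the linked repo:

```yaml
services:
  apcupsd-screen:
    build: ./apcupsd
    devices:
      - /dev/usb/ups-screen:/dev/ups          # udev symlink from earlier
    restart: unless-stopped
  apcupsd-compute-and-network:
    build: ./apcupsd
    devices:
      - /dev/usb/ups-compute-and-network:/dev/ups
    restart: unless-stopped
  multimon:
    build: ./multimon
    ports:
      - "8080:80"
    environment:
      - UPS_LIST=screen,compute-and-network
```

Each apcupsd service sees its UPS at the same in-container path (/dev/ups), which is what lets both services share one image and one configuration file.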

Full Docker setup here: https://github.com/crossan007/APCUPSD-Multimon-Docker

Now, instead of attempting to create custom init scripts, multiplex processes within systemd, and override the packaged mechanisms for apcupsd‘s configuration discovery, I instead have a cleanly defined interface for isolating instances of apcupsd to provide a status page for my two APC UPS devices.

Thanks for reading, and hopefully this helps you in some way!

VAInfo: Verify Hardware Accelerated Video Support

On Ubuntu (and possibly other) Linux distros, run vainfo to see which Intel QuickSync profiles are supported.

For example, these profiles are supported on an Intel Haswell chip:

libva info: VA-API version 0.39.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_0_39
libva info: va_openDriver() returns 0
vainfo: VA-API version: 0.39 (libva 1.7.0)
vainfo: Driver version: Intel i965 driver for Intel(R) Haswell Desktop - 1.7.0
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Simple            : VAEntrypointEncSlice
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileH264MultiviewHigh      : VAEntrypointVLD
      VAProfileH264MultiviewHigh      : VAEntrypointEncSlice
      VAProfileH264StereoHigh         : VAEntrypointVLD
      VAProfileH264StereoHigh         : VAEntrypointEncSlice
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD
      VAProfileNone                   : VAEntrypointVideoProc
      VAProfileJPEGBaseline           : VAEntrypointVLD


Display HTTPS X509 Cert from Linux CLI

Recently, while attempting a git pull, I was confronted with the following error:

Peer's certificate issuer has been marked as not trusted by the user.

The operation worked on a browser on my dev machine, and closer inspection revealed that the cert used to serve the GitLab service was valid, but for some reason the remote CentOS Linux server couldn’t pull from the remote.

I found this post on StackOverflow detailing how to retrieve the X509 cert used to secure an HTTPS connection:

echo | openssl s_client -showcerts -servername MyGitServer.org -connect MyGitServer.org:443 2>/dev/null | openssl x509 -inform pem -noout -text

This was my ticket to discover why Git on my CentOS server didn’t like the certificate: the CentOS host was resolving the wrong DNS host name, and therefore using an invalid cert for the service.
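If you want to see exactly what the second half of that pipeline does, you can exercise it on a locally generated certificate, with no network involved (MyGitServer.org is just the hypothetical hostname from above):

```shell
# Create a throwaway self-signed cert, then print its subject and issuer the
# same way you would inspect a live server's cert with `openssl x509`.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=MyGitServer.org" \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 2>/dev/null
openssl x509 -in /tmp/demo.crt -noout -subject -issuer
```

Comparing the subject/issuer printed for the live connection against what your browser shows is the quickest way to spot a wrong-host or wrong-CA situation like the one above.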

Backup Google Authenticator Database

Two-factor authentication is great – I wish everything would use it. My personal 2FA (specifically TOTP) mobile app is Google Authenticator. It allows you to scan a barcode or manually enter a 2FA initialization token, and gives you a nice display of all of your stored 2FA tokens, with a great countdown of each token’s expiration. However, it does have one critical flaw: you can’t export your accounts.

Let me re-state that:  Your 2FA tokens are locked away in your mobile device.  Without the device, you’re locked out of your accounts (Hopefully you created backup codes).  If your device becomes inoperable, good luck!

However, if you have root access to your device, you can grab the Google Authenticator database and stow it away for safekeeping by grabbing it from the following location on your phone:

/data/data/com.google.android.apps.authenticator2/databases/databases
If you have ADB enabled, you can just run the following command:

 adb pull /data/data/com.google.android.apps.authenticator2 

Keep this information very secure, as it can be used to generate 2FA codes for all of your accounts!
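Once pulled, the actual SQLite file sits under databases/ inside that directory, and you can inspect it with the sqlite3 CLI. The table and column names below are what older Authenticator builds are commonly reported to use – treat them as assumptions and check with .schema first:

```shell
# List the account names and TOTP secrets stored in the pulled database.
sqlite3 com.google.android.apps.authenticator2/databases/databases \
  'SELECT email, secret FROM accounts;'
```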

Troubleshooting OwnCloud index.php

Sometimes OwnCloud includes “index.php” in shared links. It’s annoying and ugly. Here are some things to check:

  1. Is mod_rewrite enabled in the Apache config?
    <Directory /var/www/html/owncloud/>
     Options Indexes FollowSymLinks MultiViews
     AllowOverride All
     Order allow,deny
     Allow from all
     <IfModule mod_dav.c>
      Dav off
     </IfModule>
     SetEnv HOME /var/www/html/owncloud
     SetEnv HTTP_HOME /var/www/html/owncloud
    </Directory>
  2. Is the .htaccess correct? The ###DO NOT EDIT### section must contain this line (generally the last line in the IfModule block for mod_rewrite):
    RewriteRule .* index.php [PT,E=PATH_INFO:$1]
  3. .htaccess must also contain this block for the web app to generate URLs without “index.php”:
    <IfModule mod_rewrite.c>
      RewriteBase /
      <IfModule mod_env.c>
        SetEnv front_controller_active true
        <IfModule mod_dir.c>
          DirectorySlash off
        </IfModule>
      </IfModule>
    </IfModule>

Those are my findings for making sure OwnCloud URLs stay pretty.

Unifi Controller on 16.04

Steps to install the UniFi controller on Ubuntu 16.04. Note that the package depends on JRE 7, so we must also add the openjdk-r PPA to apt.

echo "deb http://www.ubnt.com/downloads/unifi/debian stable ubiquiti" > /etc/apt/sources.list.d/ubnt.list
apt-key adv --keyserver keyserver.ubuntu.com --recv C0A52C50

sudo add-apt-repository ppa:openjdk-r/ppa
sudo apt-get update

sudo apt-get install unifi

Expand Ubuntu LVM

Expand an existing Ubuntu LVM without creating additional partitions, or adding to LVM VGs:

  1. Expand the physical device (VMware, Hyper-V, dd to a new physical device, etc.)
  2. Use an offline GParted CD to resize the extended partition on which the LVM PV lives
  3. In the live OS, use parted’s “resizepart” to extend the logical partition inside of the previously resized extended partition:
    (parted) resizepart
    Partition number? 5
    End? [268GB]? 1099GB
  4. reboot
  5. use LVM to resize the PV:
    pvresize /dev/sda5
  6. extend the LV into the new PV space, then resize the filesystem in the LV (the VG/LV names below assume the stock Ubuntu layout – substitute your own):
    lvextend -l +100%FREE /dev/ubuntu-vg/root
    resize2fs /dev/ubuntu-vg/root


Managing ZFS Snapshots

ZFS is a great file system, providing great flexibility and simple administration. It was originally developed for Sun operating systems, but has been ported to Linux, and support is now baked into Ubuntu Server 15.10.

ZFS natively supports snapshots, but there is a tool for automagically creating, and aging snapshots: https://github.com/zfsnap/zfsnap.  (Man page: http://www.zfsnap.org/zfsnap_manpage.html)

Installation is fairly straightforward:

  • clone the Git repo
  • copy the files to /usr/local/src
  • create a link to the binaries path:
    ln -s /usr/local/src/zfsnap/sbin/zfsnap.sh /usr/local/sbin/zfsnap

After that, set up a crontab for automatic snapshot creation:

crontab -e
26 * * * * /usr/local/sbin/zfsnap snapshot -a 5d -r tank

And finally, set up crontab for automatic snapshot deletion:

 0  1 * * * /usr/local/sbin/zfsnap destroy -r tank 

April Fools Day 2015 – What’s in Pandora’s Box

The Idea

So, I’ve seen the Upside-Down-Ternet many times, and I began thinking: how can I leverage this idea on one of my wife’s favorite websites – Pandora?

She listens to Pandora for a good part of the day from our home internet connection… Perfect! I can set up a transparent http proxy, and manipulate requests for Pandora as they come through.

Now, what should I play? This, of course. And many more things. And maybe “What does the spleen do?”

The Implementation

Determining an “attack vector”

I fired up Chrome’s developer tools while listening to a Pandora stream, and was quite pleasantly surprised: the audio is transferred over HTTP (correct – no encryption), in MP3 format. (I also discovered a little too late that the Pandora ONE player will play audio/mp3 streams, while the free Pandora player will only play audio/mp4 streams – this is important later on!) How easy this will be! All I’ll need to do is watch for the specially crafted URLs requesting resources from http://audio-*.pandora.com/ (and *.p-cdn.com) and respond accordingly – in this case, with an MP3 pre-staged on my intercepting server.

Base Environment

My “host” in this scenario is a VM running on Hyper-V on my Windows 8.1 desktop. The VM is running Ubuntu 14 as a guest OS, and has 2 cores, 256 MB of RAM, and one network adapter.

Phase 1: Configuring Squid3 & iptables

Squid3 is a proxy server that supports something called “transparent mode.”  In conjunction with iptables, squid can be a very effective content filter, caching proxy, or the perfect tool to carry out an April fools prank.

In this scenario, we’ll be setting up our Linux machine to “masquerade” as the machines that pass traffic to (through) it, in much the same manner as your existing home router works: you have one public IP address, and all of the requests from computers within your network (using private IP addresses) appear to come from that one public IP. This is called NAT.

Since this Linux machine will facilitate the transfer of all traffic from the “victim” machines to the internet, it’s in the perfect location to identify (and manipulate) Pandora requests.

OK, OK, enough theory, let’s get some code


  1. Enable ip_forwarding (this is temporary, and will go away after a reboot of the “host” machine)
    echo 1 > /proc/sys/net/ipv4/ip_forward
  2. Configure iptables to pass traffic (never configure it this way if you’re actually building an edge device; since all of my devices – both “host” and “victim” machines – are on the same physical network, I took some liberties with security)
    iptables -F
    iptables -t nat -F
    iptables -P INPUT ACCEPT
    iptables -P OUTPUT ACCEPT
    iptables -P FORWARD ACCEPT
  3. Next, we need to tell iptables to “masquerade” (that is, to NAT) the traffic that comes from the local subnet and is destined for the internet. The subnet below is a stand-in; substitute your LAN’s.
    iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -j MASQUERADE
  4. Great, but what about our prank? Let’s explicitly redirect traffic destined for the IP segment owned by Pandora (you can find this using whois). The addresses below are stand-ins for Pandora’s block and for this proxy box’s IP and Squid port.
    iptables -t nat -A PREROUTING -d 208.85.40.0/22 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.50:3128


  1. First, install squid using your favorite packaging tool
    apt-get install squid3
  2. Configure Squid.  I’ve taken the liberty of trimming down the config file as thin as possible for this scenario.  5 lines!
    redirect_program /home/administrator/pandora.pl
    http_access allow all
    http_port 3128 transparent
    strip_query_terms off
    coredump_dir /var/spool/squid3
  3. Next, we need to write the redirect_program. Having not actually read the Squid3 documentation, and surmising based on operation: this is loaded when the Squid3 service starts, and continually runs in the background. Squid3 passes URLs from clients into the script via a pipe, and the script passes a (possibly rewritten) URL back to Squid3. In this circumstance, we use some regex to identify all requests for a Pandora song (http://audio.*?pandora\.com and http://.*\.p-cdn\.com)
    use strict;
    $| = 1;
    while (<>) {
        my @elems = split;
        my $url   = $elems[0];
        # The payload URL below is a stand-in (the original value was lost);
        # point it at the Apache host serving test.mp4, set up in the next section.
        if ($url =~ m#^http://audio.*?pandora\.com#i) {
            $url = "http://192.168.1.50/test.mp4";
        }
        if ($url =~ m#^http://.*\.p-cdn\.com#i) {
            $url = "http://192.168.1.50/test.mp4";
        }
        print "$url\n";
    }
  4. Restart Squid3
    service squid3 restart


Since we’re actually replacing the song in Pandora with a “payload” track, we need some way of hosting this audio. Additionally, we need the host to respond with the “payload” track for any and all incoming requests. Cue: Apache mod_rewrite.

  1. Edit the  /etc/apache2/sites-enabled/000-default.conf file, and add these three lines.  This causes any inbound HTTP requests to return the test.mp4 file (with the correct MIME association, so as not to break “free” Pandora)
    RewriteEngine on
    RewriteRule .* /test.mp4
    AddType audio/mp4 .mp4
  2. Place the test.mp4 file at /var/www/html

Phase 1.5: Test Proof of Concept

  1. Set a host on the LAN to use the aforementioned box as a default gateway.
  2. Launch Pandora
  3. Validate that only the payload song will play.

Phase 2: Deploy to LAN

I have a standard FiOS router as my default gateway, and the device does not give total control over the DHCP server settings. Of particular interest here is the option routers parameter, which allows the DHCP server to dictate to the clients what IP address they should use as a default gateway. Obviously, if this prank is going to affect more than my sandbox, I need the other devices on the LAN to pass all of their traffic through the “host.”

Configure isc-dhcp-server

  1. Install isc-dhcp-server using your favorite package manager
    apt-get install isc-dhcp-server
  2. Modify the lines below in the /etc/dhcp/dhcpd.conf file. Define some hosts if you’d like to exclude them from the prank. All hosts with a host block will be issued an IP in the deny unknown-clients pool; this is not, however, what determines their gateway, but rather the option routers clause in the host block. One very important thing here is to set the lease time rather low: I don’t want this prank to cause some random device to get an IP and hold onto it for the default of 8 days. Bumblebee happens to be my desktop. (The IP addresses were lost from the original config; the 192.168.1.x values below are stand-ins – .1 for the real FiOS gateway, .50 for the proxy “host.”)
    option domain-name "ccrossan.com";
    option domain-name-servers 192.168.1.1;
    default-lease-time 100;
    max-lease-time 100;

    host bumblebee {
      hardware ethernet 00:24:8C:93:7C:EE;
      option routers 192.168.1.1;
    }

    subnet 192.168.1.0 netmask 255.255.255.0 {
      option routers 192.168.1.50;
      pool {
        range 192.168.1.100 192.168.1.149;
        deny unknown-clients;
        option routers 192.168.1.1;
      }
      pool {
        range 192.168.1.150 192.168.1.199;
        allow unknown-clients;
        option routers 192.168.1.50;
      }
    }
  3. Re-start the DHCP server
  4. Disable DHCP on the FiOS router.
  5. Watch hilarity ensue as users launch Pandora in their browsers only to hear your specially selected track!

Thanks for reading.  If you stuck with it this far, you’re a trooper.

Please leave any comments or suggestions you may have below!