
Configuring Multiple Networked or USB APC UPS Devices

As any proper IT nerd would agree, UPS devices are critical pieces of equipment not only in the data center, but also at home. However, most home users are not in a position to acquire a massive 10+ kVA UPS capable of protecting every circuit that powers our various “personal devices”; rather, most home/small office UPS installations are small desktop units, usually under 3 kVA. In this scenario, these smaller units are allocated to individual devices or areas, and each is typically responsible for signaling the status of the incoming utility power to only one device.

What about using multiple UPS devices for different components of the same “workspace”? Or home networks with access points and switches in more than one location (therefore each having its own battery backup)? How would one monitor multiple distributed battery backup units (presuming each UPS unit has only USB connectivity)?

APCUPSD

Enter apcupsd: “A daemon for controlling APC UPSes.” Unfortunately, the plural in this utility’s tagline refers to the wide range of supported devices rather than to multiple concurrently connected devices. To date, I’ve found exactly one article describing how to configure apcupsd to support multiple USB-attached UPS devices, and it’s not pretty. The gist of the process is as follows:

  1. Configure udev rules to ensure a consistent mapping (by UPS serial number) to a named mount point
  2. Create multiple apcupsd configuration files for each connected UPS
  3. Create new “action” and “control” scripts for each UPS
  4. Re-configure the apcupsd init.d/systemd scripts to launch multiple instances of the daemon (one for each UPS)
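
For illustration, the end state of that manual process is roughly one daemon invocation per configuration file, along the lines of this sketch (file names hypothetical; -f is apcupsd’s flag for specifying an alternate config file):

# One apcupsd daemon per UPS, each with its own config (paths hypothetical)
apcupsd -f /etc/apcupsd/apcupsd-screen.conf
apcupsd -f /etc/apcupsd/apcupsd-network.conf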

I’m generally not a fan of creating large “custom” configuration files in obscure locations that diverge heavily from the distributed package, so this process seemed a little “hacky” to me, especially since the end result of all of these configuration files was simply an isolated process for each monitored UPS.

Dockerizing APCUPSD

At this point, I decided to take a different approach to isolating each apcupsd process, one with far greater discoverability, version-control potential, and scalability: Docker.

I decided to use the first step outlined in the apcupsd guide on the Debian Wiki (creating udev rules to ensure physical devices are given a persistent path on boot/attach). USB HID UPS devices generally show up at /dev/usb/hiddev*, so we should confirm that we have a few present:

# ls /dev/usb
hiddev0  hiddev1

# lsusb
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 002: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 002: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

Great! We’ve got things that look like UPS devices at /dev/usb/hiddev0 and /dev/usb/hiddev1. Now to get the serial numbers:

# udevadm info --attribute-walk --name=/dev/usb/hiddev0 | egrep 'manufacturer|product|serial'
    ATTRS{manufacturer}=="American Power Conversion"
    ATTRS{product}=="Back-UPS BX1500G FW:866.L5 .D USB FW:L5 "
    ATTRS{serial}=="8975309"
    ATTRS{manufacturer}=="Linux 4.4.0-170-generic ohci_hcd"
    ATTRS{product}=="OHCI PCI host controller"
    ATTRS{serial}=="0000:00:02.0"
# udevadm info --attribute-walk --name=/dev/usb/hiddev1 | egrep 'manufacturer|product|serial'
    ATTRS{manufacturer}=="American Power Conversion"
    ATTRS{product}=="Back-UPS NS 1100M2 FW:953.e3 .D USB FW:e3     "
    ATTRS{serial}=="8675310"
    ATTRS{manufacturer}=="Linux 4.4.0-170-generic ohci_hcd"
    ATTRS{product}=="OHCI PCI host controller"
    ATTRS{serial}=="0000:00:04.0"

With the serial numbers now in hand, we create udev rules to map these devices to persistent, known paths:

## FILE AT /lib/udev/rules.d/ups.rules

# SCREEN UPS
KERNEL=="hiddev*", ATTRS{manufacturer}=="American Power Conversion", ATTRS{serial}=="8975309", OWNER="root", SYMLINK+="usb/ups-screen"

# ComputeAndNetwork UPS
KERNEL=="hiddev*", ATTRS{manufacturer}=="American Power Conversion", ATTRS{serial}=="8675310", OWNER="root", SYMLINK+="usb/ups-compute-and-network"

And now to re-run the udev rules:

udevadm trigger --verbose --sysname-match=hiddev*

Now, we should have some “nicely named” UPS USB devices:

# ls -la /dev/usb
total 0
drwxr-xr-x  2 root root    120 Dec 18 19:55 .
drwxr-xr-x 22 root root   4280 Dec 18 19:55 ..
crwxrwxrwx  1 root root 180, 0 Dec 18 19:55 hiddev0
crwxrwxrwx  1 root root 180, 1 Dec 18 19:55 hiddev1
lrwxrwxrwx  1 root root      7 Dec 18 19:55 ups-compute-and-network -> hiddev1
lrwxrwxrwx  1 root root      7 Dec 18 19:55 ups-screen -> hiddev0

Excellent! Now, anytime these devices are plugged in or unplugged, we shouldn’t have to guess which is hiddev0 and which is hiddev1: udev will automagically provide named paths for these USB devices, which will be critical in the next steps.

Next, I created a docker-compose file with the three “services” I decided I’d like for this setup:

  • APCUPSD for the “screens” UPS
  • APCUPSD for the “Compute and Network” UPS
  • Apache/Multimon to provide an HTTP-based interface

This docker-compose file also points to the specific Dockerfiles used to build an image for each service (hint: the two apcupsd services use the same image with different configurations).

The apcupsd image is nothing more than the latest Alpine Linux image, plus apcupsd from the apk repository and a very lightweight apcupsd configuration file (configured to watch only the UPS at /dev/ups – more on this later).
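
For reference, the per-instance configuration can be tiny. A minimal sketch (all standard apcupsd.conf directives; the UPSNAME value is illustrative):

## FILE AT /etc/apcupsd/apcupsd.conf (inside the container)
UPSNAME screen
UPSCABLE usb
UPSTYPE usb
# The container only ever sees one UPS, always mapped to /dev/ups
DEVICE /dev/ups
# Expose the Network Information Server so the multimon container can poll this instance
NETSERVER on
NISIP 0.0.0.0
NISPORT 3551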

The multimon image uses the latest Apache/Alpine image, and adds apcupsd-webif from the apk repository along with a few configuration files for multimon. Additionally, I wrote an entrypoint.sh script to parse environment variables and generate a configuration file for multimon, so that the UPS(es) displayed on the web interface can be set from the docker-compose file.
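
The gist of that entrypoint is a small loop that turns an environment variable into multimon’s hosts.conf. A sketch, assuming a comma-separated host:label format (the variable name and Apache launch command here are illustrative, not necessarily what the repository’s script uses):

#!/bin/sh
# entrypoint.sh (sketch): generate multimon's hosts.conf from an environment variable
# UPS_HOSTS is assumed to look like "host1:Label One,host2:Label Two"
HOSTS_CONF=/etc/apcupsd/hosts.conf
: > "$HOSTS_CONF"
IFS=','
for entry in $UPS_HOSTS; do
  # Each hosts.conf line tells multimon which apcupsd NIS host to poll, and its label
  echo "MONITOR ${entry%%:*} \"${entry#*:}\"" >> "$HOSTS_CONF"
done
unset IFS
# Hand off to Apache in the foreground (exact command depends on the base image)
exec httpd -D FOREGROUND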

Having now covered the build process, let’s put together the docker-compose services:
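
In shape, it looks roughly like the sketch below (service names, build contexts, and the UPS_HOSTS variable are illustrative; the actual files live in the repository linked below). Note how the udev symlinks from earlier are mapped into each container at /dev/ups, the one path the shared apcupsd image expects:

## docker-compose.yml (sketch)
version: "3"
services:
  apcupsd-screen:
    build: ./apcupsd
    devices:
      - /dev/usb/ups-screen:/dev/ups
  apcupsd-compute-and-network:
    build: ./apcupsd
    devices:
      - /dev/usb/ups-compute-and-network:/dev/ups
  multimon:
    build: ./multimon
    environment:
      # Parsed by entrypoint.sh to build multimon's hosts.conf
      - "UPS_HOSTS=apcupsd-screen:Screen,apcupsd-compute-and-network:Compute and Network"
    ports:
      - "80:80"
    depends_on:
      - apcupsd-screen
      - apcupsd-compute-and-network

A single docker-compose up -d --build then brings up both daemons and the web interface.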

Full Docker setup here: https://github.com/crossan007/APCUPSD-Multimon-Docker

Now, instead of attempting to create custom init scripts, multiplex processes within systemd, and override the packaged mechanisms for apcupsd‘s configuration discovery, I instead have a cleanly defined interface for isolating instances of apcupsd to provide a status page for my two APC UPS devices.

Thanks for reading, and hopefully this helps you in some way!

Override Jenkins stage Function

Recently, I needed a mechanism to identify, as part of a try/catch block, which stage in a Jenkins Groovy Scripted Pipeline was the last to execute before the catch block was called.

Jenkins does not currently store details about the last stage to run outside the context of that specific stage. In other words, env.STAGE_NAME is valid within a particular stage("I'm a stage") { /* valid here */ } block, but not in, say, a catch(Exception e) { /* where was I called from? */ } block.

To get around this, I found a few examples and cobbled together something that should also accommodate future extension. I present to you the extensibleContextStage:

// Jenkins groovy stages are somewhat lacking in their ability to persist 
// context state beyond the lifespan of the stage
// For example, to obtain the name of the last stage to run, 
// one needs to store the name in an environment variable (JENKINS-48315)
// https://issues.jenkins-ci.org/browse/JENKINS-48315

// We can create an extensible stage to provide additional context to the pipeline
// about the state of the currently running stage.

// This also provides a capability to extend pre- and post- stage operations

// Idea / base code borrowed from https://stackoverflow.com/a/51081177/11125318
// and from https://issues.jenkins-ci.org/browse/JENKINS-48315?focusedCommentId=321366&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-321366

def call(name, Closure closure) {
    env.BUILD_LAST_STAGE_STARTED = name
    def result
    try {
        stage(name) {
            result = closure.call()
        }
        env.BUILD_LAST_STAGE_SUCCEEDED = name
    }
    catch (Exception ex) {
        env.BUILD_LAST_STAGE_FAILED = name
        throw ex
    }
    return result
}

This is a drop-in replacement for stage(String name) { closure } blocks in a Jenkins Groovy Scripted Pipeline, but with the added benefit of additional environment variables:

  • env.BUILD_LAST_STAGE_STARTED
  • env.BUILD_LAST_STAGE_SUCCEEDED
  • env.BUILD_LAST_STAGE_FAILED

So, as a full example, one can now do this (which was previously awkward):


try {
    extensibleContextStage("Do some things") {
        // whatever
    }
    extensibleContextStage("Do some More things") {
        throw new Exception("MAYHEM!")
    }
    extensibleContextStage("Do some final things") {
        // whatever
    }
}
catch (Exception e) {
    // At this point, with a plain stage, we wouldn't know where MAYHEM came from,
    // but with extensibleContextStage we can look at either
    // env.BUILD_LAST_STAGE_FAILED or env.BUILD_LAST_STAGE_STARTED
    // to know that "Do some More things" was the offending stage.
    // This is super handy for sending "helpful" notifications to Slack/email.
}

I hope this helps someone (even if only my future self).

Windows 10 Password Recovery

DISCLAIMER: DO NOT EXECUTE THIS PROCESS WITHOUT EXPLICIT APPROVAL FROM THE SYSTEM OWNERS.  I AM NOT ENDORSING OR APPROVING ANY ILLEGAL ACTIVITY WHICH COULD BE ACCOMPLISHED FOLLOWING THESE STEPS

An older friend forgot his computer password and asked me for help.

I booted the machine and saw an email address where the Windows 10 username would normally be; my first thought was “oh, great; this is a Microsoft Online-joined computer, password recovery probably won’t happen.”

A little research turned up evidence that my seemingly outdated knowledge about passwords being stored in the SAM still stands. However, the Windows 10 Anniversary Update changed the encryption algorithm used on the SAM: https://twitter.com/gentilkiwi/status/762465220132384770

This algorithm change broke my usual tool (Ophcrack), since it was unable to read the NTLM hashes from the SAM: the new SAM encryption caused Ophcrack to incorrectly read every account hash as 31d6cfe0d16ae931b73c59d7e0c089c0 (the NTLM hash of an empty string). So, I copied the SAM and SYSTEM files (at C:\Windows\System32\config) from the target machine to my desktop for additional processing.

Mimikatz has a module `lsadump::sam` which accepts parameters for offline SYSTEM and SAM decryption.  Easy command line:

lsadump::sam /system:c:\users\charles\documents\system /sam:c:\users\charles\documents\sam

This returned decrypted NTLM hashes for easy cracking.

I decided to try a new tool to crack the plaintext passwords from the NTLM hashes: hashcat. There’s a 64-bit Windows compiled version (I know, I know, don’t run random binaries…) which made it easy to get cracking quickly.

I copied the hash from the output of Mimikatz into a text file called hashes.hash and ran the command

.\hashcat64.exe -m 1000 -a 3 -O -o pass1.txt .\hashes.hash

My 10-year-old computer cracked the Microsoft Online account’s NTLM Windows 10 password hash in ~8 minutes (-m 1000 selects NTLM hashes, -a 3 selects a mask/brute-force attack, -O enables optimized kernels, and -o writes cracked results to pass1.txt). The password was two dictionary words and a two-digit number, for a total of 8 characters. Since this was a brute-force attack, the fact that dictionary words were used is of no consequence; had I been using a dictionary attack, it would likely have finished even sooner.

Just for fun, I generated a new NTLM hash with the vowels replaced by numbers (i with 1, e with 3, and so forth); the attack took the same amount of time. Generating a test NTLM hash is a Python one-liner:


import hashlib

# NTLM is simply MD4 over the UTF-16LE encoding of the password
print(hashlib.new('md4', 'password'.encode('utf-16le')).hexdigest())

Moral of the story:  USE STRONG PASSWORDS AND A PASSWORD MANAGER

Let’s Encrypt Setup

The “Let’s Encrypt” setup process is very painless – just clone a Git repo, run a command, and edit some Apache config files.

  1. sudo apt-get install git
  2. sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
  3. cd /opt/letsencrypt
  4. ./letsencrypt-auto --apache -d ccrossan.com -d www.ccrossan.com
  5. Edit the sites-enabled config files so that the appropriate virtual host uses the correct SSL certs (see the example virtual host below).
  6. Delete the newly generated ssl.conf file.
  7. Restart Apache.
  8. [Optional] Set up cron for auto-renewal: run sudo crontab -e and add the line 30 2 * * 1 /opt/letsencrypt/letsencrypt-auto renew >> /var/log/le-renew.log
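
For step 5, the relevant portion of the virtual host ends up looking something like this (a sketch: the certificate paths are the Let’s Encrypt client’s defaults for this domain, and the rest of the virtual host is omitted):

## Relevant portion of the sites-enabled virtual host (sketch)
<VirtualHost *:443>
    ServerName ccrossan.com
    ServerAlias www.ccrossan.com
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/ccrossan.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/ccrossan.com/privkey.pem
</VirtualHost>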


These commands were taken from the DigitalOcean Let’s Encrypt Setup Guide.