All posts by crossan007

Configuring Multiple Networked OR USB APC UPS devices

As any proper IT-nerd would agree, UPS devices are critical pieces of equipment not only in the data center, but also at home. However, most home users are not in a position to acquire a massive 10+ kilovolt-amp UPS capable of protecting every circuit that powers our various “personal devices”; rather, most home/small office UPS installations are small desktop units, usually under 3 kVA. In this scenario, these smaller units are typically allocated to individual devices/areas, and each is typically responsible for signaling the status of the incoming utility power to only one device.

What about using multiple UPS devices for different components of the same “workspace”? Or home networks with access points and switches in more than one location (therefore each having its own battery backup)? How would one monitor multiple distributed battery backup units (presuming each UPS unit has only USB connectivity)?

APCUPSD

Enter apcupsd: “A daemon for controlling APC UPSes.” Unfortunately, the plural in this utility’s tagline refers to a wide range of supported devices rather than multiple concurrently connected devices. To date, I’ve found one article describing how to configure apcupsd to support multiple USB-attached UPS devices, and it’s not really pretty. The gist of the process is as follows:

  1. Configure udev rules to ensure a consistent mapping (by UPS serial number) to a named mount point
  2. Create multiple apcupsd configuration files for each connected UPS
  3. Create new “action” and “control” scripts for each UPS
  4. Re-configure the apcupsd init.d/systemd scripts to launch multiple instances of the daemon (one for each UPS)

I’m generally not a fan of creating large “custom” configuration files in obscure locations with great variance from the distributed package, so this process seemed a little “hacky” to me, especially since the end result of all of these configuration files was to have “isolated processes,” one for each UPS being monitored.

Dockerizing APCUPSD

At this point, I decided to take a different approach to isolating each apcupsd process: an approach with far greater discoverability, version-control potential, and scalability. Docker.

I decided to use the first step outlined in the apcupsd guide on the Debian Wiki (creating udev rules to ensure physical devices are given a persistent path on boot/attach). UPS devices generally show up at /dev/usb/hiddev*, so we should confirm that we have a few present:

# ls /dev/usb
hiddev0  hiddev1

# lsusb
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 002: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 002: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

Great! We’ve got things that look like UPS devices at /dev/usb/hiddev0 and /dev/usb/hiddev1. Now to get the serial numbers:

# udevadm info --attribute-walk --name=/dev/usb/hiddev0 | egrep 'manufacturer|product|serial'
    ATTRS{manufacturer}=="American Power Conversion"
    ATTRS{product}=="Back-UPS BX1500G FW:866.L5 .D USB FW:L5 "
    ATTRS{serial}=="8975309"
    ATTRS{manufacturer}=="Linux 4.4.0-170-generic ohci_hcd"
    ATTRS{product}=="OHCI PCI host controller"
    ATTRS{serial}=="0000:00:02.0"
# udevadm info --attribute-walk --name=/dev/usb/hiddev1 | egrep 'manufacturer|product|serial'
    ATTRS{manufacturer}=="American Power Conversion"
    ATTRS{product}=="Back-UPS NS 1100M2 FW:953.e3 .D USB FW:e3     "
    ATTRS{serial}=="8675310"
    ATTRS{manufacturer}=="Linux 4.4.0-170-generic ohci_hcd"
    ATTRS{product}=="OHCI PCI host controller"
    ATTRS{serial}=="0000:00:04.0"

With the serial numbers now known, we create udev rules to map these devices to consistent, named paths:

## FILE AT /lib/udev/rules.d/ups.rules

# SCREEN UPS
KERNEL=="hiddev*", ATTRS{manufacturer}=="American Power Conversion", ATTRS{serial}=="8975309", OWNER="root", SYMLINK+="usb/ups-screen"

# ComputeAndNetwork UPS
KERNEL=="hiddev*", ATTRS{manufacturer}=="American Power Conversion", ATTRS{serial}=="8675310", OWNER="root", SYMLINK+="usb/ups-compute-and-network"

And now to re-run the udev rules:

udevadm trigger --verbose --sysname-match=hiddev*

Now, we should have some “nicely named” UPS USB devices:

# ls -la /dev/usb
total 0
drwxr-xr-x  2 root root    120 Dec 18 19:55 .
drwxr-xr-x 22 root root   4280 Dec 18 19:55 ..
crwxrwxrwx  1 root root 180, 0 Dec 18 19:55 hiddev0
crwxrwxrwx  1 root root 180, 1 Dec 18 19:55 hiddev1
lrwxrwxrwx  1 root root      7 Dec 18 19:55 ups-compute-and-network -> hiddev1
lrwxrwxrwx  1 root root      7 Dec 18 19:55 ups-screen -> hiddev0

Excellent! Now, anytime these devices are plugged/unplugged, we shouldn’t have to guess which is hiddev0 and which is hiddev1, since udev will automagically provide named device paths for these USB devices, which will be critical in the next steps.

Next, I created a docker-compose file with the three “services” I decided I’d like for this setup:

  • APCUPSD for the “screens” UPS
  • APCUPSD for the “Compute and Network” UPS
  • Apache/Multimon to provide an HTTP-based interface

This docker-compose file also contained pointers to specific Dockerfiles used to actually build an image for each service (hint: the two apcupsd services use the same image with different configurations).

The apcupsd container is nothing more than the latest Alpine Linux image, apcupsd from the apk repository, and a very lightweight apcupsd configuration file (configured to watch only the UPS at /dev/ups – more on this later).
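For reference, the per-UPS apcupsd configuration really is tiny. A minimal sketch of what it might contain (the values shown are illustrative, and the NIS/network directives are an assumption on my part to let the web container query each daemon; the real files are in the repository linked below):

## FILE: apcupsd.conf (one per UPS container; values illustrative)
UPSNAME screen
UPSCABLE usb
UPSTYPE usb
# Watch only the single UPS device mapped into this container
DEVICE /dev/ups
# Publish status over the network (NIS) so a monitoring/web container can poll it
NETSERVER on
NISIP 0.0.0.0
NISPORT 3551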

The multimon container uses the latest Apache/Alpine image, and adds apcupsd-webif from the apk repository along with a few configuration files for multimon. Additionally, I wrote an entrypoint.sh script to parse environment variables and generate a configuration file for multimon, so that the UPSes displayed on the web interface can be set from the docker-compose file.
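A rough sketch of that entrypoint idea (the UPS_LIST variable name, the hosts.conf path, and the MONITOR line format are illustrative assumptions; the real script lives in the repository linked below):

#!/bin/sh
# Illustrative sketch: expand a comma-separated UPS_LIST environment variable
# (e.g. UPS_LIST="apcupsd-screen,apcupsd-compute-and-network") into MONITOR
# entries for multimon, then hand off to the image's normal command.
HOSTS_CONF=/etc/apcupsd/hosts.conf
echo "# generated at container start" > "$HOSTS_CONF"
IFS=','
for ups in $UPS_LIST; do
  echo "MONITOR $ups \"$ups\"" >> "$HOSTS_CONF"
done
exec "$@"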

Having now covered the build process, let’s put together the docker-compose services:
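Sketched below is roughly what such a compose file might look like. The service names, build contexts, host port, environment variable, and device paths are illustrative, not the exact contents of the repository linked below; the key idea is that each apcupsd service maps its own udev-named host device to /dev/ups inside the container:

version: "3"
services:
  apcupsd-screen:
    build: ./apcupsd
    # Map the udev-named host device to /dev/ups inside the container
    devices:
      - /dev/usb/ups-screen:/dev/ups
    restart: always
  apcupsd-compute-and-network:
    build: ./apcupsd
    devices:
      - /dev/usb/ups-compute-and-network:/dev/ups
    restart: always
  multimon:
    build: ./multimon
    environment:
      # Hypothetical variable consumed by the entrypoint sketch above
      - UPS_LIST=apcupsd-screen,apcupsd-compute-and-network
    ports:
      - "8080:80"
    depends_on:
      - apcupsd-screen
      - apcupsd-compute-and-network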

Full Docker setup here: https://github.com/crossan007/APCUPSD-Multimon-Docker

Now, instead of creating custom init scripts, multiplexing processes within systemd, and overriding the packaged mechanisms for apcupsd‘s configuration discovery, I have a cleanly defined interface for isolating instances of apcupsd and providing a status page for my two APC UPS devices.

Thanks for reading, and hopefully this helps you in some way!

Metal embossed oak plaque

This is my first official “metal casting blog post.” It’s been a small hobby of mine since February of 2019, and I’ve only shared some pictures on Facebook/Twitter, but I’ve decided I’m going to be more intentional about chronicling my projects on my personal blog.

A while back, I saw a really neat video on YouTube where the author created a look of embossed metal letters against a mahogany wood block: https://youtu.be/15cE1SfwlQo?t=1019

After some time, I realized this would be a really neat technique to use for hand-crafted gifts. My dad loves “all things Constitution,” so the idea took shape: create a plaque proclaiming “We The People” in this charred-wood, embossed-metal style.

I went to the nearby hardware store to pick up a nice piece of oak; I printed out an authentic-looking rendition of the words, and got to work.

My first challenge was transposing the printed text onto the oak board. I first tried using a Sharpie marker to “trace” the text and then following it with a hacksaw:

This process was really, really slow and laborious, so I brainstormed a bit to figure out how to make it go more quickly. I ended up using my table saw to “thin out” sections of the (originally 3/4″ thick) board down to about 1/4″:

I also picked up some new wood-cutting bits for my Dremel and was able to make quick work of the remainder of the text (also without needing to first “transpose” the printout via Sharpie to the face of the wood).

I did make a small mistake here – I didn’t consider the need for “inner-letter support” (take a look at the inner section of the “W”); I’m chalking that up to “lessons learned.” I wasn’t too concerned with this at first, since the next phase of the project should have provided a degree of “masking” over this mistake.

After completing the cutouts (and proving to myself that I was capable of actually representing the text fairly accurately), I wanted to add some color: red and blue (thinking that the light-colored wood would imply a “white” stripe). I found some water-based stain on Amazon (Minwax red and blue) that provided the colors I was looking for:

Next, I had to create a new “flask” to facilitate a sand mold of this size (all my previous sand castings have been much smaller). So I constructed a new cope (top) and drag (bottom) with keyed alignment posts to ensure both sides would align after being separated and rejoined. I also added grooves to the inner faces of the flask so that the petrobond sand would have some surfaces to grab onto.

After building the flask (and ordering more petrobond to account for my newly expanded flask), I packed the sand around a non-cutout replica of the plaque (since I wanted the metal to “fill in” the back section of the plaque, I couldn’t have the sand create a negative of that void).

Next, I swapped out the “blank” piece of wood for the text-engraved piece, and impressed the lettering from the wood cutout into the petrobond. I also carved a riser and a very simple gate system for the molten metal to flow.

Since molten metal will shrink 6-12% while cooling, it is important to have a “reservoir” of still-hot molten metal to back-flow into the important pieces of your casting. This lets you control where the shrinkage occurs: in the riser and sprue, instead of in your part.

After getting the flask all set, I heated my plaque on top of my kerosene heater for a few hours to dry out any trapped moisture. Trapped moisture is very dangerous in a molten-metal scenario: it vaporizes instantly into steam, expands rapidly in volume, and can cause explosions, so I was careful to avoid that scenario.

Finally, I put the top back on the flask and poured the metal. Unfortunately, I didn’t capture any photographs of the closed-and-finished flask, nor of the actual pour. Almost as soon as I started pouring the metal, smoke began flowing from the vent holes and from the riser hole.

After pouring, I let the metal cool, covered, in the flask for a few minutes. This elapsed “cooling time” may have been my project-ending mistake. When I finally removed the cope from my flask, I was dismayed at the havoc the molten metal had wreaked on the oak plank.

The metal stuck together nicely, and carried a faint image of the previously well-defined text; however, the thin section of wood was completely burnt beyond salvage.

I could kind of tell where the original text had lined up with the remaining slim sections of red and blue wood.

Unfortunately, there’s no saving this project.

If I ever decide to mix metal and wood again, I’ll be sure to take away a few lessons/thoughts from this project:

  • Thinner wood sections burn faster
  • Oak isn’t the most flame-resistant wood available (check out cedar or ipe, or manufactured materials)
  • Limit exposure time of metal to wood; maybe pour into an open-topped drag and dunk the compound piece into cooling water as soon as the metal solidifies
  • The charred color against the stain sections actually looks decent
  • Use the right tools from the beginning (Dremel, not coping saw)
  • Try “riskier” actions on a small-sample before investing a lot of time in the final piece

Override Jenkins stage Function

Recently, I needed a mechanism to identify, as part of a try/catch block, which stage in a Jenkins Groovy Scripted Pipeline was the last to execute before the catch block was called.

Jenkins does not currently store details about the last stage to run outside of the context of that specific stage. In other words, env.STAGE_NAME is valid within a particular stage("I'm a stage"){ //valid here} block, but not in, say, a catch(Exception e) { // where was I called from? } block.

To get around this, I found a few examples and cobbled together something that I believe will also remain useful beyond this immediate need. I present to you the extensibleContextStage:

// Jenkins groovy stages are somewhat lacking in their ability to persist 
// context state beyond the lifespan of the stage
// For example, to obtain the name of the last stage to run, 
// one needs to store the name in an ENV variable (JENKINS 48315)
// https://issues.jenkins-ci.org/browse/JENKINS-48315

// We can create an extensible stage to provide additional context to the pipeline
// about the state of the currently running stage.

// This also provides a capability to extend pre- and post- stage operations

// Idea / base code borrowed from https://stackoverflow.com/a/51081177/11125318
// and from https://issues.jenkins-ci.org/browse/JENKINS-48315?focusedCommentId=321366&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-321366

def call(name, Closure closure) {
    env.BUILD_LAST_STAGE_STARTED = name
    try {
        stage(name) {
            def result = closure.call()
            return result
        }
        env.BUILD_LAST_STAGE_SUCCEEDED = name
    }
    catch(Exception ex) {
        env.BUILD_LAST_STAGE_FAILED = name
        throw ex;
    }
}

This is a drop-in replacement for stage(string name){ closure} blocks in a Jenkins Groovy Scripted Pipeline, but with the added benefit of additional environment variables:

  • env.BUILD_LAST_STAGE_STARTED
  • env.BUILD_LAST_STAGE_SUCCEEDED
  • env.BUILD_LAST_STAGE_FAILED

So, as a full example, one can now do this (which was previously awkward):


try {
    extensibleContextStage("Do some things")
    {
        //whatever
    }
    extensibleContextStage("Do some More things")
    {
       throw new Exception("MAYHEM!")
    }
    extensibleContextStage("Do some final things")
    {
        //whatever
    }
}
catch(Exception e){
    // at this point, with a normal stage, we wouldn't know where MAYHEM came from,
    // but with extensibleContextStage, we can look at either
    // env.BUILD_LAST_STAGE_FAILED or env.BUILD_LAST_STAGE_STARTED
    // to know that "Do some More things" was the offending stage.
    // this is super handy for sending "helpful" notifications to Slack/email
}

I hope this helps someone (even if it’s just my future self).

Terminate a stuck Jenkins job

Sometimes (especially after an un-graceful process restart), Jenkins jobs will be stuck in a running state, and cancelling through the UI just doesn’t work.

Fortunately, jobs can be stopped via the Jenkins script console with this command (courtesy of https://stackoverflow.com/a/26306081/11125318):

Jenkins.instance.getItemByFullName("JobName")
                .getBuildByNumber(JobNumber)
                .finish(
                        hudson.model.Result.ABORTED,
                        new java.io.IOException("Aborting build")
                );

Obtaining Git Repo URL from Jenkins Changeset: Unsolved

I’m attempting to obtain a Git Repo URL from a Jenkins Change set in a Groovy Scripted Pipeline, but I keep running into the same issue: the browser property (obtained via .getBrowser()) on my hudson.plugins.git.GitChangeSetList object is undefined.

I’m running the below code (with inline “status comments”) from the Jenkins Groovy script console in an attempt to extract the RepoUrl from the changesets in a Jenkins multibranch Groovy scripted pipeline:

def job = Jenkins.instance.getItem("MyJenkinsJob") 
def branch = job.getItems().findAll({ 
            item -> item.getDisplayName().contains("Project/CreateChangeLogs")
        })

printAllMethods(branch[0].getFirstBuild()) //this works, and is an org.jenkinsci.plugins.workflow.job.WorkflowRun

def builds = branch[0].getBuilds()
def currentBuild = builds[0]

currentBuild.changeSets.collect { 
  printAllMethods(it) // this works too, and is a hudson.plugins.git.GitChangeSetList.
  // enumerated methods are equals(); getClass(); hashCode(); notify(); notifyAll(); toString(); wait(); createEmpty(); getBrowser(); getItems(); getKind(); getRun(); isEmptySet(); getLogs(); iterator(); 

  it.getBrowser().repoUrl // this fails
  // the error is java.lang.NullPointerException: Cannot get property 'repoUrl' on null object
}

I found the printAllMethods utility function here (https://bateru.com/news/2011/11/code-of-the-day-groovy-print-all-methods-of-an-object/):

  void printAllMethods( obj ){
    if( !obj ){
        println( "Object is null\r\n" );
        return;
    }
    if( !obj.metaClass && obj.getClass() ){
        printAllMethods( obj.getClass() );
        return;
    }
    def str = "class ${obj.getClass().name} functions:\r\n";
    obj.metaClass.methods.name.unique().each{ 
        str += it+"(); "; 
    }
    println "${str}\r\n";
}

The API spec for GitChangeSetList indicates that it extends hudson.scm.ChangeLogSet, which implements getBrowser(), so the call should be valid.

Additionally, the source for GitChangeSetList invokes the super() constructor with the passed browser object.

At this point, I’m probably going to continue diving through the source code until I figure it out.

It looks like this is a documented issue with Jenkins: https://issues.jenkins-ci.org/browse/JENKINS-52747

And a (somewhat) related StackOverflow post: https://devops.stackexchange.com/questions/3798/determine-the-url-for-a-scm-trigger-from-inside-the-build

VAInfo: Verify Hardware Accelerated Video Support

On Ubuntu (and possibly other Linux distros), run vainfo to see which Intel QuickSync profiles are supported.
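If vainfo isn’t already installed, it’s typically available from the distribution repositories (the package name below is the Ubuntu/Debian one):

# Install and run the VA-API capability query tool (Ubuntu/Debian)
sudo apt-get install vainfo
vainfo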

For example, these profiles are supported on an Intel Haswell chip:

libva info: VA-API version 0.39.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_0_39
libva info: va_openDriver() returns 0
vainfo: VA-API version: 0.39 (libva 1.7.0)
vainfo: Driver version: Intel i965 driver for Intel(R) Haswell Desktop - 1.7.0
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Simple            : VAEntrypointEncSlice
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileH264MultiviewHigh      : VAEntrypointVLD
      VAProfileH264MultiviewHigh      : VAEntrypointEncSlice
      VAProfileH264StereoHigh         : VAEntrypointVLD
      VAProfileH264StereoHigh         : VAEntrypointEncSlice
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD
      VAProfileNone                   : VAEntrypointVideoProc
      VAProfileJPEGBaseline           : VAEntrypointVLD


Display HTTPS X509 Cert from Linux CLI

Recently, while attempting a git pull, I was confronted with the following error:

Peer's certificate issuer has been marked as not trusted by the user.

The operation worked in a browser on my dev machine, and closer inspection revealed that the cert used to serve the GitLab service was valid; but for some reason the remote CentOS Linux server couldn’t pull from the remote.

I found this post on StackOverflow detailing how to retrieve the X509 cert used to secure an HTTPS connection:

echo | openssl s_client -showcerts -servername MyGitServer.org -connect MyGitServer.org:443 2>/dev/null | openssl x509 -inform pem -noout -text

This was my ticket to discover why Git on my CentOS server didn’t like the certificate: the CentOS host was resolving the wrong DNS host name, and therefore using an invalid cert for the service.
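When chasing this kind of mismatch, it can also help to check what the host actually resolves for the name and to pull just the certificate’s subject, issuer, and validity dates (using the same placeholder hostname as above):

# What does this host resolve for the Git server's name?
getent hosts MyGitServer.org

# Show only the subject, issuer, and validity window of the served certificate
echo | openssl s_client -servername MyGitServer.org -connect MyGitServer.org:443 2>/dev/null | openssl x509 -noout -subject -issuer -dates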

And now a Haiku:

http://i.imgur.com/eAwdKEC.png

Git: Replace Root Commit with Second Commit

While migrating code between version control systems (in my case, SourceGear Vault to Git, using an open-source C# program called vault2git), it’s sometimes necessary to pre-populate the first commit in the target system.

This yields an empty commit (git commit -m "initial commit" --allow-empty) with a timestamp of today, which is chronologically out of order with the incoming change-set migration.

After completing the migration, the second commit is actually the commit which I’d like to be the root.

It took me a while to figure this out, but thanks to Greg Hewgill on Stack Overflow, I was able to replace the first commit of my branch with the second commit (and subsequently update the SHA1 hashes of all child commits) using this command:

git filter-branch --parent-filter "sed 's/-p <the_root_commit>//'" HEAD
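If you don’t already know the root commit’s hash, git can list it for you; the full sequence I’d expect (filling the placeholder from the first command’s output) is roughly:

# List the repository's root commit(s) -- i.e. commits with no parents
git rev-list --max-parents=0 HEAD

# Then rewrite history, dropping that commit as a parent wherever it appears
git filter-branch --parent-filter "sed 's/-p <the_root_commit>//'" HEAD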

Intermittently Slow IIS web site

TL;DR:

  • An issue in the Windows Management Instrumentation (WMI) performance counter collection process caused periodic system-wide performance degradation.
  • This issue became visible when our infrastructure monitoring software invoked specific WMI queries.
  • We disabled the specific WMI query set which was causing the performance issues, and the problem went away.

A few days ago one of our clients began reporting performance issues on one of their web sites. This site is an IIS web application responsible for rendering visualizations of very large data sets (hundreds of gigabytes). As such, the application pool consumes a corresponding amount of RAM (which is physically available on the server).

Normally, these sites (I manage a few hundred instances) are fast, with most queries returning in under 300ms; however, this one instance proved difficult. To make matters worse, the performance issues were intermittent: most of the time, the site was blazing fast, but sometimes the site would hang for minutes.

After a few hours of observation, one of my team members noticed a correlation between the performance issues of the site and a seemingly unrelated process on the host: WmiPrvSe.exe.

I began digging in, and was able to corroborate this by looking at the process’s CPU usage over time (using ELK / Metricbeat to watch Windows processes). Sure enough, there was a direct correlation between WmiPrvSe.exe using ~3-4% CPU and IIS logs indicating a timeTaken of greater than 90 seconds. This correlation also established an interval between instances of the issue: 20 minutes.

I fired up Sysinternals’ ProcMon.exe to get a better handle on what exactly WmiPrvSe.exe was doing during these so-called “spikes”. I observed an obscene number of registry queries against things that looked like performance counters (RegQueryValue, RegCloseKey, RegEnumKey, RegOpenKey). Note that there are multiple instances of WmiPrvSe.exe running on the system, but only one instance was “misbehaving:” the one running as NT AUTHORITY\SYSTEM (which also happens to have the lowest PID). The instances running as NT AUTHORITY\NETWORK SERVICE and as NT AUTHORITY\LOCAL SERVICE did not seem to be misbehaving.

Almost all of the registry keys in question contained the string Performance or PERFLIB; many (but not all) queries were against keys within HKLM\System\CurrentControlSet\Services.

I knew that I had Elastic’s “Beats” agents installed on this host; could Metricbeat, or one of my other monitoring tools, be the culprit? I tried disabling all of the Beats agents (Filebeat, Metricbeat, Winlogbeat, etc.), but was still seeing these intermittent spikes in WmiPrvSe.exe CPU usage correlating with slow page loads from IIS.

Stumped, I searched for how to capture WMI application logs, and found this article: https://docs.microsoft.com/en-us/windows/desktop/wmisdk/tracing-wmi-activity.

I ran the suggested command (Wevtutil.exe sl Microsoft-Windows-WMI-Activity/Trace /e:true) and fired up Event Viewer (as admin), navigating to the above path. Bingo.

Log hits in Microsoft-Windows-WMI-Activity/Trace mostly included checks against the networking devices:

select __RELPATH, Name, BytesReceivedPersec, BytesSentPersec, BytesTotalPersec from Win32_PerfRawData_Tcpip_NetworkInterface

These WMI queries were executed by the ClientProcessId owned by nscp.exe.

I perused the source code for NSCP a bit, and discovered that the network queries for NSCP are executed through WMI (https://github.com/mickem/nscp/blob/master/modules/CheckSystem/check_network.cpp#L105), while the standard performance counter queries are executed through PDH (https://github.com/mickem/nscp/blob/master/modules/CheckSystem/pdh_thread.cpp#L132).

Something else I noticed was that the Microsoft-Windows-WMI-Activity/Operational log contained events directly corresponding to the issue at hand: WMIProv provider started with result code 0x0. HostProcess = wmiprvse.exe; ProcessID = 3296; ProviderPath = %systemroot%\system32\wbem\wmiprov.dll

Some more creative Google searches turned up an interesting issue in a GitHub repo for a different project: “CPU collector blocks every ~17 minutes on call to wmi.Query” (#89).

Sounds about right.

Skimming through the issue, I see this, which sets off the “ah-ha” moment:

Perfmon uses the PDH library, not WMI. I did not test with Perfmon, but PDH is not affected.


leoluk commented on Feb 16, 2018 (https://github.com/martinlindhe/wmi_exporter/issues/89#issuecomment-366195581)

Now knowing that only NSCP’s check_network uses WMI, I found the documentation to disable the network routine in nscp’s CheckSystem module: https://docs.nsclient.org/reference/windows/CheckSystem/#disable-automatic-checks

I added the bits to my nsclient.ini config to disable automatic network checks, restarted NSCP, and confirmed the performance issue was gone:

[/settings/system/windows]
# Disable automatic checks
disable=network

I’ve opened an issue on NSCP’s GitHub page for this problem: https://github.com/mickem/nscp/issues/619


Tail a file on Windows

On almost every Unix system, we have tail -f to watch the end of *really really big* files.

When faced with a 36 GB log file on Windows, the tooling is often lacking.

I borrowed / adapted a little PowerShell function to extract the last n log lines from a file and write them to a new file:
https://gist.github.com/crossan007/b5e8ac4579ba61eb1967315657406751

Partially borrowed from: https://stackoverflow.com/questions/36507343/get-last-n-lines-or-bytes-of-a-huge-file-in-windows-like-unixs-tail-avoid-ti
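The heart of the trick is PowerShell’s Get-Content with the -Tail parameter, which reads from the end of the file rather than scanning the whole thing. A minimal sketch (the paths and line counts here are illustrative; see the gist above for the full adapted function):

# Write the last 10,000 lines of a huge log to a new file without reading the whole file
Get-Content -Path 'C:\logs\huge.log' -Tail 10000 | Set-Content -Path 'C:\logs\huge-tail.log'

# Or follow the end of the file live, similar to "tail -f"
Get-Content -Path 'C:\logs\huge.log' -Tail 10 -Wait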