Installing FreeBSD on a SoYouStart Dedicated Server Over IPMI

And Converting a 4-Way ZFS Mirror Into a RAIDZ Array Via a Network Reinstall

I’ve had great luck over the last few years renting SoYouStart’s super-cheap but high-powered dedicated servers. The prices are phenomenal, and the sale prices are even better. The SYS web UI is outstanding, especially if you are moving up from one of the herd of half-baked WHMCS installs. Everything is wonderfully automated. Their technical support is… Well, their prices are really great, and better servers come with IPMI. Love it.

I recently ordered a pair of new servers that were a little different than the ones I’d previously gotten. All my previous machines had been running FreeBSD on ZFS mirrors, but these new machines came with four big drives each. Upon delivery, I noticed that they’d been provisioned as a four-way ZFS mirror. That’s four disks all mirroring each other, quadruple redundant, with the total usable space of a little less than one disk. Sad trombone.

What I actually wanted was a RAIDZ array on one server, and a RAID10 array on the other.

The Dark Ages

So I tried to reinstall through their web UI with a custom layout. That made it easy to set the swap to something more reasonable than their stock offering, but the only options were to install on a single drive or an N-way mirror. No go.

Then I checked out their “install template” options. They don’t seem to be documented anywhere, and they don’t really give you any low-level ZFS options, just zvol choices. Fail.

But I have IPMI, so I can do anything I want, right? I saved their installed network settings (important!) and launched their IPMI Java console. Right off the bat, it was a lot easier to use than other providers’. I have had to jump through so many dumb hoops to get IPMI actually working on servers where it was sold as a feature, but treated like an annoying inconvenience by the data center. And the console *actually* worked in OSX! Oh my!

Except it didn’t, because it doesn’t support media redirection (mounting CDs over the network) on OSX. But I have run into this before, so I booted VirtualBox on my MacBook, installed Windows, and connected to their IPMI console. Nope, need to install Java first. I always forget that. So many swears. OK, with that installed I could use my OSX laptop, to run a Windows VM, to run a KVM console in a Java virtual machine, to install FreeBSD on a server in another country. So many yaks! But that got it working!

Until it didn’t. The boot disk took *forever* to load, and then seemed to hang after loading “ums0”, the virtual keyboard/mouse device. It just sat there forever. I tried different ISOs and different versions of FreeBSD, but they all hung somewhere late in the boot. I tried the memstick mini image, but that reliably crashed the Java console. Disaster. Now what?

I’m going to skip ahead past a lot of hair pulling and failed schemes, and I’m not even going to mention trying to break three disks out of the mirror on a running system and build a RAIDZ with a fake memory disk in the fourth slot. Oof!

The Enlightenment

I finally got on Freenode #FreeBSD and begged for help. RhodiumToad pointed out that so late in the boot, when the machine was already in multi-user mode, was a very strange place to hang, and he felt that it was booting fine, but the console output was getting sent to the wrong console. I poked around at his direction, with a very slow boot between each try, and he finally suggested that we disable serial output during boot. That’s how we fixed it!

I booted from the v12.1 bootonly.iso, and then at the boot menu, I selected option #3, “Escape to the loader prompt”. That drops you into a command line “OK” prompt. I could see that the serial console was enabled with:

> show boot_serial

So I disabled it, and then continued the boot with:

> set boot_serial=NO
> boot

And I was in the installer! SYS actually has two console options: in addition to the Java IPMI console, there is a browser-based serial console. At some point during the boot, FreeBSD switched its output to the serial console, and I could no longer interact with the Java console. I couldn’t simply use the serial console instead, because it doesn’t support media redirection.
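Once the system is installed, you don’t want to escape to the loader prompt on every boot. The conventional place to pin the console choice is /boot/loader.conf. A sketch, assuming you want output going to the video console that the IPMI KVM shows (check loader.conf(5) for your release before relying on it):

```
# /boot/loader.conf
# prefer the video console (the one the IPMI KVM displays)
console="vidconsole"
```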

Run Multiple Versions of PHP-FPM on FreeBSD

Let’s install two versions of PHP-FPM on the same FreeBSD server! This is not something that Ports or Packages really supports, but with a few cheats we can make Ports do most of the hard work, so we don’t have to install them from scratch like cavemen. I’m installing 5.6 and 7.2, but this should probably work with other versions.

The first and dumbest step is to install, and then immediately delete, PHP 5.6 and 7.2 via Ports. This gets all of the build and run dependencies installed in a very normal way, before we start building things with weird flags that would muck everything up if they cascaded down to dependencies in the build process. No, you can’t (easily) use packages for this, because you want those build dependencies installed. There is probably a smarter way to do this.

cd /usr/ports/lang/php56
make install clean
pkg del php56

cd /usr/ports/lang/php72
make install clean
pkg del php72

Now build them again, with flags that change where they are installed, and that disable the Ports system’s normal good sense about not installing conflicting software. This is the smart bit that I stole from the FreeBSD Forums.

cd /usr/ports/lang/php56
make PREFIX=/usr/local/php56 PHPBASE=/usr/local/php56 install clean DISABLE_CONFLICTS=1

cd /usr/ports/lang/php72
make PREFIX=/usr/local/php72 PHPBASE=/usr/local/php72 install clean DISABLE_CONFLICTS=1

So now you have everything for each of the two versions installed under /usr/local/php56 and /usr/local/php72, respectively. They each install an rc script that we will need to adapt so we can run both at the same time:

cp -p /usr/local/php56/etc/rc.d/php_fpm /usr/local/etc/rc.d/php56-fpm
# give the rc script its own name and rcvar
perl -pi -e 's/php_fpm/php56_fpm/g' /usr/local/etc/rc.d/php56-fpm
# give it its own pid file, and sync that change into php-fpm.conf
perl -pi -e 's/php-fpm\.pid/php56-fpm.pid/g' /usr/local/etc/rc.d/php56-fpm
perl -pi -e 's/php-fpm\.pid/php56-fpm.pid/g' /usr/local/php56/etc/php-fpm.conf
# give this version its own listen socket
perl -pi -e 's#^listen = .*#listen = /tmp/php56-fpm-www.sock#' /usr/local/php56/etc/php-fpm.conf
echo 'php56_fpm_enable="YES"' >> /etc/rc.conf
service php56-fpm start

The perl one-liners are just a compact, copy-paste-able way to say “edit these files and replace these things.” The rc scripts need to be edited to differentiate them, so they each have their own rcvars and pid files. The php-fpm.conf files are edited to sync that pid file change and to differentiate the listen sockets.

cp -p /usr/local/php72/etc/rc.d/php_fpm /usr/local/etc/rc.d/php72-fpm
perl -pi -e 's/php_fpm/php72_fpm/g' /usr/local/etc/rc.d/php72-fpm
perl -pi -e 's/php-fpm\.pid/php72-fpm.pid/g' /usr/local/etc/rc.d/php72-fpm
perl -pi -e 's/php-fpm\.pid/php72-fpm.pid/g' /usr/local/php72/etc/php-fpm.conf
perl -pi -e 's#^listen = .*#listen = /tmp/php72-fpm-www.sock#' /usr/local/php72/etc/php-fpm.d/www.conf
echo 'php72_fpm_enable="YES"' >> /etc/rc.conf
service php72-fpm start

In this example, I have opted to change the “listen” directive to a Unix socket. You could use TCP sockets too, but you would have to choose a different port for each version of PHP-FPM you run.
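If you would rather use TCP, a sketch of the idea (the port numbers below are arbitrary examples I picked, not anything the ports tree assigns):

```ini
; /usr/local/php56/etc/php-fpm.conf
listen = 127.0.0.1:9056

; /usr/local/php72/etc/php-fpm.d/www.conf
listen = 127.0.0.1:9072
```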

cd /usr/ports/www/apache24
make install clean
echo 'apache24_enable="YES"' >> /etc/rc.conf
vi /usr/local/etc/apache24/httpd.conf

Obviously you’ll want to edit httpd.conf to suit your needs. That is beyond the scope of this post, but I will at least be uncommenting the “proxy_module” and “proxy_fcgi_module” lines so Apache can talk to PHP-FPM.

Now add FPM pools to one PHP-FPM or the other, or both. For example:

[haroldp.test]
user = haroldp
group = haroldp
listen = /tmp/php72-fpm-haroldp.test.sock
listen.mode = 0666
chroot = /home/haroldp
pm = ondemand
pm.max_children = 50
php_admin_value[doc_root] = /haroldp.test/htdocs
php_admin_value[cgi.fix_pathinfo] = 0
php_admin_value[sendmail_path] = /bin/mini_sendmail -t

And add vhosts to apache, pointing to a php-fpm 5.6 socket or a php-fpm 7.2 socket, at your option. Maybe something like:

<VirtualHost *:80>
  ServerName haroldp.test
  DocumentRoot /home/haroldp/haroldp.test/htdocs
  SuexecUserGroup haroldp haroldp
  ErrorLog /home/haroldp/haroldp.test/logs/haroldp.test.error_log
  CustomLog /home/haroldp/haroldp.test/logs/haroldp.test.access_log combined
  <Directory /home/haroldp/haroldp.test/htdocs>
    Order allow,deny
    Allow from all
    Options +Indexes +FollowSymLinks +ExecCGI +Includes +MultiViews
    AllowOverride All
  </Directory>
  <FilesMatch "\.php$">
    ProxyFCGIBackendType GENERIC
    SetHandler "proxy:unix:/tmp/php72-fpm-haroldp.test.sock|fcgi://localhost/home/haroldp/haroldp.test/htdocs"
  </FilesMatch>
</VirtualHost>

Switching between them is a two byte change to the apache vhost config.
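Concretely, the switch is just the version number in the proxied socket path inside the vhost (shown here as a diff, with the trailing path elided):

```diff
-    SetHandler "proxy:unix:/tmp/php72-fpm-haroldp.test.sock|fcgi://localhost/..."
+    SetHandler "proxy:unix:/tmp/php56-fpm-haroldp.test.sock|fcgi://localhost/..."
```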

service apache24 start
service php72-fpm restart
service php56-fpm restart


Running something like:
cd /usr/ports/lang/php56
make PREFIX=/usr/local/php56 PHPBASE=/usr/local/php56 deinstall reinstall DISABLE_CONFLICTS=1

for each version should upgrade it.


My big worry with this approach is that a future update may create a requirement for incompatible dependencies between the two ports.

I would prefer an approach that builds each PHP-FPM in its own jail, but of course they need access to the file system that hosts the websites. Is there a smart way to do that? Put /home on its own ZFS volume and mount it in each PHP jail? And share a /tmp between all of them for the Unix sockets? You’d need to keep user accounts synced, or use LDAP for authentication.


“Kreplach” Excerpt from Gravity’s Rainbow by Thomas Pynchon

Remember the story about the kid who hates kreplach? Hates and fears the dish, breaks out in these horrible green hives that shift in relief maps all across his body, in the mere presence of kreplach. Kid’s mother takes him to the psychiatrist. “Fear of the unknown,” diagnoses this gray eminence, “let him watch you making the kreplach, that’ll ease him into it.” Home to Mother’s kitchen. “Now,” sez Mother, “I’m going to make us a delicious surprise!” “Oh, boy!” cries the kid, “that’s keen, Mom!” “See, now I’m sifting the flour and salt into a nice little pile.” “What’s that, Mom, hamburger? oh, boy!” “Hamburger, and onions. I’m frying them here, see, in this frying pan.” “Making a little volcano in the flour here, and breaking these eggs into it.” “Can I help ya mix it up? Oh, boy!” “Now, I’m going to roll the dough out, see? into a nice flat sheet, now I’m cutting it up into squares-” “This is terrif, Mom!” “Now I spoon some of the hamburger into this little square, and now I fold it over into a tri-” “GAAHHHH!” screams the kid, in absolute terror-“kreplach!”

.dev is Dead!

If you read my posts on building a local dev server, you will have seen that I have been using .dev as a local-only TLD. Well, .dev was bought by Google, and they will apparently be selling domains on it. To make matters worse, both Chrome and now Firefox ship with unalterable preloaded HSTS settings that require HTTPS for any request to a .dev domain. If you type a plain http:// address for a .dev domain into your address bar, the browser will ignore what you typed and request the https:// version instead. That killed my local sites.

I guess I will be switching to .test in the middle of a work day.

Gawd Damnit.

Setting up Apache on OSX 10.12 (Sierra) for no-setup wildcard virtual hosts

I want to host a copy of all the websites on which I work, right on the computer where I do my coding. I don’t want to depend on a server on my LAN that won’t be there when I am working from out of the office. I don’t want to work on a remote server that requires a slow (S)FTP loop to try out every change. And I also don’t want to work entirely from the command line on a remote server. So I set up my MacBook with wildcard DNS that points any hostname matching *.test to localhost (127.0.0.1). If I am working on example.com’s website, I can use “example.test” as a hostname that points right back to my machine. Now I need to set up Apache to host example.test. But adding a virtual host config for every website on which I work would be extremely laborious. Luckily, Apache supports no-config mass virtual hosts. All I’ll need to do to add a new website for example.test is create an “example/” directory in the right spot.

Apache includes cool “mass virtual hosting” features that will allow it to suss out the DocumentRoot from the request hostname. First, let’s create a directory where our virtual hosts will live:

mkdir ~/Documents/vhosts

I put mine in a folder inside my Documents folder. For my login, that’s /Users/haroldp/Documents/vhosts. But you can put it just about anywhere. I added the following to my /etc/apache2/httpd.conf:

<VirtualHost *:80>
VirtualDocumentRoot /Users/haroldp/Documents/vhosts/%-2/htdocs
AddType application/x-httpd-php .php
DirectoryIndex index.php index.html
</VirtualHost>

<Directory /Users/haroldp/Documents/vhosts>
Require all granted
AllowOverride All
Options +FollowSymLinks
</Directory>

First, note that /Users/haroldp/Documents/vhosts makes sense for me on my computer, but it’s going to be different for everyone, so you can’t just copy & paste. Note that I tacked an /htdocs directory onto the end of my VirtualDocumentRoot directive. This is not necessary at all, but I like to have directories associated with a website but outside the webspace. You don’t have to do that if you don’t want to. Note too that I set up PHP, because I’ll be using it. You may or may not want those directives.
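For the curious, `%-2` in a VirtualDocumentRoot expands to the second-from-last dot-separated part of the request hostname. A quick shell sketch of that mapping (the `docroot` helper is just an illustration I made up, not part of Apache):

```shell
# mimic Apache's %-2 interpolation: second-to-last part of the hostname
docroot() {
    printf '/Users/haroldp/Documents/vhosts/%s/htdocs\n' \
        "$(printf '%s' "$1" | awk -F. '{print $(NF-1)}')"
}

docroot example.test       # -> /Users/haroldp/Documents/vhosts/example/htdocs
docroot www.example.test   # -> /Users/haroldp/Documents/vhosts/example/htdocs
```

Apache does this substitution itself via mod_vhost_alias; the function above is only to make the mapping concrete.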

Then I uncommented the mod_vhost_alias module to enable apache’s mass vhosting directives:

LoadModule vhost_alias_module libexec/apache2/mod_vhost_alias.so

While I was in there, I told Apache to Listen only on localhost:

Listen 127.0.0.1:80
Because I don’t want anyone else to be able to hit the webserver on my laptop. Just me.

Next I enabled the PHP module by uncommenting:

LoadModule php5_module libexec/apache2/libphp5.so

I set the ServerName so Apache won’t whine about it:

ServerName www.test

I set the ServerAdmin so I know whose fault it is when something doesn’t work:

ServerAdmin you@example.test
And finally, I changed the user that Apache runs as to my login:

User haroldp
Group staff

WARNING, HIGH VOLTAGE!! This is very dangerous. I’m setting up apache to do its work, including running PHP scripts as my own UID. This means that a naughty script could do anything on my machine that I could, including very bad stuff. I am doing this so things like WordPress will create files with my login instead of the web user, which avoids a lot of hassles, and makes upgrades much easier. I am not worried too much about the security implications because I am running my own code, and the server is only available on localhost. If you are already on the machine, there are easier ways to do bad things.

Ok, let’s check our work:

sudo apachectl configtest

Fix any errors and rerun until the config test passes cleanly. Then simply:

sudo apachectl start

If that worked, you should get a 404 page if you go to http://localhost/ . But let’s try out our virtual hosting:

mkdir ~/Documents/vhosts/foo
mkdir ~/Documents/vhosts/foo/htdocs
echo '<?php phpinfo(); ?>' > ~/Documents/vhosts/foo/htdocs/index.php

You should get a phpinfo() page if you go to http://foo.test/ .


Edit 2/13/2018:

This article previously used the “.dev” top level domain. However, Google has bought and deployed .dev. So .dev is dead and all references have been changed to .test.

Edit 9/16/2020:

WHAT YEAR IS IT? I got a new MacBook running Catalina and this setup required two updates. First, the default PHP version is now 7, so you will have to adjust that config. Second, when I tried to access my vhosts I got an error like: “AH00035: access to / denied (filesystem path '/Users/haroldp/Documents/vhosts') because search permissions are missing on a component of the path”. I fought that for a while thinking that file permissions had changed, but in fact it was a system setting: I needed to allow Apache full access to the hard drive.

Edit: 1/30/2024:

I got a new MacBook running Sonoma. First, you need to install PHP, as it’s no longer included in the system; I installed it via Homebrew. Next, I got an error starting Apache because the PHP module wasn’t signed, so I followed a tutorial to sign it.

Using dnsmasq on OSX 10.12 (Sierra) for local dev domain wildcards

We want to develop websites or other internet services on our OSX computer. It’s convenient to point wildcard DNS for a whole (imaginary) top level domain to localhost, so we can invent as many domains as we want without having to edit any config files or do any work.

I used Homebrew to install dnsmasq:

% brew install dnsmasq

And then set up the config file:

cp /usr/local/opt/dnsmasq/dnsmasq.conf.example /usr/local/etc/dnsmasq.conf
vi /usr/local/etc/dnsmasq.conf

To the end of that file I simply added:

address=/test/127.0.0.1

Which tells dnsmasq to resolve any hostname ending in .test to 127.0.0.1 (localhost).

Now we need to tell the OS to start dnsmasq automatically. Again, brew will do all the hard work for us:

sudo brew services start dnsmasq

Let’s test to see if it’s working:

dig foo.test @127.0.0.1

; <<>> DiG 9.8.3-P1 <<>> foo.test @127.0.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;foo.test. IN A
foo.test. 0 IN A 127.0.0.1

We asked the resolver running at 127.0.0.1 (dnsmasq) for the address of “foo.test”, and it returned “127.0.0.1”, which is just what we want.

Now we need to get OSX to use that resolver for DNS lookups on the .test TLD. OSX makes this pretty easy:

sudo mkdir -p /etc/resolver
sudo vi /etc/resolver/test

And insert a line in that file, just like you might in /etc/resolv.conf:

nameserver 127.0.0.1
That’s it. Anything ending with .test will point to localhost. So our next step is to run a server there.

Edit 2/13/2018:

This article previously used the “.dev” top level domain. However, Google has bought and deployed .dev. So .dev is dead and all references have been changed to .test.

Safe Jail Upgrades With ZFS Clones

So I had a FreeBSD jail today running a geriatric version of MySQL that was long past time to update. But sometimes when you perform big updates you discover problems along the way, and it takes much longer than usual. This particular jail can’t tolerate a lot of downtime so I thought it would be nice to perform a “practice” upgrade to flush out all of the problems and have solutions ready when I’m ready to perform the live upgrade. My idea was to clone the jail and perform a practice upgrade on that. It was easy.

First, clone the file system. That’s easy because I keep all of my jails on their own ZFS file systems and maintain daily snapshots:

zfs clone zroot/jails/vhost2@daily.0 zroot/jails/vhost2upgrade

And of course that happens pretty much instantly.

Next I found a free IP address on the server, created a new jail and started it up:

# use the free IP address you found (192.0.2.10 here is just a placeholder)
ezjail-admin create -x vhost2upgrade 192.0.2.10
ezjail-admin start vhost2upgrade
ezjail-admin console vhost2upgrade

Now I’m logged into a clone of my server jail and I can perform my MySQL upgrade and figure out all the little details before I perform it on the live server jail.

All done? Clean up after yourself. Log out of the jail and then from the host system:

ezjail-admin stop vhost2upgrade
ezjail-admin delete vhost2upgrade
zfs destroy zroot/jails/vhost2upgrade

An even better setup would be if I had set up my jail with server and user data on separate ZFS file systems. Then, once I had the server data working the way I wanted, I could zfs promote the clone’s server file system. To do.
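A sketch of how that promotion might go, assuming a hypothetical split of the jail into separate datasets (the `srv` names below are illustrative, not my actual layout):

```sh
# clone only the server dataset and practice the upgrade on it
zfs clone zroot/jails/vhost2/srv@daily.0 zroot/jails/vhost2srv-upgrade
# ... perform and verify the upgrade in the clone ...
# make the clone independent of its origin snapshot
zfs promote zroot/jails/vhost2srv-upgrade
# then swap it into place
zfs rename zroot/jails/vhost2/srv zroot/jails/vhost2srv-old
zfs rename zroot/jails/vhost2srv-upgrade zroot/jails/vhost2/srv
```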

I Want My File System Back, Apple

You have a large (90MB) PDF on your Mac (OSX Yosemite) and your iPad (iOS 9.2) would be the perfect place to read it. How do you get this file from your desktop computer onto a *decent* reader on your tablet?


Things that don’t work:

Plug it into your computer via Thunderbolt/USB and copy it over. NO PEASANT! The iPad does NOT have a file system! Stop pretending it does! Do not look behind the curtain! This does work on my Kindle Fire, and takes about 8 seconds, of course. And then you discover that there isn’t a decent PDF reader… on the computer sold by the book company. Ugh.

Use iTunes to move it over. Nope. iTunes (v12.x) no longer manages “books”. There is no “Books” tab in iTunes anymore. Also, as a side note, iTunes as a user interface for playing music has gotten iteratively worse every single version since they bought (and killed) SoundJam to create it. The user interface organization is now *profoundly* bad. I am using WinAmp to play MP3s now. I kid you not.

Use iBooks to copy it over. Nope. iBooks (v1.2) no longer has a “Devices” list that allows you to simply drag things around.

Email it to yourself. Just, “No”. I said it’s 90M – SMTP will explode. Honestly, I seriously considered just upping my email size limits to 90M (over 100M after my mail client base64 encodes that) for a day to try this anyway, since I run my own mail server, hah. Emailing things to myself makes me sad though. It’s basically admitting defeat.

Download it through your iPad’s web browser. Nope. Then I have to READ it in the browser, and the browser is a shitty reader. This is a 700+ page PDF. There doesn’t seem to be any way to move the PDF to a decent reader. Maybe if I wrote a script that specifically set a “Content-disposition” header, mobile Safari would give me the option to download/save/open in iBooks? Don’t think I won’t code that for one PDF!

Use AirDrop. Nope. I can copy the file over via AirDrop, and I have the option to open it with iBooks or Kindle reader, but it never shows up in either. Also, as the iPad has no file system, and I downloaded it three times trying to make this happen, do I now have 270MB of mystery bloat that I can’t see or delete? I wonder, all told, how many completely unavailable copies of this 90MB file I now have on the iPad.

Send to Kindle on your desktop, and sync it on your iPad. Nope, Amazon has a 50MB limit. Books copied to your Kindles via the file system aren’t part of Amazon’s “system” and don’t get synced to other devices.

Import the book into Calibre and use its “Send to device” function. Nope. Calibre used iTunes to send books to iPads, and iTunes has removed its book support, so that doesn’t work anymore.



Upload the file to ownCloud/Dropbox/Box and then sync on your iPad. DING! DING! Copy the PDF to your ownCloud (etc…) folder, upload the #@$$# thing to your server in a colo in another state, then on your iPad, re-download the file back to your LAN via the ownCloud app. ownCloud is great, but it’s a poor book reading app of course. Next click the unlabeled and inscrutable [^] “Share” button and find the “Open in…” option, and choose iBooks. Jesus-fuck!

Apple and many others have seized on this idea that the file system is the most difficult thing in computing for new/casual users to understand, and that the only way to save those users from the pitfalls of a real file system is to abstract it away and hide it. Fuck you! I won’t deny that dealing with a file system *is* a problem, but sweeping it under the rug is an insane solution. It’s like saying that the hardest part of marriage is communication, so married couples should ONLY communicate through approved Hallmark brand greeting card quotations or Top 40 song lyrics. Fucking no! The solution is to learn to communicate! Computers save stuff in files, and they organize those files in a folder tree. Learn that, gawd damnit!

I feel like I am constantly beating my head against Apple’s insistence that the filesystem must be hidden from the user.

Twenty Books in 2015

I present once again, the List of Books I Read Last Year! 2015 was a banner year for me, as I managed to read 20 books! That’s double last year’s count. If you are wondering what my secret was, I just read much shorter books in 2015. I should have thought of that sooner. Almost everything I read was on my now “old” Kindle Keyboard. It lasted another year.

Trout Fishing in America / The Pill vs. the Springhill Mine Disaster / In Watermelon Sugar
Richard Brautigan

This is an old hippy classic. I have to say, I didn’t care for “Trout Fishing” much. I liked a few of the poems in the middle section, but I found them uneven. But the Watermelon Sugar novella was just completely delightful – a unique, possibly post-apocalyptic, fantasy story. I can’t think of anything like it.

JavaScript: The Good Parts
Douglas Crockford

A dense technical book on what to use, and what to avoid in JS. I think I read through the Object chapter four times. I’m still blown away by his method for building private properties with closures.

David Foster Wallace’s Infinite Jest: A Reader’s Guide
Stephen J. Burn

Meh, I think this suffered from being written too soon. The Infinite Jest wiki is a lot better in many ways.

Goodbye to All That
Robert Graves

I enjoyed this WWI memoir, though I felt like he wrote about his time in combat with too much detachment.

The Trial
Franz Kafka

You know that dream you have where you are in your home, except everything is different and confusing, and there are things hidden behind doors that you didn’t know were there, and you have some strange imperative you must complete? The vicious bureaucracy of The Trial – dream-like in its intricacy and absurdity – has exactly that quality. This was one of the best books I read in 2015, and it’s a recommended read for absolutely everyone. This is the book that defines Kafkaesque.

Reaper Man
Terry Pratchett

We lost Terry Pratchett last year, but thankfully he wrote many books, and this was another fun one.

Dive Into HTML5
Mark Pilgrim

We have been using HTML5 for years already, of course, but have you done your homework and really studied the whole thing? If not, this is available for free on the internets.

Iron Council
China Miéville

This was the third (and last) novel set in the rich, sinister, dark, dirty, straight-up filthy Bas-Lag. I can’t recommend this series enough. It’s “fantasy” I guess, but definitely its own unique variety. It ain’t “swords and wizards” fantasy. This book hasn’t been as popular as the preceding two, and maybe I’d only give it 9/10, which is to say, I absolutely loved it, and loved going back to Bas-Lag.

Civil War Stories
Ambrose Bierce

Bierce is probably most famous now for his, “Devil’s Dictionary.” But this is a wonderful collection of fictional episodes from the Civil War, most with a “twist” ending. It’s a great book from beginning to end, but An Occurrence at Owl Creek Bridge was particularly striking.

Homage to Catalonia
George Orwell

This is Orwell’s memoir of his time fighting with the anarchists against the fascists during the Spanish Civil War. It adds so much depth to his politics and the loathing he developed for Soviet communism. It starts out as a story of deprivation in WWI-style trench warfare, but picks up steam as he finds himself in the city fighting the propaganda and treachery of his supposed allies.

The Sheltering Sky
Paul Bowles

This was beautifully written, but I had a hard time identifying with the main characters. It reminded me of Camus’ The Stranger: people in North Africa making terrible decisions.

The Big Sleep
Raymond Chandler

This was fun, and I was a little surprised by how closely the old Bogart movie stuck to the book. The notable difference was that Chandler’s vices, victims and villains were all a little dirtier. General Sternwood says, “A nice state of affairs when a man has to indulge his vices by proxy,” which is of course just what I’m doing reading a noir detective novel.

Things Fall Apart
Chinua Achebe

I enjoyed this story of colonization and missionary conversion in West Africa from the point of view of an African tribesman.

1984
George Orwell

So the whole family (Bean, Annabel and I) read or re-read 1984 this year. It was very interesting to put this novel, which is so important to me, in context with Homage to Catalonia and Orwell’s experiences in the Spanish Civil War. And it’s cool that I now have the experience of this book in common with my daughter.

Blood Meridian, or the Evening Redness in the West
Cormac McCarthy

This western begins with the tightest, most tersely told history of a boy growing up to become an outlaw. Every step unfolds like math, each so clearly coming out of the last. Every page contains a horror more shocking than the previous. It was an amazing read.

Brideshead Revisited
Evelyn Waugh

I mostly enjoyed the book, but the ending felt like a shabby advertisement for Catholicism. I felt pretty let down. Along with Goodbye to All That, this book surprised me a little with just how gay the English aristocracy of a century ago was.

The Lathe of Heaven
Ursula K Le Guin

I enjoyed this imaginative science fiction novel about dreams coming true in the worst possible ways, by one of my favorite authors.

Civil Disobedience
Henry David Thoreau

Every other sentence in this tract is another perfect poem about liberty. I felt like jumping up and down in agreement most of the time I was reading. It’s amazing to think that I, and all of the good and kind people I know, take a significant portion of our income and give it to people so they can use it to murder and oppress other people. We pay money so the government will put utterly harmless people in cages for victimless “crimes”. We pay money to bomb brown people out in West Dirtistan with robot airplanes. We pay to have our phone calls and emails recorded. If I do not pay my taxes because supporting that bruises my conscience too much, I’ll be just another “tax dodger” trying to avoid doing his “fair share”. Well, Thoreau went to jail rather than support slavery and the Mexican-American War, and wrote this book about why that was the right thing to do.

Sacred and Secular Elegies
George Barker

Ok, book porn here! So I was listening to the audiobook of Hunter S. Thompson’s Songs of the Doomed, and there’s an out-take of some chatter, a little hard to hear, between stories, where Thompson reads a few lines from the last stanza of Sacred Elegy #5 by George Barker. The poem is pretty amazing, so I tracked it down. It’s way out of print, but I found a copy retired from some high school library. When I punched it into goodreads, it had never been submitted before, so the picture of the book on goodreads is actually mine. You can see the library marks in the corner. The book itself is very short, but the poems are so dense and impressionistic that it actually took a while to get through. And amusingly enough, I felt that the last poem, Sacred Elegy #5, was easily the best.

One Hundred Years of Solitude
Gabriel García Márquez

I just finished this before the new year, and enjoyed it tremendously. Keep a family tree handy for the family that this book of magical realism follows through seven generations. They keep naming one generation after the last, and it’s easy to get lost. Great book, in any case.

Configuring a Dev Box Mail Server

I develop websites on my laptop using a local web server.  Often those sites have functions that send out email, and that needs to be tested, along with everything else.  It can be a problem when some function sends out lots of emails to customers, admins, affiliates – a bunch of people.  If I’m working with a copy of the “live” database to debug some problem, it might try to send emails to places I don’t want (real customers).  What would be nice is if it generated those emails, but just wrote them to a file on disk where I could look at them.

This is pretty easy to do with Postfix, my MTA of choice and the one that ships with my OSX laptop.  First, add a service to the end of /etc/postfix/master.cf:

fs_mail   unix  -       n       n       -       -       pipe
  flags=F user=_www argv=/usr/bin/tee /Users/haroldp/Documents/Projects/localmail/htdocs/spool/${queue_id}.${recipient}.txt

Let’s break that down.

  • I am adding a new service that I’m naming fs_mail.
  • It delivers mail through Postfix’s pipe(8) daemon (which works like a unix pipe).
  • It runs as user _www, which is the UID my web server runs as.  The mail files it creates will have 0600 permissions, so only the owner can read them.  More on that later.
  • The pipe argv is set to tee (tee has a man page you can read), to split its input out to a file.  That file lands in a directory in my websites folder.  Each file is named using the postfix queue ID and the recipient, which I thought would be sufficiently unique for my needs.
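The tee trick is easy to sanity-check on its own. A toy run (the file name here is invented for illustration; real files get the actual queue ID and recipient):

```shell
# simulate what the fs_mail service does: the message arrives on stdin,
# and tee copies it into a spool file (tee's stdout is discarded)
printf 'Subject: test\n\nhello\n' |
    tee /tmp/QUEUEID.recipient@example.test.txt > /dev/null

# the complete message is now sitting on disk
cat /tmp/QUEUEID.recipient@example.test.txt
```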

When that is saved, we need to create the directory to collect those emails and make it writable by the fs_mail process:

mkdir /Users/haroldp/Documents/Projects/localmail/htdocs/spool
chmod 777 /Users/haroldp/Documents/Projects/localmail/htdocs/spool

If you are setting this up on your own computer, you will want to adjust the directory location to suit your needs.

Now we need to tell postfix to use our new service for all outgoing email. Edit /etc/postfix/main.cf, adding the following:

default_transport = fs_mail

That should do it. Restart postfix and check your mail log for any errors:

sudo postfix stop
sudo postfix start
tail /var/log/mail.log

If that all looks good we can test by sending an email from the command line:

% mail test@example.com
Subject: test #42
This is a test message. End it by typing a period (.) on its own line, and hitting return.
.

Check your mail.log again to see that it worked without error. Check your new spool directory to see if there is a mail file in there.
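If you’d rather poke at the spool from the shell, here’s a quick sketch. To keep it self-contained I’m using a throwaway directory instead of the real spool path, and the file name (queue ID plus recipient) is made up for the demo:

```shell
# stand-in spool dir with one fake captured message (names are made up)
SPOOL=$(mktemp -d)
printf 'To: customer@example.com\nFrom: app@localhost\nSubject: test #42\n\nHello.\n' \
    > "$SPOOL/A1B2C3D4E5.customer@example.com.txt"

# newest messages first, the same order the web page below will use
ls -t "$SPOOL" | head -n 10

# pull the Subject header out of the newest file
newest=$(ls -t "$SPOOL" | head -n 1)
grep -m1 '^Subject:' "$SPOOL/$newest"
```

Point `SPOOL` at your real spool directory and the same two commands give you a quick sanity check without the browser.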

If that is working, then you are done! But remember that we saved those messages as UID _www? That is the default user ID of apache web processes on OSX, so my local web server can read those files. For extra credit build a web page to view the 10 newest emails in your spool dir:

<?php
# number of messages to display:
$max_messages = 10;

if ( isset($_POST['filename']) ) {
    # deleting a file; basename() keeps the form input inside the spool dir
    $filename = './spool/' . basename($_POST['filename']);
    if ( file_exists($filename) ) {
        unlink($filename);
    }
    else {
        die("File $filename not found");
    }
}

# get a list of all the files, then sort them by age, newest first
$files = array();
if ($handle = opendir('./spool/')) {
    while (false !== ($entry = readdir($handle))) {
        if ( $entry !== '.'  && $entry !== '..' ) {
            $stat = stat('./spool/' . $entry);
            $files[] = array(
                'filename' => $entry, 
                'lastmod'  => $stat['mtime'],
                'size'     => $stat['size']
            );
        }
    }
    closedir($handle);
}
usort($files, "sortinator");
$total_count = count($files);

# get To, From and Subject from each of our $max_messages files
$messages = array();
$display_count = 0;
foreach ($files as $file) {
    if ( $display_count < $max_messages ) {
        $file['subject'] = null;
        $file['to']      = null;
        $file['from']    = null;
        $handle = @fopen('./spool/' . $file['filename'], "r");
        if ($handle) {
            while (($buffer = fgets($handle, 4096)) !== false) {
                foreach ( array('Subject', 'To', 'From') as $header ) {
                    $h_len = strlen($header);
                    if ( substr($buffer, 0, $h_len + 1) == $header . ':' ) {
                        $file[strtolower($header)] = trim(substr($buffer, $h_len + 2));
                    }
                }
                if (
                       ! is_null($file['subject']) 
                    && ! is_null($file['to']) 
                    && ! is_null($file['from']) 
                    ) {
                    break; # quit looking after we match all three
                }
            }
            fclose($handle);
        }
        else {
            die("Couldn't open ./spool/" . $file['filename']);
        }
        $messages[] = $file;
        $display_count++;
    }
    else {
        break; # we have our $max_messages newest files
    }
}

# comparator for usort: sort newest first
function sortinator($a,$b) {
    return $b['lastmod'] - $a['lastmod'];
}
?>

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8" />
    <title>local mail</title>
    <link rel="stylesheet" href="/bootstrap/css/bootstrap.min.css">
    <meta http-equiv="refresh" content="60">
</head>
<body>

<div class="container">

    <div class="alert alert-warning">
        Total Emails <span class="badge"><?= HtmlSpecialChars($total_count); ?></span>
    </div>

    <table class="table table-striped">
<?php foreach ($messages as $message): ?>
        <tr>
        <td>
            <a href="/spool/<?= HtmlSpecialChars($message['filename']); ?>">
            <?= HtmlSpecialChars($message['to']); ?>
            </a>
        </td>
        <td><?= HtmlSpecialChars($message['from']); ?></td>
        <td><?= date('n/j/y h:i', $message['lastmod']); ?></td>
        <td><?= HtmlSpecialChars($message['size']); ?> bytes</td>
        <td>
            <?= HtmlSpecialChars($message['subject']); ?>
        </td>
        <td>
            <form method="post">
            <button type="submit" class="btn btn-primary trash-msg" 
                name="filename" value="<?= HtmlSpecialChars($message['filename']); ?>">
            <span class="glyphicon glyphicon-trash"></span></button>
            </form>
        </td>
        </tr>
<?php endforeach; ?>
    </table>

</div>

<script src="/jquery-1.11.2.min.js"></script>
<script src="/bootstrap/js/bootstrap.min.js"></script>

</body>
</html>


You’ll end up with something that looks like this: