Installing FreeBSD on a SoYouStart Dedicated Server Over IPMI

And Converting a 4-Way ZFS Mirror Into a RAIDZ Array Via a Network Reinstall

I’ve had great luck over the last few years renting SoYouStart’s super-cheap but high-powered dedicated servers. The prices are phenomenal, and the sale prices are even better. The SYS web UI is outstanding, especially if you are moving up from one of the herd of half-baked WHMCS installs. Everything is wonderfully automated. Their technical support is… Well, their prices are really great, and the better servers come with IPMI. Love it.

I recently ordered a pair of new servers that were a little different from the ones I’d previously gotten. All my previous machines had been running FreeBSD on ZFS mirrors, but these new machines came with four big drives each. Upon delivery, however, I noticed that they’d been provisioned as four-way ZFS mirrors. That’s four disks all mirroring each other: quadruple redundancy, with total usable space of a little less than one disk. Sad trombone.

What I actually wanted was one RAIDZ array, and one RAID10 array.
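Roughly, the three layouts look like this as zpool create commands. This is a sketch only, with hypothetical ada0–ada3 device names; you can’t actually run these against a provisioned pool without destroying it:

```
# 4-way mirror (what SYS provisioned): usable space of one disk
zpool create tank mirror ada0 ada1 ada2 ada3

# RAIDZ: usable space of ~3 disks, survives any one disk failing
zpool create tank raidz ada0 ada1 ada2 ada3

# "RAID10": two mirrored pairs, striped; usable space of ~2 disks
zpool create tank mirror ada0 ada1 mirror ada2 ada3
```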

The Dark Ages

So I tried to reinstall through their web UI with a custom layout, which made it easy to set the swap to something more reasonable than their stock offering, but the only options were to install on a single drive or an N-way mirror. No-go.

Then I checked out their “install template” options. They don’t seem to be documented anywhere, and they don’t really give you any low-level ZFS options, just zvol choices. Fail.

But I have IPMI, so I can do anything I want, right? I saved their installed network settings (important!) and launched their IPMI Java console. Right off the bat, it was a lot easier to use than other providers’. I have had to jump through so many dumb hoops to get IPMI actually working on servers where it was sold as a feature but treated like an annoying inconvenience by the data center. And the console *actually* worked in OSX! Oh my!

Except it didn’t, because it doesn’t support media redirection (mounting CDs over the network) on OSX. But I have run into this before, so I booted VirtualBox on my MacBook, installed Windows, and connected to their IPMI console. Nope, need to install Java first. I always forget that. So many swears. OK, with that installed I could use my OSX laptop, to run a Windows VM, to run a KVM console in a Java virtual machine, to install FreeBSD on a server in another country. So many yaks! But that got it working!

Until it didn’t. The boot disk took *forever* to load, and then seemed to hang after it loaded “ums0”, the virtual keyboard/mouse device. It just hung there forever. I tried different ISOs and different versions of FreeBSD, but they all hung somewhere late in the boot. I tried the memstick mini image, but that seemed to reliably crash the Java console. Disaster. Now what?

I’m going to skip ahead past a lot of hair-pulling and failed schemes, and I’m not even going to mention trying to break three disks out of the mirror on a running system and build a RAIDZ with a fake memory disk in the fourth slot. Oof!

The Enlightenment

I finally got on Freenode #FreeBSD and begged for help. RhodiumToad pointed out that so late in the boot, when the machine was already in multi-user mode, was a very strange place to hang, and he suspected it was booting fine but the console output was getting sent to the wrong console. I poked around under his direction, with a very slow boot between each try, until he suggested we disable serial output during boot, and that’s how we fixed it!

I booted from the v12.1 bootonly.iso, and then at the boot menu, I selected option #3, “Escape to the loader prompt”. That drops you into a command line “OK” prompt. I could see that the serial console was enabled with:

> show boot_serial

So I disabled it, and then continued the boot with:

> set boot_serial=NO
> boot

And I was in the installer! SYS actually had two console options. In addition to the Java IPMI console, there is an option for a browser-based serial console. At some point during the boot, FreeBSD switched the console to that and I could no longer interact with the Java console. I couldn’t simply use the serial console because it doesn’t support media redirection.
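To make that fix stick across reboots, the same knob can go in the installed system’s /boot/loader.conf. This is a sketch, assuming you want the video console rather than serial:

```
# /boot/loader.conf
boot_serial="NO"
console="vidconsole"
```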

Run Multiple Versions of PHP-FPM on FreeBSD

Let’s install two versions of PHP-FPM on the same FreeBSD server! This is not something that Ports or Packages really supports, but with a few cheats we can make Ports do most of the hard work, so we don’t have to build them from scratch like cavemen. I’m installing 5.6 and 7.2, but this should probably work with other versions.

The first and dumbest step is to install and immediately delete PHP 5.6 and 7.2 via ports. This gets all of the build and run dependencies installed, in a very normal way, before we start building things with weird flags that would muck everything up if they cascaded down to dependencies in the build process. No, you can’t (easily) use packages for this, because you want those build dependencies installed. There is probably a smarter way to do this.

cd /usr/ports/lang/php56
make install clean
pkg del php56

cd /usr/ports/lang/php72
make install clean
pkg del php72

Now build them again, with flags that change where they are installed and disable the Ports system’s normal good sense about installing software that conflicts. This is the smart bit that I stole from the FreeBSD Forums.

cd /usr/ports/lang/php56
make PREFIX=/usr/local/php56 PHPBASE=/usr/local/php56 install clean DISABLE_CONFLICTS=1

cd /usr/ports/lang/php72
make PREFIX=/usr/local/php72 PHPBASE=/usr/local/php72 install clean DISABLE_CONFLICTS=1

So now you have everything for each of the two versions installed under /usr/local/php56 and /usr/local/php72, respectively. They install rc scripts that we will need to adapt so we can run them at the same time:

cp -p /usr/local/php56/etc/rc.d/php_fpm /usr/local/etc/rc.d/php56_fpm
perl -pi -e 's/php_fpm/php56_fpm/g' /usr/local/etc/rc.d/php56_fpm
perl -pi -e 's#/var/run/php-fpm.pid#/var/run/php56-fpm.pid#' /usr/local/etc/rc.d/php56_fpm
perl -pi -e 's#^pid = .*#pid = /var/run/php56-fpm.pid#' /usr/local/php56/etc/php-fpm.conf
perl -pi -e 's#^listen = .*#listen = /tmp/php56-fpm-www.sock#' /usr/local/php56/etc/php-fpm.conf
echo 'php56_fpm_enable="YES"' >> /etc/rc.conf
service php56_fpm start

The perl one-liners are just a compact, copy-paste-able way of saying you need to edit the files to replace certain things. The rc files need to be edited to differentiate them, so they each have their own rcvars and pid files. The php-fpm.conf files are edited to sync that pid file change and differentiate the listen sockets.

cp -p /usr/local/php72/etc/rc.d/php_fpm /usr/local/etc/rc.d/php72_fpm
perl -pi -e 's/php_fpm/php72_fpm/g' /usr/local/etc/rc.d/php72_fpm
perl -pi -e 's#/var/run/php-fpm.pid#/var/run/php72-fpm.pid#' /usr/local/etc/rc.d/php72_fpm
perl -pi -e 's#^pid = .*#pid = /var/run/php72-fpm.pid#' /usr/local/php72/etc/php-fpm.conf
perl -pi -e 's#^listen = .*#listen = /tmp/php72-fpm-www.sock#' /usr/local/php72/etc/php-fpm.d/www.conf
echo 'php72_fpm_enable="YES"' >> /etc/rc.conf
service php72_fpm start

In this example, I have opted to change the “listen” directive to a unix socket. You could use tcp sockets too, but you will have to choose a different port for each version of PHP-FPM that you are running.
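If you go the TCP route instead, the pool configs might look like this. Ports 9056 and 9072 are arbitrary picks of mine, not anything the ports set up for you:

```ini
; /usr/local/php56/etc/php-fpm.conf
listen = 127.0.0.1:9056

; /usr/local/php72/etc/php-fpm.d/www.conf
listen = 127.0.0.1:9072
```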

cd /usr/ports/www/apache24
make install clean
echo 'apache24_enable="YES"' >> /etc/rc.conf
vi /usr/local/etc/apache24/httpd.conf

Obviously you’ll want to edit httpd.conf to suit your needs. That is beyond the scope of this post, but I will be uncommenting the “proxy_fcgi_module” line so apache can talk to PHP-FPM.

Now add FPM pools to one PHP-FPM or the other, or both. For example:

user = haroldp
group = haroldp
listen = /tmp/php72-fpm-haroldp.test.sock
listen.mode = 0666
chroot = /home/haroldp
pm = ondemand
pm.max_children = 50
php_admin_value[doc_root] = /haroldp.test/htdocs
php_admin_value[cgi.fix_pathinfo] = 0
php_admin_value[sendmail_path] = /bin/mini_sendmail -t

And add vhosts to apache, pointing to a php-fpm 5.6 socket or a php-fpm 7.2 socket, at your option. Maybe something like:

<VirtualHost *:80>
  ServerName haroldp.test
  DocumentRoot /home/haroldp/haroldp.test/htdocs
  SuexecUserGroup haroldp haroldp
  ErrorLog /home/haroldp/haroldp.test/logs/haroldp.test.error_log
  CustomLog /home/haroldp/haroldp.test/logs/haroldp.test.access_log combined
  <Directory /home/haroldp/haroldp.test/htdocs>
    Order allow,deny
    Allow from all
    Options +Indexes +FollowSymLinks +ExecCGI +Includes +MultiViews
    AllowOverride All
  </Directory>
  <FilesMatch "\.php$">
    ProxyFCGIBackendType GENERIC
    SetHandler "proxy:unix:/tmp/php72-fpm-haroldp.test.sock|fcgi://localhost/home/haroldp/haroldp.test/htdocs$1"
  </FilesMatch>
</VirtualHost>

Switching between them is a two byte change to the apache vhost config.

service apache24 start
service php72_fpm restart
service php56_fpm restart


Running something like:
cd /usr/ports/lang/php56
make PREFIX=/usr/local/php56 PHPBASE=/usr/local/php56 deinstall reinstall DISABLE_CONFLICTS=1

for each version should upgrade it.


My big worry with this approach is that a future update may introduce incompatible dependencies between the two ports.

I would prefer an approach that builds each PHP-FPM in its own jail, but of course they need access to the file system that hosts the websites. Is there a smart way to do that? Put /home on its own ZFS volume and mount it in each PHP jail? And share a /tmp between all of them for the unix sockets? You’d need to keep user accounts synced, or use LDAP for authentication.

Setting up Apache on OSX 10.12 (Sierra) for no-setup wildcard virtual hosts

I want to host a copy of all the websites on which I work, right on the computer where I do my coding. I don’t want to depend on a server on my LAN that won’t be there when I am working from out of the office. I don’t want to work on a remote server that requires a slow (S)FTP loop to try out every change. And I also don’t want to work entirely from the command-line on a remote server. So I set up my macbook with wildcard DNS that points any hostname matching *.test to localhost (127.0.0.1). If I am working on example.com’s website, I can use “example.test” as a hostname that points right back to my machine. Now I need to set up Apache to host example.test. But adding a virtual host config for every website on which I work would be extremely laborious. Luckily, Apache supports no-config mass virtual hosts. All I’ll need to do to add a new website for example.test is create an “example/” directory in the right spot.

Apache includes cool “mass virtual hosting” features that will allow it to suss out the DocumentRoot from the request hostname. First, let’s create a directory where our virtual hosts will live:

mkdir ~/Documents/vhosts

I put mine in a folder inside my Documents folder. For my login, that’s /Users/haroldp/Documents/vhosts. But you can put it just about anywhere. I added the following to my /etc/apache2/httpd.conf:

<VirtualHost *:80>
VirtualDocumentRoot /Users/haroldp/Documents/vhosts/%-2/htdocs
AddType application/x-httpd-php .php
DirectoryIndex index.php index.html
</VirtualHost>

<Directory /Users/haroldp/Documents/vhosts>
Require all granted
AllowOverride All
Options +FollowSymLinks
</Directory>

First, note that /Users/haroldp/Documents/vhosts makes sense for me on my computer, but it will be different for everyone, so you can’t just copy and paste. Note that I tacked an /htdocs directory onto the end of my VirtualDocumentRoot directive. This is not necessary at all, but I like to have directories associated with a website that live outside the webspace. You don’t have to do that if you don’t want to. Note too that I set up PHP, because I’ll be using it. You may or may not want those directives.
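For the curious, %-2 means “the second-to-last dot-separated part of the request hostname”. A quick shell sketch of the mapping Apache performs (the function name is mine, not an Apache thing):

```shell
# mimic mod_vhost_alias's %-2 interpolation for a given Host header
host_to_docroot() {
  printf '%s' "$1" | awk -F. '{ printf "/Users/haroldp/Documents/vhosts/%s/htdocs\n", $(NF-1) }'
}

host_to_docroot example.test   # -> /Users/haroldp/Documents/vhosts/example/htdocs
```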

Then I uncommented the mod_vhost_alias module to enable apache’s mass vhosting directives:

LoadModule vhost_alias_module libexec/apache2/mod_vhost_alias.so

While I was in there I told Apache to Listen only on localhost:

Listen 127.0.0.1:80
Because I don’t want anyone else to be able to hit the webserver on my laptop. Just me.

Next I enabled the PHP module by uncommenting:

LoadModule php5_module libexec/apache2/libphp5.so

I set the ServerName so Apache won’t whine about it:

ServerName www.test

I set the ServerAdmin to my own email address, so I know whose fault it is when something doesn’t work.
And finally, I changed the user that Apache runs as to my login:

User haroldp
Group staff

WARNING, HIGH VOLTAGE!! This is very dangerous. I’m setting up apache to do its work, including running PHP scripts as my own UID. This means that a naughty script could do anything on my machine that I could, including very bad stuff. I am doing this so things like WordPress will create files with my login instead of the web user, which avoids a lot of hassles, and makes upgrades much easier. I am not worried too much about the security implications because I am running my own code, and the server is only available on localhost. If you are already on the machine, there are easier ways to do bad things.

Ok, let’s check our work:

sudo apachectl configtest

Fix any errors and rerun until the config tests clean. Then simply:

sudo apachectl start

If that worked, you should get a 404 page if you go to http://localhost/ . But let’s try out our virtual hosting:

mkdir ~/Documents/vhosts/foo
mkdir ~/Documents/vhosts/foo/htdocs
echo '<?php phpinfo(); ?>' > ~/Documents/vhosts/foo/htdocs/index.php

You should get a phpinfo() page if you go to http://foo.test/ .


Edit 2/13/2018:

This article previously used the “.dev” top level domain. However, Google has bought and deployed .dev. So .dev is dead and all references have been changed to .test.

Edit 9/16/2020:

WHAT YEAR IS IT? I got a new macbook running Catalina and this setup required two updates. First, the default PHP version is 7, so you will have to adjust that config. Second, when I tried to access my vhosts I got an error like: “AH00035: access to / denied (filesystem path ‘/Users/haroldp/Documents/vhosts’) because search permissions are missing on a component of the path”. I fought that for a while thinking that file permissions had changed, but in fact it was a system setting. I needed to allow Apache full access to the hard drive. Details here.

Edit: 1/30/2024:

I got a new macbook running Sonoma. First, you need to install PHP, as it’s no longer included in the system; I installed it via Homebrew. Next I got an error starting Apache because the PHP module wasn’t signed. I followed this tutorial to sign it.

Using dnsmasq on OSX 10.12 (Sierra) for local dev domain wildcards

We want to develop websites or other internet services on our OSX computer. It’s convenient to point wildcard DNS for a whole (imaginary) top level domain to localhost, so we can invent as many domains as we want without having to edit any config files or do any work.

I used Homebrew to install dnsmasq:

% brew install dnsmasq

And then set up the config file:

cp /usr/local/opt/dnsmasq/dnsmasq.conf.example /usr/local/etc/dnsmasq.conf
vi /usr/local/etc/dnsmasq.conf

To the end of that file I simply added:

address=/test/127.0.0.1

This tells dnsmasq to resolve any hostname ending in .test to 127.0.0.1 (localhost).

Now we need to tell the OS to start dnsmasq automatically. Again, brew will do all the hard work for us:

sudo brew services start dnsmasq

Let’s test to see if it’s working:

dig foo.test @127.0.0.1

; <<>> DiG 9.8.3-P1 <<>> foo.test @127.0.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;foo.test. IN A

foo.test. 0 IN A 127.0.0.1

We asked the resolver running at 127.0.0.1 (dnsmasq) for the address of “foo.test”, and it returned “127.0.0.1”, which is just what we want.

Now we need to get OSX to use that resolver for DNS lookups on the .test TLD. OSX makes this pretty easy:

sudo mkdir -p /etc/resolver
sudo vi /etc/resolver/test

And insert a line in that file, just like you might in /etc/resolv.conf:

nameserver 127.0.0.1
That’s it. Anything ending with .test will point to localhost. So our next step is to run a server there.

Edit 2/13/2018:

This article previously used the “.dev” top level domain. However, Google has bought and deployed .dev. So .dev is dead and all references have been changed to .test.

Safe Jail Upgrades With ZFS Clones

So I had a FreeBSD jail today running a geriatric version of MySQL that was long past time to update. But sometimes when you perform big updates you discover problems along the way, and it takes much longer than usual. This particular jail can’t tolerate a lot of downtime so I thought it would be nice to perform a “practice” upgrade to flush out all of the problems and have solutions ready when I’m ready to perform the live upgrade. My idea was to clone the jail and perform a practice upgrade on that. It was easy.

First, clone the file system. That’s easy because I keep all of my jails on their own ZFS file systems and maintain daily snapshots:

zfs clone zroot/jails/vhost2@daily.0 zroot/jails/vhost2upgrade

And of course that happens pretty much instantly.

Next I found a free IP address on the server, created a new jail and started it up:

ezjail-admin create -x vhost2upgrade
ezjail-admin start vhost2upgrade
ezjail-admin console vhost2upgrade

Now I’m logged into a clone of my server jail and I can perform my MySQL upgrade and figure out all the little details before I perform it on the live server jail.

All done? Clean up after yourself. Log out of the jail and then from the host system:

ezjail-admin stop vhost2upgrade
ezjail-admin delete vhost2upgrade
zfs destroy zroot/jails/vhost2upgrade

An even better setup would be if I had set up my jail with server and user data on separate partitions. Then, once I had the server data working the way I wanted, I could zfs promote the clone’s server partition. To do.

Configuring a Dev Box Mail Server

I develop websites on my laptop using a local web server.  Often those sites have functions that send out email, and that needs to be tested, along with everything else.  It can be a problem when some function sends out lots of emails to customers, admins, affiliates – a bunch of people.  If I’m working with a copy of the “live” database to debug some problem, it might try to send emails to places I don’t want (real customers).  What would be nice is if it generated those emails, but just wrote them to a file on disk where I could look at them.

This is pretty easy to do with Postfix, my MTA of choice and the one that ships with my OSX laptop.  First, add a line to the end of /etc/postfix/master.cf:

fs_mail unix - n n - - pipe
  flags=F user=_www argv=/usr/bin/tee /Users/haroldp/Documents/Projects/localmail/htdocs/spool/${queue_id}.${recipient}.txt

Let’s break that down.

  • I am adding a new service that I’m naming fs_mail.
  • It delivers mail using the pipe daemon (which works like a unix pipe).
  • It runs as user _www, which is the UID my web server runs as.  The mail files created will have 0600 permissions, so only the owner can read them.  More on that later.
  • The pipe argv is set to tee (tee has a man page you can read), to split output to a file.  And that file is in a directory in my websites folder.  Each file will be named using the postfix queue ID and the recipient.  I thought that would be sufficiently unique for my needs.
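To see what lands on disk, here is a simulation of what the pipe service does, with made-up queue ID and recipient values (postfix expands ${queue_id} and ${recipient} itself) and a temp directory standing in for the real spool:

```shell
# stand-ins for the values postfix would substitute
queue_id="3ABC12DEF0"
recipient="customer@example.com"
spool=/tmp/demo_spool
mkdir -p "$spool"

# this is essentially what "argv=tee ..." does with the message on stdin
printf 'Subject: test\n\nhello\n' | tee "$spool/${queue_id}.${recipient}.txt" > /dev/null

ls "$spool"
```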

When that is saved, we need to create the directory to collect those emails and make it writable by the fs_mail process:

mkdir /Users/haroldp/Documents/Projects/localmail/htdocs/spool
chmod 777 /Users/haroldp/Documents/Projects/localmail/htdocs/spool

If you are setting this up on your own computer, you will want to adjust the directory location to suit your needs.

Now we need to tell postfix to use our new service for all outgoing email. Edit /etc/postfix/main.cf, adding the following:

default_transport = fs_mail

That should do it. Restart postfix and check your mail log for any errors:

sudo postfix stop
sudo postfix start
tail /var/log/mail.log

If that all looks good we can test by sending an email from the command line:

% mail
Subject: test #42
This is a test message. End it by typing a period (.) on its own line, and hitting return.

Check your mail.log again to see that it worked without error. Check your new spool directory to see if there is a mail file in there.

If that is working, then you are done! But remember that those messages were written by UID _www? That is the default user ID of Apache web processes on OSX, so my local web server can read those files. For extra credit, build a web page to view the 10 newest emails in your spool dir:

<?php

# number of messages to display:
$max_messages = 10;

if ( isset($_POST['filename']) ) {
    # deleting a file
    $filename = './spool/' . $_POST['filename'];
    if ( file_exists($filename) ) {
        unlink($filename);
    }
    else {
        die("File $filename not found");
    }
}

# get a list of all the files, then sort them by age, newest first
$files = array();
if ($handle = opendir('./spool/')) {
    while (false !== ($entry = readdir($handle))) {
        if ( $entry !== '.'  && $entry !== '..' ) {
            $stat = stat('./spool/' . $entry);
            $files[] = array(
                'filename' => $entry,
                'lastmod'  => $stat['mtime'],
                'size'     => $stat['size'],
            );
        }
    }
    closedir($handle);
}
usort($files, "sortinator");
$total_count = count($files);

# get To, From and Subject from each of our $max_messages files
$messages = array();
$display_count = 0;
foreach ($files as $file) {
    if ( $display_count < $max_messages ) {
        $file['subject'] = null;
        $file['to']      = null;
        $file['from']    = null;
        $handle = @fopen('./spool/' . $file['filename'], "r");
        if ($handle) {
            while (($buffer = fgets($handle, 4096)) !== false) {
                foreach ( array('Subject', 'To', 'From') as $header ) {
                    $h_len = strlen($header);
                    if ( substr($buffer, 0, $h_len + 1) == $header . ':' ) {
                        $file[strtolower($header)] = substr($buffer, $h_len + 2);
                    }
                }
                if (
                       ! is_null($file['subject'])
                    && ! is_null($file['to'])
                    && ! is_null($file['from'])
                    ) {
                    break; # quit looking after we match all three
                }
            }
            fclose($handle);
        }
        else {
            die("Couldn't open ./spool/" . $file['filename']);
        }
        $messages[] = $file;
        $display_count++;
    }
    else {
        break;
    }
}

function sortinator($a,$b) {
    return $a['lastmod'] < $b['lastmod'];
}
?>
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8" />
    <title>local mail</title>
    <link rel="stylesheet" href="/bootstrap/css/bootstrap.min.css">
    <meta http-equiv="refresh" content="60">
</head>
<body>

<div class="container">

    <div class="alert alert-warning">
        Total Emails <span class="badge"><?= HtmlSpecialChars($total_count); ?></span>
    </div>

    <table class="table table-striped">
<? foreach ($messages as $message): ?>
        <tr>
        <td>
            <a href="/spool/<?= HtmlSpecialChars($message['filename']); ?>">
            <?= HtmlSpecialChars($message['to']); ?>
            </a>
        </td>
        <td><?= HtmlSpecialChars($message['from']); ?></td>
        <td><?= date('n/j/y h:i', $message['lastmod']); ?></td>
        <td><?= HtmlSpecialChars($message['size']); ?> bytes</td>
        <td><?= HtmlSpecialChars($message['subject']); ?></td>
        <td>
            <form method="post">
            <button type="submit" class="btn btn-primary trash-msg"
                name="filename" value="<?= HtmlSpecialChars($message['filename']); ?>">
            <span class="glyphicon glyphicon-trash"></span></button>
            </form>
        </td>
        </tr>
<? endforeach; ?>
    </table>

</div>

<script src="/jquery-1.11.2.min.js"></script>
<script src="/bootstrap/js/bootstrap.min.js"></script>
</body>
</html>


You’ll end up with a simple, auto-refreshing table of your most recent messages.

Apache with PHP-FPM, chroots and per-vhost UIDs

I’ve finally got a working config for Apache with PHP-FPM: per-vhost pools, UIDs, and chroots.  There seem to be a lot of tutorials around the net for setting up FPM with nginx, but very little for Apache.  The following instructions are for FreeBSD, but they should be easy to adapt to most any OS.  This document is still evolving, but I wanted to get it out to the people in FreeNode #php-fpm who have been asking for help.

Why am I setting up PHP like this??

I have been using Apache with mod_php for years, and it works, but it has a number of problems, especially in a virtual hosting situation. All PHP scripts run with the webserver’s UID, which is crummy for security. Users’ scripts can see the whole file system. And when Apache services a non-PHP request, such as an image or style sheet, it still has to load the whole PHP interpreter, using a bunch of memory.

This setup addresses each of these issues, hopefully making PHP sites more secure and less memory hungry. Instead of including the mod_php interpreter, Apache uses the “FastCGI” protocol to parcel requests out to a long-running “PHP-FPM” server. Each website I’m hosting has its own configuration. Each runs under its own UID. Each is chroot-ed in its owner’s home directory. Only PHP requests are handled by PHP-FPM. Everything else stays in Apache.

Let’s get to the details

We’re going to install and configure a bunch of software, and then set up a chroot environment.

Install Apache 2.2

cd /usr/ports/www/apache22
make install clean

Be sure to enable suexec in the Apache options dialog.

Enable Apache

Add apache22_enable="YES" to /etc/rc.conf and start it up

service apache22 start

Install PHP-FPM

cd /usr/ports/lang/php5
make install clean
  • Do NOT build the Apache module.
  • DO build the FPM version
  • Building the CGI and CLI versions is fine as well
  • I add the mailhead patch too

Install the PHP extensions

This is a bit of a FreeBSD-ism, that you won’t have to do on most other OSs.  FreeBSD strips the PHP port down to a bare minimum, and moves all the plugins – including the default ones – into their own ports.  The php5-extensions meta-port collects them all into one place.

cd /usr/ports/lang/php5-extensions
make install clean

Add php_fpm_enable="YES" to /etc/rc.conf and start it up

service php-fpm start

Install fastcgi

cd /usr/ports/www/mod_fastcgi/
make install clean

Edit httpd.conf, inserting:

LoadModule fastcgi_module     libexec/apache22/mod_fastcgi.so
LoadModule suexec_module      libexec/apache22/mod_suexec.so

and setting:

ServerName server_ip_address_or_working_hostname

And un-comment the “Include” directives that make sense for me.
Now append:

NameVirtualHost *:80
Include etc/apache22/Includes/*.conf

and comment out this block:

#<Directory />
#    AllowOverride None
#    Order deny,allow
#    Deny from all
#</Directory>

Yes, that super-sucks.  Does anyone know of a workaround?

I like to keep each vhosts configuration in its own file, in a “vhosts/” directory, so I append:

Include etc/apache22/vhosts/*.conf


cd /usr/local/etc/apache22
mkdir vhosts disabled-vhosts

You can guess what the second directory is for.  Now restart and see if that works.

service apache22 restart

You may get a warning like “NameVirtualHost *:80 has no VirtualHosts” because we haven’t added any yet.  Nothing to worry about.

Next create an Includes/php-fpm.conf for global FPM configs that will apply to every site.  Mine looks like:

FastCgiIpcDir /usr/local/etc/php-fpm/
FastCgiConfig -autoUpdate -singleThreshold 100 -killInterval 300 -idle-timeout 240 -maxClassProcesses 1 -pass-header HTTP_AUTHORIZATION
FastCgiWrapper /usr/local/sbin/suexec

<FilesMatch "\.php$">
SetHandler php5-fcgi
</FilesMatch>

Action php5-fcgi /fcgi-bin

<Directory /usr/local/sbin>
Options ExecCGI FollowSymLinks
SetHandler fastcgi-script
Order allow,deny
Allow from all
</Directory>

See if Apache likes that:

service apache22 restart

Configure FPM

Now FPM needs some configuration.  Create a directory to store per-vhost fpm configs:

mkdir /usr/local/etc/fpm.d

Then edit the global php-fpm.conf, un-commenting the include line so our per-vhost pool configs get picked up:

include=/usr/local/etc/fpm.d/*.conf
switching the listen statement from a tcp port to:

listen = /tmp/php-fpm.sock

and changing the pm to:

pm = ondemand

There are a couple different types of process manager (pm).  ondemand preforks zero (0) processes; they are only forked when needed.  I chose this because I host lots of small sites.  You may want a model that suits your setup better.
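For reference, the ondemand manager has a couple of knobs worth knowing about; the values here are illustrative, not recommendations:

```ini
pm = ondemand
; hard cap on workers per pool
pm.max_children = 50
; kill idle workers after this long
pm.process_idle_timeout = 10s
; recycle a worker after N requests (guards against leaks)
pm.max_requests = 500
```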

Now let’s create a vhost.  Given a site named “example.com” owned by user “luser”, here’s my template:

<VirtualHost *:80>
ServerName       example.com
DocumentRoot     /home/luser/example.com/htdocs
SuexecUserGroup  luser luser
ErrorLog         /home/luser/example.com/logs/error_log
CustomLog        /home/luser/example.com/logs/access_log combined

<Directory /home/luser/example.com/htdocs>
    Order allow,deny
    Allow from all
    Options +Indexes +FollowSymLinks +ExecCGI +Includes +MultiViews
    AllowOverride All
</Directory>

FastCgiExternalServer /tmp/example.com.fcgi -socket /tmp/example.com.sock -user luser -group luser
Alias /fcgi-bin /tmp/example.com.fcgi
<Location /fcgi-bin>
    Options +ExecCGI
    Order allow,deny
    Allow from all
</Location>

<LocationMatch "/(ping|fpm-status)">
    SetHandler php5-fcgi-virt
    Action php5-fcgi-virt /fcgi-bin virtual
</LocationMatch>
</VirtualHost>

And create a complementary FPM pool config:

user = luser
group = luser
listen = /tmp/example.com.sock
chroot = /home/luser
pm = ondemand
pm.max_children = 50
pm.status_path = /fpm-status
php_admin_value[doc_root] = /
php_admin_value[cgi.fix_pathinfo] = 0
php_admin_value[sendmail_path] = /bin/mini_sendmail -t

Living in a chroot
So PHP’s mail() function invokes your system’s sendmail binary, usually /usr/sbin/sendmail.  From within a chroot, that won’t be available.  There is the further problem that even if you copied sendmail and the libraries it needs into the chroot, it would want to write files to /var/spool, and again, that won’t be available.  We need a workaround: install mini_sendmail.  It is a sendmail workalike that you can easily copy into a chroot, and instead of writing to /var/spool, it makes an SMTP connection to localhost.  Be sure to set the -f envelope sender in your FPM pool config, or mini_sendmail will use the username and machine name baked in when PHP or mini_sendmail was compiled.  PHP scripts can still override it using the mail() function’s additional_parameters argument.

cd /usr/ports/mail/mini_sendmail
make install clean
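Following the advice above, the pool’s sendmail_path can pin the envelope sender with mini_sendmail’s -f flag. The address here is a placeholder; use something real for your domain:

```ini
php_admin_value[sendmail_path] = /bin/mini_sendmail -t -f www@example.com
```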

Create a chroot environment for the vhost:

mkdir ~luser/tmp ~luser/bin
ln /tmp/mysql.sock ~luser/tmp/
cp /rescue/sh ~luser/bin/sh
ln /usr/local/bin/mini_sendmail ~luser/bin/mini_sendmail

PHP will need a /tmp directory.  If you are using MySQL, you will need to hardlink your mysql.sock into there or use TCP connections.  If you link the socket, you will need to redo that EVERY time you restart MySQL.  (I should include my rc script here.)  Hard link mini_sendmail into the chroot.  And finally, PHP needs a shell to invoke sendmail.  Yes, this sucks.  You could copy /bin/sh in, but chances are it needs libraries that aren’t in the chroot.  I could copy those too, but I just copied the statically-linked, crunched binary from FreeBSD’s /rescue dir.  Yes, this sucks even more, because it includes stuff I don’t want or need, and I need a better solution.  TODO: crunch my own sh with a couple other useful items.  Maybe use busybox for this?
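If you would rather copy a dynamic /bin/sh plus its libraries instead of the /rescue binary, something like this sketch works. It is shown with a throwaway chroot dir, and FreeBSD’s ldd output format differs slightly from what this parses, so treat it as a starting point rather than a finished tool:

```shell
# copy a binary and every shared library it links against into a chroot
chroot_dir=/tmp/demo_chroot
mkdir -p "$chroot_dir/bin"
cp /bin/sh "$chroot_dir/bin/"

# pull absolute library paths out of ldd's output and mirror them
ldd /bin/sh 2>/dev/null | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^\//) print $i }' |
while read -r lib; do
  mkdir -p "$chroot_dir$(dirname "$lib")"
  cp "$lib" "$chroot_dir$lib"
done
```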

Set the tmp dir in php.ini to

upload_tmp_dir = /tmp

Update #1

I had a problem with a number of server variables not getting properly translated for use within the chroot, so I added a php prepend directive to the php-fpm conf files like:

php_admin_value[auto_prepend_file] = /bin/phpfix

And then linked this file into each chroot’s ~/bin/ directory:

<?php
$_SERVER['DOCUMENT_ROOT'] = ini_get('doc_root');

Update #2

PHP’s streams tools (like file_get_contents()) rely on openssl for HTTPS URLs, and many other plugins (like SOAP) in turn rely on those streams. Curl seems to function just fine in a chroot, but PHP’s openssl streams require certain device nodes to function. You will have to mount /dev inside your chroot in order to use them. More on this when I get a good system in place.