The 12 Factor PHP App – Part 2


This is Part 2 of a 3-part series:

  • Part 1: Codebase, Dependencies, Configuration and Backing Services.
  • Part 2: Build Release Run, Processes, Port Binding and Concurrency.
  • Part 3: Disposability, Dev/prod parity, Logs, Admin Processes.

This series takes the precepts specified in the 12 Factor App manifesto and examines their relevance within the context of PHP applications. It also aims to provide real-world, practical advice as to how the precepts can be leveraged to architect PHP web apps in a scalable and maintainable way.

In this Part of the Series

In the second part of this series we’ll cover factors 5 through 8:

  • Build, Release, Run: strictly separate build and run stages.
  • Processes: execute the app as one or more stateless processes.
  • Port Binding: the web app exports HTTP as a service by binding to a port, and listening to requests coming in on that port.
  • Concurrency: scale out via the process model.

V: Build, Release, Run

Strictly separate build and run stages.

The deployment process for a 12-factor app is split into 3 distinct stages. Quoting the document directly:

The build stage is a transform which converts a code repo into an executable bundle known as a build.

The release stage […] contains both the build and the configuration and is ready for immediate execution in the execution environment.

The run stage runs the app in the execution environment, by launching some set of the app’s processes against a selected release.

This may sound complicated, but the key idea is simple: we need to separate the deployment process into two distinct phases. The first phase is to turn the codebase into a deploy (the build stage). The second phase is to update the production environment so the new deploy is used (release and run stages).

But why would we want to do this? There are three important reasons.

The first reason is that it makes it impossible (or at least highly undesirable) to make code changes in production. This is because the “deploy” and the “codebase” are separate entities, so there’s no clear path to propagating code changes backwards from the production environment. Any changes should be made to the “codebase,” which then gets turned into a deploy and pushed into production via the Build, Release, Run cycle.

The second reason is that it makes it very easy to roll back the production code to an earlier release should something go wrong.

Finally, implementing the Build, Release, Run workflow automates deploys, making them less costly, which in turn increases the frequency with which we can deploy. More frequent deploys mean shorter feedback loops, which leads to better software and less stress when rolling out new code.

How to Apply?

Because of some idiosyncrasies in the way PHP applications are typically architected and served, we’ll redefine the general steps from the 12-factor document as PHP-specific ones:

  • Build: fetch and install any vendor dependencies (normally using Composer), and compile raw assets into minified, obfuscated and compressed distributable assets (Gulp or Laravel Elixir can handle this).
  • Release: copy the built deploy into the production environment.
  • Run: reconfigure the web server to use the new release, restart any long-running processes (such as workers) and rebuild any runtime caches (e.g. the opcode cache).

We can assemble our own highly customised deploy process using a combination of different tools (see: Phing, Xinc), which is a good option if we want complete control over how we deploy our application.

However, if we want an out-of-the-box, tried-and-tested open source tool for managing our build-release-run cycle, we can use Capistrano: a deployment management tool written in Ruby that lets developers and system administrators automate deployments, as well as other arbitrary server management tasks.

For more information, check out one of the comprehensive guides at DigitalOcean or TutsPlus.
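
To make the three stages concrete, here’s a minimal PHP sketch of the symlink-switching release pattern that tools like Capistrano implement. The repository URL, directory layout and service name are illustrative assumptions only; a real deploy script also needs error handling, locking and cleanup of old releases.

<?php

// deploy.php -- a bare-bones sketch of the Build, Release, Run cycle
$appRoot = '/var/www/myapp';
$release = $appRoot . '/releases/' . date('YmdHis');

// BUILD: turn the codebase into a self-contained deploy
exec("git clone --depth 1 git@example.com:myapp.git $release");
exec("composer install --no-dev --optimize-autoloader -d $release");

// RELEASE: atomically point the web server's docroot at the new build
symlink($release, "$appRoot/current.tmp");
rename("$appRoot/current.tmp", "$appRoot/current"); // rename() is atomic on POSIX

// RUN: reload long-running processes so they pick up the new release
exec('sudo service php5-fpm reload');

Because the web server’s document root points at $appRoot/current, switching the symlink swaps releases in a single step, and rolling back is just re-pointing the link at the previous release directory.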

When to Apply?

Although automating the deploy process is a best practice for all web applications, it can be quite laborious to set up initially. Because of the large upfront investment in terms of development, system admin and testing effort, automated deploys (and the related process of continuous integration) are best reserved for projects where code is frequently deployed into the production environment (weekly, daily, or more frequently).

VI: Processes

Execute the app as one or more stateless processes.

The next requirement for our 12 factor web application is that it should not be implemented as a monolithic, single-threaded process. Instead, it should be implemented so that each request can be handled by a different, independent, non-communicating process.

This “stateless and share nothing” design pattern is important because of something that will appear as a recurring theme in 12-factor design: horizontal scalability.

Consider the client-server model of computational distribution. In this model, we have centralised servers that handle requests from distributed clients. If we want to increase the number of client requests that our system can handle, we have two options.

We can increase the processing power of the individual servers in our network, thereby increasing the number of requests each can handle. This is scaling vertically. The other option is to simply add more servers to the network. This is scaling horizontally.

Horizontal vs Vertical scaling

Vertical scaling is easier in the short term, but quickly becomes expensive and exhibits diminishing returns in performance. Horizontal scaling, on the other hand, is roughly linear in terms of cost versus performance and is highly fault tolerant.

How to Apply?

The standard server configuration for most PHP applications already helps us satisfy this factor: PHP apps usually run as an Apache module (mod_php) or as PHP-FPM worker processes, both of which isolate each request in its own execution context. The only other thing we need to do is make sure our software architecture doesn’t require “session stickiness.” This means we cannot assume that two requests from the same user will be routed to the same process, or even the same server.

So what implications does this have for our software design?

First off, it means that we must treat the filesystem as a cache, not a persistent store. In other words: don’t store anything there that needs to be shared between requests. Anything that must persist between requests should go in a remote data store (MySQL, Redis, etc.).

This also holds for our session storage mechanism. Most frameworks, including Laravel, use the filesystem for session storage by default. Instead, we should offload session storage to a remote database or key-value store.
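
As a sketch of what this looks like in plain PHP, the snippet below moves native session storage into Redis. It assumes the phpredis extension is installed (which registers the “redis” session handler); Laravel users can achieve the same thing by switching the session driver in the framework’s session configuration.

<?php

// store sessions in Redis instead of on the local filesystem
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://127.0.0.1:6379');

session_start();

// this session data now lives in Redis, so the user's next request can
// be served by any process on any server: no session stickiness needed
$_SESSION['user_id'] = 42;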

When to Apply?

Building web apps to be “stateless and share nothing” is good practice in general, and should be your default software architecture. For most projects there’s no reason not to architect your applications this way from the outset.

However, for small projects it could be argued that assuming session stickiness is a reasonable short-term trade-off. This is because designing without session stickiness typically means adding at least one more storage mechanism to your project (for session storage).

A useful criterion for deciding whether or not to design as stateless and share nothing is the following question: will you ever need more than ONE server for this application? If the answer is “yes” or “maybe,” build it so that it supports distribution easily (stateless and share nothing).

VII: Port Binding

The web app exports HTTP as a service by binding to a port, and listening to requests coming in on that port.

This is where things get tricky. PHP makes it relatively easy to adhere to the previous 6 factors, but when we get to port binding, we need to get creative.

This is because port binding at the application level is non-trivial to achieve in PHP. Most PHP applications are executed inside a web-server container, such as a module inside Apache HTTPD, or a PHP-FPM pool connected to nginx over a socket. As a result, port binding is configured at the web-server level.

For most use-cases this isn’t a problem. We simply handle port binding in the web-server configuration and accept this as one of the limitations of working with PHP.

But in some situations it makes a lot more sense to adhere to this factor, for example in simple headless (no front-end) services or request-heavy, event-based services. The benefits of using this pattern are that:

  • it forces us to treat HTTP as a transport mechanism, not a core part of the app;
  • it gives us more control over the lower levels of our technology stack (HTTP and TCP); and
  • it moves port configuration from the web server into the application.

How to Apply?

We’ve already examined why this isn’t possible with a typical server setup. However, if we really want to support application-level port binding, we have two options: we can write our own socket-based server daemon, or we can use an event-based PHP library like ReactPHP.

The ReactPHP library provides the code to handle the TCP- and HTTP-layer interactions that would normally be handled by our web server (Apache or nginx). In other words, it lets us “cut out the middle-man” and take complete control over how requests are handled by our application.

Here’s an example of a simple service that returns the current time(), built using ReactPHP:

<?php

// requires react/http installed via Composer (this example uses the
// classic 0.4-era ReactPHP API that was current when this was written)
include __DIR__.'/vendor/autoload.php';

// the request handler: every request receives the current Unix timestamp
$app = function ($request, $response) {
    $body = (string) time();

    $response->writeHead(200, ['Content-Type' => 'text/plain']);
    $response->end($body);
};

// the event loop drives all I/O in a ReactPHP application
$loop = React\EventLoop\Factory::create();

// the socket server accepts raw TCP connections...
$socket = new React\Socket\Server($loop);

// ...and the HTTP server parses them into request/response pairs
$http = new React\Http\Server($socket, $loop);
$http->on('request', $app);

// bind to the port and start the event loop
$socket->listen(8000);
$loop->run();

/* server running at localhost:8000 ... */
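
If we save this script as, say, server.php, we can start it with php server.php and request http://localhost:8000 with a browser or curl. Each request receives the current Unix timestamp as plain text, with no Apache or nginx anywhere in the stack: the application itself owns the port.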

When to Apply?

As discussed, port binding is a factor we typically forgo when using PHP, due to the limitations of the typical server configuration for PHP applications. In situations where port binding is particularly important (e.g. event-based programming), we can leverage sockets and event-driven libraries such as ReactPHP.

VIII: Concurrency

Scale out via the process model.

The 12 factor PHP application treats processes as “first class citizens.” This means that in order to scale our apps, we create more processes to handle the workload. As a further implication of processes as first class citizens, we handle different kinds of work using different kinds of processes.

But why would we want to add the complexity of concurrency to our web applications? There are a number of reasons.

The most important benefit of concurrency is raw performance versus cost. Scaling out via parallelisation should provide near-linear growth in requests handled relative to cost (provided the application is architected to support this growth strategy).

Another obvious benefit is fault tolerance. One process failing won’t cripple the application, or bring a server down.

Finally, concurrency gives us a faster, more responsive application. The ability to defer work from the main thread of execution means the web server doesn’t need to wait for background actions to complete before sending the HTTP response back to the client.

How to Apply?

In order to scale out via the process model, we need to change our execution style. Instead of having a single process that handles everything, we diversify – we add different processes to handle different tasks.

The easiest way to do this is to use two process types:

  • Web processes: handle HTTP requests from clients.
  • Worker processes: perform background actions.

Process types: web processes and worker processes

Background actions include any work that doesn’t need to be done immediately to service requests from clients. For typical web applications this may include things such as sending emails, processing files, performing maintenance, etc.

One way to implement this is the “queue-centric work pattern.” Using this pattern, the web process(es) service HTTP requests, but “push” any deferrable work (jobs) onto a shared data store: the queue. The worker processes are daemons that pick jobs off the queue, execute them, and then repeat the process.
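
Stripped of any framework, the pattern can be sketched with a bare Redis list, again assuming the phpredis extension (the queue name and job format here are arbitrary). A fuller, Laravel-based example follows below.

<?php

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// web process: defer work by pushing a job description onto the queue
$redis->lPush('queue:jobs', json_encode([
    'job'  => 'send_email',
    'data' => ['email' => 'user@example.com'],
]));

// worker process: a daemon that blocks until a job arrives, then runs it
while (true) {
    $item = $redis->brPop(['queue:jobs'], 0); // 0 = block indefinitely
    $job  = json_decode($item[1], true);      // [0] is the key, [1] the payload
    // ... look up the handler for $job['job'] and run it with $job['data'] ...
}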

The problem of implementing this model is twofold.

Firstly, there are system administration concerns: we need to set up our servers to run the two different process types indefinitely. We can use process management tools such as Circus or Supervisord for this.

Secondly, there are software architecture concerns: we need to modify our application so that deferrable actions, instead of running in the web process, are pushed onto the queue and executed later by a worker process. We can do this using a library such as PHP Resque, or Laravel’s bundled queue component.

An example of how this works in practice is provided below. During user creation we need to send an email; instead of performing this action inline, we push a job onto the queue, and a separate worker process executes it later.

<?php

// this code is run by the "web" process
namespace App\Services;

use Queue; // Laravel aliases its Queue facade into the root namespace

class UserCreator
{
    public function create($data)
    {
        // ... create and persist the user, etc.

        // defer the slow work: push a job onto the queue instead of
        // sending the email inline during the HTTP request
        Queue::push(
            'App\Jobs\SendEmail@fire',
            ['email' => $data['email'], 'body' => '...etc...']
        );
    }
}


// this code is run by the "worker" process
namespace App\Jobs;

use Mail; // Laravel aliases its Mail facade into the root namespace

class SendEmail
{
    public function fire($job, $data)
    {
        Mail::send('templates.email', $data, function ($message) use ($data) {
            $message->to($data['email']);
        });

        // mark the job as processed so the queue doesn't retry it
        $job->delete();
    }
}
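
With Laravel we don’t have to write the worker daemon ourselves: running php artisan queue:listen starts a long-lived process that pops jobs off the configured queue and calls their fire() methods, which is exactly the worker half of the pattern described above.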

When to Apply?

It makes sense to scale out via the process model using worker processes as soon as you start implementing deferrable actions. Doing this not only makes your applications scalable from the outset, but also provides a superior user experience as users aren’t bothered with long wait-times for actions that could be performed in the background.

Also consider that you don’t need multiple servers to start benefiting from concurrency – you can configure a web process and multiple worker processes on a single server.

Conclusion

After covering factors 5 through 8, we’ve gained some important insights into building web apps that are scalable, performant and flexible.

We’ve investigated how to deploy applications in an automated, consistent manner by employing a build, release, run cycle.

We’ve also learned how to build our applications for horizontal scalability using processes and concurrent programming.

Finally, we touched on alternative architectures for port binding and event-based programming.

In the final part of this series we’ll discuss factors 9 through 12, which are primarily concerned with tools and procedures for performing systems administration for 12 factor apps.

