Powering more than 27% of all websites on the surface web, WordPress is usually the choice most publishers and small businesses make. Not only is the installation process remarkably simple, but the community behind it is so immense that you can practically take for granted that someone has already built every plugin you need.
The platform has opened up many jobs for front-end developers, and created opportunities for freelancers to build themes and plugins and to offer programming services specifically for WordPress.
Bigger websites based on WP almost all use the same kind of server stack, since the PHP platform wouldn't be able to handle that much traffic on its own. Because WordPress has to load its entire core and all active plugins on every single page request, it is essential to wrap the whole thing in a caching layer. The good news is that there are several plugins to do so. Then again, they're all written in PHP.
The fabulous World of Open Source and freeware
A good solution is to rely on third-party tools such as Varnish Cache, Nginx's caching features, or even OPcache. Using well-maintained free software is pretty much the easiest way forward: nothing to take care of, no worries. As long as everything works fine as a whole.
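As a sketch, putting Nginx's caching in front of a PHP/WordPress backend might look like this (the cache zone name, paths and upstream address are illustrative assumptions, not a production config):

```nginx
# Illustrative only: zone name, cache path and backend address are assumptions.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=wpcache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;

    location / {
        proxy_cache wpcache;
        proxy_cache_valid 200 301 302 10m;   # cache successful responses briefly
        proxy_cache_use_stale error timeout updating;  # serve stale if PHP chokes
        proxy_pass http://127.0.0.1:8080;    # the PHP/WordPress backend
    }
}
```

With something like this, most page requests never reach PHP at all, which is exactly the point of wrapping WordPress in a caching layer.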
But will it scale correctly?
The bigger you get, the weirder your needs become
As you develop new features and plan on building greater back-end applications, the usual server stacks become harder and harder to build on top of. No problem: let's just launch a new server with Python, another with Ruby on Rails, one with Node.JS and one with PHP, then develop on whichever fits the need, right?
Trust me, if you plan on doing this, no developer will ever want to work for you; sailor's honour.
Handling thousands of requests every minute
Imagine you're a waiter in a somewhat famous restaurant. As clients walk in, the host or hostess assigns them a table, which might have been assigned to you. If the manager forecasts a large number of clients that night, they will have more waiters come in. At some point, if every seat is taken, potential clients might have to wait in line until they're assigned a table.
If there are still too many clients, hire more staff, find a bigger place, fit in more tables. Until it doesn't work anymore.
Scaling on hardware is expensive
Now, imagine if you could serve all clients at the same time, without a bigger restaurant. That's right: a waiter can take care of multiple people at once. Waiters can share their workload. Clients can share seats. This analogy is going too far.
The way PHP and WordPress operate is called blocking I/O, which means every process handles a single request at a time: it serves requests one after the other, and queues the pending ones until they can be handled. Running multiple workers is pretty simple to configure, and the more of them you have, the more people you can serve simultaneously. The problem is that those workers are all separate processes, which means they can't share memory.
Sounds like an easy enough DDoS or Slowloris attack to me. Don’t Google this.
Non-blocking former atrocity
The first thing most developers think of when they hear "non-blocking" is "single-thread". That used to be true with Node.JS, until the built-in cluster module became stable. Our sites run on a pretty interesting stack: Nginx, a Node.JS cluster, Redis and MongoDB.
That way, we benefit from Nginx's caching and static file handling, a multi-process Node.JS cluster with non-blocking I/O capable of handling more than 200,000 concurrent connections, Redis for shared memory between the Node.JS forks, and MongoDB for quick, consistent data storage with replica sets. Sockets are also pretty cool for real-time events.
A screenshot of our new content management platform, Lilium.
Advertisers and users need a faster page load
We know users tend to bounce before the page finishes loading if it takes too long. At Narcity Media, we work hard to make sure the user experience is always the best it can be, and faster page loads are crucial to that.