The traditional way to develop PHP websites has been to use bundled solutions like XAMPP, WAMP, and MAMP.
When you are beginning, it’s the easiest way to get started.
However, such an approach has several limitations.
The Challenges
- First of all, have you ever come across a situation where the application works on localhost, but when you take it to the server, something breaks? That happens when we use different environments on the local machine and the server. Maybe the server is running PHP 7 while we’re using PHP 8 locally.
- Second, installing PHP extensions is not that straightforward. If you have ever tried configuring Xdebug on XAMPP, you know what I mean.
- And the most important factor is portability. Sometimes we push the code to GitHub and then clone it to the server. But most of us often end up uploading the project using plain old FTP and then making some adjustments to get it working on the server.
Containers are the solution to all such problems.
Why Containers
Docker is a service that allows you to create containers.
It’s not a direct replacement for XAMPP or WAMP, but it allows us to set up the exact same server environment on the local machine by writing a simple configuration file.
When it’s time to deploy the application, install Docker on the server just as you did on the local machine, clone the project repository along with the necessary configuration files, and start the containers. That way, you get an exact replica of what you had on the local machine.
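For example, assuming the project lives in a Git repository (the URL below is just a placeholder), the deployment could look something like this:

# On the server: fetch the project and start everything in the background
git clone https://github.com/your-user/site1.git
cd site1
docker compose up -d --build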
Think of a container as a computer inside your computer. It feels like a virtual machine, but it’s not. A container shares the kernel of your host operating system instead of using a full-blown operating system, making it much more lightweight than a virtual machine.
For instance, you can run a lightweight Alpine Linux container inside an Ubuntu machine. You can also run multiple containers with different specifications.
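For example, once Docker is installed (we’ll do that below), a single command drops you into an Alpine shell running as a container on your Ubuntu host:

docker run -it --rm alpine:latest sh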
So today, I will show you how to install Docker on an Ubuntu machine and then set it up for developing PHP and MySQL applications.
By the end of this guide, we will have a PHP web page, outputting the PHP info page, running on a localhost domain name with port mappings.
Installing Docker on Desktop
Looking at the documentation, you can see that Docker provides a desktop application, which combines the container engine, the Compose plugin, and the command line utility.
That’s the easiest way to set up Docker on your desktop machine.
You can also install each of them manually from the command line. That’s what we are going to look at today.
So let’s begin by opening the terminal on our desktop.
Installing Dependencies & setting up Keyrings
First, we need to install a few dependencies: the ca-certificates, curl, gnupg, and lsb-release packages. You can install all of them from Ubuntu’s apt repository.
sudo apt update
sudo apt install ca-certificates curl gnupg lsb-release
Next, we want to set up the keyrings. Keyrings are a way to verify that package files actually come from the repository we have set up.
For that, create a new directory called keyrings inside /etc/apt if it does not already exist:
sudo mkdir -p /etc/apt/keyrings
Then run the following curl command, which saves Docker’s GPG key inside the directory we have just created:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
Installing Docker
The next command sets up Docker’s repository on our machine.
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
The command looks lengthy and complicated but it’s quite simple when you understand it.
It defines a repository entry pointing to download.docker.com, tells apt to verify its packages with the keyring we created in the previous step, and then saves it to the sources list.
From now on, apt will pull the Docker packages from the newly set up repository instead of the default Ubuntu repositories.
Okay, now we can actually install it. First, run the apt update command. Then install all four packages in one line: the container engine, the CLI tool, containerd, and the Compose plugin.
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin
In case you get any errors due to dependency issues, you can run apt-get once again with the -f (fix broken) option:
sudo apt-get -f install
Verify Docker Installation
Then you can check if the docker service is running:
sudo systemctl status docker.service
Okay, you can see that the status is active. We can also test it by running the hello-world container:
docker run hello-world
From now on, you can use docker commands to create images and spin up new containers.
We can also add the current user to the docker group so that we can run docker commands without prefixing them with sudo:
sudo usermod -aG docker abhinav
Log out and log back in, and then we should be able to run docker commands without sudo.
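Alternatively, if you don’t want to log out, newgrp starts a new shell with the updated group membership applied:

newgrp docker
docker ps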
Pulling Nginx, PHP-FPM & MariaDB Images
So, to create a container, we first need an image. Think of an image as a blueprint for creating containers, while a container is a process run from an image.
There are hundreds of pre-built images available on the Docker Hub website. You can also create custom images based on those images.
We want to create a LEMP stack for our PHP development environment.
Here are the steps we are going to follow:
- First, we will pull the three required images from the hub: PHP-FPM, MariaDB, and Nginx.
- Then we will create a Dockerfile to customize the PHP image with additional extensions such as Xdebug. Not just PHP: you can create separate Dockerfiles to customize the other images as well. All these images will be stored inside the /var/lib/docker directory.
- Then we will create a Compose file, written in YAML format, to configure our services from these base images.
Let’s start.
Go to the terminal on your desktop and run the command:
docker pull nginx:latest
The latest tag pulls the latest available version. Instead, if you want to pull a specific version, you can mention it after the colon.
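For example, to pin a specific Nginx release (the 1.25 tag here is just an illustration):

docker pull nginx:1.25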
In fact, it’s not necessary to pull the images now. Docker will pull any missing images automatically when it cannot find them locally. However, we are doing it in advance to make the build times shorter.
Similarly, let’s pull the PHP image as well:
docker pull php:fpm
Here, note that we’re using the FPM version. It allows us to run PHP as an isolated service with port mappings, and it lets our Nginx server pass PHP requests to it over the FastCGI protocol.
And finally, MariaDB. We’re using MariaDB instead of MySQL as it is a completely open-source alternative to MySQL.
docker pull mariadb:latest
Now, if we run the command:
docker images
We should see the three images listed.
Setting up Project Directory
Next, let us create a directory where we’re going to put our PHP application. I’m going to create a directory called site1 in the home directory:
cd ~
mkdir site1
Inside that, create another directory called public, where we’ll put our website files.
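In the terminal, that’s one more mkdir:

mkdir -p ~/site1/public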
Let’s create an index.php file inside the public directory:
<?php
echo "Hello from PHP!"
phpinfo();
If you have been following along carefully, you might have a question in mind:
Docker runs websites and applications inside a container, which resides in the /var/lib/docker directory. But we’re now creating our application folder somewhere else on our desktop.
How does that work?
How can Docker detect the location of our code?
That’s where the idea of volumes comes in.
In the docker-compose file, which we’ll discuss soon, we can mount this folder to the docker container.
We’ll mount three things into our containers:
- our app folder where our code is residing
- the nginx configuration file, which we’ll soon create
- and, our MySQL data, so that even after we destroy our container, the data persists on the host machine.
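To get a feel for the syntax before we see it in the Compose file, this is roughly what a single bind mount looks like with a plain docker run command (using our project’s paths):

docker run --rm -v ~/site1/public:/site1/public nginx:latest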
Nginx Configuration
First, let us create the Nginx configuration file.
Create a file called nginx.conf inside our site1 directory. Then create a server block inside it:
server {
    listen 80;
    root /site1/public;
    server_name site1.local;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_pass php:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
This is called a server block. It’s equivalent to a virtual host.
If you want to create another site, just create another server block with a different server_name directive and root location.
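For instance, a second site could be sketched like this (site2.local and its root are placeholders; the location blocks stay the same as above):

server {
    listen 80;
    root /site2/public;
    server_name site2.local;
    index index.php index.html index.htm;
    # ...same location blocks as in the site1 server block
}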
Creating PHP.Dockerfile
Next, we will create the Dockerfile for our customized PHP image. Let’s name it PHP.Dockerfile:
All Docker images start with a base image. Here our base image is the PHP-FPM image we’ve just pulled:
FROM php:fpm
# Install the system libraries the PHP extensions below compile against
RUN apt-get update && apt-get install -y --no-install-recommends \
    libfreetype6-dev \
    libicu-dev \
    libjpeg-dev \
    libmagickwand-dev \
    libpng-dev \
    libwebp-dev \
    libzip-dev

# gd must be configured before it is compiled, so this comes first
RUN docker-php-ext-configure gd \
    --with-freetype \
    --with-jpeg \
    --with-webp

# Compile and install the PHP extensions
RUN docker-php-ext-install -j "$(nproc)" \
    pdo \
    pdo_mysql \
    mysqli \
    bcmath \
    exif \
    gd \
    intl \
    zip

# Xdebug and Imagick are not bundled with PHP, so install them from PECL
RUN pecl install xdebug && docker-php-ext-enable xdebug
RUN pecl install imagick-3.7.0 && docker-php-ext-enable imagick

# Enable opcache and write the recommended settings
# (fast_shutdown was removed in PHP 7.2 and is ignored on newer versions)
RUN set -eux; \
    docker-php-ext-enable opcache; \
    { \
        echo 'opcache.memory_consumption=128'; \
        echo 'opcache.interned_strings_buffer=8'; \
        echo 'opcache.max_accelerated_files=4000'; \
        echo 'opcache.revalidate_freq=2'; \
        echo 'opcache.fast_shutdown=1'; \
    } > /usr/local/etc/php/conf.d/opcache-recommended.ini
If you are familiar with shell scripting, you can see that this file reads quite similarly. Think of it as Docker’s way of writing a shell script.
Let’s take a quick look at what we’ve just done:
After starting from the base PHP image, we issued a command to update the package information.
Following that, we installed a few image-related system libraries, such as libwebp and libmagickwand. In fact, I followed the official WordPress Docker image to get this list of packages, so you should not have any issues running WordPress sites with this configuration.
Below that, we configured gd with WebP and JPEG support and compiled a few more extensions, such as mysqli and pdo.
Then we installed xdebug and imagick from PECL, since they are not built-in PHP extensions. Xdebug is quite helpful for debugging PHP scripts.
Finally, we enabled PHP opcache.
That’s it, now we can move on to build our compose file.
Creating the docker-compose.yml File
Create a new file called docker-compose.yml. YAML or YML: both are the same. Just like JSON for data, YAML is a popular format for writing configuration files, as it offers a very simple syntax.
version: '3'

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/nginx.conf
      - ./public:/site1/public
    depends_on:
      # php must be up so nginx can resolve the php:9000 upstream
      - php
      - mysql
    restart: always

  php:
    build:
      context: .
      dockerfile: PHP.Dockerfile
    volumes:
      - ./public:/site1/public
    restart: always

  mysql:
    image: mariadb:latest
    environment:
      MYSQL_ROOT_PASSWORD: 'password'
      MYSQL_USER: 'abhinav'
      MYSQL_PASSWORD: 'password'
      MYSQL_DATABASE: 'site1'
    volumes:
      - site1:/var/lib/mysql
    ports:
      - "3306:3306"
    restart: always

volumes:
  site1: {}
Hope everything is correct. Let’s run the build command.
Building the Images using Compose
docker compose build
Now, if we check the list of images with docker images, we can see the newly built image. Notice that Docker automatically prefixed the image name with our folder name.
Starting the Containers
Next, we can run the images using the up command:
docker compose up
Since this terminal tab is taken up by the currently running process, let me open a new tab to check the list of containers:
docker ps
There you can see our three containers up and running.
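While they are running, we can also peek inside the PHP container to confirm that the extensions from our Dockerfile were actually installed:

docker compose exec php php -m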
Now let me terminate them by pressing Ctrl+C. We can view the details of stopped containers by adding the -a option to the ps command:
docker ps -a
Running Containers in Detached Mode
To run the containers in the background, we can use the -d option, which stands for detached:
docker compose up -d
Adding Domain to Hosts File
Finally, we need to edit the hosts file to add our new localhost domain – site1.local. In Ubuntu, the file is located at /etc/hosts.
sudo nano /etc/hosts
Inside that, point the domain to the loopback address:
127.0.0.1 site1.local
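You can verify that the name now resolves to the loopback address:

getent hosts site1.local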
Testing on the Browser
Now, go ahead and open http://site1.local:8080 in a browser (remember, we mapped host port 8080 to the container’s port 80).
That’s it. We’re able to access our site.
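If you prefer the terminal, a quick check with curl works too (note the 8080 port from our port mapping):

curl -I http://site1.local:8080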
Let’s also check if we’re able to connect to the MariaDB database server. For that, let me create a test page:
<?php
$pdo = new PDO('mysql:dbname=site1;host=mysql', 'abhinav', 'password', [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
$query = $pdo->query('SHOW VARIABLES like "version"');
$result = $query->fetch();
echo "Database version: " . $result['Value'];
And one more thing: when we use the docker-compose file, Docker automatically adds our containers to the same network, so that they can communicate with each other internally using the names we gave in the compose file.
I don’t know if you have noticed it, but in our nginx config file, the fastcgi_pass directive mentions only php:9000.
Similarly, in our database connection code, we used host=mysql, as mysql is the name we gave to our service in the docker-compose.yml file.
So even though Nginx and PHP are running in different containers, they can communicate with each other.
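You can see this name resolution in action from inside the PHP container (getent should be available in the Debian-based php:fpm image):

docker compose exec php getent hosts mysql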
Conclusion
That’s it! I hope you found this guide useful.