Stateful workloads with Portworx
Portworx integrates with Nomad and can manage storage for stateful workloads running inside your Nomad cluster. In this guide, you will install and configure Portworx on each Nomad client node to create a storage pool that tasks can use for storage and replication. You will then deploy an HA MySQL database using that storage with a replication factor of 3, ensuring the data will be replicated on 3 different client nodes.
Prerequisites
To perform the tasks described in this guide, you need to have a Nomad environment (v0.12.0 or greater) with Consul installed. You can use this Terraform configuration to provision a sandbox environment. This tutorial assumes a cluster with one server node and three client nodes.
Note
This tutorial is for demo purposes and assumes only a single server node. Please consult the reference architecture for production configuration.
Verify your storage is adequate
Portworx needs an unformatted and unmounted block device that it can fully manage. If you have provisioned a Nomad cluster in AWS using the environment provided in this guide, you already have an external block device ready to use (/dev/xvdd) with a capacity of 50 GB.

Ensure your root volume's size is at least 20 GB. If you are using the environment provided in this guide, add the following line to your terraform.tfvars file:
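The variable name below is a guess at what the sandbox exposes; check the repository's variables.tf for the actual setting that controls root volume size (in GB):

```hcl
# Hypothetical variable name -- confirm against the sandbox's variables.tf
root_block_device_size = 20
```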
Install the MySQL client
You will use the MySQL client to connect to the MySQL database and verify your data. Ensure it is installed on a node with access to port 3306 on your Nomad clients, using the command for your platform as shown below.
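These are the stock package-manager installs for Ubuntu, CentOS, and macOS (via Homebrew); exact package names can vary by OS version:

```shell
# Ubuntu
sudo apt-get update && sudo apt-get install -y mysql-client

# CentOS
sudo yum install -y mysql

# macOS via Homebrew
brew install mysql-client
```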
Install Portworx
Set up the PX-OCI bundle
Run the following command on each client node to set up the PX-OCI bundle:
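A sketch of the setup command, assuming Docker is available on each client node; the portworx/px-enterprise image tag shown is an example, so substitute the release you intend to run:

```shell
# Pull the Portworx image and unpack the PX-OCI bundle under
# /opt/pwx and /etc/pwx (the image tag below is an example).
sudo docker run --entrypoint /runc-entry-point.sh \
    --rm -i --privileged=true \
    -v /opt/pwx:/opt/pwx -v /etc/pwx:/etc/pwx \
    portworx/px-enterprise:2.0.2.3
```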
If the command is successful, you will see output similar to the abbreviated example shown below:
Configure Portworx OCI bundle
Configure the Portworx OCI bundle on each client node by running the following command (the values provided to the options will be different for your environment):
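A sketch under this guide's assumptions: my_test_cluster is an arbitrary example cluster ID, -k points at the Consul agent running locally on the client node, and -s names the external block device from the prerequisites:

```shell
# -c: arbitrary cluster ID (example name)
# -k: key-value store; here, the local Consul agent
# -s: the unformatted block device Portworx will manage
sudo /opt/pwx/bin/px-runc install -c my_test_cluster \
    -k consul://127.0.0.1:8500 \
    -s /dev/xvdd
```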
You can use the client node you are on with the -k option since Consul is installed alongside Nomad. Be sure to provide the -s option with your external block device path.
If the configuration is successful, you will see the following output (abbreviated):
Since you have created new unit files, run the following command to reload the systemd manager configuration:
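This is the standard systemd reload:

```shell
sudo systemctl daemon-reload
```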
Start Portworx and check status
Run the following command to start Portworx:
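Assuming the unit file created above is named portworx.service:

```shell
sudo systemctl start portworx
```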
Verify the service:
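Again assuming a systemd-managed portworx.service:

```shell
sudo systemctl status portworx
```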
Wait a few moments (Portworx may still be initializing) and then check the status of Portworx using the pxctl command.
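pxctl is installed under /opt/pwx/bin, so invoke it by full path if it is not on your PATH:

```shell
pxctl status
```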
If everything is working properly, you should see the following output:
Once all nodes are configured, you should see a cluster summary with the total capacity of the storage pool (if you're using the environment provided in this guide, the total capacity will be 150 GB since the external block device attached to each client node has a capacity of 50 GB):
Create a Portworx volume
Run the following command to create a Portworx volume that your job will be able to use:
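A sketch of the create command; --size is in GB and --repl sets the replication factor:

```shell
pxctl volume create --size 10 --repl 3 mysql
```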
You should see output similar to what is shown below:
Note from the options provided that the volume you created is named mysql and its size is 10 GB. You have configured a replication factor of 3, which ensures the data is available on all three client nodes.
Run pxctl volume inspect mysql to verify the status of the volume:
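```shell
pxctl volume inspect mysql
```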
Deploy MySQL
Create the job file
You are now ready to deploy a MySQL database that can use Portworx for storage.
Create a file called mysql.nomad.hcl and provide it the following contents:
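A sketch of the job file, not a verbatim listing: the hashicorp/mysql-portworx-demo image and the resource figures are assumptions, and any MySQL image that honors MYSQL_ROOT_PASSWORD fits the same shape. The essential pieces are volume_driver = "pxd" and the volumes entry mapping the mysql Portworx volume onto MySQL's data directory:

```hcl
job "mysql-server" {
  datacenters = ["dc1"]
  type        = "service"

  group "mysql-server" {
    count = 1

    network {
      port "db" {
        static = 3306
      }
    }

    restart {
      attempts = 10
      interval = "5m"
      delay    = "25s"
      mode     = "delay"
    }

    task "mysql-server" {
      driver = "docker"

      env {
        # Demo-only password; see the note on securing passwords below.
        MYSQL_ROOT_PASSWORD = "password"
      }

      config {
        # Example demo image; any MySQL image that reads
        # MYSQL_ROOT_PASSWORD fits the same pattern.
        image = "hashicorp/mysql-portworx-demo:latest"
        ports = ["db"]

        # Mount the Portworx volume created earlier at MySQL's
        # data directory via the pxd volume driver.
        volume_driver = "pxd"
        volumes       = ["mysql:/var/lib/mysql"]
      }

      resources {
        cpu    = 500
        memory = 1024
      }

      service {
        name = "mysql-server"
        port = "db"

        check {
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```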
Note from the job file that you are using the pxd volume driver, which was configured in the previous steps. The service name is mysql-server, which you will use later to connect to the database.
Run the job
Register the job file you created in the previous step with the following command:
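```shell
nomad job run mysql.nomad.hcl
```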
Check the status of the allocation and ensure the task is running:
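For example (substitute the allocation ID reported for your job):

```shell
# Job-level view; note the allocation ID in the output
nomad job status mysql-server

# Allocation-level view; confirm the task shows as running
nomad alloc status <alloc_id>
```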
Write data to MySQL
Connect to MySQL
Using the mysql client (installed earlier), connect to the database and access the information:
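A sketch of the connection command: the hostname resolves through Consul DNS, while the web user and itemcollection database are assumptions about the demo image, so adjust them for the image you deployed:

```shell
mysql -h mysql-server.service.consul -u web -p -D itemcollection
```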
The password for this demo database is password.
Note
This tutorial is for demo purposes and does not follow best practices for securing database passwords. See Keeping Passwords Secure for more information.
Consul is installed alongside Nomad in this cluster, so you are able to connect using the mysql-server service name you registered with your task in the job file.
Add test data
Once you are connected to the database, verify that the items table exists:
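```sql
show tables;
```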
Display the contents of this table with the following command:
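```sql
select * from items;
```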
Now add some data to this table (after you terminate your database in Nomad and bring it back up, this data should still be intact):
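The statement below assumes the demo items table has a name column; change the value as you like:

```sql
INSERT INTO items (name) VALUES ('glove');
```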
Run the INSERT INTO command as many times as you like with different values.
Once you are done, type exit to return to the Nomad client command line:
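```sql
exit
```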
Destroy the database job
Run the following command to stop and purge the MySQL job from the cluster:
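The job name below assumes the "mysql-server" job from the job file sketch above:

```shell
nomad job stop -purge mysql-server
```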
Verify no jobs are running in the cluster:
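```shell
nomad status
```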
You can optionally stop the nomad service on whichever node you are on and move to another node to simulate a node failure.
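Assuming Nomad runs under systemd on the clients:

```shell
sudo systemctl stop nomad
```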
Re-deploy and verify
Using the mysql.nomad.hcl job file from earlier, re-deploy the database to the Nomad cluster.
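```shell
nomad job run mysql.nomad.hcl
```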
Once you re-connect to MySQL, you should be able to see that the information you added prior to destroying the database is still present:
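Reconnect with the same mysql command as before, then query the table again:

```sql
select * from items;
```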
Summary
In this guide, you deployed a highly available MySQL server using Portworx. Portworx also has a guide, Portworx on Nomad, that discusses more ways to integrate Portworx with Nomad.