In the first post in this series, I documented the process of procuring new media server hardware, installing and configuring the operating system, and getting my Drobo shares mounted with systemd and cifs.
In this post, I’ll go into some detail about setting up Plex Media Server and getting offsite backups working with Duplicati and Syncthing.
Plex Media Server
I tend to run the services that are hosted on my media server inside of Docker containers. The advantage to this approach is twofold:
- If anything stops working, I can just restart the container
- Configuration can be kept in one location, mounted by the container, and backed up to my Drobo so that I can easily recover in case of hardware failure
When setting up a new media server, I typically create a docker-compose file that lists all of the containers that I’d like to run. The container images come courtesy of linuxserver.io, which maintains an excellent collection of containerized services. Each image’s README.md contains a sample docker-compose entry that shows how to configure the service. Here’s the docker-compose file for linuxserver/plex:
version: "2.1"
services:
  plex:
    image: ghcr.io/linuxserver/plex
    container_name: plex
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - VERSION=docker
      - PLEX_CLAIM= #optional
    volumes:
      - /mnt/media/plex:/config
      - /mnt/media/tv:/tv
      - /mnt/media/movies:/movies
      - /mnt/media/pictures:/pictures
    restart: unless-stopped
I make use of the PUID and PGID environment variables to configure which user account this container runs as, and set it up to mount all of the media shares that Plex will serve.
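If you’re not sure which values to plug in there, the id command will print them for any account (jfritz here is just my own user; substitute whichever account should own the media):

$ id -u jfritz
$ id -g jfritz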
Once I have added all of my services to a single docker-compose file, I use systemd to define a service that runs docker-compose up when the system boots:
[Unit]
Description=Docker Compose Services
Requires=docker.service mnt-media.mount network-online.target
After=docker.service mnt-media.mount network-online.target
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/home/jfritz/docker
ExecStartPre=/usr/bin/docker-compose pull --quiet --parallel
ExecStart=/usr/bin/docker-compose up -d
ExecStop=/usr/bin/docker-compose down
[Install]
WantedBy=multi-user.target
This service runs at boot after the Docker daemon has started, the server has obtained an IP address, and the Drobo share that contains my media has been mounted. For more information on how I mount my media shares, see the previous post in this series. The only other part of this service that needs to be customized is the WorkingDirectory attribute, which should point at the directory that contains the docker-compose file.
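One detail that’s easy to miss: the unit file has to live somewhere systemd will find it. /etc/systemd/system is the usual spot, so the copy step looks something like this (assuming the file is named docker-compose.service to match the commands below):

$ sudo cp docker-compose.service /etc/systemd/system/docker-compose.service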
With these files in place, I can enable my new service:
$ sudo systemctl enable docker-compose.service
reload the systemd daemon so that it picks up the new service definition:
$ sudo systemctl daemon-reload
and give it a test run:
$ sudo systemctl start docker-compose.service
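If everything is wired up correctly, a quick check should show the unit active and the Plex container running:

$ systemctl status docker-compose.service
$ docker ps --filter name=plex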
At least, that’s how it’s supposed to work. When I set Plex Media Server up using this approach, it didn’t start. In fact, it didn’t even try to start, because my new service required docker.service, which didn’t exist. It turns out that modern Ubuntu distributions have repackaged a bunch of common services as “Snaps”, and that those Snaps have different names than their non-Snap counterparts. In this case, docker.service had been rebranded as snap.docker.dockerd.service, which is clearly an improvement.
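If you hit the same wall, the fix is a two-word change to the Requires and After lines; assuming the unit file lives at /etc/systemd/system/docker-compose.service, something like this does it in one shot:

$ sudo sed -i 's/docker\.service/snap.docker.dockerd.service/g' /etc/systemd/system/docker-compose.service
$ sudo systemctl daemon-reload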
Lesson learned, I tweaked my service definition to use the correct identifier for the Docker daemon and tried again. This time, the service tried to run docker-compose up, but failed with a permissions error:
ERROR: .FileNotFoundError: [Errno 2] No such file or directory: './docker-compose.yml'
I did some digging and found that I could once again invite the “Snap” edition of Docker to The Accusing Parlor. It seems that there is some sort of incompatibility between the variant of docker-compose that Snap includes and the version of Python that is present on my host system. The easiest solution to this problem was to scorch the earth: uninstall the Docker snap and reinstall Docker using apt.
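For anyone following along, the scorching boiled down to something like this (the package names are the ones in Ubuntu’s own repositories; double-check against your release):

$ sudo snap remove docker
$ sudo apt update
$ sudo apt install docker.io docker-compose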
Earth summarily scorched, my service started… well, starting, along with the Plex Docker container. Unfortunately, the actual Plex Media Server service that runs inside of the Docker container refused to come online. Every time the container started, Plex crashed while trying to initialize.
The most that I could get out of Plex was an error screen whenever I tried to load the Plex dashboard in my web browser after starting the service.
I dutifully documented my issue on the Plex forums as requested, but nobody responded. I tried switching to the plexinc/pms-docker image that is officially maintained by the Plex team, but encountered the same issue.
Giving up on Docker
Frustrated, I decided to simplify the situation by installing Plex directly onto the host machine instead of trying to run it from inside of a Docker container. I downloaded the 64-bit .deb file for Ubuntu systems and installed it with dpkg.
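The install itself is a one-liner; the exact filename depends on the version you download:

$ sudo dpkg -i plexmediaserver_*.deb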
The installer helpfully warned me that the Intel NUC platform requires a bunch of platform-specific libraries that can be obtained from Intel’s compute-runtime GitHub repo. I used the instructions on that page to install Intel Gmmlib, Intel IGC Core, Intel IGC OpenCL, Intel OpenCL, Intel OCLoc, and Intel Level Zero GPU.
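The process is roughly what the repo describes: download the .deb packages attached to the latest release into a scratch directory and install them in one shot (version numbers omitted here, since they change with every release):

$ mkdir neo && cd neo
# download the intel-gmmlib, intel-igc-core, intel-igc-opencl, intel-opencl-icd,
# intel-ocloc, and intel-level-zero-gpu .deb files from the release page, then:
$ sudo dpkg -i *.deb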
Having installed those prerequisites, I re-ran the Plex installer, and it created a systemd service called plexmediaserver.service that is responsible for starting Plex Media Server at system boot. This time, everything worked as expected, and Plex came up without issue.
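It’s worth confirming that the new unit is enabled, so that Plex comes back on its own after a reboot:

$ sudo systemctl enable plexmediaserver
$ systemctl status plexmediaserver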
I never did find out why my Docker-based solution crashed on startup. In theory, the Docker container should have isolated Plex from the vagaries of the underlying hardware, making the Intel NUC prerequisites unnecessary. In practice, the fact that I was trying to poke a hole through the container to allow Plex to access the NUC’s hardware-based video transcoding acceleration capabilities may have negated that isolation.
Either way, I had a working Plex Media Server, so I moved on.
Firewalls and Transcoder Settings
To allow clients on my home network to access Plex, I poked a hole through the firewall:
$ sudo ufw allow from 192.168.1.0/24 proto tcp to any port 32400
Finally, I navigated to the Transcoder Settings page in the Plex Media Server dashboard and enabled hardware acceleration. This configuration tells Plex to take advantage of the Intel Quick Sync technology that is built into my NUC, allowing it to offload transcoding tasks to the underlying hardware.
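A quick sanity check before flipping that switch is to confirm that the kernel actually exposes the Intel GPU’s render device; if /dev/dri is empty, hardware transcoding has nothing to attach to:

$ ls -l /dev/dri
# expect to see card0 and renderD128 (or similar) device nodes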
Syncthing
I have a good friend who runs his own home media server and NAS system. Since storage is cheap, we decided to trade storage space for offsite backups. He suggested that we use Syncthing to keep our backup folders in sync. Once again, linuxserver.io came to the rescue. Here’s the relevant entry from my docker-compose file:
  syncthing:
    image: lscr.io/linuxserver/syncthing
    container_name: syncthing
    hostname: myhostnamegoeshere
    environment:
      - PUID=1001
      - PGID=1002
      - TZ=America/Toronto
    volumes:
      - /mnt/backup/config:/config
      - /mnt/backup/remote:/remote
      - /mnt/backup/local:/local
      - /mnt/backup/fileshare:/fileshare
    ports:
      - 8384:8384
      - 22000:22000/tcp
      - 22000:22000/udp
      - 21027:21027/udp
    restart: unless-stopped
I poked all of the ports through my firewall and started the container. When I brought the service online, it started a web interface on port 8384.
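For reference, the rules mirror the port mappings above. As a sketch, with the web interface restricted to my LAN and the sync and discovery ports left open for the remote peer:

$ sudo ufw allow from 192.168.1.0/24 proto tcp to any port 8384
$ sudo ufw allow 22000/tcp
$ sudo ufw allow 22000/udp
$ sudo ufw allow 21027/udp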
The first order of business was to set a username and password to prevent those pesky hackers from reading and changing the files on my computer. Next up, I worked my way through the section in the docs that deals with configuring the service, exchanging device IDs with my friend along the way.
Once set up, Syncthing periodically scans any folders that I’ve opted to keep in sync with the remote device (in this case, a media server running at my friend’s house), and if it finds files that are out of sync, it transfers them to the remote device.
My friend configured his instance of Syncthing in a similar fashion, and the result is a two-way backup system that stores an offsite copy of my files at his house and vice versa.
Duplicati
To complete the offsite backup solution, I needed a simple way to copy the files that I want to back up over to the directory that I’ve shared via Syncthing. For this task, I chose to use Duplicati. Like Syncthing, it has been packaged into a Docker container by the linuxserver.io team, who also provide a sample docker-compose entry that can be used to get the service running.
Once again, I poked all of the ports through my firewall and started the container. With the service up and running, I navigated to the Duplicati dashboard in my web browser and set to work configuring a backup job.
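In this case the only port in play is 8200, which is where the linuxserver.io image serves the Duplicati web interface, so the rule follows the familiar pattern:

$ sudo ufw allow from 192.168.1.0/24 proto tcp to any port 8200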
I then followed the steps in the wizard, creating a backup job that copies all of my photos from the directory that they live in to a directory that I’ve configured to share with my friend via Syncthing. The backup runs every day at 1am and automatically cleans up old backups when they are no longer relevant.
At the time of this writing, my friend has backed up 49GB of data to my home server, and I’ve sent him 105GB of photos in exchange. Thanks to Duplicati, the files on both ends are split into small chunks that are compressed and encrypted, so my data is safe from prying eyes even as it is being moved back and forth across the internet or sitting on a remote server at rest.
The entire system has been pretty much bulletproof since we set it up, automatically discovering and backing up new photos as they are added to the collection.
Wrapping Up
Jack Schofield once wrote that data doesn’t really exist unless you have at least two copies of it, and I tend to agree, at least in principle. In practice, this is the first time that I’ve taken the time to live by that rule. In addition to the remote backup that is kept on my friend’s server, I took the time to snapshot my photo collection to a USB drive that will spend the rest of its life in a safety deposit box at my bank. I intend to repeat this exercise once a year for the rest of time. Given that storage is cheap, I figure that there’s no reason not to keep redundant copies of my most irreplaceable asset: The photos that my wife and I took of my boy growing up.
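The snapshot itself is nothing fancier than an rsync of the pictures share onto the mounted drive; the paths here are illustrative:

# the destination is wherever the USB drive happens to be mounted
$ rsync -avh --progress /mnt/media/pictures/ /media/usb-snapshot/pictures/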
Next time, we’ll continue this series by setting up Nextcloud, a self-hosted alternative to Dropbox, Google Drive, iCloud, etc. I’ve got most of the work done, but have been procrastinating on the final touches. Here’s hoping that I find time to finish the project soon.