Today, we will talk about Tdarr, why I decided to use it, and the configuration options available.
Tdarr V2 is software that allows automatic conversion of your video library, including movies, series, etc.
Link to Tdarr: Tdarr
It has many interesting features, especially if, like me, you have a lab with multiple hosts, automated library management, containers, etc.


Additionally, there is a business license for setups requiring more than five nodes and commercial use.
Why convert the library? In my case, I have a library of about 50 TB with a mix of video codecs and resolutions, which creates several issues, from inconsistent playback support across clients to wasted storage.
I won’t need much configuration since I just want to convert videos to HEVC without renaming or moving files to a different folder, etc. The default configuration suffices.
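Under the hood, that default conversion boils down to an ffmpeg call along these lines. This is a sketch with assumed flags and file names, not Tdarr's exact plugin invocation:

```shell
# Roughly the kind of command an HEVC transcode runs: re-encode video
# to x265 and copy audio and subtitle streams untouched.
# File names and flags here are illustrative examples.
input="movie.mkv"
output="movie_hevc.mkv"
if [ -f "$input" ]; then
  ffmpeg -i "$input" -map 0 -c:v libx265 -crf 22 -c:a copy -c:s copy "$output"
else
  echo "sample file $input not found; nothing to transcode"
fi
```

Tdarr's default plugins handle this for you, which is why no extra configuration is needed here.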
Storage will be shared on Synology using NFS, which I’ll mount on the VMs.
Prerequisites

We'll set up the manager on Docker and the nodes on VMs. For this, you'll need:

- HandBrakeCLI (see the HandBrake downloads)
- ccextractor
- ffmpeg
- MKVToolNix
- A /opt directory to download Tdarr into
- The following folders:
  - tdarr-server for the manager's persistent storage.
  - tdarr-logs for storing logs.
  - tdarr-config for the manager's configuration.
  - cache as a working folder for the nodes to perform conversions.
  - media as the root folder containing your media library.
As mentioned earlier, it is crucial that the folders are accessible to both the manager and nodes. Moreover, they must have the same path. If they don’t, you’ll need to configure "pathTranslator" entries in the configuration files. Incorrect configuration may result in errors when moving converted files back to the libraries.
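To illustrate what a pathTranslator entry does: it is essentially a prefix rewrite from the manager's view of a path to the node's. A minimal sketch, with hypothetical prefixes (say the manager saw the library at /volume1/Plex while a node mounted it at /media):

```shell
# Rewrite the manager-side path prefix into the node-side prefix,
# mirroring what a pathTranslator entry declares. Prefixes are examples.
translate_path() {
  server_prefix="/volume1/Plex"
  node_prefix="/media"
  case $1 in
    "$server_prefix"*) printf '%s\n' "$node_prefix${1#"$server_prefix"}" ;;
    *) printf '%s\n' "$1" ;;
  esac
}
translate_path "/volume1/Plex/Movies/film.mkv"   # → /media/Movies/film.mkv
```

In this guide both sides use identical paths, so the translator entries can stay empty.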
The two critical folders are cache and media. For both the manager and the nodes, we will set the same paths:

- /media
- /temp

Optionally, you could centralize the configuration and logs as well. I didn't do this in my case, but it's as simple as sharing a folder for /opt/Tdarr/logs/ and /opt/Tdarr/configs/.
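Before installing a node, it's worth checking whether the prerequisite tools are already present. A quick sketch (the binary names are the usual Linux ones; mkvpropedit ships with MKVToolNix, and names may differ on some distros):

```shell
# Report which of the prerequisite binaries are already on this machine.
missing=""
for tool in HandBrakeCLI ccextractor ffmpeg mkvpropedit; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
    missing="$missing $tool"
  fi
done
if [ -z "$missing" ]; then
  echo "all prerequisites present"
fi
```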
Fields to modify for your environment: TZ, PUID/PGID, serverIP, and the volume paths.
Docker Compose:
version: "3.4"
services:
  tdarr:
    container_name: tdarr
    image: ghcr.io/haveagitgat/tdarr:latest
    restart: unless-stopped
    ports:
      - 8265:8265 # webUI port
      - 8266:8266 # server port
    environment:
      - TZ=Europe/Madrid
      - PUID=1026
      - PGID=100
      - UMASK_SET=002
      - serverIP=192.168.10.3
      - serverPort=8266
      - webUIPort=8265
      - internalNode=true
      - inContainer=true
      - ffmpegVersion=6
      - nodeName=MyInternalNode
      - NVIDIA_DRIVER_CAPABILITIES=all # Only if we need hardware acceleration
      - NVIDIA_VISIBLE_DEVICES=all # Only if we need hardware acceleration
    volumes:
      - /volume1/docker/tdarr/tdarr-server:/app/server
      - /volume1/docker/tdarr/tdarr-config:/app/configs
      - /volume1/docker/tdarr/tdarr-logs:/app/logs
      - /volume1/Plex:/media
      - /volume1/docker/tdarr/cache:/temp
networks:
  default:
    name: NetworkName
    external: true
Official documentation: Tdarr Compose
If you want to enable additional acceleration options, you can check them here: Hardware transcoding
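If you go the NVIDIA route, the container also needs the GPU exposed to it, in addition to the NVIDIA_* environment variables above. A minimal Compose fragment, assuming the NVIDIA Container Toolkit is already installed on the Docker host:

```yaml
services:
  tdarr:
    # Grant the container access to all NVIDIA GPUs on the host.
    # Requires the NVIDIA Container Toolkit; merge into the compose file above.
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```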
Deploy the container (docker compose up -d) and then edit the configuration file at /tdarr/tdarr-config/Tdarr_Server_Config.json.
An example configuration would be:
{
  "serverPort": "8266",
  "webUIPort": "8265",
  "serverIP": "192.168.10.3",
  "serverBindIP": false,
  "handbrakePath": "",
  "ffmpegPath": "",
  "logLevel": "INFO",
  "mkvpropeditPath": "",
  "ccextractorPath": "",
  "openBrowser": true,
  "cronPluginUpdate": "",
  "auth": false,
  "authSecretKey": "tsec_n0rOS2xNDJ9UM4De_WhgFhKN1SgYX",
  "maxLogSizeMB": 10
}
Example of fstab entries:
192.168.10.3:/volume1/Plex /media nfs async,intr,bg
192.168.10.3:/volume1/docker/tdarr/cache /temp nfs async,intr,bg
Download Tdarr into the /opt/ directory:

cd /opt/
wget https://storage.tdarr.io/versions/2.17.01/linux_x64/Tdarr_Updater.zip
unzip Tdarr_Updater.zip
chmod +x Tdarr_Updater
./Tdarr_Updater
cd Tdarr_Node/
./Tdarr_Node

Configure:
Example node configuration file at /opt/Tdarr/configs/Tdarr_Node_Config.json:
{
  "nodeName": "AST-Worker01",
  "serverURL": "http://192.168.10.3:8266",
  "serverIP": "192.168.10.3",
  "serverPort": "8266",
  "handbrakePath": "/usr/bin/HandBrakeCLI",
  "ffmpegPath": "/usr/bin/ffmpeg",
  "mkvpropeditPath": "",
  "pathTranslators": [
    {
      "server": "",
      "node": ""
    }
  ],
  "nodeType": "mapped",
  "unmappedNodeCache": "/opt/tdarr/unmappedNodeCache",
  "logLevel": "INFO",
  "priority": -1,
  "cronPluginUpdate": "",
  "apiKey": "",
  "maxLogSizeMB": 10,
  "pollInterval": 2000
}
Tdarr Node configuration as a service
Create the configuration file for the service as /etc/systemd/system/tdarr_Node.service:
[Unit]
Description=Tdarr Node Daemon
After=network.target
[Service]
User=root
Group=root
Type=simple
WorkingDirectory=/opt/Tdarr/Tdarr_Node
ExecStart=/opt/Tdarr/Tdarr_Node/Tdarr_Node
TimeoutStopSec=20
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
Enable the service:
sudo systemctl enable tdarr_Node
Start and check the service:
systemctl start tdarr_Node
systemctl status tdarr_Node

Adding the Worker to the Manager
Once the node service is running, it should register automatically with the Manager. If it doesn't appear, verify that the node can reach the Manager at serverIP:serverPort (192.168.10.3:8266 in this example) and that the values in Tdarr_Node_Config.json match the Manager's configuration.

I've paused "MyInternalNode" and enabled all CPU cores except one on each node.
Add your media libraries from the Manager and verify that the workers process them correctly. You can adjust transcoding profiles and preferences from the Manager. With this, you should have a functional Tdarr system with a centralized Manager and distributed Workers.
Author: Francisco Tocino