So my SHS is running in Docker and I have mounted a volume outside.
I’m using continuous backup to my Jottacloud for the mounted volume, but is that safe? There is a SQLite directory there. Do I need to stop the container and schedule backups instead?
Well, at minimum you are backing up a live system, so for SQLite I would suggest doing some research on continuous backup to Jottacloud vs. the data integrity of a SQLite DB that is constantly being updated. My best guess is that on a heavily used system that could be a problem; then again, Homey is using journaling, so it could be okay…
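One commonly suggested alternative to letting the sync tool copy the live files is SQLite’s own online backup, which takes a consistent copy even while the database is in use. A minimal sketch, assuming the sqlite3 CLI is available on the host, with placeholder paths you would adjust to your own volume layout:
# Placeholder paths -- adjust to where your volume is mounted.
DB="/data/homey-shs/sqlite/db.sqlite"
DEST="/data/backup/db.sqlite"
# .backup uses SQLite's online backup API, so the copy stays consistent
# even if the container keeps writing during the copy.
sqlite3 "$DB" ".backup '$DEST'"
The continuous Jottacloud job could then watch the destination folder instead of the live database directory.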
Maybe a feature request for Homey: a backup flow card that creates a consistent backup in some /backup folder. I’ll create a ticket for that. It should be easy for Homey to implement.
I’m using SHS on Synology NAS.
I have several options for backup, like Synology snapshot replication and Hyperbackup.
My concern is which of these should be regarded as the safest to use.
For Synology snapshot replication:
“For mission-critical SQLite data, Synology recommends ensuring “Application Consistency.” This typically requires freezing the application or the database before the snapshot triggers to ensure all transactions are flushed to disk”.
As I understand this, if a snapshot happens during a write, the SQLite database could be corrupted.
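One way to narrow that window is to flush the write-ahead log into the main database file right before the snapshot fires. A sketch, assuming shell access to the NAS, the sqlite3 CLI, and a placeholder path; note this reduces the risk but does not freeze the application, so it is not full “Application Consistency”:
# Flush and truncate the WAL so the main .sqlite file is up to date.
sqlite3 /volume1/docker/homey-shs/sqlite/db.sqlite "PRAGMA wal_checkpoint(TRUNCATE);"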
In my opinion, the best solution would be that Athom implements a mechanism to backup the docker installation safely.
Maybe subscription based like the same as for the Homey Pro, @Doekse?
I created a ticket for this today. The only good option is for Athom to add a new backup flow card that creates a consistent backup/zip in a backup folder. Then the user can decide for themselves how often to run it and how securely it must be stored.
It is good to note that most platforms, like unRAID, Proxmox and others, have built-in backup tools.
And what exists for Docker?
Where/How are you running Docker?
I’m running docker on Ubuntu
I’m running Debian
And I’m running Docker on my Synology NAS. Not sure if the Synology backup solutions are safe to use with SQLite, since a snapshot taken during a write could corrupt the database.
A solution from Athom would be highly appreciated.
And I’m running OrbStack on a Mac Mini.
Looks like a hot copy is not such a good idea. I just saw this in my Jottacloud backup job…
pid:931 2026/01/09 22:00:15 Error uploading [#1] /data/docker-data/homey-shs/sqlite/db.sqlite-wal => upload: {"code":421,"message":null,"cause":"","error_id":"CorruptUploadOpenApiException","x-id":"U3dnV09aTlVpSmZT"}
I opened a ticket with support, and a backlog item was registered for a flow card that creates a consistent backup in its own backup folder. Let’s hope that will surface soon.
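If you want to know whether a copied or restored database is actually usable, SQLite can verify it directly. A sketch with a placeholder path:
# Prints "ok" when the database file is internally consistent.
sqlite3 /path/to/restored/db.sqlite "PRAGMA integrity_check;"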
Hello,
I’ve started writing a backup script. (Debian 13)
However, when restoring, you have to remember that old and new data will be mixed.
So, do you have to delete everything before a restore?
I’m not working on SHS at the moment because I don’t have time right now.
Here’s my suggestion (crontab at 10:30 PM):
homeybkp.sh
#!/bin/bash
docker stop homey-shs
# --- Configuration ---
SOURCE_DIR="/home/superuser/.homey-shs" # Your Homey data directory (e.g. /home/max/.homey-shs)
BACKUP_DIR="/mnt/systembkp/docker/homey-shs" # Backup destination (e.g. an external disk)
DATE=$(date +%Y-%m-%d_%H-%M-%S)
LOG_FILE="$BACKUP_DIR/backup-$DATE.log"
RETENTION_DAYS=30 # Delete backups older than X days
# --- Script logic ---
# Create the backup directory if it does not exist
mkdir -p "$BACKUP_DIR"
echo "--- Backup start: $DATE ---" >> "$LOG_FILE"
# Use tar to archive and compress the Homey directory
# -c: create archive, -z: compress with gzip, -v: verbose, -f: file name
tar -czvf "$BACKUP_DIR/homey-$DATE.tar.gz" \
"$SOURCE_DIR" >> "$LOG_FILE" 2>&1
echo "Backup finished: $(date +%Y-%m-%d_%H-%M-%S)" >> "$LOG_FILE"
docker start homey-shs
# Delete old backups (cleanup)
echo "Deleting backups older than $RETENTION_DAYS days..." >> "$LOG_FILE"
find "$BACKUP_DIR" -name "homey-*.tar.gz" -type f -mtime +$RETENTION_DAYS -delete >> "$LOG_FILE" 2>&1
echo "--- Backup end ---" >> "$LOG_FILE"
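For completeness, the matching crontab entry for the 10:30 PM schedule mentioned above could look like this (the script path is an assumption):
# crontab -e
30 22 * * * /home/superuser/homeybkp.sh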
The restore will not work because the sql.db file is incompatible!!
The cleaning of /.homey-shs is still missing here:
homeyrestore.sh
#!/bin/bash
docker stop homey-shs
# Pick the newest matching archive; a quoted glob would not expand, so resolve it first
BACKUP_FILE=$(ls -t /home/superuser/homey*.tar.gz 2>/dev/null | head -n 1)
TARGET_DIR="/home/superuser/.homey-shs"
# Check whether the backup file exists
if [ -z "$BACKUP_FILE" ] || [ ! -f "$BACKUP_FILE" ]; then
  echo "Error: backup file not found: /home/superuser/homey*.tar.gz"
  exit 1
fi
echo "Extracting $BACKUP_FILE to $TARGET_DIR..."
mkdir -p "$TARGET_DIR" # Create the target directory if it does not exist
tar -xzvf "$BACKUP_FILE" -C "$TARGET_DIR" # x=extract, z=gzip, v=verbose, f=file, -C=change directory
echo "Wiederherstellung abgeschlossen."
docker start homey-shs
Yes, I have also created a cron script to stop the container, back up to a date-stamped zip, and start it again. It serves two purposes:
Offline backup
My SHS is killing itself every day or two without any logs explaining why, so the scheduled restart might also keep the server stable.
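As a side note on the crash issue: Docker’s own restart policy can bring the container back up between the scheduled restarts. A sketch, assuming the container is named homey-shs; with unless-stopped, a scripted docker stop (as in the backup script) still keeps it down until docker start:
# Restart the container automatically if it dies, but respect manual stops.
docker update --restart unless-stopped homey-shs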
Just a tip: your restore script stops the container, then checks if the backup file exists, and if not, it exits. The container is then still stopped.
Ah, okay.
I’ll add rm -rf /path/to/folder.
There’s no risk; the SHS is still empty.
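For reference, a minimal sketch of that cleaning step, using the same TARGET_DIR as in homeyrestore.sh (the rm -rf path above is a placeholder):
# Empty the target directory but keep the directory itself,
# which is slightly safer than rm -rf on a variable that might be unset.
TARGET_DIR="/home/superuser/.homey-shs"
find "$TARGET_DIR" -mindepth 1 -delete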
What I mean is that to me it doesn’t make sense to first stop the container and then check if the file exists. It makes more sense to do it the other way around.
Yes, I understand that.
I first need to copy the desired version of homey.tar.gz from the NAS back to the home directory.
That’s a good way to check if the tar.gz file is actually there.
But the restore isn’t working properly yet.
Not all folders are being restored.
I need more time.
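A likely cause worth checking: the backup script archives the absolute path $SOURCE_DIR, and GNU tar strips the leading “/” and stores entries as home/superuser/.homey-shs/…, so extracting with -C "$TARGET_DIR" nests everything under $TARGET_DIR/home/superuser/.homey-shs instead of restoring it in place. A sketch for diagnosing and fixing this, assuming GNU tar and the paths from the scripts above (the archive name is an example):
# List the first entries to see which paths the archive really contains.
tar -tzf /mnt/systembkp/docker/homey-shs/homey-<date>.tar.gz | head
# If entries start with home/superuser/.homey-shs/, either extract from /
# so the stripped absolute paths land back where they came from...
tar -xzvf "$BACKUP_FILE" -C /
# ...or drop the three leading path components while extracting:
tar -xzvf "$BACKUP_FILE" -C "$TARGET_DIR" --strip-components=3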