ITSquad/Mastodon/Maintenance

From Pirate Party Belgium

Revision as of 00:38, 7 July 2019

This page describes the procedures for maintaining the Pirate Party's Mastodon instance.

How to upgrade?

A script called update.sh, located in /home/mastodon, reproduces the steps below. Note: this script stops at step 11 and does not go further.

Before running this script or the following commands, please check the upgrade instructions for the new release. Sometimes, additional actions are needed (other than migrating the database and compiling the assets).

  1. git fetch -t # update the git branch, including new tags
  2. git stash # prevent local changes from being overwritten (mainly those in the docker-compose.yml file)
  3. git checkout vX.X.X # jump to the version to which we want to update
  4. git stash pop # restore your changes
  5. docker-compose pull # pull from the docker-hub https://hub.docker.com/r/tootsuite/mastodon/
  6. docker-compose exec db pg_dump -U postgres -Fc postgres > ../dump_$(date +%d-%m-%Y"_"%H_%M_%S).sql # backup the database
  7. tail ../your_dump_file.sql # check that the backup worked, with your_dump_file.sql being the dump created in the previous step (the -Fc dump is binary, so expect unreadable output; an empty or missing file means the backup failed)
  8. docker-compose down # shut down the containers.
    # You may get "ERROR: An HTTP request took too long to complete" and other errors. Ignore them and just wait until it's done.
  9. docker-compose run --rm web rails db:migrate # upgrade the database
  10. docker-compose run --rm web rails assets:precompile # compile the assets
  11. docker-compose up -d # start the mastodon instance (create new volumes)
  12. docker-compose logs -ft web # (optional) monitor the progress. Press Ctrl+C to exit once it is done
  13. docker system prune -a # remove all unused volumes, old images, etc.
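Taken together, steps 1-11 could be stitched into a script along the following lines. This is a hypothetical sketch, not the actual update.sh: the run helper only prints each command, so the script reads as a dry run; replacing its body with eval "$@" would execute the commands for real.

```shell
#!/bin/sh
# Sketch of an update script covering steps 1-11 above (hypothetical, not the real update.sh).
# Dry run: run() only prints each command; swap its body for `eval "$@"` to execute.
set -eu

VERSION="${1:-vX.X.X}"    # target Mastodon version tag, passed as first argument
run() { echo "+ $*"; }

run git fetch -t                                         # 1. update tags
run git stash                                            # 2. save local changes
run git checkout "$VERSION"                              # 3. switch to the new version
run git stash pop                                        # 4. restore local changes
run docker-compose pull                                  # 5. pull the new images
run "docker-compose exec db pg_dump -U postgres -Fc postgres > ../dump.sql"  # 6. backup the db
run docker-compose down                                  # 8. stop the containers
run docker-compose run --rm web rails db:migrate         # 9. migrate the database
run docker-compose run --rm web rails assets:precompile  # 10. compile the assets
run docker-compose up -d                                 # 11. start Mastodon again
# Steps 7 (check the dump), 12 (watch the logs) and 13 (docker system prune) stay manual.
```

Checking the dump (step 7) is deliberately left out of the sketch: an upgrade should not proceed automatically past a backup nobody has looked at.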

What to do when something goes wrong?

Don't panic. You can restore the database backup as follows:

  1. docker-compose stop
  2. docker-compose start db
  3. docker-compose exec db dropdb postgres -U postgres # remove the db !!!!!!!
  4. docker-compose exec db createdb postgres -U postgres # create a fresh and new db
  5. docker exec -i mastodon_db_1 pg_restore -U postgres -d postgres < ../your_dump_file.sql # restore the database, with "your_dump_file" being a database backup (pg_restore is needed because the dump was made with -Fc; a plain SQL dump would be piped into psql instead)
  6. docker-compose down
  7. docker-compose up -d
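Before dropping the database in step 3, it is worth sanity-checking the dump you are about to restore: a pg_dump custom-format (-Fc) archive always starts with the magic bytes PGDMP. A minimal check, using a stand-in file so the example is runnable:

```shell
# Stand-in for a real dump; in practice point `dump` at ../your_dump_file.sql.
printf 'PGDMPrest-of-the-archive' > /tmp/demo_dump.sql
dump=/tmp/demo_dump.sql

# A custom-format archive begins with the 5 magic bytes "PGDMP".
if [ "$(head -c 5 "$dump")" = "PGDMP" ]; then
    echo "ok: looks like a pg_dump custom-format archive"
else
    echo "warning: not a custom-format archive, pg_restore will refuse it" >&2
fi
```

When pg_restore is available on the host, pg_restore -l ../your_dump_file.sql is a stronger check: it lists the archive's table of contents and fails outright on a corrupt file.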

You can also roll back to a previous version of Mastodon with:

  1. git checkout vX.X.X
  2. sed -r 's/(image:.+mastodon)(:.+)?$/\1:vX.X.X/g' docker-compose.yml # don't forget to change this back to "latest" when things get fixed
  3. docker-compose pull

What to backup?

Backups of the database are not enough. We also need to back up media files, user feeds, etc.

According to the official documentation,[1] we need to take special care of the following files and directories:

  • The public/system directory, which contains user-uploaded images and videos
  • The .env.production and docker-compose.yml files, which contain server config and secrets
  • The Postgres database, using pg_dump (see below)
  • The /etc directory, which contains the system's configuration files

Moreover, the backups must be encrypted on the storage server.

How to backup?

Currently: Every night, a backup of the database is made on the server (but it is overwritten by the next nightly backup). This is clearly not optimal. Below is the ideal situation we would like to reach. We do not back up media and configuration files yet.

We still need to implement a more reliable solution

Database

For a one-off backup, we can simply dump the database as follows:

docker-compose exec db pg_dump -U postgres -Fc postgres > /home/mastodon/backup/db/dump_$(date +%d-%m-%Y"_"%H_%M_%S).sql

In short, the -Fc option selects Postgres' custom archive format, which is compressed by default. pg_dump takes a consistent snapshot of the database[2], which means we don't have to stop Mastodon while dumping the database ;)
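As a side note, the date format in the command above produces day-first filenames. A quick, runnable look at the pattern (the case test is only for illustration):

```shell
# Reproduce the filename pattern used in the pg_dump command above.
name="dump_$(date +%d-%m-%Y"_"%H_%M_%S).sql"
echo "$name"

# The shape should be dump_DD-MM-YYYY_HH_MM_SS.sql.
case "$name" in
    dump_??-??-????_??_??_??.sql) echo "format ok" ;;
    *) echo "unexpected format" >&2 ;;
esac
```

One caveat: day-first names do not sort chronologically in a directory listing; a %Y-%m-%d prefix would, which matters if old dumps are ever pruned by name.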

However, over a long time period, we usually want incremental backups to avoid duplicating the whole database each day.

Fortunately, Postgres provides all the tools we need to set up incremental backups.[3] (to be continued)
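For reference, the Postgres tooling in question revolves around continuous WAL archiving: a base backup plus archived WAL segments gives incremental, point-in-time restores. The postgresql.conf side looks roughly like this (a sketch; the archive path is a placeholder and the archive_command mirrors the example from the Postgres documentation):

```
# postgresql.conf excerpt (sketch)
wal_level = replica        # emit enough WAL to rebuild the database
archive_mode = on
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'
```

With this in place, pg_basebackup can take the periodic full backup, and the archived WAL files fill in everything between two full backups.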

Media and configuration files

It can be useful to run the following command before backing up the media:

docker-compose run --rm web bin/tootctl media remove --days=14

This removes the local cache of remote media older than the given number of days (7 by default; here we set it to 14). Note that this command is executed daily on our instance.

We can back up the media using duplicity:[4][5][6]

duplicity --encrypt-key <gpg encrypt key> --full-if-older-than 2W --name mastodon_medias --num-retries 3 public/system rsync://<server>:/path/to/backups

This will store the encrypted backup on a remote server. The backups are incremental, but every two weeks a full backup is made.

We do the same for configuration files, although we can keep them for a longer period as they weigh almost nothing:

duplicity --encrypt-key <gpg encrypt key> --full-if-older-than 1M --name mastodon_system_config --num-retries 3 /etc rsync://<server>:/path/to/backups

TODO: Find a way to include .env.production and docker-compose.yml
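One possible answer to that TODO, assuming both files live in /home/mastodon/mastodon: duplicity's --include/--exclude file selection can restrict a backup to just those two files. An untested sketch:

```
duplicity --encrypt-key <gpg encrypt key> --full-if-older-than 1M --name mastodon_app_config --num-retries 3 --include /home/mastodon/mastodon/.env.production --include /home/mastodon/mastodon/docker-compose.yml --exclude '**' /home/mastodon/mastodon rsync://<server>:/path/to/backups
```

The --exclude '**' drops everything not explicitly included, so only the two configuration files end up in the mastodon_app_config archive.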

We can also define how long we keep the backups:

duplicity remove-older-than 2W --force --name mastodon_medias rsync://<server>:/path/to/backups

This will remove any backup older than two weeks.
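Putting the pieces together, the media cleanup, the duplicity backup, and the retention command could all be scheduled from cron. A hypothetical crontab (times are illustrative; the placeholders from the commands above are kept as-is):

```
# m h dom mon dow   command
0  3 * * *  cd /home/mastodon/mastodon && docker-compose run --rm web bin/tootctl media remove --days=14
30 3 * * *  cd /home/mastodon/mastodon && duplicity --encrypt-key <gpg encrypt key> --full-if-older-than 2W --name mastodon_medias --num-retries 3 public/system rsync://<server>:/path/to/backups
0  4 * * 0  duplicity remove-older-than 2W --force --name mastodon_medias rsync://<server>:/path/to/backups
```

Ordering the cleanup before the backup keeps the expired media cache out of that night's increment.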

References

  1. https://docs.joinmastodon.org/administration/migrating/
  2. https://www.postgresql.org/docs/current/app-pgdump.html
  3. https://www.opsdash.com/blog/postgresql-backup-restore.html
  4. http://duplicity.nongnu.org/
  5. https://splone.com/blog/2015/7/13/encrypted-backups-using-rsync-and-duplicity-with-gpg-and-ssh-on-linux-bsd/
  6. https://blog.rom1v.com/2013/08/duplicity-des-backups-incrementaux-chiffres/