Disaster Recovery with a cumbersome library/database

Hey everybody,
The server my station uses to host LibreTime at our transmitter site suffered a power supply failure. Since it was a refurbished server with a proprietary power supply, we are now replacing the whole system instead of just the power supply.

So as a backup I built a cloud instance of LibreTime on AWS and ported over the database without copying the 1 TB of files, because that is way too much data to put into a temporary instance.

Now I thought it would be simple enough to just delete the files from the database and have LibreTime resume its podcast downloads and content updates. Alas, this was not the case.

First I set up an EC2 t2.small instance, which was a bad call; I should have used something more powerful like a t3.medium to get at least multiple CPU threads. Even though I had a separate Icecast setup to host the public stream, the stream was buffering because liquidsoap was not getting enough CPU time.
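
If you are not sure whether you are CPU-starved in the same way, a quick sanity check is comparing the load average against the core count:

# If the 1-minute load average from uptime sits well above the core count
# reported by nproc, processes (liquidsoap included) are queueing for CPU.
nproc
uptime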

Next I ran the installer from git, loaded the database backup, pointed the DNS at the new instance, and everything worked. But I didn't have the files, so I wanted to delete them all.
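
For reference, the restore itself is just a standard Postgres dump and reload. A minimal sketch, assuming the default airtime database name and owner (adjust to your install):

# On the old server, or from your backups: dump the LibreTime database.
sudo -u postgres pg_dump airtime > libretime_backup.sql
# On the new instance: recreate an empty database and load the dump.
sudo -u postgres dropdb airtime
sudo -u postgres createdb -O airtime airtime
sudo -u postgres psql airtime < libretime_backup.sql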

I thought deleting the cc_files entries would let the schedule continue, but the deletes took forever because I had so many cc_schedule rows referencing them, and the foreign key constraint made every delete very slow.

This also made me realize there is a pretty major flaw in how LibreTime retains its archives. The cc_schedule table tracks everything that played at a given time and is useful for, say, generating track listings for reports. But if you delete a file, LibreTime also scrubs the past records and erases the file from the playback schedule. This is pretty bad, for a couple of reasons.

  1. It takes forever to delete a frequently used file, because the delete has to touch every cc_schedule row the file appears in.
  2. By deleting a file you are also deleting the record of what was broadcast on any particular date, because the ON DELETE CASCADE foreign key constraint removes every cc_schedule entry that contained that file. It deletes the matching cc_playout_history rows as well (you can confirm this with the psql sketch below).
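
If you want to verify the constraints on your own install, psql will show a table's definition including its foreign keys and their ON DELETE behaviour. A minimal sketch, assuming the default airtime database name:

# Show each table's columns plus its foreign key constraints and the
# tables that reference it, including whether deletes cascade.
sudo -u postgres psql airtime -c '\d cc_schedule'
sudo -u postgres psql airtime -c '\d cc_playout_history'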

If you ever need to rebuild your LibreTime instance and have a substantial track library that you don't want to bring over, I suggest wiping the schedule & playout history before deleting the files.

If you run
TRUNCATE cc_schedule;
TRUNCATE cc_playout_history CASCADE;
then you can delete file entries relative to your database backup date, for example:
DELETE FROM cc_files WHERE mtime > '2021-01-01';
or if you want to wipe everything (note that CASCADE here also empties any other tables that reference cc_files, such as playlist contents):
TRUNCATE cc_files CASCADE;

Then when you start LibreTime back up you will have your schedule, but the past schedule will be erased.
If you do the TRUNCATE of cc_schedule, any repeating shows will be recreated for the next week but not the current one. Alternatively, instead of truncating, you can delete only the schedule entries from before your backup:
DELETE FROM cc_schedule WHERE starts < '2021-01-01';
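
Before running destructive deletes like these, it is worth counting what you are about to remove. A quick check, using the same assumed airtime database and example cutoff date:

# Count the rows the DELETE above would remove before actually running it.
sudo -u postgres psql airtime -c "SELECT COUNT(*) FROM cc_schedule WHERE starts < '2021-01-01';"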

These instructions are just a summary of how my station built a recovery instance without our original track files, so we could keep streaming while we restored the actual LibreTime hardware at our transmitter.

If you do this, your podcasts should download and be added to your track library, and you can import new tracks etc. without having wiped your user database, podcasts, and the repeating shows on your future schedule.

Also, if you are running LibreTime on a very constrained system, silan & replaygain suck up a lot of resources and can cause buffering. I've tried to alleviate it by giving the liquidsoap process priority via renice:
ps aux | grep 'liquidsoap'
Find the process ID, then run
sudo renice -n -20 <pid>
replacing <pid> with the PID from the grep output (the number in the second column of the line containing airtime-liquidsoap).
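
If you'd rather skip the manual lookup, here is a one-liner sketch using pgrep (assuming a single liquidsoap process is running):

# pgrep -f matches against the full command line, so it also finds
# airtime-liquidsoap; head -n 1 guards against multiple matches.
sudo renice -n -20 -p "$(pgrep -f liquidsoap | head -n 1)"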

This seems to help.

Anyways, I think we should have an option to disable replaygain and silan, and also rethink the way we archive our playout and schedule history.

#HugOps

You could also consider running pypo & Icecast on their own instances to alleviate buffering. pypo only needs access to the API, since it downloads tracks locally before playing them through liquidsoap instead of accessing them directly.
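
I haven't split ours out yet, but the rough idea would be to install pypo/liquidsoap on a second box and point its config at the main server's API. Something along these lines, assuming the stock /etc/airtime/airtime.conf layout (the hostname is a placeholder, and service names vary between versions):

# On the standalone pypo box: point base_url at the main LibreTime server
# instead of localhost, then restart the playout services (check
# systemctl list-units for the exact names on your version).
sudo sed -i 's/^base_url = .*/base_url = libretime.example.org/' /etc/airtime/airtime.conf
sudo systemctl restart libretime-playout libretime-liquidsoap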

Luckily the hard drive survived, so I finished a local backup and then put it in a new machine, updated the OS to 18.04, reran the LibreTime install, and we were back with our original setup.

I do think we should give some thought to how we handle file deletion though, as having it affect the playout record and the viewable schedule of past shows seems like an issue. I might open a bug report.