Managing LibreTime development with live servers

I run a LibreTime instance for my community which I need to keep somewhat stable, but I would also like to create new features and contribute them back to the LibreTime project. Ideally, I would see the results of my changes running on our servers before they land in LibreTime releases.

How do other developers balance this?

Is anyone else moving LibreTime forward while maintaining their own live servers? What does that workflow look like?

How does Libretime handle database upgrades?

-Ryan

This is a good discussion that needs to happen. I’ve applied some patches manually to implement new features, but I think there are also bugs that have been fixed in the latest alpha that I haven’t deployed. I’ve been hesitant to upgrade the Python components.

In theory, though, all one needs to do to upgrade is rerun the install script; this should package the latest versions of the Python apps and update all of the PHP files, etc. It definitely makes sense to back everything up beforehand.
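
Something like this, assuming the common defaults (a database named airtime, configs in /etc/airtime, media in /srv/airtime); adjust the names and paths to your install:

```bash
# Pre-upgrade backup sketch; paths and names below are the usual defaults,
# not necessarily yours.
sudo -u postgres pg_dump airtime > airtime-db-$(date +%F).sql   # database
sudo tar czf airtime-conf-$(date +%F).tar.gz /etc/airtime       # configs
sudo tar czf airtime-media-$(date +%F).tar.gz /srv/airtime      # media store
```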

As far as the database goes, updates are handled by the upgrade_sql controller. I think it’s still a little tricky in terms of versioning, but basically we can create a new upgrade.sql for the latest version and the controller should run it automatically and upgrade the database. I think the upgrade file needs to correspond to the version of LibreTime set as airtime_version in the config for this to work.
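
For illustration, this is my reading of how those migrations are laid out; the exact paths are assumptions, so verify them against your checkout:

```bash
# One folder per version; the upgrade controller runs the SQL for every
# version newer than the one currently recorded for the install.
ls airtime_mvc/application/controllers/upgrade_sql/
#   airtime_2.5.14/   airtime_3.0.0-alpha/   ...
```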

Hello. We are starting a new project and we decided on LibreTime. For the first shot I wanted a stable version, so I cloned the repo (editing .gitattributes, because binary files such as PNGs and JPGs were shown as modified) and checked out the tag 3.0.0-alpha3. I can see that alpha4 is coming soon, but it is not clear how to upgrade. Shall I just check out the particular commit and run the install script again?
I did not find satisfying information at http://libretime.org/manual/upgrading/

I think the easiest way to upgrade is basically just to rerun the install script, or to copy the airtime-mvc folder on top of your installed version and rerun it. We have been trying to avoid unnecessary changes to the database, and the changes that do happen should be applied automatically during updates.
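
Sketched out, the “check out a version and rerun the installer” route looks roughly like this (the tag name comes from the question above, and the installer invocation is from memory, so check ./install --help on your checkout first):

```bash
cd libretime                 # your clone of the repo
git fetch --tags             # pick up newly published versions
git checkout 3.0.0-alpha4    # once the tag exists
sudo ./install               # rerun the installer over the existing setup
```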

The Python apps can be reinstalled as well, usually by running setuptools. We need to test and document this more, for sure. But I’d advise that no version of LibreTime should be considered “stable” yet, as there are still bugs in the codebase.
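
Roughly, reinstalling them one by one would look like the following; the directory names are my recollection of the repo layout, so verify before running:

```bash
# Each Python app ships its own setuptools project.
cd python_apps/airtime_analyzer && sudo python setup.py install
cd ../pypo && sudo python setup.py install
```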

hello all!

Thanks in advance, everybody, for your patience with my questions and long-windedness. Eventually it would be great to include production deployment recipes in the official documentation, as this will continue to be a big barrier to entry as long as the project remains in an alpha stage.

I’m going to focus here on the “pull” part of the workflow, as that’s all that will happen on our community radio station’s little server. @rfb I’m thinking contributions pushed to LibreTime usually happen from dev instances spun up via Vagrant (as per the Getting Started instructions).

Goals:

  • Ensure availability and reliability for a live radio station
  • Quickly test updates in an environment identical to production, then push them to production
  • Facilitate easy backups of the entire working LibreTime instance, including the ability to restore to a different machine

Context:
Our 16-year-old community radio station has a lot of heart but very little budget or tech savvy amongst collective members. Internet connections here in the poorest state in México are slow and intermittent, but we’d greatly benefit from being able to control programming remotely when possible, ideally via an easy web interface rather than the tortuously slow TeamViewer.

Current thinking and questions:
After a few time-consuming failed experiments installing alpha LibreTime releases directly on the base system, we’re now thinking that running LibreTime in a virtual machine (VM) is worth the complexity it adds to the system setup, because it gives us a better ability to recover quickly from potential future problems, a better update workflow, and better security. However, I have no idea whether this approach could be simplified or improved, so I need to reach out to the community. :slight_smile:

Behold my gorgeous diagram of our current tentative proposal.

Note that all user content (audio files, database) is stored outside of the VM that contains LibreTime, so that we can completely replace the VM with a backup or an update, hook the database and media settings back in, and be back up and running.
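
One possible shape for that, assuming an NFS export from the host (the address and paths are placeholders, and your hypervisor may offer nicer shared-folder options):

```bash
# Mount host-side storage inside the guest so audio + database data
# survive replacing the VM. 192.168.122.1 is the usual libvirt host IP.
sudo mount -t nfs 192.168.122.1:/srv/libretime-data /srv/airtime
# Or persist it across reboots:
echo "192.168.122.1:/srv/libretime-data /srv/airtime nfs defaults 0 0" \
  | sudo tee -a /etc/fstab
```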

I think the biggest problem with this setup is that our specific VM configuration is not controlled within git. It seems like tools such as Puppet, Chef, etc. might be perfect here, but I simply have no idea. For example, we need to add a line to the VM’s hosts file directing our externally-accessible domain (the “Webserver Host” in the LibreTime install step) to 127.0.0.1, or the Icecast streams will fail (see the snippet below). It sure would be great to be able to just run a “vagrant up”-type command and get a complete, configured VM from our fork of the LibreTime repository. We haven’t really crossed this bridge yet, but for now it seems like we’ll also need to maintain a cloneable VM without LibreTime, complete with all our specific config, ready to receive the LibreTime codebase and run the install script.
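
For concreteness, here’s that hosts-file tweak as a single provisioning line; radio.example.org stands in for whatever domain you gave as the Webserver Host:

```bash
# Pin the public domain to loopback inside the VM so the Icecast streams
# resolve locally.
echo "127.0.0.1 radio.example.org" | sudo tee -a /etc/hosts
```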

whew that got long. thanks folks!
ryan from Frecuencia Libre

Cool. Yeah, my devops skills are also rather limited because I’ve never had the opportunity to really test them. The Vagrant prescription seems to work pretty well for setting up a development environment, but I haven’t thought about how it would need to be modified to work without Vagrant.

Just yesterday I did a manual upgrade of our codebase, and I was nervous that something would fail. In fact, I ran into an issue with Amazon S3 in a conf.php file and had to restore the config from a backup I had made.

This is the process I followed, which is far from ideal but worked: I basically did a mv of the airtime-mvc folder, then did a git pull and copied the new airtime-mvc over the previous folder. I realized the server wasn’t working, copied the file it was hanging on back from the previous airtime-mvc folder, and it started working again.
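
Spelled out, it amounted to something like this; not a recommendation, just the steps for the record (the conf.php path is my assumption about where the site-specific config lives):

```bash
mv airtime-mvc airtime-mvc.bak   # set the old tree aside
git pull                         # bring in the new code
# The server broke at this point; the fix was restoring the local config
# from the old tree:
cp airtime-mvc.bak/application/configs/conf.php \
   airtime-mvc/application/configs/conf.php
```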

So yeah, I’ve also been thinking that something like Docker might be a better choice than what we currently have, because it would allow us to provide a completely packaged system for people to use, rather than having them go through the whole challenge of the install script and the various potholes they seem to run into with different VPS providers.

I guess we should really have an upgrade script too, or do heavy testing on the install script to make sure it really works when updating to a new version. All of this work still needs to be done. Let us know if you have any specific questions, etc.

possible change of course: https://github.com/ned-kelly/docker-multicontainer-libretime/issues/1

Is anything discussed here related to what @hairmare was mentioning in LibreTime on AWS?

I would assume yes; both of these are discussions about using containers. However, I’m such a newbie that I shouldn’t talk much, haha. My tiny amount of experience here leads me to believe that it’s possible to use Docker containers on single servers (as in our small-scale use case) without getting into Kubernetes, which @hairmare mentions. I think Docker is the tool that creates containers, and Kubernetes then manages those containers, with load balancing etc. across as many servers as you want. @ned-kelly would know, but he’s on walkabout :smile:, so I’ll bug him when he’s back this week. @gusaus, have you used Docker much?

Thanks @frecuencialibre. My main interest is helping enable something like LibreTime on AWS as an option for stations without a skilled sysadmin to set it up and maintain it. As it seems like a couple of folks are currently interested in or working on this, possibly there’s enough overlap for everybody to join forces and pick up here: https://github.com/LibreTime/libretime/issues/439

I’m running LibreTime on AWS for a customer, and I’d be happy to have a quick Skype call about your thoughts here before you dive down the rabbit hole. I have worked heavily with AWS as a partner for 6+ years now, so I know the inner workings of both the platform and the company. Sent you a PM.

Thanks @ned-kelly. This thread may (or may not) provide some additional background on what I’d like to help enable: Any interest in LibreTime available as a SaaS with hosting and support?

I’ll be online in our new Slack room tomorrow to discuss options/potential next steps with you and anyone else who might have an interest.

https://join.slack.com/t/libretime/shared_invite/enQtNDQ4OTk0Nzc4NzU0LWY2MTE3ZTEwMGJlMjMxZmU2ZDRiNTgwMTM2MDllMzRkZTNjM2YzNmEwYTY1Y2ZlMDI4NmMxNTM4YjQ5YjYwNmU

Cheers!

Docker builds the containers and Kubernetes orchestrates them. K8s comes with quite a few nice orchestration features, like init containers, which are the perfect place to run database setup/migrations.
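
For illustration, this is the kind of thing a LibreTime init container would run before the web container starts; the real thing would live in the initContainers section of a K8s manifest, and the database host/name/user here are placeholders:

```bash
# Block until postgres answers, then apply pending migrations.
until pg_isready -h "$DB_HOST"; do sleep 2; done
psql -h "$DB_HOST" -U airtime -d airtime -f upgrade.sql
```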

My personal Docker images at https://github.com/hairmare/centos-docker-libretime don’t need very much work to run on Kubernetes; I’ve just never found the time to prioritize work on them, since they have no relevance at my station for the time being.

If you want to use them (or @ned-kelly’s images) without K8s, you will be missing out on some of the orchestration features. Parts of those aren’t really relevant anyway, because Icecast is very much not container-ready at this point.

Getting a grasp on running Icecast on K8s/OpenShift has been on my bucket list for quite a while now. IMO the Kubernetes SIG-Apps would greatly benefit from a member who focuses on audio use cases.