A few weeks ago I went to load my feed reader and my web server was down. A minute later it came back. After some investigation I discovered it had rebooted because of a hard drive glitch. Logging into the device, I found an alarming number of failures.
[07:36]odroid:/$ ll
total 1.1M
-rw-r--r-- 1 root root 31 Dec 16 2019 2019-12-16_00:21_Drive_Failure
-rw-r--r-- 1 root root 31 Dec 26 2019 2019-12-26_21:21_Drive_Failure
-rw-r--r-- 1 root root 31 Jan 20 2020 2020-01-20_00:21_Drive_Failure
-rw-r--r-- 1 root root 31 Apr 6 2020 2020-04-06_00:21_Drive_Failure
-rw-r--r-- 1 root root 31 Apr 9 2020 2020-04-09_08:21_Drive_Failure
-rw-r--r-- 1 root root 31 Apr 9 2020 2020-04-09_08:32_Drive_Failure
-rw-r--r-- 1 root root 31 Jun 27 2020 2020-06-27_05:47_Drive_Failure
-rw-r--r-- 1 root root 31 Jan 11 2021 2021-01-11_10:04_Drive_Failure
-rw-r--r-- 1 root root 31 Jun 28 2021 2021-06-28_00:21_Drive_Failure
-rw-r--r-- 1 root root 31 Mar 14 2022 2022-03-14_00:21_Drive_Failure
-rw-r--r-- 1 root root 31 Apr 8 2022 2022-04-08_09:59_Drive_Failure
-rw-r--r-- 1 root root 31 May 29 2022 2022-05-29_13:46_Drive_Failure
-rw-r--r-- 1 root root 31 May 29 2022 2022-05-29_15:28_Drive_Failure
-rw-r--r-- 1 root root 31 Jan 16 15:48 2023-01-16_15:48_Drive_Failure
-rw-r--r-- 1 root root 31 Jan 19 12:28 2023-01-19_12:28_Drive_Failure
drwxr-xr-x 2 root root 4.0K Nov 30 06:18 bin
drwxr-xr-x 2 root root 96K Mar 20 2021 blog
drwxr-xr-x 2 root root 4.0K Sep 14 06:16 boot
-rw-r--r-- 1 root root 136K Mar 20 2021 complete_index.gmi
drwxr-xr-x 17 root root 15K Sep 5 20:18 dev
Digging further, I rediscovered a script I wrote years ago that creates these marker files and reboots the system whenever a drive issue forces the filesystem to remount read-only. This band-aid worked so well I completely forgot it existed.
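The idea boils down to a few lines - something like this run from cron (a sketch from memory, not the script itself; the marker path and the remount attempt are my reconstruction):

#!/bin/bash
# If a drive error has forced the root filesystem read-only, leave a
# timestamped marker and reboot.
if grep -qE '^\S+ / \S+ ro[ ,]' /proc/mounts; then
    mount -o remount,rw / 2>/dev/null || true  # best effort, so the marker can be written
    date > "/$(date +%F_%H:%M)_Drive_Failure"
    /sbin/reboot
fi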
I've been meaning to update the server for a while now anyway. Most of the software on that machine was ancient. It was running Django v2.1.2 (we're currently at v4.1.6), a horrible version of Tiny Tiny RSS (that's a joke, son - every version of tt-rss is horrible), and a gitea that required logging into the database after every push to toggle a bit that would make the repo visible.
I wanted newer software and I wanted a clear update path. Having a single system hosting everything is fine, but I was hesitant to change anything for fear of breaking it. Building a new system from scratch would let me document the process, and when it was ready I could just switch over.
I started by trying to get it working in a rootless podman container. The goal was to have something portable I could just drop onto another system. I had some success but it was slow going - especially getting all the various volumes working together. When I had another drive failure three days later I decided to abandon my efforts to make it fully portable and just do it the old way. Fortunately I already had an SBC lined up and ready to go.
Meet armadillo - so named because of the blue protective heat sink.
This is an ODROID-M1 with 8GB of onboard RAM and a 1TB M.2 drive. It cost about $200 in total and it runs like a dream, drawing 4.4W at max load.
I've been running all my web servers on a single SBC for more than a decade now - generally as a single nginx instance running as a reverse proxy for a bunch of services. Armadillo is now running nginx, seven services, four databases, and my internet of things instance (iota). I plan on adding a matrix server in the future.
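Every service follows the same pattern: nginx terminates TLS and proxies to a local port or unix socket. A representative server block (the subdomain and backend port here are made up for illustration; the certificate paths are certbot's standard layout):

server {
    listen 443 ssl;
    server_name feeds.trousermonkey.net;  # hypothetical subdomain
    ssl_certificate /etc/letsencrypt/live/trousermonkey.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/trousermonkey.net/privkey.pem;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8080;  # or a unix socket: http://unix:/run/app.sock;
    }
}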
During the rebuild I kept detailed notes on every issue I ran into. Here are some of the highlights:
I compiled nginx from source because the version in the repos was too old. Here's the configure command I used on Ubuntu 20.04:
./configure --prefix=/var/www/html \
--sbin-path=/usr/sbin/nginx \
--conf-path=/etc/nginx/nginx.conf \
--http-log-path=/var/log/nginx/access.log \
--error-log-path=/var/log/nginx/error.log \
--with-pcre \
--lock-path=/var/lock/nginx.lock \
--pid-path=/var/run/nginx.pid \
--with-http_ssl_module \
--modules-path=/etc/nginx/modules \
--with-http_v2_module \
--with-stream=dynamic \
--with-http_addition_module \
--with-http_mp4_module
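After that it's the standard build and install - though make sure the pcre, zlib, and ssl headers are present first or configure will complain:

sudo apt install build-essential libpcre3-dev zlib1g-dev libssl-dev
make
sudo make install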
The default nginx service file has an evil setting:
PrivateTmp=true
This systemd setting is added as a security feature, but it gives the service its own private /tmp - anything you put in the real /tmp is invisible to it. So when you're trying to establish a reverse proxy connection to a unix socket, your socket file can exist with the proper permissions and nginx will still be unable to see it. I spent hours putting the unix socket in various places, setting various permissions, and trying various things before I discovered this stupid option.
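One fix is a drop-in override (or just keep your sockets somewhere like /run instead of /tmp):

sudo systemctl edit nginx

Add these two lines to the override file it opens, then restart nginx:

[Service]
PrivateTmp=false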
uwsgi won't drop to the configured user and group when you run it from the command line (it only does that when run from a service file):
sudo /home/na/.pyenv/versions/3.10.4/bin/uwsgi --emperor /tr/trousermonkey/uwsgi.ini
This only contributed to the confusion surrounding the PrivateTmp issue.
uwsgi bills itself as an application server with support for a variety of backends, including python and php. There is next to no documentation on how to set it up to serve php applications, and absolutely nothing about whether it can handle both python and php in the same pool. I tried to get this going, then gave up and used php-fpm. This is another project that could use better documentation - or even just a few examples.
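For reference, the php-fpm hookup on the nginx side is only a few lines. The socket path below is the Ubuntu 20.04 default for php7.4-fpm - adjust for your version:

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php7.4-fpm.sock;
}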
Make sure to install libpcre3-dev before doing a pip install of uwsgi, or else it will build without PCRE support and your services will fail.
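If you've already installed it without pcre, force a clean rebuild - pip will happily reuse its cached build otherwise:

sudo apt install libpcre3-dev
pip install --no-cache-dir --force-reinstall uwsgi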
I was using a tag cloud plugin for django. I really liked it, but apparently I was alone. The last time I did a system update I just barely managed to keep it running, but with this django update it was now past deprecated and into super-duper deprecated. As much as I liked having a tag cloud, none of the replacements had the features I wanted, so I decided to just remove it. This required a database migration. In the past I'd have just manually run a drop table and rebuilt from my fixture file, but this time I decided to do it the official way and use migration files. The trick is to run this command before you change your models:
./manage.py migrate --fake-initial
Then modify your models and run:
./manage.py makemigrations
./manage.py migrate
Because my models had already been changed, I had to revert some git commits to get this to work, but on the plus side the migration cleaned up a bunch of old ad-hoc changes to the schema.
It's been years since I set up a gitea instance, and the documentation is still unclear about the launch configuration. The first half of the online docs is thorough and clear, and then it just runs out. I had to figure out the rest from the source code. Spawn the application with:
sudo -u git gitea web -p 3940 --custom-path <path>/gitea --config <path>/gitea/app.ini --work-path <path>/gitea
I had to read the source to discover that the work-path option sets the 'chunked upload directory' and the APP_DATA_PATH. It's working now, but I haven't gone to the trouble of uploading all my projects. At least it's running much better than it was.
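To survive reboots you'd want that wrapped in a systemd unit - roughly this sketch, with the binary location assumed and <path> as above:

[Unit]
Description=Gitea
After=network.target postgresql.service

[Service]
User=git
ExecStart=/usr/local/bin/gitea web -p 3940 --custom-path <path>/gitea --config <path>/gitea/app.ini --work-path <path>/gitea
Restart=on-failure

[Install]
WantedBy=multi-user.target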
Tiny Tiny RSS sucks and the developer is a jerk. I'm looking for a better self-hosted rss feed reader. yarr seems perfect but doesn't support authentication.
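Though nginx can bolt authentication onto anything it proxies. A minimal sketch - the htpasswd tool comes from apache2-utils, and 7070 is yarr's default port if I remember right:

sudo htpasswd -c /etc/nginx/.htpasswd someuser

location / {
    auth_basic "feeds";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://127.0.0.1:7070;
}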
Like many of these web services, the tiny tiny rss documentation refuses to tell you how to run the server outside of an isolated docker container. The code to set up the postgresql schema is deliberately broken to prevent you from doing it on your own. I used this to set up my database:
php ./update.php --update-schema
And I had to refer to the code to figure out how to specify the configuration options in a file rather than passing them in as arguments or environment variables - see the sketch below. Maybe I'll try some http authentication and give yarr another shot. Tiny tiny rss has a lot of features but I don't like it.
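For anyone else hunting: the file is config.php in the tt-rss root, and recent versions expect the options as putenv() calls using the same names as the environment variables (the values here are placeholders and the URL path is hypothetical):

<?php
putenv('TTRSS_DB_TYPE=pgsql');
putenv('TTRSS_DB_HOST=localhost');
putenv('TTRSS_DB_NAME=ttrss');
putenv('TTRSS_DB_USER=ttrss');
putenv('TTRSS_DB_PASS=changeme');
putenv('TTRSS_SELF_URL_PATH=https://trousermonkey.net/tt-rss');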
I somehow managed to corrupt the postgresql database while attempting to move the data directory to a different mount point. The solution was to completely purge the old installation, manually remove any directories apt refused to clean up, and reinstall.
sudo apt purge postgresql postgresql-12 postgresql-common postgresql-contrib postgresql-client-12 postgresql-client-common
<manually removed empty directories>
sudo apt install postgresql postgresql-12 postgresql-common postgresql-contrib postgresql-client-12 postgresql-client-common
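For next time: the supported way to move the data directory is to stop the server, copy it with permissions intact, and point data_directory at the new home. Something like this (the mount point is hypothetical):

sudo systemctl stop postgresql
sudo rsync -a /var/lib/postgresql/12/main /mnt/bigdisk/postgresql/12/
# set data_directory = '/mnt/bigdisk/postgresql/12/main' in /etc/postgresql/12/main/postgresql.conf
sudo systemctl start postgresql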
To its credit, tiny-tiny rss will step you through the initialization the first time you load up index.php, but be sure to change the admin password: it defaults to something trivial and won't let you remove that account.
Use a pip-installed certbot to generate certificates and handle renewals. I used:
sudo /home/na/.pyenv/versions/3.10.4/bin/certbot certonly --config /etc/letsencrypt/cli.ini -d trousermonkey.net --no-eff-email -vvv --dry-run
I configured it to use the webroot authenticator. This is better than the standalone authenticator because you don't have to shut down your webserver during a renewal, but making it work was made needlessly difficult by the code: certbot generates the challenge file in a hosted subdirectory, and when the remote agent fails to fetch that file, certbot deletes it without ever mentioning where it put it or what URL the agent tried. This makes it impossible to diagnose without -vvv.
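The thing to check is that nginx actually serves the challenge directory from the same webroot certbot writes into - the root below has to match whatever webroot-path is in your cli.ini:

location /.well-known/acme-challenge/ {
    root /var/www/html;
}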
The old webserver used certbot-auto to renew certificates and it was fiddly and complicated. The new certbot is pretty easy. To renew your domain now you just have to run:
sudo /home/na/.pyenv/versions/3.10.4/bin/certbot renew
And it automatically finds your configuration and everything else it needs to make that happen.
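Renew is a no-op unless a certificate is close to expiry, so something like this in root's crontab covers it:

0 4 * * * /home/na/.pyenv/versions/3.10.4/bin/certbot renew --quiet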
Now that the migration to the new system is mostly complete, I'm pretty happy with the results. It's even faster and easier to post content, and I have good notes on the pitfalls I'll run into the next time I have to do this, as well as steps to follow to keep things up to date.
I don't like the changes to tiny tiny rss but I suspect I'll either get used to them or modify the code to make it do what I want. Maybe I should take another look at freshrss or miniflux.
I don't think anyone but me reads this site, but let me know if you see any issues with the new installation. I think everything is still working. Hopefully the M.2 drive doesn't develop the kind of issues my last SSD did.