My weekend project has been Docker. Unless you have been living in a barrel in the middle of the Sahara, you have probably heard of it. For those who haven’t, please DO check it out – it’s almost the best thing since sliced bread. Until now I hadn’t had the time, nor a use case where Docker containers would be useful, but recently a few incidents set the ball in motion and I decided to jump on the Docker bandwagon.
First was the trouble with Raspberry Pi SD cards. I thought I would be spared from the rampant file system corruption: I bought a supposedly compatible and reliable Kingston SD card and made sure I never turned the Pi off without an explicit shutdown command from the command line. The Openhab and Z-Way servers were also configured not to trash the SD card with excessive logging (by moving /var/log to memory using tmpfs). Still, I somehow managed to corrupt the file system on three separate occasions. It was not much fun to discover after a reboot that connections to outside servers were crippled. After some head-scratching I found out that /lib/libresolv.so had simply vanished! After re-installing and re-configuring the whole system, it wouldn’t reboot anymore; this time the culprit was a missing /etc/inittab… You can guess my frustration. The corruption may have been caused by the shutdown scripts hanging before the root file system was unmounted, so that pulling the plug corrupted data. That is just a hypothesis – I haven’t had trouble since I started shutting down the RPi directly from the console while viewing the logs on a monitor hooked up to it, making sure the system really is halted before switching it off. Solution-wise, there are of course ways to make the file system read-only, but the whole incident made me wonder whether I should migrate from the RPi to something more robust.
I have a Synology 1812+ 8-bay NAS that serves music, movies and personal files to the local network, and acts as a backup target for various devices (such as my desktop computer and laptop). I once tried hacking on it, but I remember having trouble installing even basic things like multi-threaded Perl. Attempts to compile anything more complex than “Hello World” ran into dozens of missing library dependencies. The custom Linux distro that Synology uses is quite limited, and I wasn’t prepared to brute-force it open with the possibility of breaking things and losing data lingering in my mind. It just wasn’t worth the fight at the time.
It all changed when I heard the news about the new Synology DSM 5.2 having Docker on its application center as a turn-key solution. Now we’re getting somewhere!
Preparing the Synology
First I needed to update to DSM 5.2. That wasn’t exactly straightforward: for some reason the update function of the web-based DSM didn’t work (“Connection failed. Please check you network connection.”). There was nothing wrong with my network settings – this is a known problem with some versions of DSM 4.2. Nor did the instructions for updating manually via the shell work. My last resort was to boot the Synology into network update mode, and only then could the firmware be updated to 5.2 from a desktop computer with a pre-downloaded DSM file. Obviously, backups of all important files were made before that.
After that I enabled the SSH shell (it’s in DSM’s Control Panel / Terminal & SNMP window), logged in, and installed a bootstrap script to enable ipkg so I could install vim, bash and other more familiar Unix tools.
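Once the bootstrap is in place, the extra tools install with one command each. The package names below are the usual ipkg/Optware ones; what is actually available depends on the feed your bootstrap uses:

```shell
ipkg update          # refresh the package list from the feed
ipkg install vim     # a proper editor
ipkg install bash    # a more familiar shell
```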
I use 3 × 3 TB Western Digital drives in a RAID5 configuration to serve most of my files, but I had an extra 120 GB SSD lying around that I decided to dedicate to Docker, since there’s never enough speed when moving around these few-hundred-MB containers. Since the containers themselves are easily downloadable and disposable, I don’t need to worry about backups so much – only the small data volumes containing user data and configuration files need to be backed up to the main RAID volume with a regular cron script, so I can run Docker from a single-disk configuration.
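As a sketch, that cron job could be as simple as tarring the data directories over to the RAID volume. The paths and the helper function here are illustrative assumptions; adjust them to your own volume layout:

```shell
# Hypothetical backup helper: archive one Docker data directory from the
# SSD volume onto the main RAID volume. All paths are illustrative.
backup_volume() {
    src=$1    # e.g. /volume3/openhab   (data volume on the SSD)
    dst=$2    # e.g. /volume1/backup/docker   (on the RAID volume)
    mkdir -p "$dst"
    # Create a dated tarball of the whole data directory
    tar czf "$dst/$(basename "$src")-$(date +%Y%m%d).tar.gz" \
        -C "$(dirname "$src")" "$(basename "$src")"
}

# From cron this would run e.g. nightly:
# backup_volume /volume3/openhab /volume1/backup/docker
```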
Docker can be installed from DSM’s application manager, and even though it’s not the newest version (1.6.2 on Synology vs. 1.7.0 on GitHub), there don’t seem to be big changes according to the changelog, so I took the route of least resistance and used the app manager. By the way, one nice tutorial for Docker on Synology can be found here, showing how to install GitLab and Jenkins on Synology as Docker containers. I tried GitLab myself, but it is a bit of a memory hog (taking several hundred megabytes of the 1 GB of memory on the 1812+), so I chose the more lightweight Gogs. It, too, is available as a Docker container (my pick was the one by codeskyblue).
Installing Openhab Docker container
I first tried the most popular Openhab container, from tdecker, but found a bug with managing addons (though easily fixed) and no easy way to turn on debugging and logging, so I built my own Docker container (wetware/openhab on Docker Hub, wetwarelabs/docker-openhab on GitHub). I also replaced JDK 1.7 with JRE 1.8 for slightly smaller images and faster execution.
The container can be downloaded with command:
docker pull wetware/openhab
Directories for config files and logs are created (here on my new SSD volume):
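Assuming the same layout as the `docker run` command further below, something along these lines:

```shell
# Host-side directories for Openhab configuration files and logs,
# on the dedicated SSD volume (volume3 in my case)
OPENHAB_HOME="${OPENHAB_HOME:-/volume3/openhab}"
mkdir -p "$OPENHAB_HOME/configurations" "$OPENHAB_HOME/logs"
```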
Then it is easy to map /volume3/openhab as a new Samba network share on a desktop computer if you want to remotely edit the configuration files or follow the logs (I did the same previously on the Raspberry).
Instructions for configuring and running the container can be found on both the GitHub and Docker Hub pages. You can then either copy your existing Openhab configuration (from a Raspberry etc.) to the configurations directory, or run Openhab in demo mode.
After configuring the addons and timezone, Openhab can be run with the command:
docker run -d -p 8080:8080 -p 9001:9001 -v /volume3/openhab/configurations/:/etc/openhab -v /volume3/openhab/logs:/opt/openhab/logs wetware/openhab
This maps the configuration and logging directories from the host into the container, and exposes Supervisor (on port 9001) and the Openhab web page (on port 8080).
If you then want to monitor Openhab running status or switch between normal and debug mode, you can do it from Supervisor web page (http://your.host:9001).
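If you prefer the shell over the Supervisor page, the same information is available through the plain Docker CLI (substitute the container name that `docker ps` shows for yours):

```shell
docker ps                            # confirm the container is up and note its name
docker logs --tail 50 <container>    # recent stdout/stderr captured by Docker
docker exec -it <container> tail -f /opt/openhab/logs/openhab.log   # follow the Openhab log
```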
Migrating Openhab from the Raspberry Pi to the Synology has been quite painless, and for the past few days it has been running happily inside the container. The extra CPU power doesn’t hurt when running the complex Java beast that Openhab is, either. Of course, Mosquitto and mqttwarn are still running on the Raspberry, but I think I’ll try converting them to Docker containers as well in the near future. That would leave the Z-Way server and the Razberry board the sole inhabitants of my RPi, but there might be workarounds to get them running on the Synology too..!
Anyway, I hope you try out the Openhab container too – let me hear if you run into any issues, whether you have a Synology or not!