Docker revisited: OpenHAB 2.0 on Synology

[Image: OpenHAB 2.0 Paper UI]

After testing OpenHAB 1.7.0 on Synology, it came to my mind that a working Docker setup would now be a great environment for trying out the daily snapshots of the OpenHAB 2.0 alpha. Its new GUI, the Paper UI, looks quite nice, and with its built-in autodetection one could configure at least some devices without fiddling with config files. I confess that as much as I love hacking and tweaking, I'd still prefer devices to "just work". Fine-tuning can come later, but it would be nice if I could play with devices as much as possible by just plugging them in, and in this regard OpenHAB is heading in the right direction with its Paper UI.

Unfortunately there were no public Docker images for OpenHAB 2.0, so I had to roll my own. Here are the links to the Docker Hub page and the GitHub repository.

The image can be downloaded with the command:

docker pull wetware/openhab2

For further instructions on configuring and running the container, please see the Docker Hub and GitHub pages above.
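For quick orientation, starting the container looks much like the OpenHAB 1.x example later in this post. A minimal, hypothetical invocation (the exact volume mounts and additional ports the image expects are documented in the GitHub README, so treat this only as a sketch):

docker run -d --name openhab2 -p 8080:8080 wetware/openhab2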

Information about the configuration and runtime changes between OpenHAB 1.x and 2.0 can be found here.

Auto-detection of devices with UPnP

Unfortunately, not everything works out of the box when using Docker containers, due to missing multicast support. Auto-discovery of new devices in OpenHAB 2.0's new Paper UI mainly uses the UPnP protocol. A protocol called AllJoyn is also listed, but I'm not that familiar with it, although its list of supported devices and services seems interesting (especially Spotify).

Detection is done by sending UDP discovery messages to the multicast address 239.255.255.250:1900. Other UPnP devices (such as the Philips Hue hub) will respond with a message to this same address. Sending the UDP multicast messages works correctly from the container, but receiving them requires support from Docker for enabling multicasting on the container's network interface, which is not yet implemented (7/2015). You can follow the discussion at the GitHub issue page. There are two workarounds available:

  • Run the container with the --net=host option. This will use the host's network interface instead of creating a separate one for the container. In practice it maps all container ports 1:1 to the host and enables the container to receive multicast UDP messages (see the sketch after this list).
  • Run the container with the --net=none option. This defers creating the network interface during startup. Then, on the host, use pipework to create the container-side network interface with IFF_MULTICAST set:
pipework docker0 -i eth0 CONTAINER_ID IP_ADDRESS/IP_MASK@DEFAULT_ROUTE_IP
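For the first workaround, the run command is just the normal one with the networking option changed; a minimal sketch (volume mounts omitted, and note that -p mappings are unnecessary with --net=host since all ports are shared with the host anyway):

docker run -d --net=host wetware/openhab2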

BTW, if you have a Philips Hue, auto-detection of the lights and other devices connected to the hub might not work automatically, since the hub must first be paired with OpenHAB. The Paper UI does not currently say this explicitly, but after the hub is detected, OpenHAB waits for the user to press the pairing button on the hub. Before the button is pressed, no devices can be found.

Feedback

As with the previous OpenHAB Docker image, I’d be happy to receive any comments, bug reports or other feedback related to running OpenHAB 2.0 on Docker!

Setting sails with Docker: OpenHAB on Synology

My weekend project has been Docker. Unless you have been living in a barrel in the middle of the Sahara, you might have heard of it. For those who haven't, please DO check it out – it's almost the best thing since sliced bread. Up until now I really hadn't had the time, nor a use case where Docker containers would be useful, but recently a few incidents set the ball in motion and I decided to jump on the Docker bandwagon.

First was the trouble with Raspberry Pi SD cards. I thought I would be spared from the rampant file system corruption when I bought a supposedly compatible and reliable Kingston SD card and then made sure that I don't EVER turn it off without an explicit shutdown command from the command line. The OpenHAB and Z-Way servers were also configured not to trash the SD card with excessive logging (by moving /var/log to memory using tmpfs). Still, I somehow managed to corrupt the file system on three separate occasions. It was not that funny to discover after rebooting that connections to outside servers were crippled. After some head-scratching I found out that /lib/libresolv.so had just vanished! After re-installing and re-configuring the whole system again, it wouldn't reboot anymore. This time the culprit was a missing /etc/inittab … You can guess my frustration. The corruption may have been caused by the shutdown scripts hanging before the root filesystem was unmounted, so that data got corrupted when I pulled the plug. Just a hypothesis – I haven't had trouble since I started shutting down the RPi directly from the console while viewing the logs on a monitor hooked up to the RPi, making sure the system really is halted before switching it off. Solution-wise, there are of course ways to make the file system read-only, but the whole incident made me wonder whether I should migrate from the RPi to something more robust.

I have a Synology 1812+ 8-bay NAS that serves as a local network provider for music, movies and personal files, as well as a backup target for various devices (such as my desktop computer and laptop). I once tried hacking with it, but I remember having trouble installing even such basic stuff as multi-threaded Perl on the Synology. Exercises in trying to compile anything more complex than "Hello World" met resistance in the form of dozens of missing library dependencies. The custom Linux distro that Synology uses is quite limited, and I wasn't prepared to brute-force it open with the possibility of breaking stuff and data loss lingering in my mind. It just wasn't worth the fight at that time.

It all changed when I heard the news about the new Synology DSM 5.2 having Docker on its application center as a turn-key solution. Now we’re getting somewhere!

Preparing the Synology

First I needed to update to DSM 5.2. That wasn't exactly straightforward, since for some reason the update function of the web-based DSM didn't work ("Connection failed. Please check you network connection."). There was nothing wrong with my network settings, and this turned out to be a known problem with some versions of DSM 4.2. Nor did the instructions on how to update it manually via the shell work. My last resort was to boot the Synology into network update mode, and only then could the firmware be updated to 5.2 from a desktop computer with a pre-downloaded DSM file. Obviously, backups were made of all important files before that.

After that I enabled the SSH shell (it's there in DSM's Control Panel / Terminal & SNMP window), logged in and installed a bootstrap script to enable ipkg, in order to install vim, bash and other more familiar Unix tools.
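For reference, once the bootstrap script has set up ipkg (the script itself depends on your DSM version and CPU architecture, so check the instructions for your particular model), installing the extra tools goes roughly like this:

ipkg update
ipkg install vim bash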

I use three 3 TB Western Digital drives in a RAID 5 configuration to serve most of my files, but I had an extra 120 GB SSD lying around that I decided to dedicate to Docker, since there's never enough speed when moving around these few-hundred-megabyte containers. Since the containers themselves are easily downloadable and disposable, I don't need to worry about backups so much – only the small data volumes that contain user data and configuration files need to be backed up to the main RAID volume with a regular cron script, so I can run Docker from a single-disk configuration.
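As a sketch of what that backup script could look like, a nightly rsync from the SSD volume to the RAID volume is enough; the paths below are just examples, and the line follows Synology's /etc/crontab format with a user field:

# /etc/crontab entry: copy the Docker data volumes to the RAID volume every night at 03:00
0       3       *       *       *       root    rsync -a --delete /volume3/openhab/ /volume1/backup/openhab/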

Docker can be installed from DSM's application manager, and even though it's not the newest version (1.6.2 on Synology vs. 1.7.0 on GitHub), there don't seem to be big changes according to the changelog, so I took the route of least resistance and used the app manager to install Docker. BTW, one nice tutorial for Docker on Synology can be found here, showing how to install GitLab and Jenkins on Synology as Docker containers. I personally tried GitLab, but it is a bit of a memory hog (taking several hundred megabytes of the 1 GB of memory on the 1812+), so I chose to use the more lightweight Gogs. It too can be found as a Docker container (my pick was the one by codeskyblue).

Installing the OpenHAB Docker container

I first tried the most popular OpenHAB container, from tdecker, but found that it had a bug with managing addons (though easily fixed) and that there was no easy way to turn on debugging and logging, so I customized my own Docker container (wetware/openhab on Docker Hub, wetwarelabs/docker-openhab on GitHub). I also replaced JDK 1.7 with JRE 1.8 for slightly smaller images and faster execution.

The container can be downloaded with the command:

docker pull wetware/openhab

Directories for config files and logs are created (here on my new SSD volume):

mkdir /volume3/openhab
mkdir /volume3/openhab/configurations
mkdir /volume3/openhab/logs

Then it is easy to map /volume3/openhab as a new Samba network share on a desktop computer if you want to remotely edit the configuration files or follow the logs (I did that previously on the Raspberry too).

Instructions for configuring and running the container can be found on both the GitHub and Docker Hub pages. You can then either copy your existing OpenHAB configuration (from a Raspberry etc.) to the configurations directory, or run OpenHAB in demo mode.

After configuring the addons and timezone, OpenHAB can be run with the command:

docker run -d -p 8080:8080 -p 9001:9001 -v /volume3/openhab/configurations/:/etc/openhab -v /volume3/openhab/logs:/opt/openhab/logs wetware/openhab

This maps the configuration and log directories from the host to the container, and allows access to Supervisor (on port 9001) and the OpenHAB web page (on port 8080).

If you then want to monitor OpenHAB's running status or switch between normal and debug mode, you can do it from the Supervisor web page (http://your.host:9001).
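The container's stdout can also be followed directly from the Synology shell with the standard Docker CLI (nothing image-specific here):

docker logs -f CONTAINER_ID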

Aftermath

Migrating OpenHAB from the Raspberry Pi to the Synology has been quite pain-free, and for the past few days it has been running happily inside the container. The extra CPU power doesn't hurt either when running the complex Java beast that OpenHAB is. Of course Mosquitto and mqttwarn are still running on the Raspberry, but I think I'll try converting them to Docker containers as well in the near future. That would leave the Z-Way server and the Razberry as the sole inhabitants of my RasPi, but I think there might be workarounds to get them running on Synology too..!

Anyway, I hope you too will try out the OpenHAB container, and let me know if you have any issues, regardless of whether you have a Synology or not!

Wall switch wonderland

[Image: Z-Wave.me WALLC-S wall switch]

In my previous post I explained how to connect Z-Wave (Plus) wall plugs to OpenHAB via MQTT with the help of mqttwarn and the Z-Way server. Today it is time to do the same with Z-Wave.me WALLC-S wall switches. Some people have been able to get these to work just fine, but I stumbled across quite a few posts where people were having trouble configuring them, especially in DIY installations such as OpenHAB.

These too are Z-Wave Plus (Gen 5) devices that require SECURITY command class functionality from the Z-Wave server, and thus there was no way to use them with OpenHAB's Z-Wave stack. I did try to turn off the security functionality but in the end couldn't figure out how to do that. Nonetheless, the Z-Way server from Z-Wave.me (which we hacked a bit in the previous post) works with these switches just fine. I also like the ability to decouple the handling of Z-Wave communication from OpenHAB and use MQTT as the interface for as many devices as possible.

The WALLC-S can act as a basic on/off switch, a dimmer, as well as a scene switch. I didn't want to program any kind of scene handling into the devices but instead let OpenHAB handle all the logic, so dumb on/off and dimmer functionality is enough for my needs.

[Image: WALLC-S control group configuration]

After Z-Wave device inclusion we must first add the Z-Way server (more accurately, the Razberry) to the WALLC-S's control groups A and B (to receive notifications about button #1 and #3 presses) as well as to its "Life line" group (to receive update and battery status messages). Note that we must associate each WALLC-S with a unique instance of the Razberry to differentiate between the switches. So WALLC-S #1 (and #2) is added to instance #1 (and #2, respectively) of control groups A and B.

Then we have to configure the button behaviour. I have a single paddle on both of the switches, so I set the switches to pair mode (the uppermost buttons 1 and 2 work as a pair, as do the lower-row buttons 3 and 4). Actually, separate mode would work just as well, since we are not interested in the control group C and D messages.

Finally, for both control groups we configure the command type that is sent to the Razberry, in this case "Switch On/Off and Dim".

Another Z-Way JSON debugging session…

Now when buttons #1 and #3 of WALLC-S #1 are pressed, the following data structures are updated:

[2015-07-11 01:24:02.944] [D] [zway] SETDATA devices.1.instances.1.commandClasses.32.data.srcNodeId = 22 (0x00000016)
[2015-07-11 01:24:02.945] [D] [zway] SETDATA devices.1.instances.1.commandClasses.32.data.srcInstanceId = 0 (0x00000000)
[2015-07-11 01:24:02.945] [D] [zway] SETDATA devices.1.instances.1.commandClasses.32.data.level = 255 (0x000000ff)
.
.
[2015-07-11 01:25:24.109] [D] [zway] SETDATA devices.1.instances.1.commandClasses.32.data.srcNodeId = 22 (0x00000016)
[2015-07-11 01:25:24.109] [D] [zway] SETDATA devices.1.instances.1.commandClasses.32.data.srcInstanceId = 0 (0x00000000)
[2015-07-11 01:25:24.110] [D] [zway] SETDATA devices.1.instances.1.commandClasses.32.data.level = 0 (0x00000000)

So the command class 32 (Basic) level is 255 for a button #1 press and 0 for a button #3 press.

Battery levels are received like this:

[2015-07-11 01:31:40.961] [D] [zway] SETDATA devices.22.data.lastReceived = 0 (0x00000000)
[2015-07-11 01:31:40.961] [D] [zway] SETDATA devices.22.instances.0.commandClasses.128.data.history.96 = 1436567500 (0x55a047cc)
[2015-07-11 01:31:40.962] [D] [zway] SETDATA devices.22.instances.0.commandClasses.128.data.last = 96 (0x00000060)

When button #1 is kept pressed (dimmer up)..

[2015-07-11 15:21:11.145] [D] [zway] SETDATA devices.19.data.lastReceived = 0 (0x00000000)
[2015-07-11 15:21:11.146] [D] [zway] SETDATA devices.1.instances.2.commandClasses.38.data.srcNodeId = 19 (0x00000013)
[2015-07-11 15:21:11.146] [D] [zway] SETDATA devices.1.instances.2.commandClasses.38.data.srcInstanceId = 0 (0x00000000)
[2015-07-11 15:21:11.146] [D] [zway] SETDATA devices.1.instances.2.commandClasses.38.data.startChange = True

..and released:

[2015-07-11 15:21:12.650] [D] [zway] SETDATA devices.19.data.lastReceived = 0 (0x00000000)
[2015-07-11 15:21:12.650] [D] [zway] SETDATA devices.1.instances.2.commandClasses.38.data.srcNodeId = 19 (0x00000013)
[2015-07-11 15:21:12.650] [D] [zway] SETDATA devices.1.instances.2.commandClasses.38.data.srcInstanceId = 0 (0x00000000)
[2015-07-11 15:21:12.650] [D] [zway] SETDATA devices.1.instances.2.commandClasses.38.data.stopChange = Empty

Similarly, when button #3 is kept pressed (dimmer down)..

[2015-07-11 15:25:11.367] [D] [zway] SETDATA devices.19.data.lastReceived = 0 (0x00000000)
[2015-07-11 15:25:11.368] [D] [zway] SETDATA devices.1.instances.2.commandClasses.38.data.srcNodeId = 19 (0x00000013)
[2015-07-11 15:25:11.368] [D] [zway] SETDATA devices.1.instances.2.commandClasses.38.data.srcInstanceId = 0 (0x00000000)
[2015-07-11 15:25:11.368] [D] [zway] SETDATA devices.1.instances.2.commandClasses.38.data.startChange = False

..and released:

[2015-07-11 15:25:12.440] [D] [zway] SETDATA devices.19.data.lastReceived = 0 (0x00000000)
[2015-07-11 15:25:12.440] [D] [zway] SETDATA devices.1.instances.2.commandClasses.38.data.srcNodeId = 19 (0x00000013)
[2015-07-11 15:25:12.440] [D] [zway] SETDATA devices.1.instances.2.commandClasses.38.data.srcInstanceId = 0 (0x00000000)
[2015-07-11 15:25:12.440] [D] [zway] SETDATA devices.1.instances.2.commandClasses.38.data.stopChange = Empty

So stopChange indicates a button release, and startChange is true for dimmer up and false for dimmer down. Note that the actual values will linger in the JSON tree long after the buttons have been released, and stopChange doesn't seem to ever change its value; only the updateTime tag is updated. Nonetheless, we don't have to check for changes in the values, but can just bind to the JSON tree to get notifications whenever the tags are updated, just like in the previous post.

I found out that the Z-Way server has no functionality for holding a dimmer value, nor is there any way to automatically get notifications at regular intervals that a button is still being held down, so we have to implement our own notification mechanism with timers in JavaScript. As for the actual dimmer value (e.g. 0-100%), we let OpenHAB take care of that.

Hooking Z-Way server…

The amount of our extra code in the Z-Way server's main.js is getting bigger (see the previous post), so we move it to a separate file (mqtt.js) and just put this at the end of main.js:

    executeFile("mqtt.js");

In mqtt.js we define the following (in addition to the wall plug code explained in the previous post):


// Here id:19 and id:22 are hardcoded Z-Wave device IDs. Change them (and the associated instance IDs) according to your setup.
var dimmers = [];
dimmers.push( { id:19, instance:1, timer:null, timercount:0 } );
dimmers.push( { id:22, instance:2, timer:null, timercount:0 } );

var dimmer_publish_interval = 100;  // in milliseconds

function getById(id, myArray) {
        for ( var i=0; i<myArray.length; i++) {
                if(myArray[i].id == id) {
                         return myArray[i];
                }
        }
        return null;
}

function battery_level_publish (device, theValue) {
        var eventString = 'Device' + device + "/battery";
        publish_mqtt(eventString, theValue);
}

function wallswitch_binary (device, theValue) {
        var eventString = 'Device' + device + "/wallswitch/binary";
        var state = 'on';
        if (theValue == false){
                state = 'off';
        }
        publish_mqtt(eventString, state);
}

function wallswitch_dimmer_publish (device, theValue) {
        var eventString = 'Device' + device + "/wallswitch/dimmer";
        var state = 'increase';
        if (theValue == false){
                state = 'decrease';
        }
        publish_mqtt(eventString, state);
}

function wallswitch_dimmer_start (device, theValue) {
        wallswitch_dimmer_publish (device, theValue);
        var dimmer = getById(device, dimmers);  // use var so each timer callback closes over its own dimmer object
        if (dimmer != null)
        {
                if (dimmer.timer != null) {
                        clearInterval(dimmer.timer);
                }
                dimmer.timercount = 0;
                dimmer.timer = setInterval(
                        function() {
                                wallswitch_dimmer_publish (device, theValue);
                                dimmer.timercount++;
                                if (dimmer.timercount>20) {
                                        // this is to stop sending updates eventually if for some reason the "dimmer_stop" 
                                        // message is not received and the dimmer gets "stuck"
                                        clearInterval(dimmer.timer);
                                }
                        }, dimmer_publish_interval);
        } else
        {
                console.log("dimmer not found!");
        }
}


function wallswitch_dimmer_stop (device, theValue) {
        var dimmer = getById(device, dimmers);
        if (dimmer != null)
        {
                if (dimmer.timer != null)
                        clearInterval(dimmer.timer);
        }
}

// Binding to WALLC-S devices
for (var i=0; i < dimmers.length; i++) {

        var id = dimmers[i].id;
        (function(devid) {
                console.log("MQTT plugin: Configure dimmer " + devid);
                zway.devices[1].instances[ dimmers[i].instance ].commandClasses[38].data.startChange.bind(function() {
                        wallswitch_dimmer_start ( devid, this.value);
                });
                zway.devices[1].instances[ dimmers[i].instance ].commandClasses[38].data.stopChange.bind(function() {
                    wallswitch_dimmer_stop( devid, this.value);
                 });
                zway.devices[1].instances[ dimmers[i].instance ].commandClasses[32].data.level.bind(function() {
                    console.log("MQTT plugin: wallswitch #" + devid + ": binary " + this.value);
                    wallswitch_binary (devid, this.value);
                 });
                zway.devices[ dimmers[i].id ].instances[0].commandClasses[128].data.last.bind(function() {
                    battery_level_publish (devid, this.value);
                 });
        })(id); // tie device ID so it is referenced correctly from callback funcs
}
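An easy way to verify the whole chain is to subscribe to the topic tree on the MQTT broker (here assuming Mosquitto is reachable on its default port; replace localhost with whatever host your broker runs on):

mosquitto_sub -v -h localhost -t 'home/zwave/#'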


When the buttons are then pressed, the following MQTT messages are seen:

home/zwave/Device22/wallswitch/binary on
home/zwave/Device22/wallswitch/dimmer increase
home/zwave/Device22/wallswitch/dimmer increase
home/zwave/Device22/wallswitch/dimmer increase
home/zwave/Device22/wallswitch/dimmer increase
home/zwave/Device22/wallswitch/dimmer increase
home/zwave/Device22/wallswitch/dimmer increase
home/zwave/Device22/wallswitch/dimmer decrease
home/zwave/Device22/wallswitch/dimmer decrease
home/zwave/Device22/wallswitch/dimmer decrease
home/zwave/Device22/wallswitch/dimmer decrease
home/zwave/Device22/wallswitch/dimmer decrease
home/zwave/Device22/wallswitch/binary off
home/zwave/Device22/battery 96

It works! 🙂

OpenHAB configuration

We can then define the items in the .items file..

Switch Kitchen_Light_Switch "Kitchen light switch" (GF_Kitchen)  {mqtt="<[mosquitto:home/zwave/Device19/wallswitch/binary:state:MAP(wallswitchFromMqtt.map)]"}
String Kitchen_Light_Dimmer "Kitchen light dimmer" (GF_Kitchen)  {mqtt="<[mosquitto:home/zwave/Device19/wallswitch/dimmer:state:MAP(wallswitchFromMqtt.map)]"}
Number Kitchen_Light_Switch_Battery "Kitchen light switch battery [%.0f %%]" (gBattery,GF_Kitchen) {mqtt="<[mosquitto:home/zwave/Device22/battery:state:default]"}

And add the appropriate rules in the .rules file:

import org.openhab.model.script.actions.Timer

var Timer KitchenTimer = null
var Integer manualTimeOut = 1800

rule "Kitchen Light Switch"
        when
        Item Kitchen_Light_Switch received update
        then
{
    logInfo("Kitchen Light Switch", "button changed state ("+Kitchen_Light_Switch.state +")")
    if (Kitchen_Light_Switch.state==ON)
        {
        sendCommand(Kitchen_Light_Toggle, ON)  
        sendCommand(Kitchen_Light_CT_Dimm, Color_Temperature.state as DecimalType)
        if (KitchenTimer == null )
            {                       
            // create timer
            logInfo("Kitchen Light Switch", " creating timer for " + manualTimeOut + "sec" )
                        
            KitchenTimer = createTimer( now.plusSeconds(manualTimeOut) )
                [ 
                logInfo("Kitchen Light Switch", "timer expired, switching off ")
                sendCommand(Kitchen_Light_Toggle,OFF)       
                KitchenTimer=null
                ]               
            }
        else
            {
                logInfo("Kitchen Light Switch", " rescheduling timer" )
                KitchenTimer.reschedule(now.plusSeconds(manualTimeOut))
            }
        }
    else
        {
        sendCommand(Kitchen_Light_Toggle, OFF)                  
        if (KitchenTimer != null )
            KitchenTimer.cancel
        KitchenTimer=null
        }
}
end

rule "Kitchen Light Dimmer"
    when
        Item Kitchen_Light_Dimmer received update
    then
{
    logInfo("Kitchen Light Dimmer", "changed state ("+Kitchen_Light_Dimmer.state +")")
    if (Kitchen_Light_Dimmer.state=="INCREASE")
        {
        sendCommand(Kitchen_Light_Dimm, INCREASE)
        }
    else
        {
        sendCommand(Kitchen_Light_Dimm, DECREASE)
        }
}
end

Also add the transformation map wallswitchFromMqtt.map:

on=ON
off=OFF
increase=INCREASE
decrease=DECREASE

With this configuration the setup works nicely, except that there's a 500-600 millisecond delay between a button press and a change in lighting, most of which can be attributed to Z-Wave (delay inside the WALLC-S, the Z-Wave transceiver, decoding and the Z-Way server). Once our JSON binding callback is called, the MQTT message is sent nearly instantaneously, and the Philips Hue lights also react quite quickly. I'll have to look into whether there's any way to speed it up a bit in the future.