
Creating a docker swarm to monitor RPI operating temperatures with visualization

This post shows how to create a multi-architecture Docker swarm, using deployment constraints to pin services to specific architectures.

For the purposes of this post, we will be using simple images with both ARM and x86 variants.

The first step is to have a Raspberry Pi that is able to run Docker.

Prerequisites

I will not be getting into details on how to install HypriotOS or Raspbian on the RPI. If you want to install HypriotOS, follow their instructions at https://blog.hypriot.com/; HypriotOS is a very straightforward and simple way to get Docker running on your RPI in minutes.

* RPI 3 with HypriotOS installed
* Docker-machine installed (tested with version 0.14.0)
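Before going any further, it may be worth a quick sanity check that Docker is actually up on the RPI. Assuming your user is in the docker group, something like this should print both client and server versions:

ssh USER@RPI-IP-ADDRESS "docker version"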

What we will actually build

A temperature monitoring system for our RPIs :D, with Elasticsearch for data storage and Kibana for data visualization, using Python + Flask on the RPI

Setting up the RPI to be used with docker-machine

First we need to trick the docker-machine provisioning system into accepting the RPI OS.

So let's change the os-release of the RPI:

ssh USER@RPI-IP-ADDRESS

With any terminal editor (for example, nano):

sudo nano /etc/os-release

Change the line

ID=raspbian

To

ID=debian
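If you prefer a one-liner over opening an editor, a sed command on the RPI should do the same thing (assuming the line currently reads ID=raspbian):

sudo sed -i 's/^ID=raspbian/ID=debian/' /etc/os-release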

If you don't have SSH access via key yet, it's time to copy your public key to the RPI.

On your computer:

ssh-copy-id -i ~/.ssh/id_rsa.pub USER@RPI-IP-ADDRESS

If you don't use a Linux-based OS, please refer to PuTTY or whatever other means you use to SSH into your RPI.
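To confirm that key-based login works before handing the machine over to docker-machine, you can force SSH to fail instead of prompting for a password; this is just a quick check:

ssh -o BatchMode=yes USER@RPI-IP-ADDRESS true && echo "key auth OK"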

Now we are set up to create a docker-machine.

From a terminal, run:

docker-machine create --driver generic \
--generic-ip-address RPI-IP-ADDRESS \
--generic-ssh-key ~/.ssh/id_rsa \
--generic-ssh-user PI-USER \
--engine-storage-driver overlay \
rpi

Note that --generic-ssh-key expects the private key (~/.ssh/id_rsa), not the .pub file.

And you should now have a new docker-machine named rpi.

To test it:

docker-machine ls

This should give you a list of machines, in this case just one:

rpi
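As a quick check that the machine is usable, you can run a command on the RPI through docker-machine. Note the armv7l architecture, which we will rely on later for placement constraints:

docker-machine ssh rpi uname -m
# armv7l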

Creating Docker Swarm

Now let's create a swarm (your computer as the manager and the RPI as a worker).

You can scale this to as many RPIs as you wish, as long as you follow the process above for each one.

For simplicity, keep two terminal tabs open, as we will use a different Docker environment in each during this step.

Initializing the swarm is as easy as:

docker swarm init --advertise-addr IP-OF-INTERFACE

The --advertise-addr flag is required in most situations where multiple IPs are available (e.g. on eth0, wlo1, etc.).
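If you are not sure which address to pass, list the machine's IPv4 addresses first and pick the one on the same network as the RPI:

hostname -I
# or, per interface:
ip -4 addr show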

If all goes well, the command should return something like:

docker swarm join --token SWMTKN-1-2ip14zs1g7tjwd6afbd2y6gg8d0oi6zgjxnp819uhg7u4g0da9-cbr0jcaz4tf5yqe550gdi4acg 192.168.1.1:2377

You will need this token to allow other nodes to join the swarm as workers.

If you lose the token, you can retrieve it again with:

docker swarm join-token worker

Or, if you wish to have a node join as a manager:

docker swarm join-token manager

So now, let’s copy this line

On another terminal window, let's set the environment to make use of the Docker environment on the RPI:

eval $(docker-machine env rpi)

This will allow you to run docker commands as if you were in a shell on the RPI.
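For reference, this just sets the usual Docker client variables. The output of docker-machine env rpi will look something like the following (paths and IP will differ on your machine; YOUR-USER is a placeholder):

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://RPI-IP-ADDRESS:2376"
export DOCKER_CERT_PATH="/home/YOUR-USER/.docker/machine/machines/rpi"
export DOCKER_MACHINE_NAME="rpi"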

So let's join the swarm. Just paste the join line, and the docker-machine currently in use (rpi) will join the swarm.
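In other words, in the RPI-environment terminal, paste the join command from earlier, with your own token and manager IP:

docker swarm join --token SWMTKN-1-2ip14zs1g7tjwd6afbd2y6gg8d0oi6zgjxnp819uhg7u4g0da9-cbr0jcaz4tf5yqe550gdi4acg 192.168.1.1:2377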

To check, switch back to the terminal running your local Docker environment, now the manager node, and run:

docker node ls

This will list your nodes.
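The output should look roughly like this, with your computer as the leader and the RPI as a worker (IDs and the mycomputer hostname are illustrative):

ID             HOSTNAME      STATUS   AVAILABILITY   MANAGER STATUS
abc123... *    mycomputer    Ready    Active         Leader
def456...      rpi           Ready    Active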

To get more information on a node, you can always inspect it, just like inspecting a Docker container:

docker node inspect rpi

One important part of this inspection is that it shows you the values you can use with the constraints directive under a docker-compose service declaration.

For the example in this post, we will be interested in the following part of the output:

"Description": {
    "Hostname": "rpi",
    "Platform": {
        "Architecture": "armv7l",
        "OS": "linux"
    },
    "Resources": {
        "NanoCPUs": 4000000000,
        "MemoryBytes": 1024184320
    },

In particular, Platform.Architecture.
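If you only want that one value, docker node inspect accepts a Go template via --format, so you can pull the architecture out directly:

docker node inspect -f '{{ .Description.Platform.Architecture }}' rpi
# armv7l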

So now let's use docker-compose to create a deployable stack.

In fact, we will be using a base docker-compose file with our stack declaration, and a second file for our deploy strategy that adds/overrides a deploy declaration per service.

More on that later.

Creating Our RPI Operating Temperature Monitor

(A previous version of what we will build here is available at https://github.com/Ilhicas/docker-swarm-demo)

Create the following structure and files:

demo\
    app\
        app.py
        db.py
    requirements.txt
    Dockerfile
docker-compose.yml
deploy.yml

The contents of each file are below.

app.py

import threading
import time
import socket

import psutil
from flask import Flask, make_response

from db import db

app = Flask(__name__)

def temperature_thread():
    # Gather host stats plus the SoC temperature and ship them to
    # Elasticsearch, then re-arm the timer so this runs every 30s.
    logged = {}
    logged['hostname'] = socket.gethostname()
    logged['cpu_perc'] = psutil.cpu_percent()
    logged['mem_perc'] = psutil.virtual_memory().percent
    logged['@timestamp'] = time.time()
    logged['container_id'] = socket.gethostname()
    with open("/sys/class/thermal/thermal_zone0/temp") as temp:
        # The kernel reports millidegrees Celsius
        logged['temperature'] = float(temp.read().strip()) / 1000

    db.index(index="ilhicas.com", doc_type='log', body=logged)
    threading.Timer(30, temperature_thread).start()


@app.route("/")
def index():
    return make_response("Container ID: {}".format(socket.gethostname()), 200)

if __name__ == "__main__":
    threading.Timer(1, temperature_thread).start()
    app.run("0.0.0.0", port=5000, threaded=True)

db.py

from elasticsearch import Elasticsearch

# "elasticsearch" is the service name in docker-compose.yml; Docker's
# internal DNS resolves it on the stack's overlay network.
db = Elasticsearch(["elasticsearch"], port=9200)

requirements.txt

flask
psutil
elasticsearch==5.4.0

Dockerfile

# ARM base image; this will be built on the RPI itself
FROM armhf/alpine:3.5

# Build tools and headers needed to compile psutil's C extension
RUN apk add --update alpine-sdk python python-dev py-pip linux-headers

RUN rm -rf /var/cache/apk/*

COPY requirements.txt requirements.txt

RUN pip install -r requirements.txt

COPY /app/. /app

WORKDIR /app

EXPOSE 5000

ENTRYPOINT ["python"]

CMD ["app.py"]

docker-compose.yml

version: "3.2"
services:
    demo:
        image: swarm_demo_v1
        # Build context for the ARM image (built on the RPI, see below)
        build: ./demo
        ports:
            - mode: host
              target: 5000
              published: 80
    kibana:
        image: kibana
        environment:
            - ELASTICSEARCH_URL=http://elasticsearch:9200
        ports:
            - 5601:5601
    elasticsearch:
        image: elasticsearch:alpine
        volumes:
            - ek:/usr/share/elasticsearch/data
        environment:
            ES_JAVA_OPTS: '-Xms2048m -Xmx2048m'
volumes:
    ek:

deploy.yml

version: '3.2'
services:
  elasticsearch:
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.platform.arch == x86_64
  kibana:
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.platform.arch == x86_64
  demo:
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.platform.arch == armv7l
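Since docker stack deploy will merge these two files, you can preview the combined result with docker-compose before deploying, which is handy for catching indentation mistakes in the deploy blocks:

docker-compose -f docker-compose.yml -f deploy.yml config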

Ok, sorry for the long paste of files, but it's nice to have everything in one place, and to pick up a few other things along the way besides swarm.

We are actually using a BI tool to analyse our RPI operating temperature.

Nice overkill :P

But first things first.

Before deploying, we need to build the ARM image on the RPI.

So switch to the terminal with the RPI environment active and run:

docker-compose build demo
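When the build finishes, the image should be present in the RPI's local image store (remember, this terminal is pointed at the RPI's Docker engine):

docker images swarm_demo_v1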

When that is done, all you have to do is deploy the service stack.

Switch back to the manager node terminal (your local computer):

docker stack deploy -c docker-compose.yml -c deploy.yml temperature-monitor
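You can then watch the stack come up; the second command is useful for confirming the demo task actually landed on the rpi node:

docker stack services temperature-monitor
docker service ps temperature-monitor_demo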

Now if everything went well, you should be able to visit both IPs and get the same thing:

http://127.0.0.1:5601

http://RPI-IP:5601

Both should return the Kibana dashboard.

Once Elasticsearch has started up, just go to Management and add the index. The index should be named:

ilhicas.com

Or another name, if you changed the Python code in app.py.
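If the index doesn't show up, check whether Elasticsearch came up cleanly and whether the demo app is indexing documents; on Docker versions that support it, you can tail service logs straight from the manager:

docker service logs temperature-monitor_elasticsearch
docker service logs temperature-monitor_demo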

You may also visit the demo app itself, published in host mode on port 80 of the RPI (see the ports section of docker-compose.yml):

http://RPI-IP-ADDRESS

And get the container ID of the container running on the RPI.
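Or from a terminal, for example (the response body comes from the Flask index route):

curl http://RPI-IP-ADDRESS
# Container ID: 0a1b2c3d4e5f  (your container's hostname will differ)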

That’s it, hope you found it helpful.
