Category Archives: python

Python IoT temperature measurement

This is the second article in a ‘Python IoT’ series. In the previous article, I described WiThumb, a USB-connected IoT device from a Kickstarter project, and how to run Python on it; specifically, MicroPython (μPy for short), a Python implementation for resource-constrained, connected devices.

In practice (on the WiThumb), μPy runs on an ESP8266 WiFi-enabled microcontroller chip embedded on the PCB. The PCB also carries a single-chip temperature sensor module called the MCP9808. The ESP8266 and the sensor are connected by what’s called an I2C bus, which can be accessed from μPy.

To read temperature information from the sensor using I2C, we need to know:

  • the I2C device address of the on-board MCP9808
  • the temperature register number and the size of the data that can be read
  • the structure of the data readable from the register

When dealing with IoT devices, you may have to dig through actual hardware data sheets to get all this information. Or ask a lot of questions from people who have it figured out. Or just read a nice, detailed blog article by someone who has done those things.

So, to determine the I2C address of the MCP9808 sensor, we’d need to check the pertinent parts of the MCP9808 data sheet:

3.5 Address Pins (A0, A1, A2)

These pins are device address input pins.

The address pins correspond to the Least Significant bits (LSbs) of the address; the Most Significant bits (MSbs) A6, A5, A4 and A3 are fixed. This is illustrated in Table 3-2.

Moving on to the said table, abbreviated here for readability:

TABLE 3-2:

  Address Code    Slave Address
  0011            0011 x x x

Note 1: User-selectable address is shown by ‘x’. A2, A1 and A0 must match the corresponding device pin configuration.

Ok, let’s see. It’s a seven-bit address with bits A6-A3 fixed to `0011`, whereas A2-A0 can change. How do we know what they are? Well… it depends. The I2C bus can have many slave devices, such as sensors, connected to it, each with a unique address. So in each hardware design (such as that of the WiThumb) the bits might be set differently. Still, the data sheet suggests 0x18 as a default address, so we could reasonably try that.

But let’s nevertheless check the WiThumb hardware schematics – the actual blueprint of how the device is physically soldered together. From the schematics we see that pins A2, A1 and A0 are all connected to a `VDD` pin, or a `3V3` line. That tells us that the pin configuration for A2-A0 is actually `111` (for any pins left unconnected, the address bits would be zeroes). So the full address is `0011111`, which is `0x1F` in hex, not `0x18`. For some reason, then, the designer of the WiThumb has chosen to deviate from the default address.
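The address arithmetic above is easy to verify with a few lines of plain Python; this is just a sketch, with the pin values taken from the WiThumb schematic as described:

```python
# MCP9808 7-bit I2C address: fixed code 0011 in bits A6-A3,
# pin-configurable bits A2-A0 in the low three bits.
FIXED_CODE = 0b0011

def mcp9808_address(a2, a1, a0):
    """Assemble the 7-bit I2C address from the A2-A0 pin levels."""
    return (FIXED_CODE << 3) | (a2 << 2) | (a1 << 1) | a0

print(hex(mcp9808_address(0, 0, 0)))  # 0x18, the data sheet default
print(hex(mcp9808_address(1, 1, 1)))  # 0x1f, the WiThumb (pins tied to 3V3)
```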

What about the temperature register, then? Back to the MCP9808 data sheet:


TABLE 5-1 (abbreviated):

  Register Pointer (Hex)    Bit Assignment

  0x05                      Ambient Temperature register (T_A)
On the left we see that the register pointer is 0x05. So now we have enough information that we can try to fetch the temperature data from the MCP9808! Let’s try that using MicroPython:

from machine import Pin, I2C
i2c = I2C(scl=Pin(5, Pin.IN, Pin.PULL_UP), sda=Pin(4, Pin.IN, Pin.PULL_UP), freq=10000)
i2c.scan()
[31, 104]

So apparently the I2C bus of the WiThumb indeed has two devices connected to it, one at address 31 (0x1F) and the other at 104 (0x68).

Note that it is absolutely necessary to set the `PULL_UP` mode when constructing the I2C instance. That enables the ESP8266’s internal pull-up resistors, which ‘pull up’ both the I2C data and clock lines. Using those internal resistors for I2C is apparently not recommended, but in the case of the WiThumb they seem to be sufficient.

Next, let’s read ambient temperature data from the register:

R_A_TEMP = const(5)
raw = i2c.readfrom_mem(0x1F, R_A_TEMP, 2)

Here, we read two raw data bytes from the register. Note the `const()` function: while Python has no real constant type, in MicroPython `const()` can be used to optimize the handling of values that are known not to change.

Finally, let’s convert the raw data to a floating-point number, using information from Table 5-1 above. All the temperature values in the hardware spec are in Celsius degrees, so we know that’s what the bytes represent. To convert the bit string into a floating-point number, Python bitwise operators such as ‘bitwise left shift’ and ‘bitwise and’ are useful:

u = (raw[0] & 0x0f) << 4      # upper byte: mask off the flag/sign bits
l = raw[1] / 16               # lower byte: 1/16 degree resolution
if (raw[0] & 0x10) == 0x10:   # sign bit set: temperature is below 0 °C
   temp = 256 - (u + l)
else:
   temp = u + l

(algorithm courtesy Thomas Lee, designer of WiThumb)
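For reuse, the conversion can be wrapped in a small helper function. This is just a sketch; the sample bytes below are invented for illustration (0x01 0x94 decodes to 25.25 degrees):

```python
def mcp9808_to_celsius(raw):
    """Convert the two raw MCP9808 ambient temperature bytes to Celsius."""
    u = (raw[0] & 0x0f) << 4      # upper byte: mask off the flag/sign bits
    l = raw[1] / 16.0             # lower byte: 1/16 degree resolution
    if (raw[0] & 0x10) == 0x10:   # sign bit set: reading is below 0 degrees
        return 256 - (u + l)      # data sheet formula: yields the magnitude
    return u + l

print(mcp9808_to_celsius(b"\x01\x94"))  # 25.25
```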

That’s it.


MicroPython on the WiThumb with OSX

The WiThumb is a small Arduino-compatible, USB-powered, ESP8266-based WiFi IoT development board with a 6-DOF IMU and a precision temperature sensor. A lot of stuff in a small package.


WiThumb image by Thomas Lee

It is the result of a successful Kickstarter campaign by Thomas Lee of “Funnyvale”, a group of San Jose area hackers and makers from California. Curiously, when I received my WiThumb and saw the sender’s address, I realized I used to live nearby back in ’95/’96, when I was a young trainee with NCD (Network Computing Devices) of Mountain View. A small world.

But let’s get back to the device – it uses an ESP8266 chip that enjoys huge popularity among Python IoT developers, after Damien George’s successful project to bring MicroPython to it. MicroPython is also used in the OpenMV Python-programmable machine vision camera, another favourite device of mine.

When I saw that ESP8266 was used in WiThumb, I asked Thomas whether MicroPython was or would be supported. Apparently other people asked too, and he was able to make it work, using an unofficial MicroPython build for the ESP8266 by Stefan Wendler.

The instructions on flashing the WiThumb with MicroPython firmware and connecting to it were for Windows, however. While I have a Windows VM on my laptop, I wanted to try & see if I could make things work on OSX as well. It was a surprisingly smooth experience. Here’s how:

  1. Figure out what the device file is after plugging in the WiThumb. There are other ways, but I like doing things in Python, so I created a small withumb Python package that uses PySerial to help identify the WiThumb easily. For example, my WiThumb appeared as /dev/tty.SLAB_USBtoUART.
  2. Install esptool, the official Python package to work with the ESP8266, and erase the flash: esptool.py --port /dev/tty.SLAB_USBtoUART erase_flash
  3. Download the unofficial MicroPython nightly (see link earlier) and flash the WiThumb with it: esptool.py --port /dev/tty.SLAB_USBtoUART --baud 460800 write_flash --flash_size=8m 0 mp-esp8266-firmware-latest.bin
  4. Remove and plug in the WiThumb
  5. Connect to it using the screen command at 115200 baud speed
    screen /dev/tty.SLAB_USBtoUART 115200
  6. Have fun with MicroPython on the WiThumb!

Note: to move files to WiThumb your best bet is probably adafruit-ampy. I first tried mipy and mpy-utils but got errors with them, whereas ampy just worked right out of the box.

Practical digital business contracts

When entering into a contract with a customer, it is no longer necessary (at least in Finland) to physically exchange signed paper copies.

While convenient, this raises a question: given how easy digital images are to counterfeit, how do we make sure the digital copy is not later modified by either party to the agreement? It would be easy for, say, a buyer to replace the price in the contract with a lower figure by a simple Photoshop operation.

Of course, a prudent organization keeps copies of all digital contracts, so any changes made later can be detected by comparing the copies. However, such checks can be time-consuming; a large contract can run to dozens of pages. On first thought, checking that the file sizes match sounds like a good idea too. However, it is also easy to add (or remove) data from a file transparently so that the file sizes match.

So, how to reliably compare an original and a copy that’s been received afterwards? There is a solution: one can generate a so-called cryptographic hash string that serves as a fingerprint of the original file. The way these hashes are generated ensures that any changes to the file will result in a different hash. So one can later easily check that the copy is a faithful copy of the original, by checking whether their hashes match.
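As a sketch of the idea, here is how such a fingerprint can be computed in Python with the standard library’s hashlib module (the file name in the comment is, of course, just an example):

```python
import hashlib

def file_fingerprint(path, algorithm="sha256"):
    """Return a hex-encoded cryptographic hash of a file's contents."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # read in chunks so that large contract scans don't fill memory
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Comparing file_fingerprint("contract.pdf") against the stored value
# reveals any later modification of the copy.
```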

An even better solution is to use digital signatures enabled by public key cryptography: first, a key pair is created, consisting of a private key that’s kept secret and a public key that can be distributed or published freely. The two keys are created from a common “origin” in a way that makes it possible to use them together in a very smart way.

What happens when a digital signature is created is that the secret private key is used to “stamp” the hash. The public key can later be used to verify that the stamp was indeed created with the private key – in other words, by the party that gave us the digital signature.

As a result, by using digital signatures, we can make sure both that the contract copies the two parties possess are the same, and that the copy was sent to us by the other contractual party.

Here’s a simple Python module providing a command-line utility to create and verify digital keys and signatures.

For more details, see for example the blog post by Laurent Luce, describing the use of PyCrypto.

Simple & smart annotation storage for Plone forms


In Plone 4 & 5, a package called z3c.form is used for forms. While it has quite extensive documentation, beginner-level tutorials for accomplishing specific (presumably simple) small tasks are hard to come by.

One such task might be using annotations for storing form data. Here, I describe how to accomplish just that with minimum effort while sticking to ‘the Zope/Plone way’.

Custom data manager registration

First, you have to register the built-in z3c.form.datamanager.DictionaryField for use with PersistentMapping (for more on what z3c.form data managers are and how they work, see for example my earlier post). This can be done by adding the following (multi-) adapter ZCML registration to the configure.zcml file of your Plone add-on package.

   <adapter
      for="persistent.mapping.PersistentMapping zope.schema.interfaces.IField"
      provides="z3c.form.interfaces.IDataManager"
      factory="z3c.form.datamanager.DictionaryField"
      />

This basically tells Plone to use the DictionaryField adapter for loading & storing form data whenever the form’s content object is a PersistentMapping.

Using the built-in persistent.mapping.PersistentMapping directly, instead of subclassing it, has an advantage: were you ever to refactor your (sub)class – say, move or rename it – existing annotations using that (sub)class would break. Sticking to PersistentMapping (or some other built-in persistent.* data structure) is a way to avoid such problems.

Form customization

For the DictionaryField data manager to do its job, the form has to provide it with the object that submitted form data will be saved to and editable data loaded from: in our case, that would be a PersistentMapping instance. For that purpose, z3c.form forms have a getContent method that we customize:

   def getContent(self):
      annotations = IAnnotations(self.context)
      return annotations["myform_mapping_storage_key"]

This assumes the context object – be it a Page, a Folder or your custom type – is attribute annotatable, and that its annotations have been initialized with a PersistentMapping keyed at “myform_mapping_storage_key”. Doing that is left as an exercise for the reader (see annotations). For a more standalone form, add a try/except block to the getContent method that checks for the existence of “myform_mapping_storage_key” and, upon failure, initializes it according to your needs.

So, with just those small pieces of custom code (adapter ZCML registration & a custom form getContent method), your form can use annotations for storage.

Use with custom z3c.form validators

If you don’t need to register custom validators for your form, you’re all set. But if you do, there are some further details to be aware of.

To start with, there are some conveniences to simplify the use of z3c.form validators, but for anything but the simplest cases, we’d want to create a custom validator class based on z3c.form.validator.SimpleFieldValidator. A validator typically just has to implement a custom validate() method that raises zope.interface.Invalid on failure; we’re not going into more detail here.

Such a validator needs to be first registered as an adapter and then configured for use by setting so-called validator discriminators, by calling the validator.WidgetValidatorDiscriminators() function.

Among others, it expects a context parameter (discriminator) that tells the validator machinery which form storage to use the validator for.

Given that we are using a plain PersistentMapping, this poses a problem: if there are other forms using the same storage class (and the same field), the validator will be applied to them as well. What if we want to avoid that; perhaps the other form needs another validator, or no validator at all?

We could subclass PersistentMapping, but earlier we saw it’s not necessarily a good idea. Instead, we can declare that our particular PersistentMapping instance provides an interface, and then pass that interface as a context (discriminator) to WidgetValidatorDiscriminators().

Defining such an interface is as easy as:

from zope.interface import Interface, directlyProvides

class IMappingAnnotationFormStorage(Interface):
   "automatically applied 'marker' interface"

Then update the getContent method introduced earlier to tag the returned PersistentMapping with the interface thus:

   def getContent(self):
      annotations = IAnnotations(self.context)
      formstorage = annotations["myform_mapping_storage_key"]
      directlyProvides(formstorage, IMappingAnnotationFormStorage)
      return formstorage

With that in place, you can register different validators for forms that all use nothing but plain PersistentMapping for storage.

Finally, here’s a custom mixin for your forms that has some built-in conveniences for annotation storage use:

class AnnotationStorageMixin(object):
   "mixin for z3c.forms for using annotations as form data storage"

   ANNOTATION_STORAGE_KEY = "form_annotation_storage"
   ASSIGN_INTERFACES = ()

   def getContent(self):
      annotations = IAnnotations(self.context)
      mapping = annotations[self.ANNOTATION_STORAGE_KEY]
      if self.ASSIGN_INTERFACES:
         # directlyProvides replaces the directly provided interfaces,
         # so pass them all in a single call
         directlyProvides(mapping, *self.ASSIGN_INTERFACES)
      return mapping

Note the two class variables ANNOTATION_STORAGE_KEY & ASSIGN_INTERFACES. Override them in your own form class to configure the mixin functionality for your specific case. If you use this mixin class for multiple different forms, you very likely want to at least have a unique storage key for each form.

Unexpectedly morphing z3c.form validation contexts considered harmful


The z3c.form validator context is not the context content object; rather, it is whatever getContent returns


Normally, z3c.form stores form data as content object attributes. More specifically, the default AttributeField data manager directly gets and sets the instance attributes of the context content object.

To register a SimpleFieldValidator-based validator adapter for the form, the class of that same context object (or an interface provided by it) should be passed to validator.WidgetValidatorDiscriminators. The same context object is then assigned to the validator’s context attribute when it runs.

That’s all fine so far.

Use a custom data manager

Sometimes you may want to, for example, store form data in annotations rather than in object attributes. In such a case, you can use a custom z3c.form data manager that basically just changes what object the form load & save operations are performed on.

All that’s required (for that particular case) is:

  • register the built-in (not used by default) DictionaryField as an adapter for PersistentMapping:
   <adapter
      for="persistent.mapping.PersistentMapping zope.schema.interfaces.IField"
      provides="z3c.form.interfaces.IDataManager"
      factory="z3c.form.datamanager.DictionaryField"
      />
  • add to the form a custom getContent method that returns your PersistentMapping instance

Resulting change in validation behavior

It appears that when using a custom data manager, z3c.form enforces its own idea of what the context is considered to be (for purposes of z3c.form validation, anyway).

What does this mean? During form validation by a subclass of the z3c.form base SimpleFieldValidator, contrary to the default behavior described at the beginning of this article, the context is no longer the context content object (say, a Page or a Folder). Instead, it is whatever the getContent method returned. In our case, that would be the PersistentMapping (or zope.interface.common.mapping.IMapping, if we passed an interface rather than a class).

So if you were still expecting the context to refer to the content object, you’d be in for a surprise: first, when passing a context discriminator to validator.WidgetValidatorDiscriminators to validate the form, you’d now have to pass PersistentMapping (or zope.interface.common.mapping.IMapping) instead of the content object (e.g. Page or Folder). Second, when the validator thus registered runs, its context attribute is now also the said PersistentMapping (or the interface).

Why is this bad

While this change in behavior may be by design, the changed semantics are confusing and, at the very least, not obvious.

It is not intuitive that what could reasonably be considered z3c.form’s internals (how the actual form storage is determined) would propagate to a change in semantics of validation context in this way. Especially when ‘context’ in the Zope/Plone world is pervasively used to refer to the context content object (acquisition-wrapped, but regardless).

Also, when a custom data manager is used, the context content object is not directly accessible from the validator (thankfully, it can still be reached via the view, i.e. self.view.context).

py.test subprocess fixture callables for testing Twisted apps

TL;DR: fork and os._exit(0) the fixture callable

Getting py.test to test Twisted apps is supported to some extent, albeit somewhat briefly documented; also there’s a py.test plugin to help with testing Twisted apps.

Tests usually require fixtures to be set up. Let’s assume your tests require running something in a separate process, for example a server such as MySQL. What to do? Ok, you can use subprocess.Popen, or Twisted spawnProcess to spin up the database. Note that you should probably not use multiprocessing: It uses its own loop for which there is no support in Twisted.
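As a minimal illustration of the subprocess.Popen route, this sketch uses a trivial child Python process as a stand-in for a real database server:

```python
import subprocess
import sys

# Start a child process as a stand-in for an external server fixture;
# a real test would launch e.g. a MySQL server here and tear it down
# at the end of the test.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('fixture ready')"],
    stdout=subprocess.PIPE,
)
out, _ = proc.communicate()  # wait for the child and collect its output
print(out)
```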

But what if it’s Python code you want to run? Yes, you can put it into a module and run the module using the above methods. However, if you want to use a Python callable defined in your test module, you’re out of luck: neither subprocess.Popen nor spawnProcess can run a Python callable in a subprocess.

In that case, you need os.fork. Simply run the callable in the child and, depending on your use case, either wait for it to complete in the parent, or kill it at the end of the test. However, there’s one gotcha, at least when using py.test: since you’re forking a running test, py.test will now report two tests running and completing. The solution is to exit the child abnormally; a simple sys.exit() will raise an exception, but os._exit(0) does not.
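The fork-and-_exit pattern can be sketched as a small stdlib-only helper; the demo fixture here is just a placeholder:

```python
import os
import time

def run_fixture_in_child(target, *args):
    """Fork; run target in the child, then end the child with os._exit(0)
    so the test runner never sees the forked copy finish the test."""
    pid = os.fork()
    if pid == 0:              # child: run the fixture and vanish
        try:
            target(*args)
        finally:
            os._exit(0)       # bypass py.test teardown/reporting machinery
    return pid                # parent: keep the pid to wait for or kill

def demo_fixture():
    time.sleep(0.1)           # placeholder for e.g. serving one request

pid = run_fixture_in_child(demo_fixture)
_, status = os.waitpid(pid, 0)  # wait for the fixture to finish cleanly
```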

Here’s example code that spins up a simple test HTTP server for one request, and checks that content fetched by HTTP client matches that served by the HTTP server:

import os
import signal
import time
import BaseHTTPServer

import pytest
import treq
from turq import TurqHandler, parse_rules

def serve_request(host, port, rulecode):
   TurqHandler.rules = parse_rules(rulecode)
   server = BaseHTTPServer.HTTPServer((host, port), TurqHandler)
   server.handle_request() # nothing more needed for this one test

@pytest.inlineCallbacks  # provided by the pytest-twisted plugin
def test_something():
   pid = os.fork()

   # set up fixture in child; os._exit so py.test does not
   # report the forked copy as a second test
   if pid == 0:
      serve_request("", 8080, "path('*').text('Hello')")
      os._exit(0)

   # proceed in parent (test), wait a bit first for the server fixture to come up
   time.sleep(1)

   # make a request (the forked server above listens on port 8080)
   r = treq.get("http://localhost:8080/")
   # kill server in child if we cannot connect
   try:
      response = yield r
   except Exception:
      os.kill(pid, signal.SIGKILL)
      raise

   responsetext = yield treq.content(response)
   assert responsetext == "Hello"

I don’t know whether the same technique works with nose and/or Twisted Trial – let me know if you find out!

Comparison of py.test and nose for Python testing

I happened upon this useful comparison of py.test and nose on the testing-in-python mailing list, by Kenny (theotherwhitemeat at gmail). He spent some time evaluating testing tools for Python, with a focus on py.test and nose. This article is a reformat of his mailing-list post; I assume no credit for the content. The list of references [1] … [13] is at the end of the article.


py.test:

  • parallelizable: threading + SMP support [3] [4]
  • better documentation: [1] [3]
  • can generate script that does all py.test functions, obviating the need to distribute py.test [1][10]
  • integrate tests into your distribution (py.test --genscript) to create a standalone version of py.test [10]
  • can run nose, unittest, doctest style tests [1] [2]
  • test detection via globs, configurable [3]
  • test failure output more discernible than nose [3] [9]
  • easier, more flexible assertions than nose [8]
  • setup speed is sub-second different from nose, also test speeds can be managed via distribution (threads + SMP via xdist) [9] [11]
  • provides test isolation, if needed [9]
  • dependency injection via funcargs [10] [12] [13]
  • coverage plugin [11]


nose:

  • documentation concerns, this may be outdated [3]
  • parallelization issues [3] [8]
  • slightly faster than py.test [4] [11]
  • test detection via regex (setup in cmdline or config file) [3]
  • can run unittest, doctest style tests [1] [2]
  • cannot run py.test style tests [1]


General notes:

  • test formats are so similar that nose or py.test can be used without much consequence until you’re writing more exotic tests (you could swap with little consequence)
  • nose is sub-second faster than py.test in execution time; this is typically unimportant
  • community seems to slightly favor py.test