Programming

The Disconnectivity of Remote Working

Photo by trail on Unsplash


Throughout the 30+ years of running my own business, I have explored all aspects of teamwork: from having my own in-house team, to having a totally remote team, to a combined mix of the two.

Which do I prefer? Now THAT is an interesting question.

I would consider myself an introvert, and I do prefer working by myself in my own home office a lot of the time.  However, some of my best working memories come from times when I was in an office, working alongside others.

There is something about the human connection of being in the same space as others.  A myriad of non-verbal cues and communication goes on, most of it at a subconscious level, which lends itself to a better sense of being part of a community that is pulling in the same direction.

Case in point - my current startup is a fully remote setup.  For the past two years, it was really only myself and another co-founder, who worked in a small town literally on the other side of the world.

Now, my co-founder and I had a great working relationship, and we produced a ton of stuff together.  Communication was mainly via Slack and email, and we used to talk on a daily basis PLUS have a weekly web video catch up.

My co-founder left the startup about 2 months ago.  The first week was really challenging, as I sorely missed having someone to talk to while working away on new ideas.

But by the end of the first month, I started to get used to working by myself again.  After all, I had run the startup by myself for about a year before my co-founder joined me.  So it felt basically the same as it did before.

By the end of the second month, I was actually struggling to recall even working with my former co-founder.  This concerned me, as I have always considered myself a sensitive person who likes to reminisce about happy memories.  So why was it suddenly so difficult for me to recall any of those good times we had had?  My co-founder's departure was amicable, so this wasn't the result of any ill feelings.  Rather, those experiences and memories seemed to float out of reach, and without anything to anchor them to, they would waft away whenever I tried to recall them.

Even when I went back through a Slack conversation to find an old screenshot or idea, I would re-read some of our chats - but I struggled to actually remember the emotions or personality behind them.  Re-reading them felt somehow cold and impersonal, and I couldn't tell whether I had been tired, angry, excited or happy while typing those paragraphs.

As a direct contrast to that, I can still clearly recall events that happened in my office over 20 years ago when I worked only feet away from the rest of my team.

Tiny things like a shared look, collapsing on the floor laughing at an in-house joke, or the casual punch on the shoulder as someone congratulated you while walking past your desk - all those things added so much to my working experience that I, even as a self-confessed 'lone wolf', missed them terribly.

There is something about being around people who are experiencing the highs and lows of their lives (even outside of work) that is strangely enriching and bonding.

To extend this even further - I was looking through my Facebook feed just this week, and I realised that I have become close friends with the vast majority of people that I have worked with face to face over the decades.  Remote workers, much less so.  For some reason, when a former remote staff member posts about their family or holiday or other life event, I find myself a lot less engaged with their thoughts and feelings.  There is still an element of them being an unknown 'stranger', so reading such intimate details of their lives instils a slight sense of guilt, and I tend to deliberately avoid seeming too familiar or presumptuous when reading their posts.

While my recently departed co-founder and I had discussed an actual company meetup where we (and potential future staff) could meet face to face, it never happened during our working time together.  And now that my co-founder has moved on, I have accepted that we will probably never, ever meet in real life.

I am in the process of building up a whole new remote team now though, and am looking at strategies to try and counter this feeling of disconnection with those that I will figuratively work alongside for the coming years.

Regular company face to face meetups are definitely on the cards.  But I am also thinking that we might need to put something else in place outside of those times.

But what could take the virtual place of those little moments like tossing a paper plane across the office to see whose desk it would land on, or the understanding look that I would share with a colleague across from me after hanging up from a call with a difficult client, or the good-natured group ribbing that would happen when a co-worker brought a delicious-smelling lunch into the office?  I have yet to see a web or mobile app that can replicate this sort of interaction.

Perhaps I have to go and invent it?
 

Building a face recognition app in under an hour

Over the weekend, I was flicking through my Amazon AWS console, and I noticed a new service on there called 'Rekognition'.  I guess it was the mangled spelling that caught my attention, and I wondered what this service was.  Amazon has a habit of adding new services to their platform with alarming regularity, and this one had slipped past my radar somehow.

So I dived in and checked it out, and it turns out that in late 2016, Amazon released their own image recognition engine on their platform.  It not only does facial recognition, but general photo object identification too.  It is still fairly new, so the details were sketchy, but I was immediately excited to try it out.  Long story short, within an hour, I had knocked up a quick sample web page that could grab photos from my PC camera and perform basic facial recognition on it.  Want to know how to do the same? Read on...

I had dabbled in facial recognition technology before, using third party libraries, along with the Microsoft Face API, but the effort of putting together even a rudimentary prototype was fraught with complexity and a steep learning curve.  But while browsing the Rekognition docs (thin as they are), I realised that the AWS API was actually quite simple to use, while seemingly quite powerful.  I couldn't wait, and decided to jump in feet first to knock up a quick prototype.

The Objective

I wanted a 'quick and dirty' single web page that would allow me to grab a photo using my iMac camera, and perform some basic recognition on the photo - basically, I wanted to identify the user sitting in front of the camera.

The Amazon Rekognition service allows you to create one or more collections.  A collection is simply a, well, collection of facial vectors for sample photos that you tell it to save.  NOTE: The service doesn't store the actual photos, but a JSON representation of measurements obtained from a reference photo.

Once you have a collection on Amazon, you can then take a subject photo and have it compare the features of the subject to its reference collection, and return the closest match.  Sounds simple, doesn't it?  And it is.  To be honest, coding the front end of this web page to get the camera data actually took longer than the back end to perform the recognition - by a factor of 3 to 1!
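The demo's back end makes these calls from Ruby via the aws-sdk gem, but they map one-to-one across the AWS SDKs.  Here is a rough sketch of the two key calls using the AWS SDK for JavaScript - the collection name matches the demo's default, while the region, file names and person name are placeholder assumptions:

// Sketch only: index a reference face, then search for the closest match.
// Assumes the 'faceapp_test' collection already exists (created via createCollection()).
const AWS = require('aws-sdk');
const fs = require('fs');
const rekognition = new AWS.Rekognition({ region: 'us-east-1' });

// 1. Add a reference face to the collection, keyed by the person's short name
rekognition.indexFaces({
  CollectionId: 'faceapp_test',
  ExternalImageId: 'jane',                        // the name we get back on a match
  Image: { Bytes: fs.readFileSync('jane.jpg') }   // raw image bytes - no S3 upload needed
}, (err, data) => {
  if (err) return console.error(err);
  console.log('Indexed ' + data.FaceRecords.length + ' face(s)');
});

// 2. Later, find the closest match for a freshly captured photo
rekognition.searchFacesByImage({
  CollectionId: 'faceapp_test',
  Image: { Bytes: fs.readFileSync('capture.jpg') },
  MaxFaces: 1,
  FaceMatchThreshold: 80
}, (err, data) => {
  if (err) return console.error(err);
  const match = data.FaceMatches[0];
  if (match) {
    console.log('Found ' + match.Face.ExternalImageId +
                ' (' + match.Similarity.toFixed(1) + '% similarity)');
  }
});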

So, in short, the web page lets you (1) create or delete a collection of facial data on Amazon, (2) upload face data via a captured photo to your collection, and (3) compare new photos to the existing collection to find a match.

Oh, and as a tricky extra (4), I also added in the Amazon Polly service to this demo so that after recognising a photo, the page will broadcast a verbal, customised greeting to the person named in the photo!

The Front End

My first question was what library to use to capture the image using my iMac camera.  After a quick Google search, I found the amazing JPEG Camera library on GitHub by amw, which allows you to use a standard HTML5 canvas to perform the capture, or fall back to a Flash widget for older browsers.  I quickly grabbed the library, and modified the example JavaScript file for my needs.

The Back End

For the back end, I knocked up a quick Sinatra project - a lightweight Ruby-based framework that could do all the heavy lifting with AWS.  I actually use Sinatra (well, Padrino, actually) extensively to build all my web apps, and highly recommend the platform.

Note: The Amazon Rekognition examples actually promote uploading the source photos used in their API to an Amazon S3 bucket first, then processing them from there.  I wanted to avoid this double step and send the image data directly to their API instead, which I managed to do.

I also managed to do a similar thing with their Polly greeting.  Instead of saving the audio to an MP3 file and playing that, I managed to encode the MP3 data directly into an <audio> tag on the page and play it from there!

The Code

I have placed all the code for this project on my GitHub page.  Feel free to grab it, fork it and improve it as you like.  I will endeavour to explain the code in more detail here.

The Steps

First things first, you will need an Amazon AWS account.  I won't go into the details of setting that up here, because there are many articles you can find on Google for doing so.

Creating an AWS IAM User

But once you are set up on AWS, the first thing we need to do is to create an Amazon IAM (Identity & Access Management) user which has the permissions to use the Rekognition service.  Oh, we will also set up permissions for Amazon's Polly service as well, because once I got started on these new services, I could not stop.

In the Amazon console, click on 'Services' in the top left corner, then choose 'IAM' from the vast list of Amazon services.  Then, on the left hand side menu, click on 'Users'.  This should show you a list of existing IAM users that you have created on the console, if you have done so in the past.

Click on the 'Add User' blue button on the top of this list to add a new IAM user.

Give the user a recognisable name (more for your own reference), and make sure you tick 'Programmatic Access', as you will be using this IAM user in API calls.

Next is the permissions settings.  Make sure you click the THIRD box on the screen, that says 'Attach existing policies directly'.  Then, on the 'Filter: Policy Type' search box below that, type in 'rekognition' (note the Amazonian spelling) to filter only the Rekognition policies. Choose 'AmazonRekognitionFullAccess' from the list by placing a check mark next to it.

Next, change the search filter to 'polly', and place a check mark next to 'AmazonPollyFullAccess'.

Nearly there.  We now have full permission for this IAM for Amazon Rekognition and Amazon Polly.  Click on 'Next: Review' on the bottom right.

On the review page, you should see 2 Managed Policies giving you full access to Rekognition and Polly.  If you don't, go back and re-select the policies again as per the previous step.  If you do, then click 'Create User' on the bottom right.

Now this page is IMPORTANT.  Make a note of the AWS Key and Secret that you are given on this page, as we will need to incorporate them into our application below.

This is the ONLY time that you will be shown the key/secret for this user, so please copy and paste the info somewhere safe, and download the CSV file from this page with the information in it and keep it safe as well.

Download the Code

Next step is to download the sample code from my GitHub page so you can modify it as necessary.  Go to this link and either download the code as a ZIP file, or perform a 'git clone' to clone it to your working folder.

First thing you need to do is to create a file called '.env' in your working folder, and enter these two lines, substituting your Amazon IAM Key and Secret in there (Note: These are NOT real key details below):

export AWS_KEY=A1B2C3D4E5J6K7L10
export AWS_SECRET=T/9rt344Ur+ln89we3552H5uKp901

You can also just run these two lines in your command shell (Linux and OSX) to set them as environment variables that the app can use.  Windows users can run them too, just replace the 'export' prefix with 'set'.

Now, if you have Ruby installed on your system (Note: No need for full Ruby on Rails, just the basic Ruby language is all you need), then you can run

bundle install

to install all the pre-requisites (Sinatra etc.), then you can type

ruby faceapp.rb

to actually run the app.  This should start up a web server on port 4567, so you can fire up your browser and go to

http://localhost:4567

to see the web page and begin testing.

Using the App

The web page itself is fairly simple.  You should see a live streaming image on the top center, which is the feed from your on board camera.

The first thing you will need to do is to create a collection by clicking the link at the very bottom left of the page.  This will create an empty collection on Amazon's servers to hold your image data.  Note that the default name for this collection is 'faceapp_test', but you can change that in the faceapp.rb Ruby code (line 17).

Then, to begin adding faces to your collection, ask several people to sit down in front of your PC or tablet/phone, and make sure ONLY their face is in the photo frame (multiple faces will make the scan fail).  Once ready, enter their name in the text input box and click the 'Add to collection' button.  You should see a message that their facial data has been added to the database.

Once you have built up several faces in your database, then you can get random people to sit down in front of the camera and click on 'Compare image'.  Hopefully for people who have been already added to the collection, you should get back their name on screen, as well as a verbal greeting personalised to their name.

Please note that the usual way for Amazon Rekognition to work is to upload the JPEG/PNG photo to an Amazon S3 Bucket, then run the processing from there, but I wanted to bypass that double step and actually send the photo data directly to Rekognition as a Base64 encoded byte stream.  Fortunately, the aws-sdk for Ruby allows you to do both methods.

Let's walk through the code now.

First of all, let's take a look at the raw HTML of the web page itself.

https://github.com/CyberFerret/FaceRekognition-Demo/blob/master/views/faceapp.erb

This is a really simple page that should be self-explanatory to anyone familiar with HTML creation.  Just a series of named divs, as well as buttons and links.  Note that we are using jQuery, and also Moment.js for the custom greeting.  Of note is the faceapp.js code, which does all the tricky stuff, and the links to the JPEG Camera library.

You may also notice the <audio> tags at the bottom of the file, and you may ask what this is all about - well, this is going to be the placeholder for the audio greeting we send to the user (see below).

Let's break down the main app js file.

https://github.com/CyberFerret/FaceRekognition-Demo/blob/master/public/js/faceapp.js

This sets up the JPEG Camera library to show the camera feed on screen, and process the upload of the images.

The add_to_collection() function is straightforward, in that it takes the captured image from the camera, then does a post to the /upload endpoint along with the user's name as the parameter.  The function will check that you have actually entered a name or it will not continue, as you need a short name as a unique identifier for this facial data.

The upload function simply checks that the call to /upload finished cleanly, and either displays a success message or the error if it doesn't.

The compare_image() function is what gets called when you click the, well, 'Compare image' button.  It simply grabs a frame from the camera, and POSTs the photo data to the /compare endpoint.  This endpoint will return either an error, or else a JSON structure containing the id (name) of the found face, as well as the percentage confidence.

If there is a successful face match, the function will then go ahead and send the name of the found face to the /speech endpoint.  This endpoint calls the Amazon Polly service to convert the custom greeting to an MP3 file that can be played back to the user.

The Amazon Polly service returns the greeting as a binary MP3 stream, so we take this IO stream, Base64-encode it, and place it as an encoded source in the <audio> placeholder tags on our web page.  We can then call .play() on that element to play the MP3 through the user's speakers via the browser's native HTML5 audio support.

This is also the first time I have placed encoded data in the audio src attribute, rather than a link to a physical MP3 file, and I am glad to report that it worked a treat!
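If you want to try the same trick yourself, here is a minimal sketch of the browser side.  It assumes the /speech endpoint returns a JSON body with the Polly MP3 already Base64 encoded in an 'audio' field, and an <audio id="audio-player"> placeholder tag on the page - both of those names are illustrative rather than exactly what the demo uses:

// Fetch the synthesised greeting and play it without ever touching an MP3 file on disk.
// Assumed response shape: { "audio": "<Base64-encoded MP3 bytes>" }
$.post('/speech', { name: 'Jane' }, function (response) {
  var player = document.getElementById('audio-player');   // the <audio> placeholder tag
  player.src = 'data:audio/mpeg;base64,' + response.audio;
  player.play();
});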

Lastly on the app js file is the greetingTime() function.  All this does is work out whether to say 'good morning/afternoon/evening' depending on the user's time of day.  A lot of code for something so simple, but I wanted the custom greeting they hear to be tailored to their time of day.

Finally, let's look at the Ruby code for the Sinatra app.

https://github.com/CyberFerret/FaceRekognition-Demo/blob/master/faceapp.rb

Pretty straightforward Sinatra stuff here.  The top is just the requires that we need for the various AWS SDK and other libraries.

Then there is a block setting up the AWS authentication configuration, and the default collection name that we will be using (which you can feel free to change).

Then, the rest of the code is simply the endpoints that Sinatra will listen out for.  It listens for a GET on '/' in order to display the actual web page to the end user, and it also listens out for POST calls to /upload, /compare and /speech which the javascript file above posts data to.  Only about 3 or 4 lines of code for each of these endpoints to actually carry out the facial recognition and speech tasks, all documented in the AWS SDK documentation.

That's about all that I can think of to share at this point.  Please have fun with the project, and let me know what you end up building with it.  Personally, I am using this project as a starting block for some amazing new features that I would love to have in our main web app HR Partner.

Good Luck, and enjoy your facial recognition/speech synthesis journey.

 

TopHN - A fun side project built with Vue.js and RethinkDB

TopHN running in a side window so I can see news bubbling up and down in real time while I work away... (Yes, what you see is some actual code from the project - don't laugh!).


Over the past couple of years, I have tried to push my ageing brain constantly, and one of the best ways I've found to do that is to try and learn a new programming language, framework or methodology every month or so, just to keep the skills sharp.

I've always had a love/hate relationship with NoSQL databases, having cut my teeth for many decades on pure SQL systems, so I wanted to get my hands dirty with NoSQL again.  I've also struggled a little bit to get to grips with JavaScript front end frameworks, and wanted to improve my skill set in that area.

So this past weekend, I decided to get 'down and dirty' with Vue.js as well as RethinkDB.  There is a lot of good natured banter amongst programmers about React vs Vue vs Angular etc. and I wanted to see for myself which one would suit my programming style better.  I had already done a lot of work in Angular v1 with my mobile app development (using Cordova and Ionic), and wanted to see if Angular v2 and the other frameworks I mentioned would be an easy transition.

Long story short, I had a bit of trouble getting my head around Angular v2, as well as React.  At the end of the day, Vue.js just seemed more natural, and possibly closer to Angular v1 to me, and I found myself being able to understand concepts and start knocking together a basic app within short order.

RethinkDB has also been in the news lately, with their parent company shutting down, although the database itself looks like it will live on as open source.  I've always liked the look of the RethinkDB management console, as well as the ease of installation on various platforms, so I decided to install it on my development Mac and give it a go.

The Project

The big question is - what to build?  I wanted to build something actually useful, instead of just another throwaway project.  Then, one day last week while I was browsing around Hacker News, it hit me.

Now, I love browsing Hacker News and catching up with the latest tech articles, but the things I found myself repeatedly doing were (a) refreshing the main 'Top News' screen every few minutes to see what people were talking about, and what had made its way to the Top 30, (b) checking the messages that I had personally posted recently, to see if there were any replies to them, and (c) constantly checking my karma balance at the top of the screen to see if there had been a mass of up or downvotes to anything I had posted.

These three things seemed to be my primary activities on the site (apart from reading articles), so I decided to see if I could build a little side project to make it easier.  So TopHN was born!

What is TopHN in a nutshell?  Well, it is basically a real time display of top news activity on your web screen.  To be fair, there are already a LOT of other Hacker News real time feeds available out there, many of which are far better than mine - but I wanted my solution to be very specific.  Most of the others display comments and other details, but I wanted mine to be just a 'dashboard' style view of the top, important stuff that was relevant to me (and hopefully most other users too).

First things first, I decided to take a look at the HackerNews API.  I was excited to see that this was based on Google's Firebase.  I had used Firebase in a couple of mobile programming jobs 2 years ago, and really loved the asynchronous 'push' system they used to publish changes.  I debated whether to use the Firebase feed directly, but decided against it - because I was going to be doing some other manipulation and polling of the data, I didn't want to clutter up the Firebase feed with more poll requests, and would instead try and replicate the HN data set in RethinkDB.

So I went ahead and set up a dedicated RethinkDB server in the cloud.  This was a piece of cake following their instructions.  On the same server, I built a small Node.js app (only about 30 lines of code), whose sole purpose was to listen to the HN API feed from Firebase, grab the current data and save a snapshot of it in my RethinkDB database.

Hacker News actually publishes some really cool feeds - every 30 seconds or so, a list of the top 500 articles is pushed out to the world as a JSON string.  They also have a dedicated feed which pushes out a list of changes made every 20 to 30 seconds.  This includes a list of article and comment IDs that have been changed or entered in their system, as well as the user IDs of any users who have changed their status (i.e. made profile changes, had their karma increased/decreased by someone, or posted a comment etc.).

I decided to use these two feeds as the basis for building my replicated data set.  Every time the 'Top 500' feed was pushed out, I would grab the IDs of the articles, have a quick look in RethinkDB to see if they already existed, and if they didn't, I would go and ask for the missing articles individually, and plop those in RethinkDB.  After a few days of doing this, I ended up with tens of thousands of articles in my database.
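For anyone curious, that replication logic really is only a handful of lines.  Here is a stripped-down sketch of the idea, using the Firebase 2.x client that the HN API documents and the official RethinkDB driver - the table name and connection details are my own placeholders rather than TopHN's actual code:

var Firebase = require('firebase');   // the 2.x client used by the HN API examples
var r = require('rethinkdb');

var hn = new Firebase('https://hacker-news.firebaseio.com/v0');

r.connect({ host: 'localhost', port: 28015, db: 'tophn' }, function (err, conn) {
  if (err) throw err;

  // Every time the Top 500 list is pushed out, copy across any articles we don't have yet.
  // The 'items' table uses the HN item id as its primary key.
  hn.child('topstories').on('value', function (snapshot) {
    snapshot.val().forEach(function (id) {
      r.table('items').get(id).run(conn, function (err, existing) {
        if (existing) return;                         // already replicated
        hn.child('item/' + id).once('value', function (itemSnap) {
          var item = itemSnap.val();
          if (item) {
            r.table('items').insert(item, { conflict: 'replace' }).run(conn, function () {});
          }
        });
      });
    });
  });
});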

I would also sniff out the 'changes' feed, and scan the articles in there to see if I already had them, and copy them if not.  Same with the users.  Every time a user was mentioned in the 'changes' feed, I would grab their updated profile and save in RethinkDB.

The screenshot above shows the RethinkDB management console, a really cool tool for checking server performance, as well as testing queries and managing data tables and shards.

So far so good.  The replicated database was filling up with data every few seconds.  Now, the question was - What to do with it?

I was excited to see that RethinkDB also had a 'changes()' feature, which would publish data changes as they happened.  But unlike the Firebase tools, these weren't client side only tools, and needed some sort of server platform to engage the features.  So what I decided on was to use another Node.js app as the server back end, and Vue.js as the front end for the interface elements.

I would also need to build a connection between the two using socket.io.  I was a bit disappointed that there didn't seem to be any native way to push/pull the changes from server to client without it, but hey - we are all about learning new things, and building a socket driven app was certainly something I hadn't done before (at least not from scratch).
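The heart of that back end really is just a RethinkDB changefeed piped into a socket.io broadcast.  Here is a rough sketch of the pattern, assuming a 'score' secondary index on the items table and the usual Express/socket.io boilerplate - a simplification of the idea rather than the production code:

var r = require('rethinkdb');
var app = require('express')();
var server = require('http').createServer(app);
var io = require('socket.io')(server);

r.connect({ host: 'localhost', port: 28015, db: 'tophn' }, function (err, conn) {
  if (err) throw err;

  // Follow the Top 30 by score - RethinkDB pushes a change event whenever an
  // article enters, leaves, or moves within that window.
  r.table('items')
    .orderBy({ index: r.desc('score') })
    .limit(30)
    .changes()
    .run(conn, function (err, cursor) {
      if (err) throw err;
      cursor.each(function (err, change) {
        if (!err) io.emit('top30', change);   // broadcast to every connected browser
      });
    });
});

server.listen(3000);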

So, at the end of the day, this second Node.js app would sit on a different server and wait for a user to visit the site.  Now, users can do a couple of things.  They can simply visit the top level URL of the site, and just see the Top 30 feed in real time.  And I mean nearly real time.  As new articles are published, or they move up and down the Top 30, the page view will bubble them up and down and show the latest scores and comment counters.

If the user elects to enter their HN username, the page will additionally display the user's karma balance in real time, along with a notation of how much it has changed in the last couple of minutes.  Nothing like vanity metrics to keep people excited!

Also, if their username is entered, the page will show their last 10 or so comments and stories they published, so they can keep an eye on any responses to comments etc.

The second Node.js server is essentially a push/pull server.  It will silently push Top 30 list changes to all web browsers connected to it.  AND it will also set up a custom push event handler for any browsers where the user has specified their username.  As you can expect, this takes a bit of management, and server resources, so I hope I never get to experience the HackerNews 'hug of death' where a bunch of people log on at the same time, because I am not really sure how far this will scale before it comes to a screaming halt.
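One simple way to handle those per-user pushes - and the way I would sketch it here, though the real code may differ - is socket.io 'rooms': each browser that supplies a username joins a room named after it, and user-specific updates are emitted only to that room.

// Browsers that supply a HN username join a room of that name
// (the 'watch-user' / 'user-update' event names are illustrative).
io.on('connection', function (socket) {
  socket.on('watch-user', function (username) {
    socket.join(username);
  });
});

// Later, when the 'changes' feed mentions a user we are watching,
// push their fresh karma and comments only to the browsers that care:
function pushUserUpdate(username, profile) {
  io.to(username).emit('user-update', profile);
}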

The Vue.js components purely sit there and listen for JSON data packets from the server pushes, and then format them accordingly and display them on the web page without having to refresh.

I haven't gone into the nutty details of how I built this on here, but if there is any interest and I get lots of requests, then I am open to publishing some code snippets and going into deeper detail of how I built the various components.

All in all, I am pretty happy with what amounted to around 4 or 5 days of part time coding.  I think this is a useful tool, and as you can see from the header image, I tend to have a narrow Chrome window open off to the side so I can keep an eye on news happenings and watch them bubble up and down.  The web page is also totally responsive, and should work on most mobile browsers for portability.

Are you a Hacker News member? Why not check out https://tophn.info and let me know what you think?

 

Building an IoT system using the Onion Omega and Amazon AWS

As well as being a programmer, I am a mad keen guitarist, and over the years, I have built up a sizeable collection of guitars of all types and models.  One thing about guitars though (acoustic guitars in particular), is that they are quite sensitive to environmental conditions such as temperature and humidity.

Similar to people, guitars like to be kept at a relatively cool temperature and somewhere not too dry or damp.  Seeing as I live in the tropics, this can be a challenge at times, which is why I try and keep my guitars in my home office, which is secure, as well as air conditioned most of the time.

However, air conditioning is not perfect, and sometimes things like a power failure or someone leaving a window ajar can affect the overall climate of the room.  Because I often travel for work and am away from the home office for days at a time, I'd like to keep an eye on any anomalies, so I can advise another family member at home to check or rectify the situation.

What better way than to try and use my programming skills to (a) learn some new skills, and (b) do some experimenting with this whole IoT (internet of things) buzz.  Please note that my normal programming work involves business and enterprise type databases and reporting tools, so programming hardware devices is a new thing for me.

The end result is that I wanted a web page that I could access from ANYWHERE in the world, which would give me real time stats as to the temperature and humidity variations in the guitar room throughout a 24 hour period.

Please bear in mind, I am going to try and document ALL the steps I took to build this system, so this blog post is VERY long, but hopefully will serve as a guide for someone else who wants to build something similar.

The steps I will be going through here are:

1. Setting up the Omega Onion to work with my PC
2. Hooking up the DHT22 temperature and humidity sensor to my Onion
3. Installing all the requisite software on the Onion to be able to do what I want
4. Set up Amazon IoT so that the Onion can be a 'thing' on the Amazon IoT cloud
5. Setting up a DynamoDB database on Amazon AWS to store the temperature/humidity readings from the Onion
6. Setting up a web page to read the data from DynamoDB to present it as a chart.

Here is what the final chart will look like:

Hat tip: I used this blog post as inspiration for designing the dashboard and pulling data from DynamoDB.

 

The Hardware

Well, over a year ago I participated in the Onion Omega Kickstarter project.  I'd got one of these tiny little thumb-sized Linux computers, but didn't quite know what to do with it, so it sat in its box for a long while until I decided to dust it off this week.

After connecting the Onion to its programming board, I hooked it up to a USB cable from my iMac.  In order to get communications happening, I had to download and install a USB to UART driver from here:

https://www.silabs.com/Support%20Documents/Software/Mac_OSX_VCP_Driver.zip

Full instructions on connecting the Omega Onion to your Mac are on their Wiki page:

https://wiki.onion.io/get-started

Once I had connected the two devices, I was able to issue the command

screen /dev/tty.SLAB_USBtoUART 115200 

from a Terminal screen to connect to the device.  Yay!

First thing I had to do was to set up the WiFi so that I could access the device using my local home office WiFi network.  That was a simple case of issuing the command

wifisetup 

It is a simple step by step program that asks you for your WiFi access point name and security key.  Once again, the Wiki link above explains it in more detail.

Once the WiFi is set up on the Onion, you can then access it via its IP address using a web browser.  My device ended up being 192.168.15.11, so it was a matter of entering that address in Chrome.  Once logged in (the default username is 'root' and password 'onioneer'), you get to see this:

First things first, because my device was so old, I had to go to 'Settings' and run a Firmware Update.

I also dug out an old DHT22 sensor unit which I had played around with when I dabbled in Arduino projects a while back.  I wondered if I could pair the DHT22 with the Onion device, and lo and behold, a quick search on the Onion forums showed that this had been done before, quite easily.  Here is an article detailing how to hook up the DHT22 to the Onion:

https://wiki.onion.io/Tutorials/PHP-DHT11-DHT22-Sensor-Examples

The article shows you how to wire the two devices together using only 3 wires. In short, the wiring is as follows on my unit:

Pin 1 from the DHT22 goes to the 5.5V plug on the Omega Onion
Pin 2 from the DHT22 goes to GPIO port 6 on my Onion
Pin 3 is unused on the DHT22
Pin 4 from the DHT22 goes to the GND (Ground) plug on the Onion


The Software

Now we come to all the software that we will need to be able to collect the data, and send it along to Amazon.  In short, we will be writing all our code in Node.js.  But we will also be calling some command line utilities to (a) read the data from the HDT22 and (b) send it to the Amazon IoT cloud.

To collect the data, we will be using an app called 'checkHumidity', which is detailed on the DHT22 setup page linked above.  To talk to the Amazon IoT cloud, we need to use the MQTT protocol.  To do this, we will be using an app called 'mosquitto', which is a nice, neat MQTT wrapper.  We could use HTTPS, but MQTT just seemed more efficient, and I wanted to experiment with it.

So let's go through these steps for installation.  All the packages are fairly small, so they won't take up much room on the 16MB storage on the Onion.  I think my Onion still has about 2MB left after all installs.  Here goes (from the Onion command line):

(1) Install the checkHumidity app and set the permissions for running it.  checkHumidity is so much cleaner than trying to read the pins on the Onion in Node.js.  Running it returns the temperature (in degrees Celsius) and the humidity (as a percentage) in a text response.

opkg update
opkg install wget
cd /root
wget https://community.onion.io/uploads/files/1450434316215-checkhumidity.tar.gz
tar -zxvf 1450434316215-checkhumidity.tar.gz
chmod -R 755 /root/checkHumidity/bin/checkHumidity

If your DHT22 is connected to pin 6 like mine, try it out:

/root/checkHumidity/bin/checkHumidity 6 DHT22
29.6
49.301

Showing me 29.6 degrees C with 49.301% humidity!

(2) Install Node.js on the Onion.  From here on in, we will be using the opkg manager to install:

opkg install nodejs

(3) I also installed nano because it is my favourite editor on Linux.  You can bypass this if you are happy with any other editor (Note: There is also an editor on the web interface, but I had some issues with saving on it):

opkg install nano

(4) Install the mosquitto app for MQTT conversations:

opkg install mosquitto
opkg install mosquitto-client

This installs the mosquitto broker and client.  We won't really be using the broker, mainly the client, but it is handy to have if you want to set up your Onion as an MQTT bridge later.


Amazon IoT

Ok, now that we have almost everything prepped on the device itself, we need to set up a 'thing' on Amazon's IoT cloud to mimic the Onion.  The 'thing' you set up on Amazon acts as a cloud repository for information you want to store from your IoT device.  Amazon uses the concept of a 'shadow' for the 'thing' that can store the data.  That way, even if your physical 'thing' is powered off or offline, you can still send MQTT packets of data to the 'thing', and the data will be stored on the 'shadow' copy of the 'thing' in the cloud until the device comes back online, at which point Amazon can copy the 'shadow' data back to the physical device.

You see, our Node.js app will be pushing temperature and humidity data to the shadow copy of the 'thing' in the cloud.  From there, we can set up a rule on Amazon IoT to further push that data into a DynamoDB database.

Setting up the 'thing' on the cloud can be a little tricky, mainly due to the security.  Because the physical device will be working unattended and pretty much anonymously, authentication is carried out using security certificates.  Let's step through the creation of a 'thing'. (Note: This tutorial assumes you already have an AWS account set up).

From the Amazon Console, click on 'Services' on the top toolbar, then choose 'AWS IoT' under 'Internet Of Things'.

On the left hand menu, click on 'Registry', then 'Things'.

Your screen will probably be blank if you have never created a thing before.  Click on 'Create' way over on the top right hand side of your screen.

You will need to give your thing a name.  Call it anything you like.  I just used the unique name for my Omega Onion, which looks like Omega-XXXX.

Great!  Next, you will be taken to a screen showing all the information for your 'thing'.  Click on the 'Security' option on the left hand side.

Click on the 'Create Certificate' button.

You can now download all four certificates from this screen and store them in a safe place.

NOTE: DON'T FORGET to click on the link for 'A root CA for AWS IoT Download'.  This is the Root CA certificate that we will need later.  Store all 4 certificates in a safe place for now on your local hard drive.  Don't lose them or you will have to recreate the certificates again and re-attach policies etc.  Messy stuff.

Lastly, click on 'Activate' to activate your certificates and your thing.


Next, we have to attach a policy to this certificate.  There is a button marked 'Create Policy' on this security screen.  Click it, and you will see the next screen asking you to create a new policy.

We are going to create a simple policy that lets us perform any IoT action against any device.  This is rather all encompassing, and in a production environment, you may want to restrict the policy down a little, but for the sake of this exercise, we will enable all actions to all devices under this policy:

In the 'Action' field, enter 'iot:*' for all IoT actions, and in the 'Resource ARN' field, enter '*' for all devices and topics etc.  Don't forget to tick the 'Allow' button below, then click 'Create'.

You now have a thing, a set of security certificates for the thing, and a policy to control the certificates against the thing.  Hopefully the policy should be attached to the certificates that you just created.  If not, you will have to manually attach the policy to the certificates.  To do this, click on 'Security' on the left hand menu, then click on 'Certificates', then click on the certificate that you just created.

Click on the 'Policies' on the left hand side of the certificate screen.

If you see 'There are no policies attached to this certificate', then you need to attach it by clicking on the 'Actions' drop down on the top right, then choosing 'Attach Policy' from the drop down menu.

Simply tick the policy you want to attach to this certificate, then click 'Attach'.

You may want to now click on 'Things' on the left hand menu to ensure that the thing you created is attached to the certificate as well.

To ensure all your ducks are in a row:-

The 'thing' -> needs to have -> Security Certificate(s) -> needs to be attached to -> A Policy

Actually, there is one more factor that we want to note on here which is important for later.  Go ahead and click on the 'Registry' then 'Things' on the IoT dashboard.  Choose the thing you just created, and then click on the 'Interact' option on the left hand menu that pops up.

Notice under HTTPS, there is a REST API endpoint shown.  Copy this information down and keep it aside for now, because we will need it in our Node.js code later to specify which host we want to talk to.  This host address is unique for each Amazon IoT account, so keep it safe and under wraps.

Also note on this screen that there are some special Amazon IoT reserved topics that can be used to update or read the shadow copy of your IoT thing.  We won't really be using these in this project, but it is handy to know for more complex projects where you might have several devices talking to each other, and also devices that may go on and offline a lot.  The 'shadow' feature allows you to still 'talk' to those devices even though they are offline or unavailable, and lets them sync up later.  Very powerful stuff.

Next, we will take a break from the IoT section, and set up a DynamoDB table to collect the data from the Onion.

 

Amazon DynamoDB

Click on 'Services' then 'DynamoDB' under 'Databases'.

Click on 'Create Table'.

Give the table a meaningful name.  Important: Give the partition key the name of 'id' and set it to a 'String' type.  Tick the box that says 'Add sort key' and give the key a name of 'timestamp' and set it to a 'Number' type.  This is very important, and you cannot change it later, so please ensure your setup looks like above.


Tip: Once you have created your DynamoDB table, copy down the "Amazon Resource Name (ARN)" on the bottom of the table information screen (circled in red above).  You will need this bit of information later when creating a security policy for reading data from this table to show on the web site chart.

Ok, now that you have a table being created, you can go back to the Amazon IoT dashboard again for the next step ('Services' then 'AWS IoT' in your console top menu).  What we will do now is create a 'Rule' in IoT which will handball any data coming in on a certain topic across to DynamoDB, to be stored in that table.

Tip: When you transmit data to an IoT thing using MQTT, you generally post the data to a 'topic'.  The topic can be anything you like.  Amazon IoT has some reserved topic names that do certain things, but you can post MQTT packets to any topic name you make up on the spot.  Your devices can also listen on a particular topic for data coming back from Amazon etc.  MQTT is really quite a nice, powerful and simple way to interact with IoT devices and servers.

In the IoT dashboard, click on 'Rules' on the left hand side, then click the 'Create' button.

The 'Name' can be something distinctive that you make up.  Add a 'Description' to help you remember what this rule does.  For the 'SQL Version', just choose '2016-03-23' which is the latest one at time of writing.

Below that, on 'Attribute', type in '*' because we will be selecting ALL fields sent to us.  In the 'Topic Filter', type in 'temp-humidity/+'.  This is the topic name that we will be listening out for.  You can call it anything you like.  We include a '/+' at the end of the topic name because we can add extra data after this, and we want the query to treat this extra data as a 'wildcard' and still select it. (Note: We will be adding the device name to the end of the topic as an identifier (e.g. temp-humidity/Omega-XXXX).  This way, if we later have multiple temperature/humidity sensors, we can identify each one via a different topic suffix, but still get all the data from all sensors sent to DynamoDB).

ERRATA: The screenshot above shows 'temp-humidity' in the 'Topic Filter' field, but it should actually be 'temp-humidity/+'.

Leave the 'Condition' blank.

Now below this, you will see an 'Add Action' button.  Click this, and choose 'Insert a message into a DynamoDB table'.

As you can see, there is a myriad of other things you can do, including forwarding the data on to another IoT device.  But for now, we will just focus on writing the data and finishing there.  Click on the 'Configure Action' button at the bottom of the screen.

Choose the DynamoDB table we just created from the drop down 'Table Name'.  The 'Hash Key' should be 'id', of type 'STRING', and in the 'Hash Key Value', enter '${topic()}'.  It means we will be storing the topic name as the main key.

The 'Range Key' should be 'timestamp' with a type of 'NUMBER'.  The 'Range Key Value' should be '${timestamp()}'.  This will place the contents of the packet timestamp in this field.

Lastly, in the 'Write Message Data To This Column' field, I enter 'payload'.  This is the name of the data column that contains the object with the JSON data packet sent from the device.  You can call this column anything you like, but I like to call it 'payload' or 'iotdata' or similar so that I know all the packet information is stored under there.
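To make the later querying a little clearer, here is roughly what each stored item ends up looking like with those settings - the values are purely illustrative:

{
  "id": "temp-humidity/Omega-XXXX",
  "timestamp": 1487592000000,
  "payload": {
    "datetime": "2017-02-20T12:00:00.000Z",
    "temperature": 29.6,
    "humidity": 49.3
  }
}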


One more thing to do: for security purposes, we have to set up an IAM role which will allow us to add data to the DynamoDB table.  This is actually quite easy to do from here.  Click the 'Create A New Role' button.

Give the role a meaningful name, then click 'Create A New Role'.  A new button will show up with the text next to it saying 'Give AWS IoT permission to send a message to the selected resource'.  Click on the 'Update Role' button.

Important: You must click the 'Update Role' button to set the privileges properly.  Once completed, click the 'Update' button.

That's it!  We are pretty much done as far as the Amazon IoT and DynamoDB setup goes.  It was quite a rigmarole, wasn't it?  Lots of steps that have to be done in a certain order.  But the good news is that once this is done, the rest of the project is quite easy, AND FUN!


Installing Certificates

Oh, wait - one more slightly tedious step to do.  Remember those 4 certificates we downloaded much earlier?  Now is the time we need to put them to good use (well, 3 out of the 4 at least).  We need to copy these certificates to the Onion.  I found it easiest to copy and paste the text contents of each certificate into files in the '/home/certs' folder on the Onion.  I simply used the web interface editor to create the files in the '/home/certs' folder and paste in the contents of the certificates I downloaded.  The three certificates I needed (and which I copied and renamed) are:

  • VeriSign-Class3-Public-Primary-Certification-Authority-G5.pem -> /home/certs/rootCA.pem
  • x1234abcd56ef-certificate.pem.crt -> /home/certs/certificate.pem
  • x1234abcd56ef-private.pem.key -> /home/certs/private.key

As you can see, I shortened down the file name for ease of handling, and put them all into one folder for easy access from my Node.js app too.  That's it.  Once done, you don't have to muck about with certificates any more.

Exactly where you store the certificates or what you call them is not important, you just need to know the details later when writing the Node.js script.

 

Writing Code

Ok, back to the Omega Onion now, where we will write the code to grab information from the DHT22 and transmit it to Amazon IoT.  This is where the rubber hits the road.  Using nano, or the web editor on the Onion, create a file called '/home/app.js' and enter the following:

var util = require('util');
var spawn = require('child_process').spawn;
var execFile = require('child_process').execFile;

var mosqparam = [
  '--cafile', '/home/certs/rootCA.pem',
  '--cert', '/home/certs/certificate.pem',
  '--key', '/home/certs/private.key',
  '-h', 'a1b2c3d4e5f6g7.iot.us-east-1.amazonaws.com',
  '-p', '8883'
];

setInterval(function() {
  execFile('/root/checkHumidity/bin/checkHumidity', ['6', 'DHT22'], function(error, stdout, stderr) {
    var dataArray = stdout.split("\n");
    var logDate = new Date();
    var postData = {
      datetime: logDate.toISOString(),
      temperature: parseFloat(dataArray[1]),
      humidity: parseFloat(dataArray[0])
    };

    // publish to main data queue (for DynamoDB)
    execFile('mosquitto_pub', mosqparam.concat('-t', 'temp-humidity/Omega-XXXX', '-m', JSON.stringify(postData)), function(error, stdout, stderr) {
      // published
    });

    // publish to device shadow
    var shadowPayload = {
      state: {
        desired: {
          datetime: logDate.toISOString(),
          temperature: parseFloat(dataArray[1]),
          humidity: parseFloat(dataArray[0])
        }
      }
    };
    execFile('mosquitto_pub', mosqparam.concat('-t', '$aws/things/Omega-XXXX/shadow/update', '-m', JSON.stringify(shadowPayload)), function(error, stdout, stderr) {
      // shadow update done
    });
  });
}, 1000 * 60 * 5);

 

NOTE: I have obfuscated the name of the Omega device here, as well as the Amazon IoT host name for my own security.  You will need to ensure that the host name and device name correspond to your own setups above.

Let's go through this code section by section.  At the top are the 'require' statements for the Node.js modules we need.  Luckily no NPM installs are needed here, as the modules we want are part of the core Node.js install.

Then we define an array called 'mosqparam'.  These are the parameters that we need to pass to the mosquitto command line each time - mainly so it knows the MQTT host (-h) and port (-p) it will be talking to, and where to find the 3 certificates that we downloaded from Amazon IoT and copied across earlier.

Tip: If your application fails to run, it is almost certain that the certificate files either cannot be found, or else have been corrupted during download or copying across to the Onion.  The mosquitto error messages are cryptic at best, and a certificate error doesn't always present itself obviously.  Take care with this bit.

After this is the meat of the code.  We are basically running a function within a javascript setInterval() function which fires once every five minutes.

What this function does is run an execFile() to execute the checkHumidity app that we downloaded and installed earlier.  It then takes the two lines that the app returns and splits them on the newline character (\n) to form an array with two elements.  We then create a postData object which contains the temperature, the humidity, and the log time as an ISO8601 string.

Then we transmit that postData object to Amazon IoT by calling execFile() on the 'mosquitto_pub' command that we also installed earlier as part of the mosquitto package.  mosquitto_pub basically stands for 'MQTT Publish', and it will send the message (-m) consisting of the postData object translated to JSON, to the topic (-t) 'temp-humidity/Omega-XXXX'.

That is really all we need to do, however, in the code above, I've done something else.  Straight after publishing the data packet to the 'temp-humidity/Omega-XXXX' topic, I did a second publish to the '$aws/things/Omega-XXXX/shadow/update' topic as well, with essentially the same data, but with some extra object wrappers around it in shadowPayload.

Why did I do this?  Well, the '$aws/things/Omega-XXXX/shadow/update' topic is actually a special Amazon IoT topic which stores the data packet within the 'shadow' copy of the Omega-XXXX thing in the cloud.  That means that later on, I can use another software system from anywhere in the world to interrogate the Omega-XXXX shadow in the cloud to see what the latest data readings are.

If for any reason the Onion goes offline or the home internet goes down, I can interrogate the shadow copy to see what and when the last reading was.  I don't need to set this up, but for future plans I have, I thought it would be a good idea.

Enough talk - save the above file, and let's run the code:

cd /home
node app.js

You won't see anything on the screen, but in the background, every 5 minutes, the Omega Onion will read the sensor data and transmit it to Amazon IoT.  Hopefully it is working.

If it doesn't work - things to check are the location and validity of the certificate files.  Also check that your home or work firewall isn't blocking port 8883, which is the port MQTT uses to communicate with Amazon IoT.

Now ideally we want our Node.js app to run as a service on the Omega Onion.  That way, if the device reboots or loses power and comes back online, the app will auto start and keep logging data regardless.  Fortunately, this is easy as well.

Using nano, create a script file called /etc/init.d/iotapp and save the following in it:

#!/bin/sh /etc/rc.common
# Auto start iot app script

START=40

start() {
    echo start
    service_start /usr/bin/node /home/app.js &
}

stop() {
    echo stop
    service_stop /usr/bin/node /home/app.js
}

restart() {
    stop
    start
}


Save the file, then make it executable:

chmod +x /etc/init.d/iotapp

Now register it to auto-run:

/etc/init.d/iotapp enable

Done.  The service should start at bootup, and you can start/stop it anytime from the command line via:

/etc/init.d/iotapp stop

or 

/etc/init.d/iotapp start

 

If you go back to your DynamoDB dashboard and click on the table you created, you should be able to see the packet data being sent and updated every 5 or so minutes.

Also, if you go to the Amazon IoT dashboard and click on 'Registry' then 'Things' and then choose your IoT thing, then click on 'Activity', you should see a history of activity from the physical board to the online thing.  You can click on each activity line to show the data being sent.

Hopefully everything is working out for you here.  Feel free to adjust the setInterval() timing to one minute or so, just so you don't have to wait so long to see if data is being streamed.  In fact, tweak the interval setting to whatever you like to suit your own needs.  5 minutes may be too short a span for some, or it may be too long for others.  The value is in the very last line of the Node.js code:

    1000 (milliseconds) x 60 (seconds in a minute) x 5 (minutes)

 

Set up the Website

Final stretch now.  Funny to think that all that hard work we did above is essentially invisible.  But this bit here is what we, as the end user, will see and interact with.

What we will do here is to set up a simple web site which will read the last 24 hours of data from our DynamoDB table we created above, and display it in a nice Chart.js line chart showing us the temperature and humidity plot over that time.  The web site itself is a simple Bootstrap/jQuery based one, with a single HTML file and a single .js file with our script to create the charts.

Since I am using Amazon for nearly everything else, I decided to use Amazon S3 to host my website.  You don't have to do this, but it is an incredibly cheap and effective way to quickly throw up a static site.

A bigger problem would be how to read DynamoDB data within a JavaScript code block on a web page.  Doing everything client side means that my Amazon credentials will have to be exposed on a publicly accessible platform - meaning anyone can grab them and use them in their own code.

Most knowledgebase articles I scanned suggested using Amazon's Cognito service 'Identity Pools' to set up authentication, but setting up identity pools is another long and painful process.  I was fatigued after doing all the above setup by now, so I opted for the quick solution of setting up a 'throwaway' Amazon IAM user with just read only privileges on my DynamoDB data table.  This is not 'best practice', but I figured that for a non critical app like this (I don't really care who can see the temperature readings in my guitar room - it's not like a private video or security feed), it would do for what I needed.

Additionally, I have CloudWatch alarms set up on my DynamoDB tables so if I see excessively high read rates from nefarious users, I can easily revoke the IAM credentials or shut down the table access.
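To make the approach concrete before we wade through the console setup, here is a trimmed-down sketch of the kind of query the dashboard script runs with that throwaway user.  It assumes the AWS SDK for JavaScript and Chart.js (2.x syntax) are loaded via script tags; the table name, region and credentials are placeholders, while the canvas id matches the index.html shown further below.  This is a sketch of the idea, not necessarily identical to the final dashboard script:

// Query the last 24 hours of readings for our device straight from the browser,
// using the throwaway read-only IAM credentials we are about to create.
AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.Credentials('READ_ONLY_KEY', 'READ_ONLY_SECRET');

var docClient = new AWS.DynamoDB.DocumentClient();

docClient.query({
  TableName: 'iot-temp-humidity',                       // your DynamoDB table name
  KeyConditionExpression: 'id = :id AND #ts >= :since',
  ExpressionAttributeNames: { '#ts': 'timestamp' },     // 'timestamp' is a DynamoDB reserved word
  ExpressionAttributeValues: {
    ':id': 'temp-humidity/Omega-XXXX',                  // the topic we used as the hash key
    ':since': Date.now() - (24 * 60 * 60 * 1000)
  }
}, function (err, data) {
  if (err) return console.error(err);

  var labels = data.Items.map(function (item) {
    return new Date(item.timestamp).toLocaleTimeString();
  });
  var temperatures = data.Items.map(function (item) {
    return item.payload.temperature;
  });

  // Plot the temperature series (the humidity chart works the same way)
  new Chart(document.getElementById('temperaturegraph'), {
    type: 'line',
    data: {
      labels: labels,
      datasets: [{ label: 'Temperature (°C)', data: temperatures, fill: false }]
    }
  });
});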

 

Amazon IAM

To set up a throwaway IAM, go to the 'Services' menu in your AWS console and choose 'IAM' under 'Security, Identity and Compliance'.

Click on the 'Users' option on the menu down the left, then click 'Create' to create a new IAM user:

Give the user any name you like, but ensure you tick the box saying 'Programmatic Access'.  Then click the 'Next: Permissions' button.

On the next screen, click on the third image at the top which says 'Attach existing policies directly'.  Then click on the button that says 'Create Policy'.

Note: This will open the Create Policy screen on a new browser tab.

On the Create Policy screen, click the 'Select' button on the LAST option, i.e. 'Create Your Own Policy'.

Enter in the policy details as below.  Ensure that the 'Resource' line contains the ARN of your DynamoDB table like we found out above.

Here is the policy that you can cut and paste into the editor yourself (after substituting your DynamoDB ARN in it):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyIoTDataTable",
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:Query",
        "dynamodb:Scan"
      ],
      "Resource": "<insert your DynamoDB ARN here>"
    }
  ]
}

Once done, click on 'Validate Policy' to ensure everything is OK, then 'Create Policy'.

Now go back to the previous browser tab where you were creating the user, and click the 'Refresh' button.  You should now see the policy you just created in the list. (Hint: You can do a search on the policy name).  Tick it.

Click 'Next' to go to the review screen, then click 'Create User'.

Copy down the key and click on 'Show' to show the secret.  Copy both of these and keep them safely aside.  We will need them in our web site script below.

Ok, now let's set up the Amazon S3 bucket to host our website.

 

Amazon S3

Click on 'Services' on your AWS Console, then choose 'S3' under 'Storage'.  You should see a list of buckets if you have used S3 before.  Click on 'Create Bucket' on the top left to create a new bucket to host your website.

Give your bucket a meaningful name.

Tip: The bucket name will be part of your website name that you will need to type in your browser, so it helps to make it easy to remember and if it gives a hint as to what it does.

Once the bucket is created, select it from the list of buckets by clicking on the name.  Your bucket is obviously empty for now.

Click on the 'Properties' button on the top right, then expand the 'Permissions' section.  You will see your own username as a full access user.

Click on the 'Add more permissions' button here, and choose 'Everyone' from the drop down, and tick the 'List' checkbox.  This will give all public users the ability to see the contents of this bucket (i.e. your web page).  Click on 'Save' to save these permissions.

Next, expand the section below that says 'Static Website Hosting'.

Click on the radio button which says 'Enable website hosting', and enter in 'index.html' in the 'Index Document' field.

Click 'Save'.

That is about it - this is the minimum required to set up a website on S3.  You can come back later to include an error page filename and set up logging etc., but this is all we need for now.
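
If you prefer to script this step, a rough equivalent using the Node.js AWS SDK is below - a sketch only, where 'my-iot-dashboard' is a made-up bucket name (yours must be globally unique) and the 'public-read' ACL stands in for the 'Everyone / List' permission we ticked above:

var AWS = require('aws-sdk');
var s3 = new AWS.S3({ region: 'us-east-1' });
var bucket = 'my-iot-dashboard';               // example name

s3.createBucket({ Bucket: bucket, ACL: 'public-read' }, function(err) {
    if (err) return console.log(err);
    s3.putBucketWebsite({
        Bucket: bucket,
        WebsiteConfiguration: { IndexDocument: { Suffix: 'index.html' } }
    }, function(err) {
        if (err) console.log(err); else console.log('Static hosting enabled');
    });
});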

NOTE: Copy down the 'Endpoint' link shown on this page.  This is the website address you will need to type into your browser later to access the web page we are about to set up.

Tip: You can use Amazon Route53 to set up a more user friendly name for your website, but we won't go into that in this already lengthy tutorial.  There are plenty of resources on Google which go into that in detail.

The Code

Now for the web site code itself.  Use your favourite editor to create this index.html file:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<meta name="author" content="">

<title>Home Monitoring App</title>

<!-- Bootstrap core CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.1/css/bootstrap.min.css">

<!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script>
<script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
<![endif]-->

</head>

<body>

<div class="container">
<br />
<div class="jumbotron text-center">
<h1>Temperature & Humidity Dashboard</h1>
<p class="lead">Guitar Storage Room</p>
</div>

<div class="row">

<div class="col-md-6">

<canvas id="temperaturegraph" class="inner cover" width="500" height="320"></canvas>

<br />
<div class="panel panel-default">
<div class="panel-body">
<div class="row">
<div class="col-sm-3 text-right">
<span class="label label-danger">High</span>&nbsp;
</div>
<div class="col-sm-9">
<span id="t-high" class="text-muted">(n/a)</span>
</div>
</div>
<div class="row">
<div class="col-sm-3 text-right">
<span class="label label-success">Low</span>&nbsp;
</div>
<div class="col-sm-9">
<span id="t-low" class="text-muted">(n/a)</span>
</div>
</div>
</div>
</div>
</div>

<div class="col-md-6">

<canvas id="humiditygraph" class="inner cover" width="500" height="320"></canvas>

<br />
<div class="panel panel-default">
<div class="panel-body">
<div class="row">
<div class="col-sm-3 text-right">
<span class="label label-danger">High</span>&nbsp;
</div>
<div class="col-sm-9">
<span id="h-high" class="text-muted">(n/a)</span>
</div>
</div>
<div class="row">
<div class="col-sm-3 text-right">
<span class="label label-success">Low</span>&nbsp;
</div>
<div class="col-sm-9">
<span id="h-low" class="text-muted">(n/a)</span>
</div>
</div>
</div>
</div>
</div>
</div>

<div class="row">
<div class="col-md-12">
<p class="text-center">5 minute feed from home sensors for the past 24 hours.</p>
</div>
</div>

<footer class="footer">
<p class="text-center">Copyright &copy; Devan Sabaratnam - Blaze Business Software Pty Ltd</p>
</footer>

</div> <!-- /container -->

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<script src="https://sdk.amazonaws.com/js/aws-sdk-2.1.40.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.4.0/Chart.min.js"></script>
<script src="refresh.js"></script>
</body>
</html>


Nothing magical here - just a simple HTML page using Bootstrap constructs to place the chart canvas elements on the page in two columns.  We are loading all the script and CSS goodies via external CDN links for Bootstrap, jQuery, the Amazon SDK and Chart.js, so we don't have to clutter up our web server with extra .js and .css files.

Next we code up the script, in a file called refresh.js:

AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.Credentials('AKIZBYNOTREALPQCRTVQ', 'FYu9Jksl/aThIsNoT/ArEaL+K3yTR8fjpLkKg');

var dynamodb = new AWS.DynamoDB();
var datumVal = new Date() - 86400000;
var params = { 
TableName: 'iot-temperature-humidity',
KeyConditionExpression: '#id = :iottopic and #ts >= :datum',
ExpressionAttributeNames: {
"#id": "id",
"#ts": "timestamp"
},
ExpressionAttributeValues: {
":iottopic": { "S" : "temp-humidity/Omega-XXXX"},
":datum": { "N" : datumVal.toString()}
}
 };

/* Create the context for applying the chart to the HTML canvas */
var tctx = $("#temperaturegraph").get(0).getContext("2d");
var hctx = $("#humiditygraph").get(0).getContext("2d");

/* Set the options for our chart */
var options = { 
responsive: true,
showLines: true,
scales: {
xAxes: [{
display: false
}],
yAxes: [{
ticks: {
beginAtZero:true
}
}]
} 
};

/* Set the initial data */
var tinit = {
labels: [],
datasets: [
{
label: "Temperature °C",
backgroundColor: 'rgba(204,229,255,0.5)',
borderColor: 'rgba(153,204,255,0.75)',
data: []
}
]
};

var hinit = {
labels: [],
datasets: [
{
label: "Humidity %",
backgroundColor: 'rgba(229,204,255,0.5)',
borderColor: 'rgba(204,153,255,0.75)',
data: []
}
]
};

var temperaturegraph = new Chart.Line(tctx, {data: tinit, options: options});
var humiditygraph = new Chart.Line(hctx, {data: hinit, options: options});

$(function() {
getData();
$.ajaxSetup({ cache: false });
setInterval(getData, 300000);
});

/* Queries the DynamoDB table and builds the data objects for the charts */
function getData() {
dynamodb.query(params, function(err, data) {
if (err) {
console.log(err);
return null;
} else {

// placeholders for the data arrays
var temperatureValues = [];
var humidityValues = [];
var labelValues = [];

// placeholders for the data read
var temperatureRead = 0.0;
var humidityRead = 0.0;
var timeRead = "";

// placeholders for the high/low markers
var temperatureHigh = -999.0;
var humidityHigh = -999.0;
var temperatureLow = 999.0;
var humidityLow = 999.0;
var temperatureHighTime = "";
var temperatureLowTime = "";
var humidityHighTime = "";
var humidityLowTime = "";

for (var i in data['Items']) {
// read the values from the dynamodb JSON packet
temperatureRead = parseFloat(data['Items'][i]['payload']['M']['temperature']['N']);
humidityRead = parseFloat(data['Items'][i]['payload']['M']['humidity']['N']);
timeRead = new Date(data['Items'][i]['payload']['M']['datetime']['S']);

// check the read values for high/low watermarks
if (temperatureRead < temperatureLow) {
temperatureLow = temperatureRead;
temperatureLowTime = timeRead;
}
if (temperatureRead > temperatureHigh) {
temperatureHigh = temperatureRead;
temperatureHighTime = timeRead;
}
if (humidityRead < humidityLow) {
humidityLow = humidityRead;
humidityLowTime = timeRead;
}
if (humidityRead > humidityHigh) {
humidityHigh = humidityRead;
humidityHighTime = timeRead;
}

// append the read data to the data arrays
temperatureValues.push(temperatureRead);
humidityValues.push(humidityRead);
labelValues.push(timeRead);
}

// set the chart object data and label arrays
temperaturegraph.data.labels = labelValues;
temperaturegraph.data.datasets[0].data = temperatureValues;

humiditygraph.data.labels = labelValues;
humiditygraph.data.datasets[0].data = humidityValues;

// redraw the graph canvas
temperaturegraph.update();
humiditygraph.update();

// update the high/low watermark sections
$('#t-high').text(Number(temperatureHigh).toFixed(2).toString() + '°C at ' + temperatureHighTime);
$('#t-low').text(Number(temperatureLow).toFixed(2).toString() + '°C at ' + temperatureLowTime);
$('#h-high').text(Number(humidityHigh).toFixed(2).toString() + '% at ' + humidityHighTime);
$('#h-low').text(Number(humidityLow).toFixed(2).toString() + '% at ' + humidityLowTime);

}
});
}

Let's go through this script in detail.

The first two lines set up the Amazon AWS SDK.  We specify the AWS region, then the credentials we will be using to interrogate the DynamoDB table.  Paste in the Key and Secret that you created in the previous section here.

The next bit is initialising the AWS DynamoDB object in 'dynamodb'.  The 'datumVal' variable contains a timestamp that is 24 hours before the current date/time.  This will be used in the DynamoDB query to only select data rows in the prior 24 hour period.

The 'params' object contains the parameters that will be sent to the dynamodb object to select the table and run a query upon it.  I am not a fan of NoSQL, mainly because querying data is a huge pain, and this proves it.  The next 10 lines purely set up an expression to look at the ID and Timestamp columns in the DynamoDB table and pull out all IDs which contain 'temp-humidity/Omega-XXXX' (remember, the ID is actually the topic, including the thing identifier), with a timestamp that is greater than or equal to the 'datum' we set before.

Next, on lines 20 and 21 we set up the context placeholders for the two charts.  Simple Chart.js stuff here.

In lines 23 to 62 we are simply setting up some default placeholders for the charts, including the colours of the lines and shading etc.  I am also using the xAxes and yAxes properties to turn off the X-axis labels and to ensure the Y-axis starts at a zero base.  You can omit these if you want the graph to look more dynamic (or cluttered! :)).

Lines 64 and 65 just initialise the Chart.js objects with the above options and contexts.

Next comes the jQuery document-ready block, which calls getData() immediately and then every five minutes via setInterval().  You can change the setInterval() parameter from 300000 (1000 milliseconds per second x 60 seconds per minute x 5 minutes) to whatever you like, but seeing as we are only pushing temperature and humidity data from our Onion to Amazon IoT every 5 minutes anyway, anything more frequent than a 5 minute check is just overkill.  Feel free to tailor these numbers to suit your own purposes though.

Line 74 to the end is just the getData() function itself.  All it does is run a query against the 'dynamodb' object using the 'params' we supplied for the query parameters.

The results are returned in the data['Items'] array.

Lines 81 to 84 just set up the placeholder arrays for the values and labels to be used on the charts.

Lines 86 to 99 are there purely for tracking the highest and lowest temperature and humidity readings.  You can elect not to do this, but I wanted to show the highs/lows for the preceding 24 hour period on the main page.  I am simply initialising some empty variables here to use in the following loop.

Lines 101 to 129 are just a simple loop that runs through the returned data['Items'] array and parses the values into the variables and arrays I defined above.  I am also comparing the read values against the highs and lows: for every array element I read, I check whether the value is higher than the last highest value or lower than the last lowest value, and update the highs/lows accordingly.

Then, after the loop, lines 132 to 136 update the Chart.js chart data and labels with what we have read in the loop.

Lines 139 and 140 force the charts to redraw themselves.  Lines 143 to 146 use jQuery to update the High and Low sections on the main web page with the readings and times.

That is it!

Save these two files, then upload them to your bucket by going back to your Amazon S3 Bucket screen and clicking on the 'Actions' button and choosing 'Upload Files'.

Drag and drop the two files onto the upload screen, but don't start it yet!  Click on the 'Set Details >' button at the bottom, then immediately click on 'Set Permissions >'.

Make sure you tick the box that says 'Make everything public', otherwise nobody can see your index.html file!

Now click 'Start Upload' to begin uploading the two files.
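
Alternatively, the upload can be scripted with the Node.js AWS SDK as well - again just a sketch, reusing the made-up 'my-iot-dashboard' bucket name from earlier, with the 'public-read' ACL doing the job of the 'Make everything public' checkbox:

var AWS = require('aws-sdk');
var fs = require('fs');
var s3 = new AWS.S3({ region: 'us-east-1' });

['index.html', 'refresh.js'].forEach(function(file) {
    s3.putObject({
        Bucket: 'my-iot-dashboard',            // example name
        Key: file,
        Body: fs.readFileSync(file),
        ACL: 'public-read',                    // makes the object publicly readable
        ContentType: /\.html$/.test(file) ? 'text/html' : 'application/javascript'
    }, function(err) {
        if (err) console.log(err); else console.log(file + ' uploaded');
    });
});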

You are DONE!  Can you believe it??  We are done.  Finished.  Completed.

If you type in the website address we noted down earlier into your browser, you should be able to see a beautiful dashboard showing the collected data from your Onion Omega device.

Conclusion

If you made it this far, then congratulations on completing this marathon.  It took me several days to nut out the above settings, with many false starts and frustrations along the way.  I am hoping that by documenting what eventually worked for me, I can reduce your stress and wasted time and set you on the path to IoT development a lot quicker and easier.

Next steps for me are to set up a battery power source for my Onion Omega, so it doesn't have to be connected to my computer and can sit on a shelf somewhere in my guitar storage room and still report to me.

Let me know if you find this tutorial useful, and please also let me know what you guys have built with IoT - it is a fascinating field!