High-Caffeine Fun with AWS IoT: Introducing TriNimbus IoT Coffeebot

When AWS IoT was initially announced at re:Invent 2015, my immediate reaction was, frankly, "sure, everyone is doing IoT, of course AWS will have its own offering." My indifference, however, dissipated after a quick glance at the high-level architecture of AWS IoT. The service just made sense: the underlying pieces are all things we are already familiar with, such as Amazon DynamoDB, AWS Lambda, Amazon Kinesis and Amazon S3. AWS IoT is quite simply another entry point to the plethora of AWS services that we use day to day (not discounting, of course, the huge value of the excellent engine at the core of the service, which AWS acquired from 2lemetry a year ago). I was quite determined to experiment with the service once an opportunity presented itself.

The opportunity actually presented itself quite soon. Like many organizations, TriNimbus uses Slack for internal communications. Given the number of coffee drinkers in the office, we had been posting messages on Slack, impersonating "Coffeebot", to notify everyone when fresh coffee was ready. It did not take long before the idea emerged that we should automate the workflow: there should be some means to detect that the brew is ready, which would trigger a post to our usual Slack channel. It was an immediate no-brainer that AWS IoT would play a pivotal part in the workflow, and so the project "TriNimbus IoT Coffeebot" was born.

Setting up AWS IoT and the rest of the backend services, however, is only part of the challenge. An IoT problem is often complex because it involves multiple diverse disciplines; designing the hardware, for example, requires fairly different knowledge from the usual software know-how. Fortunately a colleague, Andrej, offered to flex his muscles and architect the hardware components required. We eventually implemented a system with two sensors that measure the temperature of the coffee filter and the current passing through the power cord of the coffee machine respectively. The combination of the two dimensions can confidently determine the availability of fresh coffee: the temperature of the coffee is in the low nineties Celsius and the machine has been switched off after the brewing action is complete. Meanwhile, at the centre of the system is a Raspberry Pi unit that reads the data originating from the sensors through an analog-to-digital converter. The Raspberry Pi is also the processing unit that eventually sends the data over MQTT (Message Queue Telemetry Transport) to the AWS IoT backend; a sketch of this sensing loop follows the architecture diagram below.

[Figure: Coffee Bot architecture (v3)]
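For the curious, here is a minimal sketch of what that sensing loop could look like. It assumes an MCP3008 analog-to-digital converter on the Pi's SPI bus; the channel assignments, conversion functions and thresholds are illustrative guesses, not the actual Coffeebot code.

```python
# Minimal sketch of the Coffeebot sensing loop on the Raspberry Pi.
# Assumptions (not from the original post): an MCP3008 ADC on SPI bus 0,
# channel 0 = thermistor (filter temperature), channel 1 = current clamp.
import time
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                 # SPI bus 0, device 0
spi.max_speed_hz = 1350000

def read_adc(channel):
    """Read a 10-bit sample (0-1023) from one MCP3008 channel."""
    r = spi.xfer2([1, (8 + channel) << 4, 0])
    return ((r[1] & 3) << 8) + r[2]

def to_celsius(raw):
    # Placeholder linear conversion; a real thermistor needs calibration.
    return raw * 150.0 / 1023.0

def to_amps(raw):
    # Placeholder conversion for the current sensor.
    return raw * 15.0 / 1023.0

while True:
    temp_c = to_celsius(read_adc(0))
    amps = to_amps(read_adc(1))
    # Fresh coffee: temperature in the low nineties Celsius and the
    # machine switched off (near-zero current) after the brew cycle.
    fresh = 88.0 <= temp_c <= 94.0 and amps < 0.2
    print("temp=%.1fC current=%.2fA fresh=%s" % (temp_c, amps, fresh))
    time.sleep(5)
```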

As a side-project to occupy my otherwise lacking evenings, I spent about three weeks learning AWS IoT from scratch and went through roughly three versions of the design, all of which brought the desired result of Slack messages being posted when pots of coffee are made. While I could write multiple (likely very dry and boring) pages documenting my meandering journey of learning and experimentation, it is more interesting to share some of the impressions and tricks I picked up along the way. Some of them are fairly specific to AWS IoT, while others extend well to other AWS services and beyond.

(1) Do not expect immediate feedback after sending an MQTT message from the "thing" to AWS IoT

It is a mouthful, but I have seen people, myself included, caught by this. As software engineers we tend to expect immediate feedback and logs that make debugging easier; unfortunately, AWS IoT does not provide that by default. The feedback capability of an AWS IoT setup is in fact a bit puzzling. The textbook procedure for setting up a device (the "thing") consists of downloading the necessary X.509 certificates, setting a device policy and starting an MQTT client program to send messages to AWS IoT. It is, however, not clear what feedback or response one is supposed to get after running all these steps. To make it more confusing, there is a section associated with the device on the AWS IoT console where one can apparently read the state of the device. My early intuition was that if a message was successfully sent from the device to AWS IoT, I should see some indication that the message was received.

Long story short, it simply does not work that way: without further configuration on AWS, one can only rely on the connectivity feedback from the MQTT client program running on the device. The aforementioned section associated with the device only comes into play in the reverse workflow, when one attempts to send a message from AWS IoT to the device (for more information, I suggest reading up on Device Shadows for AWS IoT). Once I got used to the idea and became more comfortable with AWS IoT, it made a lot more sense. Regardless, it can be baffling, especially for someone completely new to the service.
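To illustrate just how little device-side feedback amounts to, here is a sketch of a publisher using the generic paho-mqtt client (standing in for the device SDK; the endpoint, topic and certificate paths are placeholders). The connection and publish callbacks below are essentially all you get without further AWS-side configuration.

```python
# Sketch of publishing to AWS IoT over MQTT with X.509 certificates.
# The endpoint, topic and file paths are placeholders; the callbacks
# are the only feedback visible on the device by default.
import json
import ssl
import time
import paho.mqtt.client as mqtt

ENDPOINT = "example.iot.us-east-1.amazonaws.com"  # placeholder

def on_connect(client, userdata, flags, rc):
    print("connected, result code:", rc)          # 0 means success

def on_publish(client, userdata, mid):
    # With QoS 1, the broker acknowledged the message; that is all we learn.
    print("broker acknowledged message id:", mid)

client = mqtt.Client()
client.on_connect = on_connect
client.on_publish = on_publish
client.tls_set(ca_certs="root-CA.crt",
               certfile="coffeebot.cert.pem",
               keyfile="coffeebot.private.key",
               tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect(ENDPOINT, 8883)
client.loop_start()
client.publish("coffeebot/status",
               json.dumps({"temp_c": 91.2, "amps": 0.05, "fresh": True}),
               qos=1)
time.sleep(2)   # give the network loop a moment to fire the callbacks
```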

(2) Use the AWS IoT SDK to the Fullest

Like every service in AWS, AWS IoT is API driven. The customized MQTT program that feeds hardware data to AWS IoT will likely leverage the AWS IoT SDK to establish communication and to publish to and receive from any specified MQTT topics. As one gets deeper into an AWS IoT project, one tends to become increasingly well-versed in the AWS IoT SDK, available in Node.js, embedded C and Arduino Yún flavours at the time of writing. I will also add that the examples included in the SDK were instrumental in the Coffeebot setup; in fact, they form the foundation of the workflow (a side note: I personally find the example programs included in the SDK a lot more intuitive than the combination of documentation on the AWS IoT website). Finally, in case it is not clear, the AWS IoT SDK is different from the AWS SDK. More likely than not, you will need both in an AWS IoT setup.
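To make that distinction concrete, here is a sketch of the AWS SDK side using boto3 for Python: the device SDK runs on the thing itself, while calls like these manage and exercise the service from anywhere. Topic and resource names are placeholders.

```python
# The AWS SDK (boto3 here) talks to the AWS IoT service APIs, which is
# different from the AWS IoT device SDK that runs on the "thing" itself.
import json
import boto3

# Control plane: provision the certificate a new device will use.
iot = boto3.client("iot")
cert = iot.create_keys_and_certificate(setAsActive=True)
print("certificate ARN:", cert["certificateArn"])

# Data plane: publish to an MQTT topic over HTTPS, e.g. to simulate the
# coffee machine from a laptop while testing the downstream rules.
iot_data = boto3.client("iot-data")
iot_data.publish(topic="coffeebot/status",
                 qos=1,
                 payload=json.dumps({"temp_c": 91.2, "fresh": True}))
```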

(3) My New Best Friend: CloudWatch Logs

The lack of feedback can be unnerving, especially for a newbie who finds comfort in verifying every minute step along the way to avoid surprises. I wanted feedback on messages received, to confirm whether the messages passed through the rules engine as expected and to observe how the subsequent services processed them. It became immediately obvious that I needed logs.

Thankfully the Amazon CloudWatch Logs service provides that essential functionality. My recommendation is quite simply "turn it on." The costs of Amazon CloudWatch Logs are minimal and certainly well justified when one puts a price tag on the potential headaches down the road. I did not use CloudWatch Logs only for AWS IoT; quite simply, I cannot do without CloudWatch Logs when I am writing AWS Lambda functions. By default, logging is not switched on for AWS IoT, but one can enable it directly on the AWS IoT console, and there are tutorials documenting how to set up the AWS IoT and CloudWatch Logs connection outside the UI.
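For reference, a sketch of enabling AWS IoT logging with boto3 rather than the console toggle; the role ARN is a placeholder, and the role must permit AWS IoT to write to CloudWatch Logs.

```python
# Turn on AWS IoT logging to CloudWatch Logs programmatically,
# equivalent to the toggle on the AWS IoT console.
import boto3

iot = boto3.client("iot")
iot.set_logging_options(loggingOptionsPayload={
    "roleArn": "arn:aws:iam::123456789012:role/iot-logging-role",  # placeholder
    "logLevel": "DEBUG",  # ERROR, WARN, INFO or DEBUG
})
```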

(4) Modularize with SNS

No matter what your design is, chances are it is going to consist of multiple services in the end-to-end workflow. We have all learnt the adage "tightly cohesive, loosely coupled" at some point; the same idea applies well to designing a solution out of AWS services and is achievable by adding Amazon SNS (Simple Notification Service) to the workflow. SNS is best known for notifications in the form of email (raw data, JSON) or SMS; I find it, however, most powerful as a publish-subscribe system that lets one event trigger one or more downstream activities (i.e. "fan-out"). In the TriNimbus IoT Coffeebot system, I have two Lambda functions. Instead of making one Lambda function call the other directly, I inserted an SNS topic between the two. If it later becomes necessary to replace one of the Lambda functions with some other service, the other function can continue on its merry way without a care about its previous partner in the workflow.
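A sketch of what that decoupling could look like, with hypothetical function names, topic ARN and payload fields:

```python
# Decoupling two Lambda functions with an SNS topic in between.
import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:coffeebot-fresh"  # placeholder

def producer_handler(event, context):
    """First Lambda: publish to SNS instead of invoking the next function."""
    sns.publish(TopicArn=TOPIC_ARN,
                Message=json.dumps({"status": "fresh", "temp_c": 91.2}))

def consumer_handler(event, context):
    """Second Lambda: subscribed to the topic; unwrap the SNS envelope."""
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        print("coffee update:", message)
        # ...post to the Slack channel here...
```

Either side can now be swapped out, or additional subscribers added to the topic, without touching the other function.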

(5) Clearly Define the Interface between the AWS Services in the Workflow

Lambda has a very helpful test functionality on the management console, where the user can send in a specific JSON payload and examine the response from the Lambda function either directly on the Lambda console or in CloudWatch Logs. While other services may not have the same kind of sophisticated interface, the same practice can easily be applied. For example, if one is testing data ingestion in Amazon Kinesis, it is worthwhile to send some sample data and retrieve the messages for examination. If one is testing data insertion into Amazon DynamoDB, it is imperative to read the newly added items back from the target table to ensure the data was inserted as expected. The complexity of a workflow tends to increase as time goes by, and one can use all the help one can get to preserve sanity in the long run.
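As a sketch of the practice, one could drive the interface directly from a script; the function, table and key names below are hypothetical:

```python
# Exercise a service interface directly: invoke the Lambda with a
# hand-crafted JSON payload, then read back the DynamoDB item to
# confirm it was written as expected.
import json
import boto3

lam = boto3.client("lambda")
resp = lam.invoke(FunctionName="coffeebot-recorder",          # placeholder
                  Payload=json.dumps({"temp_c": 91.2, "fresh": True}))
print("lambda response:", json.load(resp["Payload"]))

table = boto3.resource("dynamodb").Table("coffeebot-events")  # placeholder
item = table.get_item(Key={"device_id": "coffeebot-01"}).get("Item")
print("stored item:", item)
```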

(6) Design with Scale in mind; Optimize afterwards

Generally when designing solutions, particularly following the guidelines on how to use AWS wisely, there are always trade-offs between scale and cost. An IoT solution is often a challenge of scale, as we usually anticipate millions of small devices whose data needs to be processed at the same time. As such, even with one lone coffee machine in the office, I initially tested with DynamoDB Streams and Kinesis as part of the exploration. It was definitely over-provisioning by orders of magnitude: a lone coffee machine will likely generate a 100 KB message every 5 seconds rather than multiple megabytes of data every second. However, I still consciously went through the exercise of implementing and testing a system that I knew could support many more devices, rather than committing to a limited design that would not be functional under the stress of demand. In the end, I deployed neither DynamoDB Streams nor Kinesis in the solution but reverted to SNS as my "streaming" solution. The exercise still yielded value because I managed to prove out a workflow that would support a much higher volume, one that could be rolled out methodically should it be needed one day.
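A sketch of that final "SNS as streaming" wiring: an AWS IoT topic rule forwarding device messages to the SNS topic instead of Kinesis. The rule name, SQL filter and ARNs are placeholders.

```python
# Create an AWS IoT topic rule that routes matching device messages
# from the MQTT topic to an SNS topic.
import boto3

iot = boto3.client("iot")
iot.create_topic_rule(
    ruleName="coffeebot_fresh_to_sns",  # placeholder
    topicRulePayload={
        "sql": "SELECT * FROM 'coffeebot/status' WHERE fresh = true",
        "actions": [{
            "sns": {
                "targetArn": "arn:aws:sns:us-east-1:123456789012:coffeebot-fresh",
                "roleArn": "arn:aws:iam::123456789012:role/iot-sns-role",
            }
        }],
    },
)
```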

(7) Free tier is glorious

Especially for a small experiment, the free tier tends to be invaluable and can influence a design decision. To quote the example of choosing a streaming solution again: while Kinesis fits the bill functionally, I would have chosen DynamoDB Streams for the lone-machine workflow, because its generous free tier would have easily satisfied the current needs.

Conclusion

The above is just a superficial account of some impressions from experimenting with the TriNimbus IoT Coffeebot. Having gained some fundamental experience with AWS IoT, I remain more convinced than ever that IoT is one of the many use cases that can capitalize on the processing power and scalability of AWS. The services directly supported by AWS IoT are either fully managed or can be dynamically managed with some fairly standard CloudWatch metrics monitoring and AWS Lambda-based responses. Meanwhile, with the IoT-connected Coffeebot in place, we are looking at adding more devices within the office and beyond to the ecosystem. The diversity of the devices and the requirements of the workflows will most certainly lead to more interesting designs leveraging AWS IoT and other AWS offerings.

If you’re in the IoT space or simply looking for help on AWS, let us know. We’d be glad to learn more about what you’re doing and see if we can help you.

P.S. When is Amazon Echo going to be available in Canada? I look forward to the day on which I can ask Alexa to make me coffee instead of my having to walk to the machine and press the button. 😉
