IoT Architectures for Digital Twin With Apache Kafka

A digital twin is a virtual representation of a physical thing, a process, or a service. This post covers the benefits of digital twins in various industries, the IoT architectures used to build them, and their relation to Apache Kafka, IoT frameworks, and Machine Learning. Kafka is often used as the central event streaming platform to build a scalable and reliable digital twin and digital thread for real-time streaming sensor data.

I already blogged about this topic recently in detail: Apache Kafka as Digital Twin for Open, Scalable, Reliable Industrial IoT (IIoT). That post covers the relation to Event Streaming and explains why people choose Apache Kafka to build an open, scalable, and reliable digital twin infrastructure.
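To make the idea of streaming sensor data into such an infrastructure a bit more concrete, here is a minimal sketch of a producer that publishes a sensor reading to a Kafka topic using the librdkafka C client from C++. The broker address, topic name, key, and JSON payload are assumptions for illustration only; the post does not prescribe a particular client or data format.

```cpp
// Minimal sketch: publish one sensor reading to a Kafka topic with librdkafka.
// Broker address, topic name, key, and payload format are illustrative assumptions.
#include <librdkafka/rdkafka.h>
#include <cstdio>
#include <cstring>

int main() {
    char errstr[512];

    // Configure the producer with the broker(s) to bootstrap from.
    rd_kafka_conf_t *conf = rd_kafka_conf_new();
    if (rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        std::fprintf(stderr, "config error: %s\n", errstr);
        return 1;
    }

    rd_kafka_t *producer =
        rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
    if (!producer) {
        std::fprintf(stderr, "producer error: %s\n", errstr);
        return 1;
    }

    // One event per reading; keying by device id keeps all readings of a
    // device in the same partition, so per-device ordering is preserved.
    const char *key = "machine-42";
    const char *value = "{\"temperature\": 71.3, \"ts\": 1577836800}";

    rd_kafka_resp_err_t err = rd_kafka_producev(
        producer,
        RD_KAFKA_V_TOPIC("sensor-readings"),
        RD_KAFKA_V_KEY(key, std::strlen(key)),
        RD_KAFKA_V_VALUE((void *)value, std::strlen(value)),
        RD_KAFKA_V_MSGFLAGS(RD_KAFKA_MSG_F_COPY),
        RD_KAFKA_V_END);
    if (err)
        std::fprintf(stderr, "produce failed: %s\n", rd_kafka_err2str(err));

    // Wait for outstanding messages to be delivered before shutting down.
    rd_kafka_flush(producer, 10 * 1000);
    rd_kafka_destroy(producer);
    return 0;
}
```

In a real digital twin pipeline this producer would run on or near the device (or an IoT gateway), and downstream consumers or stream processors would keep the twin's state up to date from the topic.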

Program an Arduino With State Machines in 5 Minutes

Have you ever programmed an Arduino? Have you ever struggled with complex control flows written in pure C? Maybe you have already heard of statecharts and state machines? In this blog post, I will show you how to program an Arduino in just 5 minutes in a model-driven way with the help of YAKINDU Statechart Tools (SCT).

There have been several attempts to program an Arduino with YAKINDU SCT, as described by Marco Scholtyssek and René Beckmann. However, when I tried to teach how to program an Arduino with YAKINDU SCT at the Automotive Software Engineering Summer School 2016 at the University of Applied Sciences and Arts in Dortmund, I found that it is hard to understand and implement without appropriate tooling. So I sat down and implemented Arduino support for YAKINDU SCT, which generates much of the glue code needed to run state machines on an Arduino.
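To show what that glue code roughly has to do, here is a hand-rolled sketch of the pattern: a small state machine that is cycled periodically from loop(). The pin, the timing values, and the state names are made up for illustration, and the code actually generated by YAKINDU SCT looks different.

```cpp
// Hand-rolled sketch of the pattern: a state machine cycled periodically from loop().
// Pin, timing, and state names are illustrative; YAKINDU SCT generates its own code.

const int LED_PIN = 13;
const unsigned long CYCLE_PERIOD_MS = 10;    // how often a state machine cycle runs
const unsigned long BLINK_INTERVAL_MS = 500; // dwell time per state

enum State { LED_OFF, LED_ON };

State state = LED_OFF;
unsigned long lastCycle = 0;
unsigned long stateEntered = 0;

void setup() {
  pinMode(LED_PIN, OUTPUT);
  digitalWrite(LED_PIN, LOW);
  stateEntered = millis();
}

// One run-to-completion step: evaluate the active state's outgoing transitions.
void runCycle(unsigned long now) {
  switch (state) {
    case LED_OFF:
      if (now - stateEntered >= BLINK_INTERVAL_MS) { // time-triggered transition
        digitalWrite(LED_PIN, HIGH);                 // entry action of LED_ON
        state = LED_ON;
        stateEntered = now;
      }
      break;
    case LED_ON:
      if (now - stateEntered >= BLINK_INTERVAL_MS) {
        digitalWrite(LED_PIN, LOW);                  // entry action of LED_OFF
        state = LED_OFF;
        stateEntered = now;
      }
      break;
  }
}

void loop() {
  unsigned long now = millis();
  if (now - lastCycle >= CYCLE_PERIOD_MS) { // cyclic scheduling without delay()
    lastCycle = now;
    runCycle(now);
  }
}
```

Writing this kind of scaffolding by hand for every statechart is exactly the tedious part that the generated glue code takes off your plate.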

Simulation Testing’s Uncanny Valley Problem

No one wants to be hurt because they're inadvertently driving next to an unproven self-driving vehicle. However, the costs of validating self-driving vehicles on the roads are extraordinary. To mitigate this, most autonomous-vehicle developers test their systems in simulation, that is, in virtual environments. Starsky uses limited low-fidelity simulation to gauge the effects of certain system inputs on truck behavior. Simulation helps us learn the proper force an actuator should exert on a steering mechanism to achieve a turn of the desired radius. It also helps us model the correct amount of throttle pressure to achieve a certain acceleration. But over-reliance on simulation can actually make the system less safe. To state the issue another way, heavy dependence on testing in virtual simulations has an uncanny valley problem.
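Before getting to that problem, it helps to see what "limited low-fidelity simulation" can mean in practice. The sketch below is not Starsky's actual model; it assumes a simple kinematic bicycle model with a made-up wheelbase and target radius, and sweeps candidate steering angles to find the one that yields the desired turn radius.

```cpp
// Low-fidelity simulation sketch (illustrative assumption, not Starsky's model):
// a kinematic bicycle model mapping front-wheel steering angle to turn radius,
// swept over candidate angles to find the input that achieves a desired radius.
#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979323846;

// Kinematic bicycle model: turn radius R = wheelbase / tan(steering angle).
double turnRadius(double wheelbaseM, double steeringRad) {
  return wheelbaseM / std::tan(steeringRad);
}

int main() {
  const double wheelbaseM = 6.0;      // assumed tractor wheelbase in meters
  const double desiredRadiusM = 30.0; // turn radius we want to achieve

  double bestAngleDeg = 0.0;
  double bestError = 1e9;

  // Sweep steering angles from 1 to 30 degrees in 0.1-degree steps.
  for (double deg = 1.0; deg <= 30.0; deg += 0.1) {
    double rad = deg * kPi / 180.0;
    double error = std::fabs(turnRadius(wheelbaseM, rad) - desiredRadiusM);
    if (error < bestError) {
      bestError = error;
      bestAngleDeg = deg;
    }
  }

  std::printf("Steering angle for a %.1f m turn radius: about %.1f degrees\n",
              desiredRadiusM, bestAngleDeg);
  return 0;
}
```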

First, some context. Simulation has arisen as a method to validate self-driving software as the autonomy stack has come to rely increasingly on deep-learning algorithms. These algorithms are so massively complex that, given the volume of data the AV sensors provide, it's essentially impossible to discern why the system made any particular decision. They're black boxes that even their developers don't really understand. (I've written elsewhere about the problem with deep learning.) Consequently, it's difficult to rule out the possibility that they'll make a decision you don't like.