Concepts of Distributed Systems (Part 1)

What Are Distributed Systems?

There are lots of different definitions you can find for distributed systems. For example, Wikipedia defines distributed systems as:

"A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another. The components interact with one another in order to achieve a common goal."

Similarly, Techopedia defines distributed systems as:

Simulation Testing’s Uncanny Valley Problem

No one wants to be hurt because they're inadvertently driving next to an unproven self-driving vehicle. However, the costs of validating self-driving vehicles on the roads are extraordinary. To mitigate this, most autonomous-vehicle developers test their systems in simulation, that is, in virtual environments. Starsky uses limited low-fidelity simulation to gauge the effects of certain system inputs on truck behavior. Simulation helps us learn the proper force an actuator should exert on a steering mechanism to achieve a turn of the desired radius. The technique also helps us model the correct amount of throttle pressure needed to achieve a certain acceleration. But over-reliance on simulation can actually make the system less safe. To state the issue another way, heavy dependence on testing in virtual simulations has an uncanny valley problem.
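The throttle-calibration idea above can be sketched in a few lines: run a low-fidelity simulator forward for candidate throttle values, then invert it to find the input that produces a desired acceleration. The dynamics model here (linear drive force minus quadratic aerodynamic drag, with made-up truck parameters) is purely an illustrative assumption, not Starsky's actual vehicle model.

```python
def simulate_acceleration(throttle, mass=36000.0, max_force=50000.0,
                          drag_coeff=7.0, speed=25.0):
    """Very low-fidelity longitudinal model of a loaded truck (SI units).

    All parameter values are illustrative assumptions.
    """
    drive_force = throttle * max_force       # throttle command in [0, 1]
    drag_force = drag_coeff * speed ** 2     # quadratic aerodynamic drag
    return (drive_force - drag_force) / mass # resulting acceleration, m/s^2

def throttle_for(target_accel, lo=0.0, hi=1.0, iters=40):
    """Invert the simulator by bisection: what throttle gives target_accel?"""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if simulate_acceleration(mid) < target_accel:
            lo = mid   # too little throttle, search upper half
        else:
            hi = mid   # too much throttle, search lower half
    return (lo + hi) / 2

# Throttle needed for 0.5 m/s^2 of acceleration at 25 m/s cruise
t = throttle_for(0.5)
```

This is the benign use of simulation the paragraph describes: calibrating a well-understood physical mapping, as opposed to validating end-to-end decision-making, where the uncanny valley problem arises.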

First, some context. Simulation has arisen as a method to validate self-driving software as the autonomy stack has increasingly relied on deep-learning algorithms. These algorithms are massively complex. So complex that, given the volume of data the AV sensors provide, it's essentially impossible to discern why the systems made any particular decision. They're black boxes that even their developers don't really understand. (I've written elsewhere about the problem with deep learning.) Consequently, it's difficult to eliminate the possibility that they'll make a decision you don't like.

Why the Car Industry Needs to Take Lessons From Aviation to Make Autonomous Tech Safe

Before vehicles become fully autonomous, manufacturers are developing a range of driver-assistance tools to help us navigate the roads safely and effectively. These tools aren't always as straightforward as they sound, however, and various studies have illustrated how long it takes a human to regain safe control of a vehicle if they haven't been concentrating on the road.

While these sorts of challenges are still relatively unfamiliar in a motoring context, they are very familiar in aviation, where difficulties in navigating the human/machine interface have caused numerous incidents. It’s a lesson a recent paper suggests we are not heeding.