Think Beyond Cloud: Intelligent Edge Is the Future of Computing and AI

The average inference latency for cloud-based AI hovers around 1.5 seconds. The intelligent edge cuts that down to 10-15 milliseconds. This drastic reduction in latency alone makes a number of futuristic technologies, such as autonomous vehicles, possible.
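To make that latency gap concrete, here is a back-of-the-envelope sketch using the figures above; the 100 km/h vehicle speed is an illustrative assumption, not from the article.

```python
# How far a vehicle travels while waiting on an inference result.
# Latency figures come from the text above; the speed is illustrative.
speed_m_per_s = 100 * 1000 / 3600   # 100 km/h -> ~27.8 m/s
cloud_latency_s = 1.5               # typical cloud inference round trip
edge_latency_s = 0.015              # ~15 ms on the intelligent edge

cloud_distance = speed_m_per_s * cloud_latency_s
edge_distance = speed_m_per_s * edge_latency_s
print(f"cloud: {cloud_distance:.1f} m travelled before a decision")  # ~41.7 m
print(f"edge:  {edge_distance:.2f} m travelled before a decision")   # ~0.42 m
```

Roughly 40 metres of blind travel per decision is why cloud round trips alone cannot drive a car.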

The advent of cloud computing set off a colossal centralization fever that has swept up almost every business with a digital-first strategy. Even the world’s governments and public sector organizations are leveraging the advantages offered by cloud computing. Easy access to data, powerful analytical tools, and improved business agility have enabled organizations to make more “intelligent” and informed decisions than ever before.

Fog Computing is the Future

The term fog computing (or fogging) was coined by Cisco in 2014, so it is still relatively new to the general public. Fog and cloud computing are interconnected. In nature, fog sits closer to the earth than clouds; in the technological world it is the same: fog computing sits closer to end users, bringing cloud capabilities down to the ground.

The main difference between fog computing and cloud computing is that the cloud is a centralized system, while the fog is a distributed, decentralized infrastructure.
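That placement decision can be sketched as a toy rule: latency-critical work stays on a nearby fog node, while heavy, latency-tolerant jobs go to the central cloud. The threshold and task names below are made up for illustration.

```python
# Toy illustration of fog vs cloud placement. Thresholds and task
# names are hypothetical, not from any real scheduler.

def place(task: dict, fog_budget_ms: float = 50.0) -> str:
    """Route a task to the fog tier if its deadline is tight, else to the cloud."""
    return "fog" if task["deadline_ms"] <= fog_budget_ms else "cloud"

tasks = [
    {"name": "brake-decision", "deadline_ms": 15},
    {"name": "nightly-model-retrain", "deadline_ms": 3_600_000},
]
for t in tasks:
    print(t["name"], "->", place(t))
# brake-decision -> fog
# nightly-model-retrain -> cloud
```

Real fog platforms weigh bandwidth, privacy, and node load as well, but the tiered decision is the core idea.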

Unboxing the Most Amazing Edge AI Device Part 1 of 3 – NVIDIA Jetson Xavier NX

Fast, Intuitive, Powerful and Easy. 

This is the first in a series of articles on using the Jetson Xavier NX Developer Kit for Edge AI applications. The series will cover running TensorFlow, PyTorch, MXNet, and other frameworks. I will also show how to use this amazing device with Apache projects, including the FLaNK Stack of Apache Flink, Apache Kafka, Apache NiFi, Apache MXNet, and Apache NiFi - MiNiFi.
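Whatever the framework, the on-device pattern is similar: grab a frame, run the model, and check the result against a latency budget. A framework-agnostic sketch follows, with a stub function standing in for a real TensorFlow/PyTorch/MXNet model; the budget figure and all names are illustrative.

```python
import time

# Stub standing in for a real model loaded onto the Jetson's GPU.
def stub_model(frame: list) -> list:
    return [sum(frame) / len(frame)]  # placeholder "score"

def inference_loop(frames, model, budget_ms: float = 15.0):
    """Run the model on each frame and flag whether it met the latency budget."""
    results = []
    for frame in frames:
        start = time.perf_counter()
        scores = model(frame)
        elapsed_ms = (time.perf_counter() - start) * 1000
        results.append((scores, elapsed_ms <= budget_ms))
    return results

frames = [[0.1, 0.5, 0.9], [0.2, 0.2, 0.2]]
results = inference_loop(frames, stub_model)
for scores, in_budget in results:
    print(scores, "within budget" if in_budget else "over budget")
```

On the real device the stub is replaced by a GPU-resident model, but the frame-by-frame loop and the latency check stay the same.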

These are not words one would usually use to describe AI, deep learning, IoT, or edge devices. They are now. There is a new tool that turns what was incredibly slow and difficult into something you can easily get your hands on and develop with. Running multiple models simultaneously in containers at fast frame rates is not something I thought you could affordably do in robots and IoT devices. Now you can, and this will drive some amazingly smart robots, drones, self-driving machines, and applications that do not yet exist even as prototypes.