How Can New Deep Learning Initiatives Overcome Challenges in Robotics?

Deep Learning Problems in Robotics

When data scientists talk about deep learning, they're usually referring to image generation, detection, classification, and regression tasks. Yet one of the fields where deep learning and artificial intelligence are increasingly being applied is robotics, where they are helping to solve some of its most significant challenges. Deep learning for computer vision powers the pursuit of self-driving autonomous cars. Reinforcement learning powers initiatives like AlphaGo, where an agent acts in the world to maximize its rewards.

Deep learning has advanced considerably, but the ultimate goal still lies ahead of us: Artificial General Intelligence.

The Internet of Dogs: How to Build a $50 IoT Dog Collar That Locates Your Pet

I love side projects. They give me the opportunity to flex my creative muscles and tinker with tech like the Internet of Things (IoT) in new ways. Fortunately, I didn't have to look far for my next one; a common conundrum for pet owners fueled this concept for an IoT dog collar.

My dog had been out in the backyard for a while. When I decided it was time to bring him back into the house, I couldn't find him anywhere! After several minutes of searching and calling his name, I found him napping in the shade of a tree. If this scenario sounds all too familiar to you, then this post is for you!

Timeline and Review of OpenAI’s Robotic Hand Project

Solving Rubik’s Cubes in Pursuit of Generalized Robotic Manipulation

Rubik's Cube

An impossible scramble: no sequence of legal Rubik's Cube moves can solve this configuration without disassembling the cube. Cube state rendered with MagicCube.

Good robotics control is hard. Plain and simple as that. Don’t let the periodic videos from Boston Dynamics fool you: pulling off untethered back-flips and parkour are very rare skills for robots. In fact, as was readily apparent at the 2015 DARPA Robotics Challenge, falls and failures are the standard operating mode for state-of-the-art robots in the (somewhat fake) real world.

Robot Skills and Messaging APIs

Messaging services set the stage for humans to interact with programmable robots using the same devices we already use to talk with each other. That kind of interaction feels a little like magic, but it’s magic that anyone who codes can conjure. To show you what I mean, we need to look at Misty’s Photo Booth skill, which Misty demoed at Twilio SIGNAL 2019.

When this skill runs, you can send an SMS to ask your Misty robot to take your picture. When Misty gets your text, she stops what she’s doing, turns to look at you, and snaps your portrait with the camera in her visor. She then sends that picture right back to your phone via MMS.
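The flow described above can be sketched in a few lines. This is a hedged illustration only: the robot methods below (stop_current_activity, turn_toward, take_picture, send_mms) are invented stand-ins, not Misty's real SDK, and the message shape is assumed.

```python
# Hypothetical sketch of the Photo Booth flow. All robot methods and the
# message dict shape are invented stand-ins, not Misty's actual API.

def handle_incoming_sms(robot, message):
    """On a picture request: face the sender, snap a photo, reply via MMS."""
    if "picture" not in message["body"].lower():
        return None                           # ignore unrelated texts
    robot.stop_current_activity()             # she stops what she's doing
    robot.turn_toward(message["sender_direction"])
    photo = robot.take_picture()              # camera in the visor
    robot.send_mms(to=message["from"], image=photo)
    return photo

class FakeRobot:
    """Minimal stub so the sketch runs end to end."""
    def stop_current_activity(self): pass
    def turn_toward(self, direction): pass
    def take_picture(self): return b"jpeg-bytes"
    def send_mms(self, to, image): self.last_mms = (to, image)

msg = {"body": "Take my picture!", "from": "+15551234567",
       "sender_direction": "front"}
photo = handle_incoming_sms(FakeRobot(), msg)
```

In the real skill, the SMS/MMS legs run through a messaging service such as Twilio, and the robot side runs as a Misty skill; the stub above only captures the control flow.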

Simulation Testing’s Uncanny Valley Problem

No one wants to be hurt because they're inadvertently driving next to an unproven self-driving vehicle. However, the costs of validating self-driving vehicles on the roads are extraordinary. To mitigate this, most autonomous developers test their systems in simulation, that is, in virtual environments. Starsky uses limited low-fidelity simulation to gauge the effects of certain system inputs on truck behavior. Simulation helps us learn the proper force an actuator should exert on a steering mechanism to achieve a turn of the desired radius. The technique also helps us model the correct amount of throttle pressure needed to achieve a certain acceleration. But over-reliance on simulation can actually make the system less safe. To state the issue another way, heavy dependence on testing in virtual simulations has an uncanny valley problem.
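The throttle-to-acceleration calibration mentioned above can be illustrated with a toy example: run a (pretend) simulator over a range of throttle inputs, fit a simple model, and invert it to pick the input for a desired output. The simulator function and its coefficients below are entirely made up; this is not Starsky's actual pipeline.

```python
# Toy illustration of calibrating a control input from simulation runs.
# The "simulator" and its coefficients are invented for this sketch.

def simulate_acceleration(throttle):
    """Pretend low-fidelity simulator: roughly linear throttle response."""
    return 3.0 * throttle - 0.4   # invented gain and resistance offset

def calibrate(samples):
    """Least-squares fit of a line a = k*throttle + b over simulated runs."""
    n = len(samples)
    sx = sum(t for t, _ in samples)
    sy = sum(a for _, a in samples)
    sxx = sum(t * t for t, _ in samples)
    sxy = sum(t * a for t, a in samples)
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - k * sx) / n
    return k, b

# Sweep throttle from 0.1 to 0.9 in the simulator, then fit.
data = [(t / 10, simulate_acceleration(t / 10)) for t in range(1, 10)]
k, b = calibrate(data)

def throttle_for(target_accel):
    """Invert the fitted model: which throttle gives this acceleration?"""
    return (target_accel - b) / k
```

The uncanny valley critique in the article applies exactly here: the fit is only as trustworthy as the simulator behind it, and a low-fidelity model that looks plausible can still mislead.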

First, some context. Simulation has arisen as a method to validate self-driving software as the autonomy stack has increasingly relied on deep-learning algorithms. These algorithms are massively complex. So complex that, given the volume of data the AV sensors provide, it's essentially impossible to discern why the systems made any particular decision. They're black boxes that even their developers don't fully understand. (I've written elsewhere about the problem with deep learning.) Consequently, it's difficult to eliminate the possibility that they'll make a decision you don't like.

Exercising Misty’s Extensibility

Misty knows the importance of playing as hard as you work. That’s why she’s willing to risk a few grass stains in her Follow Ball skill.

In this skill, Misty employs the object recognition capabilities of a Pixy2 vision sensor to chase a soccer ball as it moves around the room. Here’s a high-level overview of how it works:

Area Man Very Concerned ‘Black Mirror’ Is Real

Kansas City, MO – Perusing his Facebook feed early Wednesday as he got ready to leave for work, area man Steven Cummings, 34, reportedly came across a video that shook him to his core. In it, shown above, 10 robotic dogs can be seen dragging a semi truck to what Cummings apparently believed was its imminent demise.

“Didn’t these same robots do this exact thing to some poor woman on the show Black Mirror?” said Cummings, a look of confused horror replacing his generally stoic demeanor. “I’m pretty sure it’s just a popular series on Netflix, but then I see things like this, and I just don’t know anymore.”

Is a Human Life Worth as Much as a Robotic Life?

You might think that it would be impossible for people to value a piece of hardware over human life, yet new research from Radboud University suggests that such circumstances may exist. Bizarrely, one of these circumstances might involve a perception that robots feel pain.

"It is known that military personnel may mourn a robot that is used to clear mines in the army. Funerals are organized for them. We wanted to investigate how far this empathy for robots extends, and what moral principles influence this behavior towards robots. Little research has been done in this area as of yet, " the authors explain.

Sphero Kickstarter Rewards Include Chance to Visit HQ for Hackathon. Oh, and a Robot

If you’re a self-professed (or otherwise acknowledged) robotics geek (like me), you don’t want to miss this Kickstarter project from Sphero – you know, the company that brought you the take-home version of Star Wars’ BB-8.

Launched just this morning and already up to more than 500 backers for a grand total of almost $135,000 pledged, the Kickstarter project hopes to make the company’s newest robot, RVR, a reality. (I’ve actually had to update those numbers twice, strike that, three times while attempting to post this article.)

Working With Face Recognition: Misty as a Security Guard: Part 1

Misty’s got a new job: security guard.

Misty loves to be helpful, and this job combines a few of her different capabilities into a single working skill. When the skill runs, Misty activates face recognition and scans the environment for strangers. If Misty sees an unknown face, she calls out to third-party APIs and prints a picture of the intruder. Face recognition, external HTTP calls, and image capture are the core capabilities that Misty uses for this job.
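The three capabilities named above combine into a simple event handler. This is a hedged sketch only: the face-event shape and the robot methods (take_picture, post_alert) are invented stand-ins, not Misty's real SDK or the third-party printing API.

```python
# Hypothetical sketch of the security-guard flow: on an unknown face,
# capture an image and call out to an external service. All names are
# stand-ins for illustration, not Misty's actual API.

def on_face_detected(robot, face):
    """Dispatch on a face-recognition event."""
    if face["label"] != "unknown":
        return "known"                  # recognized person: nothing to do
    image = robot.take_picture()        # image capture
    robot.post_alert(image)             # external HTTP call, e.g. a printer API
    return "alerted"

class FakeRobot:
    """Minimal stub so the sketch runs end to end."""
    alerts = 0
    def take_picture(self): return b"intruder.jpg"
    def post_alert(self, image): self.alerts += 1

bot = FakeRobot()
status = on_face_detected(bot, {"label": "unknown"})
```

In the real skill, this handler would be registered as a callback on the robot's face-recognition event stream rather than called directly.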

The State of AI in 2019: Top 5 Trends to Watch Out For

Artificial Intelligence will continue to be a hot topic of discussion in 2019. It’s getting attention from start-ups, enterprises, vendors, media, research firms, and government institutions, to name a few. They are all trying to improve their bottom lines using AI. The coming year is going to be a crucial year in the establishment of new AI applications and the growth of existing ones. Here’s a look at the top 5 trends I want to emphasize.

  1. AI Workspaces: AI and related technologies are increasingly being used across workplace environments. Although this will continue to grow, we are likely to witness standardization in terms of both functional and non-functional aspects of technology adoption. With standardization, we can expect an increase in intelligent interactions between humans and technology in a natural setting, leading to intelligent collaboration, more productivity, and efficiency in workspaces. The use of AI capabilities in the healthcare and financial industries is expected to take them to the next level.

Cloud Robotics: Part 2 of the Robot Development Platforms Series

If you’re a developer interested in robots, you may have heard the news — we’re experiencing a “moment” in cloud robotics services. In under four months (late September 2018 to early January 2019), four technology titans stepped forward to stake major claims in this space. This. Is. Not. An. Accident.

Cloud-based services — from AI to computer vision systems to fleet management — have the potential to make owning, managing, and coding robots much more efficient. Cloud robotics services may be an important step forward on the path to increased robot affordability and ease of development, especially for medium-sized businesses.

Robot Development Platforms Part 1: Frameworks and Libraries

New industrial, personal, enterprise, and toy robots are being announced pretty much daily. If you’re a developer looking to start coding for our shiny friends, it’s a lot to take in. You may find yourself plowing through links to entirely unfamiliar software stacks and wondering where to start.

There are no simple answers, but a quick overview may help. We can break down the complexity of robot development platforms in a few ways: