The Machines Are Biased: Fixing the Fatal Flaws That Plague Modern Algorithms

"We've arranged a society based on science and technology, in which nobody understands anything about science technology. And this combustible mixture of ignorance and power, sooner or later, is going to blow up in our faces." -Carl Sagan

While it applies to most of our technology, Sagan's warning hits hardest when we think about algorithms. We live in a digitally automated world, yet most of us lack a working understanding of how digital automation works.

How to Overcome AI Distrust With Explainability

AI has permeated nearly every aspect of our lives. We rely on it for accurate search results, conversational marketing, ad personalization, and even suggested medical treatments.

Despite this headway and AI's widespread use, there is considerable distrust of it. Much of that distrust and unease stems from popular media portrayals of AI, particularly movies in which robots threaten to overthrow the human race.
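
One practical route to explainability is showing which inputs a model actually relies on. Below is a minimal sketch of permutation feature importance using scikit-learn; the dataset and model are generic placeholders chosen for illustration, not anything specific to the systems discussed here.

```python
# Minimal sketch: permutation feature importance as a simple explainability tool.
# The dataset and model below are illustrative placeholders only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit an otherwise opaque model on a small public dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops;
# large drops mark features the model genuinely depends on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features in plain terms.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

Surfacing this kind of plain-language summary of what drives a prediction is one small, concrete way to replace "the algorithm decided" with an explanation a non-specialist can question.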

Working Next to Robots

Recent estimates suggest that expenditure on robotics is set to reach $115 billion this year before rising to over $210 billion by 2022. Whereas industrial robots were traditionally complex, heavyweight pieces of equipment that worked largely in isolation from their human "colleagues," it's increasingly common to see people and machines working side by side.

This is generating growing interest in the psychology and practicality of these interactions. For instance, a few years ago, researchers explored how people feel about having robots as colleagues.

Google Ends AI Ethics Board

Google announced yesterday that its week-old AI ethics advisory board is no more. The board almost immediately attracted criticism, much of it from Google employees, over one of its chosen members.

"It’s become clear that in the current environment, [the AI ethics board] can’t function as we wanted. So we’re ending the council and going back to the drawing board," SVP of global affairs Kent Walker wrote yesterday in an update to Google's original blog post about the board. "We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics."