In my previous post, I highlighted the difference between efficiency and effectiveness and how it maps onto artificial versus human intelligence. Doing things fast and with minimal waste is the domain of deterministic algorithms. But knowing whether we’re building the right thing (effectiveness) is our domain. It’s a slippery, subjective challenge, tied up with the confusing reality of trying to make human existence more comfortable with the help of software.
Today I want to talk about essential complexity. A fully autonomous AI programmer would need to be told exactly what we want and why, or it would have to be sufficiently attuned to our values to fill in the gaps. Sadly, we cannot yet trust AI to reliably connect the dots without human help and corrections. It’s not like telling an autonomous vehicle where you want to go: that has a very simple goal – and even there we’re nowhere near a failsafe implementation.