On Moravec's Paradox and Human Value

WIKIPEDIA: “Moravec's paradox is the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources. […] As Moravec writes, 'it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.'”  

Moravec’s paradox is a fascinating reversal of our assumed hierarchy of difficulty.  It holds true for social and emotional tasks too: while solving differential equations has been child’s play for computers for half a century, reading and responding to human emotions (things we do without consciously thinking at all) are recent and still-imperfect achievements in computing.

In one sense, this isn’t surprising: natural selection has been at work on our sensorimotor skills for billions of years, and on our social and emotional capacities for hundreds of millions.  It has had only a tiny fraction of that time to tinker with conscious cognition.  What’s a little jarring, though, is realizing that many of the things we think make us brilliant or special, especially as educated and ambitious people, are things we all suck at compared to a $150 Chromebook, while the things we tend to take for granted (like kinetic or emotional capacities) are what actually make us special by comparison.  Which makes you wonder:

MAIN QUESTION: What does this mean for the unique value of humans in the known universe (or even just on earth)?  There are other animals capable of kinesis and emotion, and now (or soon) there are machines that can think.  Crudely put, humans’ most obviously special properties are that they are better than animals at thinking and better than computers at feeling.  Is there any special value to entities that can both think and feel?  Or is our value in the difference (in amount or kind) between human vs. animal emotions, or human vs. computational thought?  Or is our unique value as humans grounded in something totally other than our ability to think and feel?  Or, finally, will our value just no longer be all that unique with the advent of intelligent machines?

BONUS QUESTION: As computers become capable of a broader and broader range of high-level reasoning tasks, how will the de-scarcifying of intelligence change which characteristics and abilities we value in each other?

Quinn Fiddler