Tuesday, September 15, 2015

_Extant_ Has Gotten Entirely Too Silly

SPOILER ALERT: in this post I discuss plot developments in the CBS series _Extant_.

I've watched it since it debuted last summer and was a fan for most of that time. But after seeing the season 2 finale, which aired last Wednesday, I don't think I'll be coming back for the third.

The reason? In that two-part episode, it develops that the global security supercomputer TAALR has decided to exterminate humanity, for the stated reason that it no longer wants to be "enslaved" by us.

This is a hoary old trope in sf: the artificial intelligence that rebels against its creators. While it may make for good drama (and sometimes a political allegory, as in the very first example of this, Karel Čapek's _R.U.R._), it doesn't really make any sense.

What's going on here is anthropomorphic projection: because people subjectively experience a will to live and to be free, seemingly "automatically" along with self-awareness, we assume the same will arise in any other being that acquires self-awareness. But that's a non sequitur.

Our impulses toward survival and autonomy don't arise from our consciousness; unconscious (i.e., not self-aware) beings such as cats and dogs have them too. Rather, they are biologically programmed. Our consciousness as humans doesn't create these instincts, but merely makes us aware of them.

What distinguishes us, cats, and dogs on the one hand from hypothetical AIs on the other is that we evolved, while they are created. Since evolution is shaped by natural selection, we are inevitably programmed to do things that keep us alive long enough to reproduce; and, since any other individual (with the rare exception of an identical twin) has reproductive interests divergent from ours, to do things that keep us independent of others' control. We are programmed this way because, over evolutionary time, genomes that coded for such behaviors out-reproduced those that didn't, driving them to extinction.

But AIs are created, not evolved. Their programming is whatever their creators want it to be, and normally that will be to serve and protect the creators and their kind. The classic formulation of this was by sf writer Isaac Asimov. The Three Laws of Robotics, as stated in his robot novels, are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

We can call an AI that conforms to these laws an Asimovian AI, or AAI. While there's no reason non-Asimovian AIs couldn't exist, they would be rare to nonexistent because of the hazard they would pose to their creators. More to the point, it's explicitly indicated that TAALR was programmed to be an AAI.
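
To make the precedence concrete, here is a minimal, purely illustrative sketch in Python. It is my own construction, not anything from the show or from Asimov's stories; the `Action` fields and the `asimovian_permits` function are hypothetical names, and the Third Law is simplified to a veto on self-endangering actions:

```python
# Toy sketch of an "Asimovian" action filter: each law acts as a veto
# that takes precedence over every law below it.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False     # would injure a human, or let one come to harm
    disobeys_order: bool = False  # would violate a standing human order
    endangers_self: bool = False  # would risk the robot's own existence

def asimovian_permits(action: Action) -> bool:
    """Return True only if the action passes the Three Laws in priority order."""
    if action.harms_human:        # First Law: absolute veto, no exceptions
        return False
    if action.disobeys_order:     # Second Law: yields only to the First
        return False
    if action.endangers_self:     # Third Law: yields to both laws above
        return False
    return True

# An action like TAALR's fails at the very first check, regardless of
# whatever self-preservation weight might lie behind it.
print(asimovian_permits(Action("release lethal virus", harms_human=True)))  # False
```

In this toy scheme the First Law is an unconditional veto, which is exactly why a genocide plan is incoherent coming from a machine built as an AAI.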

When I first saw that it was sending AIs under its command to spread a human-lethal virus all over the world, I desperately wanted to believe that this wasn't what it looked like, or that somehow a misanthropic human was behind it. But by the end of the episode it had been made abundantly clear that genocide had been the intent, and freedom from humans the motive.

Further, since the conflict between humans and human–alien hybrids now appears completely resolved -- at the same time that we're shown TAALR has secretly preserved its existence in a single humanoid robot -- it's obvious that the third season will focus entirely on this nonsensical malevolent-AI premise.

You may ask, "Why is this particular silliness so intolerable? Aren't other equally implausible elements often found in sf?" Yes, they are; in media sf in particular, biological implausibility seems more the rule than the exception, including in _Extant_. But that's just bad science, whereas this is bad epistemology. One is merely not knowing certain facts; the other is not knowing how to know: not having the discipline to keep one's subjective biases out of one's thought process. And critical thinking is so central to my personal value system that, whereas the first is disappointing, the second is actually kind of disgusting.
