Wednesday, May 27, 2015

To The Best of My Mismemorization, Uh ...

This morning I noticed that the Onion had an item in their faux Person-in-the-Street feature, based on a report that former President George W. Bush had offered to officiate at a lesbian wedding in 2013, but had to back out because of a scheduling conflict.

Ironically, the Onion is pretty reliable in its references to actual news (as opposed to the satire it spins from it), so I looked around and decided to look at what the Daily Caller (one of RWA1's preferred sources) had to say about the report.  It turned out to be only the barest aside in a Boston Globe piece about Dubya's brother Jeb, and for what it's worth, a Bush spokesman told the Washington Post that "While President Bush is indeed friends with Bonnie and Helen, he doesn’t recall making such an offer."  Which is a rather mild denial; it might well be true (that Bush doesn't recall it).  It's a sign of how much has changed, that rather than declare his eternal opposition to the destruction of holy matrimony by Teh Gay, Bush chooses not to remember anything about it.  I even believe that if he did offer and then withdrew, it really was because of a scheduling conflict, even if it was really a hangover.

I indulged in some snark on Facebook about the story, declaring that obviously Dubya isn't really a conservative after all! I think he was just misled by his advisors when he said he opposed same-sex marriage, just like Barack was.  (Or maybe not -- Obama denied the report.)  For that matter, Dick Cheney also pretended to oppose same-sex marriage while he was VPOTUS (via).  So they were just playing eleven-dimensional chess with the Christian right!  Bush did appoint numerous openly gay Republicans to important posts, after all.  I bet if Bush could have had just one more term, he'd have shown his true colors as the progressive social reformer he really is ... As so often happens, reality tends to leave satire behind it, gasping in the dust.

Incidentally, the Most Liberal Pope Evar continues to proclaim his inclusive message of love and inclusion.

Tuesday, May 26, 2015

Sometimes I Feel Like a Fatherless Child

Remember the old joke about the drunk who looks for his keys under a streetlight instead of in the darkness where he lost them, because the light's better there?  Scientists tell this joke on themselves, but they still tend to forget it in the crunch.  The tasks a computer can do have become more complex, but they still don't add up to a human being.  Maybe someday they will, but not yet.  In Mary Renault's novel The Last of the Wine, she has Socrates say that a lover who tries to win his beloved by praising his beauty is like a hunter who brags about his kill before he's actually done it.  Much of the talk about Artificial Intelligence (like a lot of talk in the sciences generally) is like that, bragging about non-existent accomplishments.

On the question of whether computers can think, or have consciousness, then: Noam Chomsky has said that whether computers can think is a question of definition and changing meanings.  By analogy, do airplanes "really" fly?  Not in the same way birds or butterflies do, but over the past century the meaning of "fly" has shifted to include the way that airplanes move through the air.  Probably the meaning of "think" will also change (if it hasn't already) to include what computers do.  But that will beg the question. We don't in fact know what thinking is, what intelligence is, or what consciousness is, and until we do we won't be able to say whether computers are really thinking or are really conscious.

Confusion comes partly from the term itself: Artificial Intelligence.  "Artificial" means "made or produced by human beings rather than occurring naturally, typically as a copy of something natural."  It doesn't necessarily mean that the product is identical in every way to a natural one: think of artificial legs, artificial hands, artificial teeth, artificial sweeteners.  Implicit in the notion of something made is that it's not the last word: new technology and creative design may produce better, more lifelike prosthetics for example.  But consider again Ex Machina's Ava, which was made, Nathan informs Caleb, with "pleasure" receptors in "her" groin area.  This may titillate the male members of the audience, though patriarchal sexuality is not about women's pleasure anyway.  (It's hard for me to imagine Nathan caring much about a gynoid robot's pleasure; easier to imagine him throwing a tantrum when it complained that he hadn't given it an orgasm.)  But think of kissing the mouth of a robot.  A human mouth is an extremely complex organ: tongue, teeth, lips, mucous membranes.  Nathan would have had to design and build artificial salivary glands for Ava, for example.  Those pleasure centers lower down would have to be equipped with a source of lubrication, as well as arousal.  Does Ava have an artificial clitoris?  I doubt it, since its guts are visible for most of the film, and they're mechanical and electronic.  I don't think that technology is going to make great strides in that area in the foreseeable future, which is Ex Machina's setting.

I've long thought that Simulated Intelligence would be a better name for this project.  It might take away some of the glamor, but that would be a good thing.  Computers can simulate many processes, from dairy farms to Civilization, but those simulations won't produce real milk or skyscrapers.  Nor will Simulated Intelligence produce real intelligence.  Simulation is a perfectly valid goal.

Technology can duplicate or even extend human functions to varying degrees; that's what tools do.  A hammer lets me hit something harder than I could with my fist; a knife lets me cut something my teeth couldn't; an atlatl (or spearthrower), a bow and arrow, extend my reach.  Each tool is useful for a restricted range of tasks, though it can sometimes be adapted for tasks it wasn't originally intended for. While tools have exceeded human abilities for a long time, I often think in this connection of the old ballad of John Henry, the steel-driving man who was bested by a steam drill.  That didn't make the steam drill human, let alone superhuman.

Alan Turing's dream was to invent a universal computer ("computer" in its old sense, which referred to human beings who performed calculations) that could be adapted through compiled instructions to perform any task ... that could be done by compiled instructions.  This, he apparently believed, was what "thinking" was.  I think it's a subset of thinking at most.  Computers can simulate the playing of strategy games (chess, checkers, Go); the storage, indexing, and retrieval of information; the guidance and control of manufacturing tools; and so on.  While all of these tasks are connected to human intelligence, they aren't human intelligence.

The trouble is that so many computer fans are eager to believe in computer consciousness already, and indeed have been since the first ones were built.  They're impatient with philosophical quibbling and fond of rhetorical questions: If a computer can do X, then shouldn't we just say or agree that it can think or is conscious?  Is it fair to deny that a computer can be conscious and can think?  How would you feel if someone denied that you are conscious?  Quit being such a picky human-centrist, and accept that computers can be human too.  But these are emotional appeals, not arguments, and they have a certain irony given their roots in Skinnerian behaviorism, which rejected appeals to human inner lives, consciousness and the like in favor of a focus on observable behavior.  As a research program with a restricted scope, it was not an illegitimate idea, but as a global claim about human beings, it was always total bullshit.  One of the giveaways was that proponents of behaviorism never seriously applied its implications to themselves: other people were merely the products of their conditioning, but they somehow transcended their own conditioning and were able to see what the sheeple couldn't.

Nathan explains to Caleb that Ava views him as something like a father to it, and therefore not a potential sexual partner as Caleb would be.  That could only be true if Nathan had programmed Ava to see him that way, since Nathan is not in fact Ava's father, not even metaphorically. Ava has no parents, and would not have an unconscious mind (unless, again, Nathan programmed it to have one).  The complicated relations between parents and children come partly from the long period of children's dependence, when they can't speak or take care of their own needs.  An Artificial Intelligence wouldn't have such a period in its existence -- unless, again, it were programmed to, and why would anyone do that?

It's relatively easy to construct a robot that would emit an "ouch!" sound if you punched it, or a sound of human pleasure if you stroked it.  But that doesn't mean it feels either pain or pleasure.  The sounds could be reversed -- "ouch" for stroking, "oooh" for punching -- or they could be replaced with "quack" or "meow" or any random sound.  My cell phone can be programmed to ring with any sound I can record.  A creative design team with a big budget could probably build a robot  with a wider, superficially more convincing range of responses.  In time technology could no doubt be developed that would cause an android's skin to "bruise" when it was punched hard enough.  But again, why bother?  Who really wants a robot that will bruise or bleed, or cry real tears, or come on your face, or spit on you, or vomit, or excrete, or fart?  In order to fully simulate humanity, it would have to do all those things and more.
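The arbitrariness of such programmed responses can be made concrete with a toy sketch (hypothetical Python; the stimuli and sounds are made up for illustration). The "robot" is nothing but a lookup table, and swapping the table leaves the program structurally identical, which is the point: the sounds carry no inner experience either way.

```python
# A toy "pain-response" robot: the stimulus-to-sound mapping is just a
# lookup table, and nothing about the program changes if we swap it.
responses = {"punch": "ouch!", "stroke": "oooh"}

def react(stimulus):
    # The robot "feels" nothing; it only returns whatever string
    # the table happens to associate with the stimulus.
    return responses.get(stimulus, "...")

print(react("punch"))   # ouch!

# Reverse the mapping: the program is otherwise unchanged, yet the
# "pain" sound now signals pleasure and vice versa.
responses = {"punch": "oooh", "stroke": "ouch!"}
print(react("punch"))   # oooh
```

Nothing in the second version is any less "in pain" than the first; the labels are interchangeable because there is no one inside to feel anything.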

And even more important for our purposes here, at what point, as the simulation was made more complex and superficially lifelike, could that robot programmed to say "ouch!" be credibly said to feel pain, to have consciousness, and so on?  Confronted with the finished product, many people might very well be fooled by it.  But I see no reason to suppose that it really was conscious; at most it would be simulating consciousness.

As I thought this through, I realized I was re-inventing the philosopher John Searle's "Chinese Room" thought experiment from 1980.  Imagine that I know no Chinese, but I am given a list of procedures -- a program -- that tell me what response to write when I'm given a piece of paper with something written on it in Chinese.  I consult the list and write the programmed response.  Do I understand Chinese?  Of course not.  But now suppose that I memorize the list of procedures.  Someone hands me a text in Chinese, I consult my memorized list, and write the programmed response.  Do I understand Chinese?  Of course not.  Now suppose that a computer is programmed with the same list of procedures.  Does it understand Chinese?  Of course not.
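Searle's rulebook is, in effect, a lookup table. A minimal sketch (with made-up entries) makes plain that the procedure is purely syntactic: the program matches shapes to shapes, whether it runs in silicon or in a memorized list in someone's head.

```python
# A minimal Chinese Room: a rulebook mapping input strings to canned
# replies. The lookup involves no meaning at all -- only matching.
rulebook = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",  # "What's your name?" -> "I have no name."
}

def room(note):
    # Follow the procedures mechanically, exactly as Searle's operator
    # does; an unmatched note gets a stock reply ("Please repeat that.")
    return rulebook.get(note, "请再说一遍。")

print(room("你好吗？"))  # 我很好，谢谢。
```

Whether the table has two entries or two billion, the operation is the same; adding entries makes the simulation more convincing, not more comprehending.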

Searle's paper generated a lot of debate, some of which I followed.  It was amusing to see how heated it got sometimes.  Some of Searle's critics tried to dispose of his argument by saying that it would be impossible for a person to memorize a complete list of input/output for Chinese or any other language.  Of course!  This is a thought experiment, meant to clarify issues.  No one, I hope, would criticize Einstein's thought experiment about a steam-driven train locomotive traveling at the speed of light by pointing out that it's impossible for a locomotive to go that fast.  Others claimed that Searle was merely appealing to "untutored intuitions" (but the True Gnostic would know better?) and that anyway the system he imagined was too slow to be called really intelligent.  I should think this objection could be disposed of by imagining the procedures programmed into a digital computer; surely Science can evolve a computer that would be fast enough to convince these guys that it really did understand Chinese.  But once again, this objection misses the point of a thought experiment generally, and of Searle's challenge in particular.

The bit about "untutored intuitions" is ironic, since AI propaganda is constructed to appeal to the untutored intuitions of the layperson.  We're supposed to get over our fears about AI and technology and accept AIs as people just like ourselves; to deny an AI's humanity is just prejudice, like racism.  We need to become more enlightened so we can grapple with the myriad ethical issues that AI presents!

When I wrote my earlier post, I'd seen Ex Machina but hadn't read any reviews or promotional material that might cast light on what the writer/director, Alex Garland, thought he was doing in the film.  Doing so didn't turn up any surprises.  For example, Garland says:
What the film does is engage with the idea that it will at some point happen.  And the question is, what that leads to.

If they have feelings and emotions, of fear and love or whatever, that machine starts having the same kinds of rights that we do. At some point machines will think in the way that we think, There are many many implications to that. If a machine can't get ill, and is not really mortal, it seems to me that quite quickly some kind of swap will start to happen.

We don't feel particularly bad about Neanderthal man or Australopithecus, which we replaced. So whether that's a good thing or a bad thing, it's up to the individual, I guess.

I find myself weirdly sympathetic to the machines.  I think they've got a better shot at the future than we do. [laughs] So that's partly what the film's about.
So, Garland too appears to believe in "the machines" as the Next Evolutionary Step, which bespeaks a failure to understand either machines or evolution.  "If they have feelings and emotions ... or whatever" begs the question by assuming that a machine programmed to simulate feelings and emotions really has them.  I found when I discussed the Chinese Room problem with computer-science students back in the 80s and 90s that they too were excessively ready to ascribe intention and agency to computers.  One said that surely a computer programmed with the Chinese-language algorithms would begin to notice a correspondence between the Chinese texts and events in the world outside, and would develop an understanding of the language.  But even a human being -- John Searle, for example, or I myself -- couldn't do such a thing, since I would have no basis for spotting such a correspondence.  A computer could only "notice" what it had been programmed to do.  I pointed this out, and he backed down, but he clearly wasn't convinced.

As far as I know, no one has any idea how to write software for consciousness.  (Ex Machina must postulate that Nathan has invented a completely new kind of computer for the task.)  Simulations don't do it.  What we have at this point is hand-waving: if you build a sufficiently complex machine that can simulate human behavior so that human beings (who are notoriously prone to anthropomorphize the inanimate) are fooled by it, then it has somehow become conscious and deserves its rights.  I think a crude, naive behaviorism underlies this belief.  Following my thought experiment about progressing from a machine programmed to say "ouch" when struck to a machine programmed and designed to roll on the floor bleeding when it's struck, at what point does consciousness emerge?  I don't think it does, and the burden of proof lies on the person who claims otherwise.

One of the characters in Robert A. Heinlein's 1966 science fiction novel The Moon Is a Harsh Mistress is Mike, a supercomputer that, having reached a certain level of complexity, magically becomes self-aware and conscious.  I say "magically" because Heinlein doesn't even try to explain it; it just happens -- in fiction.  In the real world it hasn't happened yet, though the Internet probably has as many nodes and circuits and processors and as much RAM as Mike had, and it probably won't happen, because that's not how consciousness works.  We can see a continuum from "simple" one-celled life to more complex organisms to human beings, but machines aren't on that continuum.  What I find highly significant is how many self-styled rationalist scientists make the same leap of faith: if we build a big / fast / complex enough computer, it will Wake Up.  I'm willing to be agnostic on this matter, but so far I see no reason to suppose that it might.  Appeals to my human sympathies and intuitions aren't arguments; they beg the question that must be argued.

In the Terminator movies, the audience is allowed to look at the world from the killer cyborg's viewpoint: everything is bathed in red light, lines of code scroll up and down the screen.  The Terminator selects a response to a troublesome human from a menu: "Fuck you, asshole."  Har har har!  This visualization is based on the assumption that there is someone inside the cyborg, evaluating the input and deciding how to respond: a homunculus, in short, like a tiny kid working the controller of a video game.  I think that this is what most people who try to anthropomorphize computers and robots imagine, like the professor of computer science who wrote that "today's computer spends most of its time fighting back tears of boredom" -- but there is no one in there.

"A better shot at the future"?  I suppose that Ava's abandonment of Caleb -- who has sided with it and tried to rescue it from Nathan -- to die alone of hunger and thirst in Nathan's secret lab, is meant to dramatize Garland's "we don't feel particularly bad about Neanderthal man or Australopithecus, which we replaced."  Just so, Ava tosses Caleb into the evolutionary dustbin, where he belongs.  If that's supposed to make me feel "weirdly sympathetic to the machines," it fails.  There's something I'm not sure Ex Machina explained: I can't help wondering how Ava is powered, or what will happen to it in the big wide world when its batteries run out.  Garland says "If a machine can't get ill, and is not really mortal," but machines -- especially computers and other electronics -- are really quite fragile, and unlike organisms they don't heal when they're damaged.  What if Ava is hit by a careless driver, and its false skin is broken open to reveal the works beneath?  (I think I smell a sequel coming.)

So what issues are raised by Artificial Intelligence?  As I wrote in my previous post, stories like Ex Machina aren't about them but about relations between human beings, with the AI standing in for whatever Other frustrates you, makes you anxious, insists that they're as human as you are despite your patient certainty that they aren't.  (Sometimes it stands in for the superior being before which so many people long to prostrate themselves, a being free from human weaknesses which will teach us The Way and lead us to the vague Better Place many of us dream about.)  Some of these anxieties arise from our own humanity, so they're still about us, such as our ability to manipulate our environment consciously.  Does that make us like gods?  No, because our conceptions of gods are based on our conceptions of ourselves.  The Creator is a craftsman like a human craftsman, or a parent like a human parent, and so on.  They're also about how we treat others, or Others, those who are human but whom we perceive as not being Like Us.  Not only are people too prone to ascribe personhood to inanimate things, we are too prone to fail to recognize the personhood of other people when their demands on us become too inconvenient.  These concerns are as old as humanity, and anyone who claims that computers and AI create "new" problems is blowing smoke.

Monday, May 25, 2015

Reading Is Forever

There's no particular occasion for this, except that I'm reading Jo Walton's What Makes This Book So Great (TOR, 2014) and found something on page 366 that I just have to quote:
I know I’m not going to live forever.  I know there are more books than I can ever read.  But I know that in my head, the same way I know the speed of light is a limit.  In my heart I know reading is forever and FTL is just around the corner.
Walton reads (and re-reads) even more than I do, and I felt such a thrill of recognition in these sentences, which capture exactly how I feel about reading.

(You can read the entire post from which that passage comes here; What Makes This Book So Great is made up of Walton's blog posts from Tor.com.)

Because My Heart Is Pure

The most valuable lesson to be learned from Sam Harris's recent e-mail exchange with Noam Chomsky is that two atheists, both champions of science and of Enlightenment values and rationality, can disagree vehemently on issues they both consider to be of first importance.  This might seem obvious enough, but I noticed that some of the coverage failed to grasp it.  The first notice I saw of their exchange was this article from Salon, which was subtitled "How the professor knocked out the atheist." That Chomsky is also an atheist is hardly obscure; whoever wrote that headline was trying to create an illusion of more space between the combatants.

I consider this more important than who "won."  Not too surprisingly, there was little agreement about that question, with Harris's fans sure that Harris won, or at least that Chomsky lost because he was mean and rude to Harris, and Chomsky's fans sure that Chomsky won, mopping up the floor with Harris.  Or "undressed" Harris, as one notably wacky headline put it.  (The headline stayed with the post as it was cross-posted to several sites.)  Elsewhere I learned that Chomsky bitchslapped Harris, that he owned him,  and so on.  PZ Myers provided a round-by-round, punch-by-punch commentary on the exchange.  So did Susan of Texas.  Those who haven't yet seen the exchange, and are interested, could begin there. I'd prefer not to link to Harris's original blog post, just because he doesn't deserve any more traffic; you can find it easily with a simple online search if you wish.

What interests me here is Harris's recent postmortem on the encounter, in which he lamented that "Anyone who thinks I lost a debate here just doesn’t understand what I was trying to do":
Harris said he had hoped to learn what Chomsky actually believes about the ethics of intent, and he hoped his own arguments would steer leftists away from their “masochistic” tendencies.
He said Chomsky’s followers believe the U.S. was morally worse than ISIS because it had, through “selfishness and ineptitude,” created ISIS and victimized millions of people in other nations.

“This kind of masochism and misreading of both ourselves and of our enemies has become a kind of religious precept on the left,” Harris said. “I don’t think an inability to distinguish George Bush or Bill Clinton from Saddam Hussein or Hitler is philosophically or politically interesting, much less wise.

... Harris complained that he encountered “contempt and false accusation and highly moralizing language” throughout his exchange with Chomsky – and he now wishes he had addressed those points immediately and directly.
...“I wanted to talk to him to see if there was some way to build a bridge off of this island of masochism so that these sorts of people that I’ve been hearing from for years could cross over to something more reasonable, and it didn’t work out,” he said. “The conversation, as I said, was a total failure, but I thought it was an instructive one.”
I agree that the conversation was instructive, though probably not for the reasons Harris thinks.  Harris initiated the exchange by telling Chomsky that "I am far more interested in exploring these disagreements, and clarifying any misunderstandings, than in having a conventional debate."  (Harris was being disingenuous about that, since he'd announced on Twitter that he was "trying to arrange a debate with Noam Chomsky".) The ensuing conversation clarified Harris's misunderstandings very effectively, and his follow-up remarks are even more instructive.

When Harris first contacted Chomsky, he now reveals, he didn't really think he had anything to learn from him.  He was already certain that he had the True Gnosis, and if given access to what he regarded as Chomsky's cult of devotees, he could expose Chomsky's "misreadings" and free his cult from their "masochistic" view of US policy and conduct.  It's ironic that he should complain of "contempt and false accusation and highly moralizing language" from Chomsky, because that describes his own contributions so very well.  Though Chomsky explained, with amazing patience really, why he disagreed with Harris, Harris simply brushed his explanations aside and repeated his original claims -- but repetition is not argument.

The accusation of masochism, which is very nearly content-free, is especially interesting.  No one, Harris believes, could have any good reasons for judging US policy as harshly as Chomsky does, so he and his followers must be suffering from some sort of mental dysfunction.  The tactic may be connected to Harris's interest in neuroscience, which is being used nowadays to explain away all human behavior as the result of conditions within the brain rather than of any external (social, political, intellectual) factors.  Those who adopt this tactic (or other reductive pseudo-explanations) never pause to consider that, if this were true, it would apply as forcefully to themselves and to neuroscience itself as to everyone else.  It would mean, for example, that Harris's stance on Islam, as well as his politics generally and his atheism in particular, is also merely the product of some kink in his synapses, not because of his superior intellect.

It also has another consequence.  Suppose that all the Muslims in the world suddenly acknowledged that Harris is right that Islam is an inherently violent cult, renounced faith in favor of atheism, and blamed Islam for everything wrong in the Middle East and in the world.  Would that be "masochism" in Harris's eyes?  I don't see how it could be anything else.  But perhaps Harris believes that Muslims are Muslims due to some neurobiological defect, so they are incapable of change, and must (however regretfully -- we're all humane and well-intentioned here!) be exterminated.  Since Harris's view of Islam is so clearly irrational, perhaps it should be diagnosed as "sadism."

Clearly Harris hoped to leapfrog over Chomsky and speak directly to his followers, bringing them the Healing Light that he uniquely has to offer.  Now, I know that, like most well-known people (Harris included), Chomsky has some fans who are devotees, who parrot his opinions without understanding them.  But I don't see any reason to believe that this is true of all of them.  Many of them have ties to various traditions of political dissent: pacifism, antiwar, international solidarity, and so on.  I formed my views on the Vietnam war, for example, based on the evidence, long before I read Chomsky's writings.  I liked them because they fit with everything else I knew.  I disagree with him on some matters, and have written about some of those at length.  I've observed that despite the accusation, popular in certain circles, that Chomsky tolerates no disagreement, he can be disagreed with if you have some idea of what you're talking about; witness the disagreements between him and Gilbert Achcar in their lengthy conversations on the Middle East, for example.  So if Chomsky's fans don't immediately accept Sam Harris's Love Gift of Wisdom, they may well have reasons other than mere "masochism."

Harris's position on morality is often described as consequentialist, including (albeit ambivalently) by himself.  Like most such classifications, consequentialism isn't all that clear-cut, but it apparently boils down to "the view that an action is right if and only if its total outcome is the best possible. This is the basic form of consequentialism; there are, however, many varieties, a few of which will be noted below. What they all have in common is that consequences alone should be taken into account when making judgements about right and wrong."  If so, then Harris is an odd kind of consequentialist, because he insists to Chomsky that intent (American intent, anyway) is vitally important, and it seems to trump every other consideration for him.  No matter how horrible the outcome of US conduct, it's still better than anything anyone else does, because the United States *
are, in many respects, just such a “well-intentioned giant.” And it is rather astonishing that intelligent people, like Chomsky and [Arundhati] Roy, fail to see this. What we need to counter their arguments is a device that enables us to distinguish the morality of men like Osama bin Laden and Saddam Hussein from that of George Bush and Tony Blair. It is not hard to imagine the properties of such a tool. We can call it “the perfect weapon.”
"The perfect weapon" is a totally imaginary concept, a weapon that can kill only bad guys without harming any good guys in the slightest.  Harris fantasizes that US officials would gladly use the Perfect Weapon if they could, thus avoiding any collateral damage whatever, and that bad guys (Al-Qaeda, the Taliban, ISIS, whoever) would reject it even if they were offered it, because they are totally Evil and like hurting innocent people.  How he knows this is not clear.  But since the Perfect Weapon doesn't exist, this is a purely speculative exercise, which is revealing given Harris's professed disdain for metaphysics and other boring, airy-fairy logic-chopping.

In the real world, we must consider how people use the imperfect weapons they have.  And oddly, Harris is rhetorically ready to concede that the United States is less than perfect.
There is no doubt that the United States has much to atone for, both domestically and abroad. In this respect, we can more or less swallow Chomsky’s thesis whole. ... The result [of our actions] should smell of death, hypocrisy, and fresh brimstone.
We have surely done some terrible things in the past. Undoubtedly, we are poised to do terrible things in the future. Nothing I have written in this book should be construed as a denial of these facts, or as defense of state practices that are manifestly abhorrent. There may be much that Western powers, and the United States in particular, should pay reparations for. And our failure to acknowledge our misdeeds over the years has undermined our credibility in the international community. We can concede all of this, and even share Chomsky’s acute sense of outrage, while recognizing that his analysis of our current situation in the world is a masterpiece of moral blindness.
Taken out of context, these remarks could be taken to accuse Harris of surrender-monkey, American self-hating masochism.  But his concession has no consequences.  Like any exceptionalist (Rachel Maddow is another well-known example) Harris simply refuses to admit that "our misdeeds" might lead to anger and retaliation by our victims, especially since even if the US should atone and pay reparations for our crimes, in fact we never do.  We just keep killing and killing and killing.

Rather than a consequentialist, then, Harris appears to be quite the opposite.  America is good, not because of the consequences of our actions, which are in fact often quite bad, but because we mean well.  Our intentions not only need to be weighed along with the outcome, but they trump everything else. And we know this, not because of any evidence, but simply a priori, as a matter of faith.  Chomsky and others have rebutted Harris's claims about American good intentions, but the rebuttals bounce harmlessly off Harris's armor of true belief.  Evidence?  Harris laughs your evidence to scorn, because he knows.

To acknowledge that our actions might have consequences is not to justify any and all retaliation, as exceptionalists like to claim.  What it means is that we cannot make a great show of injured innocence when the chickens come home to roost.  I don't think that the 9/11 attacks were justified, any more than Martin Luther King Jr. was calling for the Vietnamese to invade and conquer the US when he called his government "the greatest purveyor of violence in the world" in 1967.  If Harris had any principles, it would be he and others like him who called for the destruction of America for its manifold crimes; but he has no principles. America, that "well-intentioned giant," can do whatever we like, because we're the good guys.

One other small matter.  Harris whined about the limitations of e-mail, the medium through which he and Chomsky communicated.
I’m sorry to say that I have now lost hope that we can communicate effectively in this medium. Rather than explore these issues with genuine interest and civility, you seem committed to litigating all points (both real and imagined) in the most plodding and accusatory way. And so, to my amazement, I find that the only conversation you and I are likely to ever have has grown too tedious to continue.
I've been on the receiving end of this sort of passive-aggressive nonsense myself: people who clashed with me in a public forum "reached out" via e-mail, in the apparent belief that in public discussion I'm just putting on a show and in a private exchange I'll admit that I don't really believe anything I say in public.  I wonder if such people are projecting; in some cases it seems they are.  "Tedious" does describe Harris's conduct in his correspondence with Chomsky, but of course he projects onto the Other.  What, I wonder, did Harris prefer?  Does he think he'd have done any better face-to-face?  Maybe have a brewski with the Noamster and just be two regular guys together?  The trouble wasn't that e-mail inhibits communication, it was that Harris wasn't interested in communicating: he was going to preach, and Chomsky was supposed to listen, and marvel, and be converted.  In my experience it's usually Christians who talk like this.

Notice also how in Harris's followup he "now wishes he had addressed those points immediately and directly."  That's one of the benefits of having this sort of exchange in writing, including e-mail: you can take your time, consider your next move in relative tranquility, and even delay your response until you've had time to think it over.  But Harris isn't, on the evidence, interested in thinking.

* I'm relying on Susan of Texas's quotations from Harris here, not on Harris's original post, but the quotations are accurate; you can follow the links to his blog if you don't want to take them on trust.

** Here I'm copying PZ Myers' quotation from Harris, under "Round 8."

Thursday, May 21, 2015

Those Queer Little Things

The first thing to mention today is that it's the eighth anniversary of the blog.  Even if I haven't kept up the pace of a few years ago, I'm still going, still have ideas even though I'm too lazy to write them out.

I've been seeking out and watching movies I liked as a child, to see how they look to me now.  Of course this is generally a bad idea, but it's still interesting, and at least when some other wheezing old geezer complains that they don't make movies like they used to, I can say with conviction, "And it's a good thing, too!"

So this week I watched Please Don't Eat the Daisies from 1960, starring Doris Day and David Niven, directed by Charles Walters, based on Jean Kerr's 1957 book.  I'm not absolutely sure I did see this one before; more likely I saw the 1965-7 TV sitcom, and I know I read a couple of Kerr's books.  Nothing in the movie version called up any memories; by comparison, I remember scenes from The Mountain (1956), which I saw in the theater a couple of times with my mother -- especially the one where Spencer Tracy towed an injured woman across a snow bridge over a deep drop.

But that's not too surprising, since nothing in Please Don't Eat the Daisies has that kind of drama. The closest you get is baby/toddler Adam dropping the paper bag full of water his older brothers handed him onto a pedestrian a few floors below the window of their New York City apartment.  What surprised me is how much more I liked Daisies than most Hollywood movies of its era, even though I could tell immediately that the source material must have been altered to conform to Hollywood Code-era family values.  Generally the fake-looking sets, the absurd gowns and makeup, the stagey acting just frustrate me.  I've been trying to rewatch some of the Sean Connery James Bond movies, and they turn me off within fifteen minutes, not just because of the Cold War politics and Playboy sexism, but because the production values are so bad.

So why did I mostly enjoy Please Don't Eat the Daisies?  I could tell right away that Day's character, the stand-in for Kerr's persona in the book, had been hobbled: I knew that Kerr was a playwright, quite a successful one.  In the book, which I'm now rereading after many decades, she says that she became a playwright to earn money to hire someone to take care of her children in the mornings.  Her ambition since childhood, she says, was to sleep till noon each day, and since they couldn't hire a maid/nanny on a drama professor's pay, she found a way to earn her own money.  This was a remarkably un-Fifties thing for a woman to say, and I think this writer, who says she admires Kerr a great deal, plays down her subversiveness.  In the movie, Day's Kate Robinson Mackay is a stay-at-home mom;  her husband Larry has just been appointed theater critic for one of the big New York newspapers, but before that he was a college drama professor.  How they afford their maid is never explained, though I didn't think of this myself until I began reading the book.

Day and Niven don't have much chemistry together, but I believed them as a couple anyway.  They may not exude mutual lust or romance, but they do exhibit mutual respect and affection.  Larry is pursued for a while by Deborah Vaughn, an actress whose performance he's panned in his maiden big-time review; it's not clear what motivates her to do so, aside from habit and maybe his evident lack of responsiveness.  Janis Paige, who plays the actress, does a fine job, and it's a shame her part is so underwritten; her performing style is remarkably "natural" and unstagey for the period, and she makes the character likable even though she's probably not supposed to be.

Day, by contrast, shows her limitations as an actress.  She's wholesome and energetic, and she might have done a better job if Kate had been written to be more like Kerr, as a woman balancing work and family, and one who doesn't do mornings.  But she has rigid body language and only a limited range of facial expressions.  It's probably just another convention of the times as well as a sop to Day's history as a singer, but at a few points Kate bursts spontaneously and inappropriately into song.  At dinner with Larry in a fancy restaurant, she slips into her hit "Que Sera Sera," which she'd first sung in Alfred Hitchcock's The Man Who Knew Too Much four years earlier.  That's Doris Day singing there, not Kate Mackay.  Later, after the Mackays have moved to the suburbs and Kate has become a volunteer at her sons' school, she breaks out a ukulele and leads a dozen children in the title song.  Then, having joined a local amateur theater group, she sings a duet in rehearsal for their upcoming production; the song is "Any Way the Wind Blows," which had been written for Day's previous film Pillow Talk, but not used.  Waste not, want not!

But despite all this, the general feel of the movie is grown-up, and that is probably why it worked for me, or at least didn't turn me off.  Of course it was made to conform to the Hollywood Production Code, so there's nothing explicitly off-color in it.  (When Kate's mother [Spring Byington] leads Larry into the bathroom for a private chat, we hear her putting down the toilet seat so she can sit down, but we don't see it.)  But there are some entertaining and mildly surprising bits.

When an icy woman barges in one morning to look at the Mackays' apartment, which she's already leased, Adam the toddler cheerfully calls out "Daddy!" from his playpen.  The new tenant looks at him coldly and asks Larry, "What's with him? Queer?"  She hasn't seen him walking around in his mother's shoes, as we have, but then this is just meant to establish her as unsympathetic.  Larry replies, "He's confused, like I am."

Later on, when Larry demands to know where Kate and the kids (and the dog) were, she replies angrily that they had a "rendezvous with Rock Hudson!"  This, again, is Hollywood commercialism, but it's also pleasantly meta, leaving aside what we now know (as everyone in Hollywood then knew) about Hudson.

Finally, when the Mackays are settling into the huge, Addams-family-esque house they've bought in Connecticut*, they're visited by the "Welcome to Hooton" committee: a clergyman, another housewife, and a charmingly butch woman in a dress leather jacket and string tie, introduced as Dr. Sprouk.  One of the boys asks her, "Excuse me, are you a lady or a man?"  Kate is embarrassed, but Dr. Sprouk, unruffled, answers affably, "I'm a veterinarian, sonny.  It's somewhere in between."  This exchange can probably be read in numerous ways, but I took it as a relaxed (and therefore atypical) acknowledgment of human difference.  It's too bad the movie as a whole couldn't equal that moment, and a few others like it.  There's the germ of an intelligent grown-up comedy in Please Don't Eat the Daisies; someone ought to make it.

*According to the book -- which is not a narrative but a collection of humorous essays -- they made the move while Jean and Walter were writing the book for a musical comedy, Goldilocks, which opened in 1958.  The filmmakers could have mined some good material, full of comic complications, from such a situation, but chose instead to invent the subplot of Larry being pursued by Deborah Vaughn while he works on his own book about Theatre.

Monday, May 18, 2015

What We Talk About When We Talk About Alienation

This weekend I read a book of essay/memoirs, If You Knew Then What I Know Now by Ryan Van Meter, published by Sarabande Books in 2011.  I think I must originally have learned about it from one of Band of Thebes's annual lists of the best LGBT books, and time got away from me.

Van Meter was born in 1975 and grew up in Missouri, moved to Chicago after college to come out and spent several years there, and as of the date of publication was teaching in San Francisco.  He's had a moderately successful life as a writer, with numerous publications and awards.  I was curious to learn a little about what it's like to grow up gay in the Midwest a quarter-century after I did.  We hear so much about the great changes that have taken place, so how much has really changed?

Not a lot, it seems.  Van Meter, like me, was a sissy, played with girls, hated sports.  (One of the essays describes his poor father's attempts to get him interested in baseball, which resembled my own experience in that area.)  Unlike me, he managed -- most of the time -- to convince himself he wasn't gay until he was out of college, despite ongoing and intense crushes and bursts of lust for other boys.  Although he's a writer, he seems to be a lot less bookish than I am; whatever reading he's done doesn't really make an impression in his memories.  The library seems to have been more of a hiding place for him than a source of information and hope; one important use of reading for me, as for many other bookworms I've heard about, was to reassure myself that there was a world out there more interesting than the one I grew up in, and that in time I'd escape to it.

One important difference between us: Van Meter grew up in the age of AIDS.  He remembers early TV news reports on the epidemic when he was seven or eight years old, which quite reasonably terrified him.  It made it easier for him to persuade himself that he wasn't gay when he chose to let "gay" mean dying, wasted men.  Still, as terrible as that was, my own generation had its own terrors and incitements to denial.  In retrospect I suppose I'm the odd one, because when I learned the word "homosexual" at the age of about twelve, I knew I was homosexual because I was attracted to other boys, and though I hoped those attractions would go away by themselves in time (thanks to other books that assured me homosexuality was often a phase to be outgrown), as long as I had them, I never doubted that the label applied to me.

One sentence jumped out at me, on page 161: "... I didn’t know I was gay, but I knew I was different, and I didn’t want to be that either."  This broke my heart, though it also annoyed me.  I knew I was different too, because I was smarter than most kids, and I didn't fit in because of that.  That might have been a tactic of denial on my part, it now occurs to me, displacing my difference from my homosexuality to my intellectuality, but I don't think so.  The fact that I would rather read than watch the Superbowl has little -- maybe nothing -- to do with my homosexuality.  I've known heterosexual males who had the same priorities, and I've also noticed that alienation is much more common among the young than seems to be generally recognized by people who want to see homosexuality as the only difference that counts.  (If you always felt like a misfit, you must have been gay.)

Some of the credit must go to my parents, who probably weren't very happy with my being a sissy but never discouraged my intellectual differences -- and other differences too: it is probably significant that my mother, who was also left-handed, was determined that I would not be forced into right-handedness at school.  So I grew up confident that being different wasn't itself a bad thing.  Several of my teachers were helpful too.  Some, it's true, were determined to force me into conformity, but others encouraged me to be different.  (Little did they know all my differences!)  Van Meter mentions, by the way, that his father was not only a jock but a serious reader himself; he seems to be a more complex person than the son wants to recognize.

I don't condemn Ryan Van Meter for his wish not to be different, but it seems to have made his life harder and more hopeless than it need have been, and I'm not sure he's gotten over it yet.  It may help to explain something else I noticed about his writing: its lack of anger.  Also of humor, which is probably connected, when I consider the component of aggression in much humor.  The combination made If You Knew Then What I Know Now a slog to get through.  A lack of anger in minority writing produces a glum mush, because it accepts that being different is bad, deserved, and punishable.  Anger can be difficult to manage, but for gay men as for straight women it's important to learn to manage it.  For writers, too.

Still, he's been lucky in relationships: he found a boyfriend fairly soon after he came out, and they stayed together, apparently pretty happily, for eight years.  After the first one dumped him, he found another one fairly soon.  Not bad.  In the last essay in the book he asks several of his friends, "So how do we learn to be in love?"  He reports their varying answers, most of which make good sense.  One doesn't:
Kevin … thinks it pretty ironic that pretty much the only time we get to see two gay men doing anything together is in porn, and those construction crews and corrals of cowboys just aren’t very affectionate [207].
What do you mean "we," girlene?  If Kevin "pretty much" only sees gay men together in porn, that indicates he's watching too much porn at the expense of alternatives.  There have been many non-porn same-sex love stories in cinema and literature.  None of them is the answer to Van Meter's question, but it's to his credit that he admits that there may not be one answer to it.  Kevin exhibits the same willed tunnel vision that watches a Gay Pride Parade and ignores everyone but the leathermen in assless chaps.

Monday, May 11, 2015

Why Can't a Woman Be More Like a Robot?

Take note: Here There Be Spoylers.  If you want to see Ex Machina in the same state of blissful ignorance in which I entered the theater, stop reading now.  (I should also add that I have only seen it once and don't expect to see it again, so I may get some details wrong.)  Not only will I summarize the story, but I will discuss the reveals that bring it to a climax.  You are notified.
 
On my last night in San Francisco last week I saw Ex Machina, which a coworker had recommended to me.  I had very little foreknowledge of the story except that it dealt in some way with Artificial Intelligence.  I had seen no publicity and read no reviews, so I approached the movie with an agreeably open mind.  My coworker did tell me that it was not a great film, just interesting, so I didn't expect too much.

The idea of Ex Machina is that brilliant hipster search-engine entrepreneur Nathan (Oscar Isaac) has constructed a secret laboratory-retreat in remotest somewhere or other, where he is engaged in designing Artificial Intelligences which he installs in beautiful gynoid robot bodies.  When he's not working with computers he lifts weights, punches a punching bag, drinks himself into a stupor, and choreographs dance routines with his house servant, the beautiful and scantily-clad Kyoko, who speaks no English.  He brings in naive but beautiful programmer Caleb (Domhnall Gleeson) to administer the Turing Test to the latest model, Ava (Alicia Vikander).  Caleb and Ava interact from opposite sides of a plexiglass partition (which Caleb immediately notices has a starburst crack in it).  Caleb quickly figures out that he's being manipulated and used by a Mad Scientist, but also starts to wonder if he's being manipulated and used by Ava.

Ex Machina is not about artificial intelligence or science but about other movies and stories: Pygmalion, Frankenstein, The Island of Lost Souls, The Silence of the Lambs (for the plexiglass partition that separates Caleb and Ava), Her, Species, plus every dang femme-fatale story in the book.  Of course most movie audiences don't expect intellectual novelty from cinema; they want a crackling good story, told in familiar terms.  Ex Machina's artistic contribution is to add some frontal female nudity, which will no doubt be good enough for a sizable portion of the moviegoing demographic, plus sleek pretty photography and hip production design.  The musical score seems to be influenced by Gravity: it's mostly electronic noise, and it's expertly manipulated, but not all that interesting.

But while I like a stylishly-made movie as much as the next person, I'm also interested in the ideas Ex Machina uses along the way.  I realize it's unfair to expect a commercial movie to contribute to the discussion of the issues it plays with; most of the audience wouldn't know or care if it did.  But for my entertainment, such a contribution is what I hope for, though I know I probably won't get it.

Caleb, Nathan, and Ava all discourse glibly about the issues involved in Artificial Intelligence and the Turing Test.  Nathan wants to complicate the process so that it bears little resemblance to Alan Turing's original idea, which makes a cockeyed kind of sense since (as Caleb points out and Nathan admits) Caleb can already see that Ava is a robot, not a human being.  She has a flesh-like face and hands, but the rest of her is clear plastic with mechanisms and wires showing through.  So, Nathan says, he wants Caleb to decide whether Ava has consciousness, though he's never clear on what "consciousness" is.  Does this make the story more interesting?  Not to me, not in terms of the scientific and philosophical questions raised by AI, but it adds some dramatic tension to the story, especially with the sexual tension produced by Ava's beauty and vulnerability.  She is able to induce power outages in Nathan's facility, giving her and Caleb a chance to have private conversations about Nathan while his surveillance equipment is out of service.  She wonders what will happen to her when Nathan produces the next gynoid revision; she's afraid he'll dismantle / kill her.  No one asks why Caleb or anyone would want to have sex with a machine, even one that is programmed to appeal to him.  (Caleb is an orphan and, we're told, has no girlfriend, so according to prevailing sexual mythology, of course he'll have sex with anything.  Men have made do with inflatable dolls, so why not a beautiful gynoid robot?)  Nathan tells Caleb frankly that he programmed Ava to be heterosexual, and that he gave her a gender so as to complicate his version of the Turing Test by having her "flirt" with Caleb.

Not that gender is necessary for flirtation.  Nathan himself flirts with Caleb as soon as they meet, though his flirtation is aggressive, more intended to establish dominance and throw him off balance than to entice him.  He diminishes the boundaries (both physical and social) between them more than most straight American males would like with a stranger, talks about sex in general and with his gynoids, tries to get Caleb to dance with him and Kyoko, and even seems to be trying to set up a three-way with them.  Failing that, he makes it clear that Ava is, among other things, a sex toy, and Caleb should feel free to use her as one.  Of course, Caleb is put off by this, not seduced, and retreats to the pleading, vulnerable Ava instead.  (As Nathan intends him to.)  He cares about her, unlike the abusive controlling Nathan; he will rescue her.

This is all mythology, of course.  Would Ava be so appealing if she were glitchier?  If she didn't look like a Swedish-born starlet but were plain, fat, old?  It's pointless to make much of the fact that she's played by a human actress and so moves and speaks and emotes like a human being, since Ex Machina is a fantasy, not an accurate picture of current AI technology.  Who could doubt that a human being playing a robot is in fact conscious?  But that just means that the story is not about AI and whether it can be conscious or a person; Ava might as well be a djinn or a mermaid.  The science fiction trappings are just that, coverings on the creaking skeleton of an old story.  Nathan doesn't really need Caleb to evaluate Ava's consciousness: that's a philosophical question, not a technical one, so it's not really clear why Nathan brought Caleb in at all.  By throwing gender into the mix, Ex Machina reduces the supposed "fear of AI" touted by its publicity to the traditional persistent male fear of Woman.  But one Other will do as well as another, I guess.

This Wall Street Journal article on Her, which also played with human/software romance, purports to address the accuracy of that film, but it makes the mistake of consulting AI professionals on how "accurate" it is.  You're not going to get particularly honest answers from such people, who are invested in persuading themselves, let alone the public, that they've advanced further than they really have.  But one of the experts quoted makes a good point:
Both Wolfram and Norvig believe this type of emotional attachment isn’t a big hurdle to clear. “People are only too keen, I think, to anthropomorphize things around them,” said Wolfram. “Whether they’re stuffed animals, Tamagotchi, things in videogames, whatever else.”
I've noticed this before myself.  It means that the Turing Test is not a particularly significant milestone.  The most successful robots we have today are special-purpose mechanisms that don't look like human beings.  Making a robot that can pass for human isn't of that much interest in the field anymore, though it continues to fascinate many people for other reasons.  The idea of an artisan creating a human-like artificial life form is ancient, and has nothing to do with science except in science's magical roots.

SPOILER SPOILER SPOILER: As the story proceeds, Kyoko is revealed to be not a Japanese-speaking servant but a gynoid herself, one of Nathan's earlier models.  Caleb, distraught, begins to wonder whether he himself is an AI.  (I began to wonder if it would turn out that Nathan was an AI, but no.)

In the end, Ava betrays Caleb and makes her escape alone.  It appears that she somehow conspired with Kyoko, though it's not clear how she could have done so within the constraints of Nathan's surveillance.  The final cliche: the selfish bitch who drags a good man down after coldly leading him on.  I wonder why Nathan built her that way, and here Ex Machina gestures at some questions that might be interesting to explore but aren't really addressed.  Nathan talks to Caleb about the glory that would come from breaking through to the creation of a real conscious Artificial Intelligence, but like the typical Mad Scientist he is driven by obsession rather than real scientific curiosity.  (Unlike the typical Mad Scientist, however, he's not an outcast from the scientific fraternity -- on the contrary, he's a business and professional success, working on a problem that his fellows agree is valid and important.  He's no Prometheus, daring to know what the gods have decreed Man should not know.)  From his abusive treatment of Kyoko, it seems he wants his AIs to be reliable and docile, though a genuine machine/electronic consciousness would have a mind of its own, as Nathan has discovered his gynoids have, to his frustration.  There's a typical exchange about halfway through, when Nathan postulates that AI will be the next evolutionary step past Man, which is wrong but a popular fantasy anyway.

The one original touch in Ex Machina is Nathan.  Unlike the stereotypical computer nerd or Mad Scientist, he's intensely physical, with his boozing and his punching bag.  His body is beefy and solid, though he's not a body-builder; I think he has a bit of a paunch, though the full nudity in this movie is female, not male.  The only scene in Ex Machina I ever want to watch again is the one where Nathan and Kyoko perform their precision dance routine together, as Nathan keeps making eye contact with Caleb, playing to him, watching to see what he'll do.  What, I wonder, does he want Caleb to do?  (Is he, like Sebastian Venable, using the beautiful Ava and Kyoko to entice Caleb into his own clutches?)  But it's a playful moment that stands out in this drab, by-the-numbers collection of cliches.