Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Wednesday, January 14, 2015

Futurology's Shortsighted Foresight on AI

Also posted at the World Future Society.
The idea of a ban on "existentially-risky" artificial intelligence -- a term concerned with quite a lot of stuff that isn't or wouldn't be intelligent -- is very much in the news right now (or what passes for news in the illiterate advertorial pop-tech press) due to a recent Open Letter from the Future of Life Institute -- an "institute" concerned with quite a lot of stuff that isn't or wouldn't be alive. The Letter happens to be getting a lot of signatures from celebrities and celebrity CEOs, but also from some computer scientists who are no better qualified than you or me or Alan Alda (who has signed the Letter) to wade into the philosophy of consciousness or personhood at issue.

Actually, many of the signatories to the Letter are outright boosters, one might even say dead-enders, for the serially failed project of good old fashioned artificial intelligence (GOFAI), and while much of the public discussion of AI/superAI in these circles is framed in terms of bans, the Letter itself indulges in loose talk of "responsible oversight" of AI. Mostly, this seems to me to amount to giving more money and more attention to the people who still take GOFAI seriously. The key folks behind the Letter are techno-transcendentalists explicitly associated with transhumanist and singularitarian and techno-immortalist movements and sub(cult)ures, and it is interesting how rarely even those ridiculing the Letter point out this fact (you will find Nick Bostrom, George Dvorsky, Ben Goertzel, Elon Musk, Jaan Tallinn, and Eliezer Yudkowsky all over my Superlative Summary). Would commenters be so reluctant to notice were all these figures Raelians or Scientologists?

It is a bit demoralizing to find that the public debate on this topic seems to be settling into a contest between those who say something on the order of "well, some of these extreme arguments seem a bit crazy, but this problem needs to be taken seriously" and those who ridicule the debate by joking "I, for one, welcome our robot overlords" and then declaring that when the Robot God arrives we won't stand a chance. In other words, every position concedes the validity of the topic and its essential terms while pretending to step back from it. These gestures essentially concede the field to the futurologists and lend legibility to their AI discourse, and hence profitability to the marketing agenda of the tech companies that deploy it, which is the only victory they want or need in any case.

Now, I for one think that there is no need to ban AI/super-AI, because our present ignorance and ineptitude form barriers to its construction incomparably more effective than any ban ever could. We lack even the most basic understanding of so many of the phenomena associated with actually-existing, biologically-incarnated consciousness, affect, and identity, while our glib attributions of intelligence and personhood to inert objects and energetic mechanisms attest to the radical poverty of our grasp, however marvelous our reach. We don't need to get the problem of the Robot God off the table, because there is no Robot God at the table, nor will there be any time soon.

I daresay all this need not be the case forever. Perhaps human civilization will one day confront the danger of AI/super-AI, but that day is not soon -- and those who say otherwise seem to me mostly to be either laypeople making claims about the state of the art in computer science for which they are unqualified, or computer scientists making philosophical arguments that reveal little philosophical rigor or historical awareness.

There is no reason to think that a sensible assessment of the state of the art in computer programming here and now would undermine reassessment in the future should our models and techniques improve. Indeed, there is every reason to think, to the contrary, that premature concern from our limited perspective will introduce false formulations and figures the legacy of which might interfere with sensible deliberation later when it is actually relevant.

To repeat: I think it is extremely premature to deliberate here and now over banning or regulating AI/superAI that neither exists nor is soon to exist; if anything, doing so is more likely to undermine the terms of such deliberation should it eventually become necessary. My critique does not end there, however, since this unnecessary, premature, and possibly damaging AI/superAI deliberation is happening nonetheless, seems to be attracting greater attention, and so has real effects in the world even without any justification on its own terms or any real objects of concern.

This takes me to a critical proposal at a different level: namely, that the time, the money, and the conferral of authority on "experts" devoted to the "existential risk" of unregulated/unfriendly AI/superAI function to divert resources and attention from actual problems and actually relevant experts, and indeed are sometimes mobilized precisely to trivialize urgently real problems (the increasingly influential Nick Bostrom's worries about AI, for example, are directly connected to a rejection of the scope of anthropogenic climate change as a public problem).

Returning to the Letter's recommendation of "responsible oversight," consider this paradoxical result: nobody can deny that there are enormous problems and risks associated with the insecurity of networked computers, with the user-unfriendliness of programs, and with the dangerous political consequences of substituting algorithms for judgments about human lives. Yet such questions are usually not the focus of the futurological discourse of AI/superAI, serving at best as dispensable pretexts or springboards for heated "technical" discussion debating the Robot God odds of robocalypse or roborapture. Indeed, it is one of the more flabbergasting consequences of AI/super-AI discourse that it not only distracts from the real problems of actual computation, but becomes a distortive lens of false and facile personifying figures and moralizing frames that confuse the relevant terms and stand in the way of deliberation over the problems at hand.

Indeed, if AI/superAI eventually does become a matter of real concern in anything remotely like the terms that preoccupy futurologists, I would say we will be better prepared to cope with it through ongoing and gathering practical experience with actual coding problems as they actually exist than by ignoring reality and instead imagining idealized future machines from our present, parochial, symptomatic perspective.

The primary impact of AI/superAI discourse as it ramifies in the public imaginary has instead been to denigrate human intelligence as it actually exists: calling cars "smart" to sell stupid, unsustainable car-culture to consumers; calling credit cards "smart" to seduce people into ubiquitous surveillance, the better to harass them with targeted ads; and rationalizing crappy "AI" software like autocorrect, crappy computer-mediated "smart" analyses like word-clouds, and crappy "decision" algorithms that determine who gets to start a business or who gets to be extrajudicially murdered by a drone as a potential "terrorist." As always, talk of artificial intelligence yields artificial imbecillence above all.

AI discourse in its prevalent forms solves no real problems, is not equipped to deal with eventual problems, and functions in the present to cause catastrophic problems. It seems to be of use primarily as a way to promote crappy computation for short-term plutocratic profit.

It is no surprise that this shortsightedness is what futurologists and tech-talkers would peddle as "foresight."
