The two parts of the basilisk

There is an increasingly common account of the basilisk, in which fear of the future AI is supposed to be based on straightforward prediction of the future, plus a little self-referential hocus-pocus.

Here is an analogy for this popularized version of the basilisk:

Straightforward prediction of the future: “Climate change is going to kill large numbers of people, so obviously there will be a search for villains to punish. There will be tribunals, and they will be especially harsh on people who knew what was coming, and who could have made a difference, but didn’t.”

Self-referential hocus-pocus: “If that all happens, dear reader, the tribunal will be especially hard on you – unless you devote your life to saving the climate – because unlike most people, you knew that the tribunal and its punishments were coming – because I just told you so.”

The current version of the new RationalWiki page devoted to the basilisk conforms to this pattern. It may also be seen in an October 2011 “explanation” at “Bo News”, which was highly upvoted in a reddit /r/skeptic thread devoted to the basilisk: “People will build an evil god-emperor because they know the evil god-emperor will punish anyone who doesn’t help build it, but only if they read this sentence.”

There is a sense in which Roko’s basilisk really does consist of these two parts: a straightforward prediction and a piece of self-referential hocus-pocus. Future AIs are anticipated to exist because of the general advance of computational and algorithmic power; that’s a straightforward prediction. (Whether they are likely to have the further specific traits ascribed to them in Roko’s scenario is another matter.)

However, the mechanism of the self-referential hocus-pocus (circular logic, self-fulfilling prophecy) is badly understood, or not understood at all, by latecomers to the basilisk saga. For example, the commentator at Bo News says “… you work on the AI now because the AI in the future will reward/punish you, which in lesswrong logic means the AI is actually controlling the past (our present) via memes”. Still, at least that commentator grasped that something odd was being asserted about causality in the basilisk scenario. Nitasha Tiku’s “Faith, Hope, and Singularity” misses this hocus-pocus element entirely – though she can hardly be blamed for missing it, given the cult of secrecy surrounding the basilisk.

There has recently been a small renaissance of basilisk discussion at Reddit and RationalWiki. The objective of this post is just to point out that the original basilisk was based on a rather specific piece of hocus-pocus, which may be summed up in the words “acausal trade” and “timeless decision theory”. I won’t try to define those terms right away, let alone examine their credibility as concepts; I just want to point out that the popularized basilisk has largely devolved into a straightforward fear of punishment by a future AI, whereas the original basilisk was something a lot weirder.
