A compilation of comments

For the reader who wishes to understand what this is about, here are all the comments I’ve written about the “basilisk”. Most of them have been “censored”, meaning that they still exist on the site, but the viewing permissions have been altered by a moderator so that only the author (me) can see them. As censorship goes, it’s relatively minor, but this topic just shouldn’t be censored at all, because no-one has anything to fear from it.

The basilisk was introduced in a post in mid-2010, so my first comment was made in the context of discussion of the post (indented words were written by someone else; I am quoting and responding). Later the post was “hidden”, in the way I described above, and that was the beginning of the basilisk saga, which has continued, on and off, for over two years.


The problem with this post is raving madness of its presentation

From my perspective, the directness of the exposition is a virtue, but that’s because I’m reading it with entertained admiration. Roko… dude… I’d say more than half of the conceptual ingredients here don’t apply to reality, but I have to respect your ability to tie them together like this. And I’m not just reading it as an exercise in inadvertent science fiction.

I’ve actually been waiting to see what the big new idea at the end of your MWI/copies/etc series of posts would be, and I consider this an excellent payoff. It’s not just an abstract new principle of action tailored to a particular ontology, it’s a grand practical scheme featuring quantum investment strategy, acausal trade with unFriendly AIs in other Everett branches, the threat of punishment by well-meaning future superintelligences… And best of all, it’s a philosophy you can try to live out right here in the world of daylight reality. I salute the extent to which you have turned yourself into a cognitive reactor for futurist pioneering. This is one of the craziest-in-a-good-way posts I’ve read here. Of course, pioneering generally means you are among the first to discover the pitfalls, mistakes, and dead ends of the new territory.

I’ll try to say something more constructive once I’m done with the enjoying.


The parent article says “deleted” and yet here it is. I don’t understand that. Does that mean it’s flagged for deletion but not yet deleted? Anyway, under the circumstances I feel free to offer a perspective on acausal blackmail. Reduced to a slogan, it is:

There is no such thing as “acausal blackmail”, there is only fear of your own imagination.

“Acausal blackmail”, under current circumstances, is a matter of you imagining an entity somewhere that does not exist locally (that’s why this is “acausal”) and then imagining it making threats of some sort (that’s the “blackmail”). Maybe it will simulate you just as you are “simulating” it (in your imagination) and it will do terrible things to your other self over there! Maybe it’s a possible post-singularity entity in this world and it will do terrible things to this-you, if it one day gets to exist and if you don’t now do what it wants! … sorry, if you don’t do what it would want you to do now, if it existed here… Blackmail doesn’t even have to involve you, or a copy of you, of course. The imagined entity in your head might say, in a powerful voice which brooks no opposition,

Build a billion-Earth-dollar shrine to the Flying Spaghetti Monster, or I will kill the trillion puppies of the Greater Puppy Galaxy – which is a galaxy that exists in my dimension, full of vulnerable puppies.

Would we take seriously that sort of “acausal blackmail”? Someone who has a voice in their head telling them to start a bogus religion, to save a trillion puppies in another dimension? And yet, is there any significant difference, psychologically or causally, between this sort of self-administered Pascal’s Mugging, and the one that we are all so worked up about here?

When a pre-singularity human being imagines a superintelligence, there is no superintelligence actually present. Any demonic ingenuity you attribute to it is entirely your own. There is simply no opportunity for a determined superintelligence to turn itself into an inescapable platonic mind-bomb because there are zillions of other possible superintelligences that don’t do that and which you could be thinking about. You may as well speak of the “acausal hair-burning” which caused you to do something stupid like set your own hair on fire. After all, it worked the same way: you imagined the possibility of setting your own hair on fire, and the idea just grew on you, and eventually you did it, and ended up in the burn ward. Well, saying that the hair-burning blackmailed you into instantiating it is nonsense, isn’t it? It has or had no intelligence or agency of its own. All the impetus came from your own psyche. And the same thing applies to any imagined superintelligence. You are not superintelligent, so there is no actual superintelligence in your imagination of the entity, and therefore it has no power over you except the power that you give it.

The reason these scenarios have power over a person is just because they fear a certain possibility, not because there is a cunning super-AI elsewhere in the multiverse acausally taking advantage of that fear. The possibilities may be genuinely fearful, but the psychological mechanism at work here is entirely causal, entirely locally generated, and based on an illusion.


Roko’s Final Post was a response to the possibility of post-Singularity AIs who will punish you after the Singularity for not doing everything you could before the Singularity to make it a Friendly one. This is a variation on “acausal trade” (we are not causally connected, but we know about each other via simulation, and make a deal to carry out acts on each other’s behalf in our separate causal domains), except it’s a post-Singularity AI using an acausal precommitment to punish, in order to force pre-Singularity people to act a certain way. And Roko’s idea was the “quantum billionaire trick”, where you make a high-risk, high-return investment so as to become a billionaire in the small number of Everett branches where the investment pays off, having similarly precommitted to spend the winnings on FAI research. Eliezer stepped in and said, you do not talk publicly about the possibility of acausal blackmail; the ostensible reason was that it would cause psychological damage to people who take it seriously, but I believe he thinks it may be a real risk as well. Eliezer deleted the post, and Roko then deleted everything else he had written here, having decided to make a better world by first making a lot of money, rather than by publicly espousing dubious futurist ideas.

I don’t believe these acausal interactions make sense – if you find yourself thinking about a hypothetical superintelligence that makes threats in another possible world, you can always think about another hypothetical superintelligence with different motivations – and I don’t believe in MWI, so I think the quantum billionaire trick is just an eccentric way to throw away money. But I admired Roko’s ability to even come up with ideas like that, ideas out on these frontiers of thought where no-one understands anything very well. That is what I meant by the comparison. [..]


Out of all the terrible possibilities periodically discussed on this site, this is one that we do not have reason to fear, and it would be a (very) minor advance to get it out of the way.

Torture is real and is something to fear. Transhuman torture – in which the pain is worse and lasts longer than anything mere humans can produce – is also presumably possible, but it would require extremely bad luck to end up in the hands of an unfriendly AI which explicitly wanted to hurt people. A UFAI which just kills you because you get in the way seems far more likely. But we’re not even talking about the possibility of transhuman torture by a UFAI; we are talking about “acausal blackmail” by a possible future AI.

How does this work? There is an AI which is Someplace Else – another possible world, or a possible future that doesn’t exist yet. It configures itself so as to do something Bad under certain circumstances. Then, you here in the present are supposed to discover this possible AI, by imagination or simulation, including the fact that if the ominous circumstances happen, it will do the Bad thing. This is supposed to motivate you to avert those circumstances.

But the scenario is nonsense logically. What exactly compels you to imagine one possible AI rather than another? If you imagine an AI which demands that you walk on your hands for the rest of your life, or else it will destroy the Earth, why not imagine an AI which demands that you do whatever you want, or else it will be very sad? Oh, but then you imagine the first AI adding new conditions: ‘I’ll also destroy the Earth if you dare to think about some other AI! I’ll destroy the Earth if you try to debunk this argument!’ Etc. It’s all in your own head!

If propriety compels us to rot13-encode discussions of that possibility, then a lot of other discussions here need to be hidden as well.


I tell you that I have done the computation, and that the utility of hearing, discussing, and allowing discussion of the banned topic are all negative. Furthermore, they are negative by enough orders of magnitude that I believe anyone who concludes otherwise must be either missing a piece of information vital to the computation, or have made an error in their reasoning.

Jim, just by virtue of being embodied human beings, we are all already under sentence of death, we are all physically exposed to the possibility of torture, and to the possibility of transhuman torture should we have the extreme misfortune to fall into the clutches of a maximally unfriendly AI. All of that is already true before anyone brings up the possibility of “acausal blackmail” by an AI that doesn’t even exist yet and may never exist. What possible further secret awful truth can there be?

And besides, the supposed mechanism of the blackmail does not work. If you are just an ordinary intelligent being, you cannot be superintelligently blackmailed by imagining or simulating an evil super-AI, because you do not have the resources to actually realize superintelligence in your simulation. If someone is haunted by these ideas, that is because other psychological mechanisms are at work, like plain old fear of punishment by an angry God.

Not only should the rest of us fearlessly and openly discuss this topic (so as to bring out the illogic of it), but if there really are people haunted by some variation of the concept, it will be much healthier for them to talk about it and get it over with. The idea that this is some sort of mind contagion that’s just too dangerous to talk about is nonsense and should be exposed as such.


Make sure you back up your comment, if you value it.

The mild LW censor is more subtle than that. Comments can continue to exist but do not show up unless you find the right path to them.

It’s apparent that to have a sane policy on this matter, Eliezer would have to change his mind. I cannot tell whether the existing policy is mainly supposed to prevent people from thinking scary thoughts, for the sake of their own well-being, or whether there is some genuine fear that possible AIs in the future will malevolently affect the past by being sketchily imagined in the present – which is absurd. Or maybe it’s some other variation on this idea which we’re all supposed to be tiptoeing around. But the effect of the censorship (however mild it is) is to make people unable to think and talk about the problem in a rational and uninhibited manner.

I really think that the key issue is the possibility of transhuman torture, and whether we permit that to even be mentioned. The current policy seems to be that I can talk about the possibility of a maximally unfriendly post-singularity AI torturing the human race for millions of years, but I am not allowed to talk about whether a proposed information channel, whereby a possible but not yet existent AI supposedly threatens people with this in the present, makes any sense at all, because just thinking about it is traumatic for some people. I submit that this policy is inconsistent. The proposed information channel does not actually make sense, and in any case all the trauma is contained in the raw possibility of transhuman torture occurring to us, some day in the future. You shouldn’t need the extra icing of quasi-paranormal influences to find that possibility scary.

We should separate these two factors – the mechanics of the information channel, and the terror of transhuman torture – and decide separately (1) whether the proposed mechanism makes sense, and (2) whether the topic of transhuman torture, in any form, is just too psychologically dangerous to be publicly discussed. I say No to both of these.


“Acausal influence” is superficially a contradiction, and this phrase deserves skeptical scrutiny.

The only sort of “influence” I can think of, that might defensibly be described as acausal, is the “influence” of an object (actual or possible) which is being imagined or otherwise represented in a non-perceptual way (i.e. the representation is not being caused by sense impressions ultimately caused by the object itself). But even then there may be a “causal” interpretation of where the representation’s properties came from – it’s just that these would be “logical causes”. A representation of the Death Star has some of its properties because otherwise it wouldn’t be a representation of the Death Star; it would be a representation of something else, or not a representation at all.

There seems to be a duality here. The physical properties of a physical symbol will have physical causes, while the semantic properties will have “logical” causes. I don’t know how to think about these logical causes correctly – it doesn’t seem right to say that they are caused by objects in other possible worlds, for example. But isn’t the talk of acausal anything due simply to ignoring logical causes of properties at the semantic level?


Are you implying that there is an irrational focus on cooperation?

I don’t know what’s going on, except that peculiar statements are being made, even about something as mundane as voting.

if cooperation yields the best results, our decision theory should probably cooperate… If it’s impossible in practice, then the decision theory should reflect that.

That’s what ordinary decision theory does. The one example of a deficiency that I’ve seen is Newcomb’s problem, which is not really a cooperation problem. Instead, I see people making magical statements about the consequences of an individual decision (Nesov, quoted above) or people wanting to explain mundane examples of coordination in exotic ways (Alan Crowe, in the other thread I linked).

I don’t know what postulating this ‘time’ thing gets you, really

Empirical adequacy? Talking about “time” strays a little from the real issue, which is the denial of change (or “becoming” or “flow”). It ends up being yet another aspect of reality filed under “subjectivity” and “how things feel”. You postulate a timeless reality, and then attached to various parts of that are little illusions or feelings of time passing. This is not plausible as an ultimate picture. In fact, it’s surely an inversion of reality: fundamentally, you do change; you are “becoming”, you aren’t just “being”; the timeless reality is the imagined thing, a way to spatialize or logicize temporal relations so that a whole history can be grasped at once by mental modalities which specialize in static gestalts.

We need a little more basic conceptual and ontological progress before we can re-integrate the true nature of time with our physical models.

Why do you think acausal trade wouldn’t be a good idea, decision theoretically speaking? Or why is the concept confused, metaphysically speaking?

To a first approximation, for every possible world where a simulation of you exists in an environment where your thought or action produces an outcome X, there is another possible world where it has the opposite effect. Also, for every world where a simulation of you exists, there are many more worlds where the simulated entity differs from you in every way imaginable, minor and major. Also, what you do here has zero causal effect on any other possible world.

The fallacy may be to equate yourself with the equivalence class of isomorphic computations, rather than seeing yourself to be a member of that class (an instantiation of an abstract computation, if you like). By incorrectly identifying yourself with the schema rather than the instantiation, you imagine that your decision here is somehow responsible for your copy’s decision there, and so on. But that’s not how it is, and the fact that someone simulating you in another world can switch at any time to simulating a variant who is no longer you highlights the pragmatic error as well. The people running the simulation have all the power. If they don’t like the deal you’re offering them, they’ll switch to another you who is more accommodating.

Another illusion which may be at work here is the desire to believe that the simulation is the thing itself – that your simulators in the other world really are looking at you, and vice versa. But I find it hard to refute the thinking here, because it’s so fuzzy and the details are probably different for different individuals. I actually had ideas like this myself at various times in the distant past, so it may be a natural thing to think of, when you get into the idea of multiple worlds and simulations.

Do you know the expression, folie à deux? It means a shared madness. I can imagine acausal trade (or other acausal exchanges) working in that way. That is, there might be two entities in different worlds who really do have a mutually consistent relationship, in which they are simulating each other and acting on the basis of the simulation. But they would have to share the same eccentric value system or the same logical errors. Precisely because it’s an acausal relationship, there is no way for either party to genuinely enforce anything, threaten anything, or guarantee anything, and if you dare to look into the possible worlds nearby the one you’re fixated on, you will find variations of your partner in acausal trade doing many wacky things which break the contract, or getting rewarded for doing so, or getting punished for fulfilling it.


Until we understand the game better, the winning move is not to play (especially with much more powerful opponents).

In the game of “acausal control”, you are the only player. (I mean “you”, the person worried about having your mind hijacked, not you, Vladimir Nesov.) It’s a game you play against yourself, and you supply your “opponents” with all the power.

Worry about being acausally influenced by agents which are not actually present, but are just imagined, seems to be a problem existing on the boundary of some problem classes that should be a lot more familiar. On one side, we have worries about future possibilities, or worries about the actions of other agents with which you are causally connected and whose actions depend on you in various ways; on the other side, we have the phenomenon of being taken over by a subsystem of yourself (e.g. because of addiction, or because external circumstances force that subsystem to dominate). To the extent that acausal control can occur at all, it requires an input from both these “sides”.


Since the other agent is known “logically” or “mathematically” (the important property is that it is not known empirically), a starting point is to describe an agent that is “controlled by” a logical possibility or a mathematical fact. Once you know how to do that, then you want to refine the model, so that the controlling possibility/fact is about another agent; and in a final refinement, so that it is about another agent which is perceiving and being influenced by the first agent in this same non-empirical way.
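
To make that first step concrete, here is the kind of toy model I have in mind. This is nothing more than my own illustrative sketch in Python; the function name and the syntactic-identity test are arbitrary choices of mine, not anything from an existing formalism. The point is only that the agent’s choice is fixed by a mathematical fact about another program’s source code, not by anything it perceives, and that checking textual identity is a cheap way to avoid the regress of each agent fully simulating the other.

```python
# A toy "agent" whose action is determined by a mathematical fact about another
# program (textual identity of source code), not by any empirical observation.
# This is only an illustration of the "starting point" described above.

import inspect

def clique_agent(other_source: str) -> str:
    """Cooperate iff the other program is textually identical to this one."""
    my_source = inspect.getsource(clique_agent)
    # The controlling input is a logical/mathematical property of 'other_source'
    # (equality with my own source); nothing here depends on perceiving a world.
    return "cooperate" if other_source == my_source else "defect"

if __name__ == "__main__":
    me = inspect.getsource(clique_agent)
    print(clique_agent(me))                                   # -> cooperate
    print(clique_agent("def rock(other): return 'defect'"))   # -> defect
```

The later refinements described above, where the controlling fact is about an agent which is in turn inspecting the first agent, are exactly where such constructions start to strain.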


If I wanted to develop a rigorous and relevant theory of acausal control, I’d build on my two previous comments in this thread.

On the formal side, I’d look at programs which attach value to the properties of possible programs – and I can imagine that such an investigation would reveal that acausal control requires finetuning: two programs will acausally control each other only if they “care about” each other to an unusual, i.e. unlikely, degree.
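
As a crude picture of what that investigation might show, here is another toy of my own; the fingerprinting scheme, the helper names and all of the numbers are arbitrary. Each program “cares about” one specific property of its partner’s source, and when such programs are paired at random, the mutual conditioning needed for them to acausally control each other almost never arises.

```python
# Toy check of the "finetuning" intuition: agents that condition on a property
# of their partner's source are paired at random; mutual cooperation requires
# both conditions to hold at once, which is rare. My own construction with
# arbitrary numbers, not a result from any actual theory.

import random
import zlib

PROPERTY_SPACE = 50  # how many distinct "properties" an agent can key on

def fingerprint(source: str) -> int:
    return zlib.crc32(source.encode()) % PROPERTY_SPACE

def make_agent(target: int):
    def agent(other_source: str) -> str:
        return "cooperate" if fingerprint(other_source) == target else "defect"
    # A stand-in for the agent's own source code, used when others inspect it.
    agent.source = f"agent that cooperates iff fingerprint(partner) == {target}"
    return agent

def mutual_control(a, b) -> bool:
    """Do the two agents each cooperate when shown the other's source?"""
    return a(b.source) == "cooperate" and b(a.source) == "cooperate"

if __name__ == "__main__":
    random.seed(0)
    agents = [make_agent(random.randrange(PROPERTY_SPACE)) for _ in range(200)]
    pairs = [(a, b) for i, a in enumerate(agents) for b in agents[i + 1:]]
    hits = sum(mutual_control(a, b) for a, b in pairs)
    # With 50 possible properties, roughly one pair in 2500 qualifies: the
    # mutual dependence has to be fine-tuned in, it doesn't arise generically.
    print(f"{hits} mutually 'caring' pairs out of {len(pairs)}")
```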

Relevance could mean applying this theory to AI design, but if the “finetuning hypothesis” is right, then an AI is not at all likely to end up being acausally controlled by something. So the real “relevance” of a formal theory of acausal control would be its impact on the small number of humans for whom this is a meme they care about. This is where the analogous, but more down-to-earth, problem classes mentioned in my other comment would become relevant; one would be trying to explain, as an exercise in informal cognitive psychology, how someone ended up being worried about acausal control or believing in acausal control, and why this is something of an illusion.


You are being humorous, but here is the answer to your question: People are talking about it obliquely because they want to talk about it openly, but don’t believe they can, without having their discussions disappear.

LW is not a police state. Discussions are free and fearless, except for this one thing. And of course that makes people even more curious to test the boundaries and understand why, on this one topic, the otherwise sensible moderators think that “you can’t handle the truth”.

We can seek a very loose historical analogy in the early days of nanotechnology. Somewhere I read that for several years, Eric Drexler was inhibited in talking about the concept, because he feared nanotechnology’s destructive side. I don’t know what actually happened at all, so let’s just be completely hypothetical. It’s the early 1970s, and you’re part of a little group who stumbled upon the idea of molecular machines. There are arguments that such machines could make abundance and immortality possible. There are also arguments that such machines could destroy the world. In the group, there are people who want to tell the world about nanotechnology, because of the first possibility; there are people who want to keep it all a secret, because of the second possibility; and there are people who are undecided or with intermediate positions.

Now suppose we ask the question: Are the world-destroying nanomachines even possible? The nano-secrecy faction would want to inhibit public consideration of that question. But the nano-missionary faction might want to encourage such discussion, either to help the nano-secrecy faction get over its fears, or just to make further secrecy impossible.

In such a situation, it would be very easy for the little group of nano-pioneers to get twisted and conflicted over this topic, in a way which to an outsider would look like a collective neurosis. The key structural element is that there is no-one outside the group presently competent to answer the question of whether the world-destroying nanomachines are physically possible. If they went to an engineer or a physicist or a chemist, first they would have to explain the problem – introduce the concept of a nanomachine, then the concept of a world-destroying nanomachine – before this external authority could begin to solve it.

The deep reason why LW has this nervous tic when it comes to discussion of the forbidden topic is that it is bound up with a theoretical preoccupation of the moderators, namely, acausal decision theory.

In my 1970s scenario, the nano-pioneers believe that the only way to know whether grey goo is physically possible or not is to develop the true (physically correct) theory of possible nanomachines; and the nano-secrecy faction believes that, until this is done, the safe course of action is to avoid discussing the details in public.

Analogously, it seems that here in the real world of the 2010s, the handful of people on this site who are working to develop a formal acausal decision theory believe that the only way to know whether [scary idea] is actually possible is to finish developing the theory; and a pro-secrecy faction has the upper hand on how to deal with the issue publicly until that is done.

Returning to the hypothetical scenario of the nano-pioneers, one can imagine the nano-secrecy faction also arguing for secrecy on the grounds that some people find the idea of grey goo terrifying or distressing. In the present situation, that is analogous to the argument for censorship on the grounds that [scary idea] has indeed scared some people. In both cases, it’s even a little convenient – for the pro-secrecy faction – to have public discussion focus on this point, because it directs people away from the conceptual root of the problem.

In my opinion, unlike grey goo, the scary idea arising from acausal decision theory is an illusion, and the theorists who are afraid of it and cautious about discussing it are actually retarding the development of the theory. If they were to state, publicly, completely, and to the best of their ability, what it is that they’re so afraid of, I believe the rest of us would be able to demonstrate that, in the terminology of JoshuaZ, there is no basilisk, there’s only a pseudo-basilisk, at least for human beings.


without getting too specific, that threat’s not a cognitive hazard, but instead relies on opening up certain exotic avenues for coercion

Let’s be totally specific. The “Less Wrong basilisk” is an unwanted byproduct of the Less Wrong interest in “timeless” or “acausal” decision theories. There are various imaginary scenarios in which the “winning move” appears paradoxical according to conventional causal analysis. The acausal decision theories try to justify the winning move, as resulting from a negotiated deal between two deciders who cannot communicate directly, but who reason about each other’s preferences and commit to a mutually beneficial pattern of behavior. Such acausal deals are supposed to be possible across time, and even across universes, and the people who take all this seriously speculate about post-singularity civilizations throughout the multiverse engaging in timeless negotiations, and so on.

But if you can make a deal with your distant partner, then perhaps you can be threatened by them. The original “basilisk” involved imagining a post-singularity AI in the future of our world which will send you to transhuman hell after the singularity, if you don’t do everything you could in the past (i.e. our present) to make it a friendly singularity. Rather than openly and rationally discuss whether this is a sensible “threat” at all, or just an illusion, the whole topic was hurriedly hidden away. And thus a legend was born.


That would be a victory of politeness over rationality. As part of its discussion of the “dust specks” thought experiment, this site has hundreds of references to the possibility of someone being tortured for fifty years, and often to the possibility of being morally obligated to choose to be tortured for fifty years. Meanwhile, what would happen if you took the original basilisk scenario seriously? You would end up working really hard to make a better future!

Also, it’s an extremely significant fact that the use of timeless decision theory creates the possibility of timeless extortion and timeless coercion – and that if you stick to causal decision theory, that can’t happen. And there were other lessons to be learned as well, which might yet be unearthed, if we ever manage to have an open and uninhibited examination of the dark side of timeless interactions.


Not only do I agree that belief in the basilisk is enough to make it work, I would say that’s the only way that it can work. A human being cannot actually be acausally blackmailed by a future AI just by thinking about it. However, someone can imagine that it is possible, or they can even imagine that this blackmail is occurring, and so we get the only form of the basilisk that can actually happen, a case of “self-blackmail”.

Incidentally, Nitasha Tiku’s article doesn’t mention the acausal component of the basilisk. She just describes the possibility of a post-singularity AI which for some reason decides to punish people who didn’t work as hard as possible to make a friendly singularity. There are two “fearsome” components to this scenario which have nothing to do with anything acausal. First is the power of punishment possessed by a transhuman AI; it would be able to make you suffer as much as any vengeful god ever imagined by humanity. Second is the stress of being required to constantly work as hard as possible on something as difficult and unsupported as the creation of a friendly singularity.

Something resembling this second form of stress is evidently sometimes experienced by extreme singularity idealists, for reasons that have nothing to do with angry acausal AIs; it’s just the knowledge that all those thousands of people die every day, that billions of other beings are unhappy and suffering, combined with the belief that all this could be ameliorated by the right sort of singularity, and the situation that society at large doesn’t help such people at any level (it doesn’t understand their aspiration, share their belief, or support their activities). This is tangential to the topic of the basilisk, but I thought I would note this phenomenon, because it is potentially a subliminal part of basilisk syndrome, and independently it is of far greater consequence.

As for TDT itself, there’s a wiki page with references. I am mostly but not totally skeptical about the subject.


The basilisk is a fallacy and censorship of basilisk discussion is a mistake.


what drew this particular scenario to your attention out of all possible scenarios

In reality this goes back to the basilisk. I’ve noticed that recent discussions are still being censored – not, I believe, out of embarrassment, but because the people in charge still take the “basilisk threat” seriously. This is multiply counterproductive: the censorship prevents the basilisk scenario from being analysed and shown to be fallacious, it blocks an opportunity for a general clarification of issues surrounding TDT-like theories, and it looks juvenile.

You and Vladimir have mentioned the basic reason not to take it seriously: probability. Why should I focus on this scenario rather than some other? Another, similar reason is that I’m not actually in a position to know what will happen. Maybe when I leave the house today, a sniper will kill me because I wasn’t wearing a baseball cap. Even if that did happen, I wouldn’t have been rationally justified in putting on the baseball cap, because I had no way of knowing that there was a baseball-loving sniper outside, rather than a baseball-hating sniper, a hockey-loving sniper, or any of a trillion other outlandish possibilities. This similarly applies to fear of a future UFAI – any details of what the UFAI will do to me and why are just being dreamed up by me in the present.

I think part of the reason why the reaction to Roko’s post was so hysterical was that he was essentially proposing that one should seek to be “blackmailed” – blackmailed into doing good things; the “threat from the future” was supposed to be extra incentive. The scenario involved imagining a quasi-FAI who will punish you unless you work hard to make FAI. The whole thing still fails because of the probability issue and the knowledge issue… But the hysteria arose because the post was not just talking about the possibility of acausal blackmail; it was encouraging people to think about ways for it to happen. And that sounds like looking for trouble, if you think there’s anything at all to the “threat”.

But this is a sort of trouble that you simply can’t get into, even if you go looking for it. It’s a fear based on a fallacy. One of the problems with realizing Newcomb’s paradox in the real world is that you’re not in a position to know that an entity who claims to have the powers of Omega really can predict your response perfectly. And yet you have to know that Omega is a reasonably accurate predictor for the logic of TDT to apply. For the logic of TDT to apply to a situation, you have to know that the agent on the other end of the acausal deal really will behave in a certain way; that knowledge does not exist in Roko’s scenario or anything like it, and therefore it’s a fallacy.
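
To put a rough number on the accuracy point, here is a quick expected-value check using the textbook Newcomb payoffs ($1,000,000 in the opaque box if one-boxing was predicted, $1,000 always in the transparent box). The helper function is my own, and treating the predictor’s accuracy as a known quantity is precisely the assumption in question.

```python
# Expected value of one-boxing vs two-boxing in Newcomb's problem as a function
# of the predictor's accuracy p, with the standard payoffs: $1,000,000 in the
# opaque box if one-boxing was predicted, $1,000 always in the transparent box.

def newcomb_evs(p: float):
    one_box = p * 1_000_000                              # opaque box full iff predicted correctly
    two_box = (1 - p) * (1_000_000 + 1_000) + p * 1_000  # full only if mispredicted
    return one_box, two_box

if __name__ == "__main__":
    for p in (0.5, 0.6, 0.9, 0.99):
        ob, tb = newcomb_evs(p)
        better = "one-box" if ob > tb else "two-box"
        print(f"accuracy {p:.2f}: one-box ${ob:>11,.0f}  two-box ${tb:>11,.0f}  -> {better}")
    # Break-even is at p = 1,001,000 / 2,000,000 = 0.5005; below that,
    # two-boxing has the higher expected value even by this reasoning.
```

One-boxing only pulls ahead once the predictor’s accuracy is credibly above 0.5005, and that is exactly the kind of knowledge you do not have about a merely imagined predictor.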

This sort of discussion is what has been blocked by the policy of basilisk censorship, and that policy needs to be abandoned.

It is inevitable that a few people will latch onto basilisk concepts in an unhealthy or annoying way, in the same way that happens with AI, nanotechnology, and many, many other concepts. Neurotic basilisk phobia is apparently an occupational hazard of acausal decision theorists working in conceptual proximity to the singularity. But the situation isn’t helped if everyone just whispers about the issue in secret.

If there was a separate public forum somewhere, devoted to basilisks and related topics, then I could support a LW policy, not of censorship, but of “take it to the forum”. Basilisk discussions do have the potential to be tedious, self-involved, time-consuming, and of no interest to anyone but the participants. An open basilisk forum could minimize the amount of basilisk noise here once the censorship is stopped. But it would need to be open, despite the “risk to humanity”, or else the current neurosis would just be perpetuated.


That was the most recent comment. When that one too was hidden from view, I gave up and decided to create a basilisk-busting forum somewhere else.
