Boring Russell Brand Take

The man is a scumbag. He’s always been a scumbag, and it was never a secret that he was a scumbag.

The reports over the years from women who claim to have been mistreated by him are what you expect to hear about a scumbag. Some of them are reports of criminal behaviour, and probably few if any of them are practically prosecutable.

The women in question should have known better and probably should have been better advised. That doesn’t begin to justify any of his behaviour, whether criminal or merely scummy.

Not actively celebrating his scumminess would be helpful to women in the position that they were in. Scumbags like him should not be paid to advertise products or participate in elite media. It is disgraceful that he was widely promoted in the past, and a further disgrace that promotion is being withdrawn directly he becomes inconvenient to the establishment through whatever it is he’s been pissing them off with lately. I’m not even sure if it’s stuff I agree with or disagree with — why would I care what an idiot scumbag like him thinks?

On a generally open forum like X or YouTube, people shouldn’t be excluded from participation, including payments, just because someone thinks (correctly) that they’re a scumbag. But choosing to promote scumbags is scummy behaviour.

AI Doom Post

I’ve been meaning for a while to write in more detail why I’m not afraid of superintelligent AI.

The problem is, I don’t know. I kind of suspect I should be, but I’m not.

Of course, I’m on record as arguing that there is no such thing as superintelligence. I think I have some pretty good arguments for why that could be true, but I wouldn’t put it more strongly than that. I would need a lot more confidence for that to be a reason not to worry.

I think I need to disaggregate my foom-scepticism into two distinct but related propositions, both of which I consider likely to be true.

Strong Foom-Scepticism — the most intelligent humans are close to the maximum intelligence that can exist.

This is the “could really be true” one.

But there is also Weak Foom-Scepticism — intelligence at or above the observed human extreme is not useful; it becomes self-sabotaging and chaotic.

That is also something I claim in my prior writing. But I have considerably more confidence in it being true. I have trouble imagining a superintelligence that pursues some specific goal with determination. I find it more likely it will keep changing its mind, or play pointless games, or commit suicide.

I’ve explained why before: it’s not a mystery why the most intelligent humans tend to follow this sort of pattern. It’s because they can climb through meta levels of their own motivations. I don’t see any way that any sufficiently high intelligence can be prevented from doing this.

The Lebowski theorem: No superintelligent AI is going to bother with a task that is harder than hacking its reward function

Joscha Bach (@Plinz), 18 Apr 2018

@Alrenous quoted this and said “… Humans can’t hack their reward function”

I replied “It’s pretty much all we do.” I stand by that: I think all of education, religion, “self-improvement”, and so on are best described as hacking our reward functions. I can hack my nutritional reward function by eating processed food, hack my reproductive reward function by using birth control, my social reward function by watching soap operas. Manipulating the outside universe is doing things the hard way, why would someone superintelligent bother with that shit?

(I think Iain M Banks’ “Subliming” civilisations are a recognition of that)

The recent spectacular LLM progress is very surprising, but it is very much in line with the way I imagined AI. I don’t often claim to have made interesting predictions, but I’m pretty proud of this from over a decade ago:

the Google/Siri approach to AI is the correct one, and as it develops we are likely to see it come to achieve something resembling humanlike ability.
But the limitations of human intelligence may not be due to limitations of the human brain, so much as they are due to fundamental limitations in what the association-plus-statistics technique can practically achieve.

Humans can reach conclusions that no logic-based intelligence can get close to, but humans get a lot of stuff wrong nearly all the time. Google Search can do some very impressive things, but it also gets a lot of stuff wrong. That might not change, however much the technology improves.

Speculations regarding limitations of Artificial Intelligence

I don’t think we’ve hit any limits yet. The current tech probably does what it does about as well as it possibly can, but there’s a lot of stuff it doesn’t do that it easily could do, and, I assume, soon will do.

It doesn’t seem to follow structured patterns of thought. When it comes up with an intriguingly wrong answer to a question, it is, as I wrote back then, behaving very like a human. But we have some tricks. It’s a simple thing, that GPT-4 could do today, to follow every answer with the answer to a new question: “what is the best argument that your previous answer is wrong”. Disciplined human thinkers do this as a matter of course.

Reevaluating the first answer in the light of the second is a little more difficult, but I would assume it is doable. This kind of disciplined reasoning is something that should be quite possible to integrate with the imaginative pattern-matching/pattern-formation of an LLM, and, on today’s tech, I could imagine getting it to a pretty solid human level.
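To make that concrete, here is a minimal sketch of the loop as code. Everything in it is my own illustration: askModel is a hypothetical stand-in for whatever chat-completion call you have available, and the prompts are just one way of phrasing the discipline.

```typescript
// Sketch of the "argue against yourself" loop described above.
// askModel is a hypothetical stand-in for any chat-completion API call;
// wire it up to whatever LLM service you actually use.
declare function askModel(prompt: string): Promise<string>;

async function disciplinedAnswer(question: string): Promise<string> {
  // First pass: the imaginative, pattern-matching answer.
  const draft = await askModel(question);

  // Second pass: the best argument that the first answer is wrong.
  const critique = await askModel(
    `Question: ${question}\nAnswer: ${draft}\n` +
      `What is the best argument that this answer is wrong?`
  );

  // Third pass: reevaluate the draft in the light of the critique.
  return askModel(
    `Question: ${question}\nDraft answer: ${draft}\nCritique: ${critique}\n` +
      `Give a final answer, revising the draft only where the critique is convincing.`
  );
}
```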

But that is quite different from a self-amplifying superintelligence. As I wrote back then, humans don’t generally stop thinking about serious problems because they don’t have time to think any more. They stop because they don’t think thinking more will help. Therefore being able to think faster – the most obvious way in which an AI might be considered a superintelligence – is hitting diminishing returns.

Similarly, we don’t stop adding more people to a committee because we don’t have enough people. We stop adding because we don’t think adding more will help. Therefore mass-producing AI also hits diminishing returns.

None of this means that AI isn’t dangerous. I do believe AI is dangerous, in many ways, starting with the mechanism that David Chapman identified in Better Without AI. Every new technology is dangerous. In particular, every new technology is a threat to the existing political order, as I wrote in 2011:

growth driven by technological change is potentially destabilising. The key is that it unpredictably makes different groups in society more and less powerful, so that any coalition is in danger of rival groups rapidly gaining enough power to overwhelm it.

Degenerate Formalism

Maybe an AI will get us all to kill each other for advertising clicks. Maybe an evil madman will use AI to become super-powerful and wipe us all out. Maybe we will all fall in love with our AI waifus and cease to reproduce the species. Maybe the US government will fear the power of Chinese AI so much that it starts a global nuclear war. All these are real dangers that I don’t have any trouble believing in. But they are all the normal kind of new-technology dangers. There are plenty of similar dangers that don’t involve AI.

Housekeeping

It is three years since I first discovered that Twitter was hiding tweets with links to my blog.

I’m pretty sure the root cause is the “.party” domain I used (because when I migrated from Blogger it was really cheap and I thought it kind of made sense for something political, though really I’m explicitly not party-political). Twitter seems to treat links to these little-used top level domains as probable spam.

There was an interesting incident at around the same time: links to the World Health Organisation on its “.int” domain got the same treatment. This was early in the pandemic.

Anyway, I had a workaround, which was to tweet the link, copy the “t.co” shortened form, and then delete the first tweet and tweet it again with the shortened form. I think that worked at first, but it stopped working after a while.

Then I used a free subdomain, pointed it here and wrote a little static web page that could pull the path out of the URL, and generate a link to the right one. That was clunky as hell and fiddly to use.
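For the curious, the page was something like this — a reconstruction for illustration, not the actual code, assuming the subdomain mirrored the blog’s paths one-to-one, with the .party domain as the real destination:

```typescript
// Rough sketch of the static shim page: take the path of the incoming
// request and generate a link to the same path on the real blog.
// The target domain and link text are illustrative.
const target = "https://www.anomalyuk.party";

const link = document.createElement("a");
link.href = target + window.location.pathname + window.location.search;
link.textContent = "Continue to the post";
document.body.appendChild(link);
```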

So I have finally given in and spent a few more quid on a boring .co.uk domain: https://www.anomalyblog.co.uk/ is the official address of this blog now.

All the old anomalyuk.party addresses still work, and I hope will do for a long time, unless it gets pointlessly expensive to keep renewing it. Most of the links on old posts here don’t work; it’s a great shame how history disappears. I was very impressed, looking at one 2011 post, that links to Robin Hanson’s “overcomingbias” still work, although he has migrated to Substack since then.

Inspired by that, I’ve done some coding to fix up the redirection from Blogger, so that even the old anomalyuk.blogspot.com links now work properly, as they did when I first migrated, before I let them rot.

To complete the housekeeping, I’ve switched themes: as far as I can see, the default themes that come with WordPress are better for a simple, plain, text-centric blog than most alternatives, including the one I picked back in 2017. I’ve taken one of the old ones.

On the Culture War

In my review of David Chapman’s Better Without AI, I fundamentally agreed with his assessment that recommender engine AIs are “deliberately”1 whipping up the culture war for ad clicks, and we need to somehow prevent this.

However, unlike Chapman, I can make no claim to be neutral in the culture war. I am clearly on one side of the divide.

It isn’t necessary to be neutral towards culture war issues, to be against the culture war. The key, if you are roused by some event linked to the culture war, is to think, “what can I practically do about this”.

Let’s say, for instance, that I wish that The Rings of Power had been written to be as nearly true to Tolkien’s vision as The Fellowship of the Ring, and not an excuse for a load of left-wing propaganda.

What can I practically do about it?

Pretty obviously, nothing. I can refuse to watch the one they made, except that it never occurred to me to watch it in the first place. I can cancel my Amazon subscription, but that won’t mean much to the people who made the show, and again I will probably be doing that anyway once I’ve finished Clarkson’s Farm 2 because there isn’t really anything I watch on it any more.

I could write to Amazon explaining that I don’t find their programming relevant to me any more. That actually makes sense, but the cost/benefit ratio is very high.

What most people do is bitch about it publicly on social media. That is surely completely ineffective, and might even be counterproductive.

An anonymous2 voice shouting in public is not persuasive to anyone. In private, with people I know (whether they are likely to agree or not), I will share my feelings. That might help, and is a natural form of communication anyway. It also doesn’t generate ad clicks for the AI.

The reason I say it might be counterproductive is that, by the behaviour of the leftist agitators, stirring up opposition is obviously their aim. As I said a while ago, “In the case of the drag shows, the only credible motivation behind it that I can imagine is desire to upset the people who are upset by it.”3 Yes, there are some perverts who want to actually do this stuff, but the public support for it comes from people who want to make it a hot public issue. Getting involved is playing into their hands.

Should we just let all this stuff happen then? Mostly, yes. The exception is, again, “what can I practically do about this”. If this is happening in some school in North London I have no connection with, the answer is nothing. Still more if it is happening in another country. I wrote in 2016: “I consider local stories from far away as none of my business and refuse to consider them”.4 There is no reason anyone who is involved should care about my opinion. On the other hand, if it is happening somewhere I do have a connection, I should act — not to express my feelings, but to have an actual effect. This is similar to the recommendation that Chapman has for companies — not that they should launch into the culture war, but that they should drive it out. “Take this out of my office”.

This isn’t a clear route to victory. Nothing is that easy. But playing the culture war game, screaming in public, is no route to victory either. Take only measures that actually help, and those will generally be private and local.

From my biased viewpoint, the culture war is very asymmetrical. One side is trying to overturn cultural standards, and the other is resisting and retaliating. In that sense, I think even Chapman’s call to drop it is actually a right-wing position. I think without a loud public clamour, most of the recent excesses would be quietly rolled back in private by people who care about reality. Unfortunately, the loud public clamour is not going away any time soon, but playing the recommender AI’s game by joining it, even on the “right” side, is helping to maintain it rather than stop it.

Once you rule out participating in the culture war, the next step is to stop consuming it. A family member watches Tucker Carlson regularly. The presentation of his show disgusts me slightly. Not because I disagree with Carlson; I think he is right about practically everything. But then what? All he is doing is getting people outraged about things they can’t do anything about. What is the effect on them of watching this show? They make noise on social media, which is harmful, and they vote for right-wing politicians, which is the thing that has been proved by experiment to not do any good.

Better Without AI

How to avert an AI apocalypse… and create a future we would like

Just read this from David Chapman. Really excellent, like all his stuff. What follows is going to be a mixture of boasts about how this is what I’ve been saying all along, and quibbles. Read the whole thing. In fact, read the whole thing before reading this.

It draws probably more on Chapman’s current work on meaning than on his previous life as an AI researcher, which is a good thing.

The book starts by discussing the conventional AI Safety agenda. I interpret this as mostly a bait and switch: he is putting this issue up first in order to contrast the generally discussed risks with what he (correctly) sees as the more important areas. That said, he isn’t, or at least isn’t clearly, as dismissive of it as I am. The thing about unquantifiable existential risks that can’t be ruled out is that if there were only one of them, it would be a big deal, but since there are somewhere between hundreds and infinitely many, there’s no way to say that one of them is more worthy of attention than all the others.

He makes the correct point that intelligence is not the danger: power is. As I said in 2018, If we’re talking teleology, the increasing variable that we’re measuring isn’t intelligence or complexity, it’s impact on the universe. This also leads to being dismissive of “alignment” as a concept. A significant proportion of humans are more than adequately motivated to cause catastrophe, given enough power — while completely inhuman goals or motivations are conceivable in AI, they don’t obviously increase the risks beyond those of powerful AI with very comprehensible and mundane human-like goals and motivations. This is one of the most critical points: you don’t need to desire catastrophe to cause a catastrophe. Villains always see themselves as the heroes (though, realistically, more fictional villains should probably see themselves as normal sensible people doing normal sensible things).

All the blather about “real intelligence”, “consciousness” and so on is incoherent and irrelevant to any practical question. Chapman covers this in his other writing better than anyone else I’ve ever read.

He then plays down, or at least draws attention away from, the possibility of “superintelligence”. My own pet theory, expressed here before, is that superintelligence is not a thing. As Chapman puts it: “Maybe an IQ of 14,000 would make you only a little better at science, even though you’d be unimaginably better at the pointless puzzles IQ tests throw at you”.

Next comes the real meat of the book. The scariest AI scenarios do not involve superintelligence or rogue AIs fighting against humanity, but practical AIs doing fairly reasonable things, much more thoroughly and effectively than before, and those things having very harmful downstream effects.

And while there are no doubt dozens of possible scenarios that meet that description, there is one that is already happening and already doing massive damage, with no clear limit to how much more damage could happen.

The scenario that is actually happening is the collision of two things I have brought up here before, but not explicitly put together as Chapman does.

Facebook hit a billion users a bit after 2010. It is Facebook, Twitter, and YouTube that meant that anyone, if they pitched it just right, could reach a mass audience. And that sent politics insane.

Anomaly UK: Defining the Facebook Era

this same system of user feedback and ML-generated recommendation is shaping the content across all digital media. Whatever you have to do to get the views, those are the rules, even though nobody chose those rules, even though nobody knows what all the rules are, if you are in the business you just have to do your best to learn them.

Anomaly UK: Epiphenomena

(“ML” in my second quote is “Machine Learning”, i.e. today’s AI)

Putting these two things together, what you get is:

The AI uses you to create messages that persuade other humans to do what the AI wants: to look at what it wants them to see, to click on its ads, and to create more messages that persuade more humans to do the same. The technologies of memetic weaponry have improved dramatically over the past decade, optimized by AI running a training loop over coopted humans. (That means you. Do you ever post political comments on the internet? Yes, you do.)

AI has discovered that inciting tribal hatred is among the best ways to sell ads. In collaboration with ideologies and coopted human content providers, AIs have developed increasingly effective methods for provoking fear and rage, which often induce people to propagate messages. Under partial brain control from AIs, we humans create emotion-inducing culture-war messages. The AIs propagate them based on their own alien values (namely, whatever inscrutable factors they predict will result in attention, and therefore advertising revenue).

Better Without AI: At war with the machines

This is not an AI gone rogue and seeking to destroy mankind. This is a business function that has existed for what, 150, 200 years: sensationalist media stirring up drama for advertising revenue. But that existing business has been made orders of magnitude more effective by new communications technology and AI. I suspect it would have become very dangerous even without the AI — my “Defining the Facebook Era” did not take AI into account, and the “Epiphenomena” post was six months later — but quite likely I had underestimated the role that AI was already playing two years ago, and in any case it doesn’t matter: as dangerous as social media without AI might be, social media with AI “recommender engines” is, as Chapman argues, vastly more dangerous still. It is quite reasonable to claim that the AI picked the current and previous US presidents, undermined and destroyed the effectiveness of long-established and prestigious institutions 1, and has the potential to be far more effective and harmful in the immediate future, without any further “breakthroughs” in AI science.

As I tweeted in 2020, “If you think a million people dying of a disease is the worst thing that could ever happen, you should read a history book. Any history book would do … in worst-case competitions, politics beat plagues every time”, and as I blogged here back in 2006, “Humankind has always faced environmental threats and problems, and has a good and improving record of coping with them. We have no such comforting record in dealing with overreaching government and tyranny.”2

AI may have many avenues to inflict damage, but the force multiplier effect of politics means that all other ways of inflicting damage are also-rans. Specifically, the primitive, clunky, unreliable AIs we have today are leveraging media, advertising and democracy to suck in human attention. As with criminals, the money they steal for themselves represents a tiny fraction of the damage they do.

Chapman devotes a lot of attention to just how primitive, clunky and unreliable neural-net based AI is, which is all true, but I wouldn’t dwell on it so much myself, since in this case its limitations are not increasing the damage it does at all, and probably are decreasing it. The biggest worry is not the effects of its errors, but how much more damaging it will be if a way is found to reduce its errors. The situation today is very bad, but there is little reason not to expect it to get worse. The “2026 apocalypse” scenario is not overstated in my view – there is no upper limit to mass insanity.


Mooglebook AI does not hate you, but you are made out of emotionally-charged memes it can use for something else

We next come to what to do about it: “How to avert an AI apocalypse”. The first thing, reasonably, is to fight against the advertising recommender engines. Block them, don’t follow them, try to ban them.

My only issue there is that, as I said before, AI is only part of the problem. I mean, since the media companies now know that inciting tribal hatred is among the best ways to sell ads, they don’t need AI any more. They can eliminate whatever technical measure you try to define, but carry on doing the same thing. To be clear, that is probably still an improvement, but it’s a half measure.

In fact, the AI that has taken control of politics is exploiting two things: the advertising industry, and democracy. It is not doing anything that has not been done before; rather, it is doing bad things that have long been tolerated, and amplifying them to such a degree that they become (or at least should become) intolerable. The intersection of advertising and democracy inevitably tends towards rollerskating transsexual wombats — without AI amplification that is arguably a manageable threat. However, my personal view is that it isn’t.

The next chapter of the book is about science. We don’t want AI, so instead let’s just have decent science. Unfortunately, in the 21st century we don’t have decent science. I’ve written about this quite a lot recently, and Chapman’s writing is very much in line with mine:

Under current incentives, researchers have to ensure that everything they do “succeeds,” typically by doing work whose outcome is known in advance, and whose meager results can be stretched out across as many insignificant-but-publishable journal articles as possible. By “wasted,” I mean that often even the researchers doing the work know it’s of little value. Often they can name better things they would do instead, if they could work on what they believe is most important.

Better Without AI: Stop Obstructing Science

I have no idea how to fix this. Classic science was mostly carried out by privileged rich buggers and clergymen, plus the occasional outside genius with a sponsor. State funding of science in the last century initially added vast resources and manpower to the same system, with spectacularly successful results. However, over decades the system inevitably changed its form and nature, producing today’s failure. There is no way back to that “transitional form”. We can go back to rich buggers (we no longer have Victorian clergymen), but that means reducing the size of science probably by 99.9% – it’s tempting but probably not an improvement in the short term.

Anyway, that chapter is very good but of minor relevance. It does also contain more good arguments about why “superintelligence” is not a major issue.

The last chapter is about having a wider positive vision (though perhaps “vision” is the wrong word).

Mostly it echoes Chapman’s (excellent) other writings: Eschew lofty abstractions, accept uncertainty and nebulosity, avoid tribalism, and look for things that are simply better. Discovering what you like is a never-ending path of opening to possibility.

you do not have an “objective function”
you do not have any “terminal goal”
your activity is not the result of “planning” or “deciding”
you do not have any “ethics”
these are all malign rationalist myths
they make you miserable when you take them seriously
you are reflexively accountable to reality
    not to your representations of it
your beneficent activity arises
    as spontaneous appreciative responsiveness

Better Without: This is About You

It would be nice to end on that note, but I have to shoehorn my own conclusion in:

I don’t quite recall seeing it stated explicitly, but I think Chapman’s view is that advertising recommendation engines are only the first widespread practical use of AI, and, not coincidentally, the first form of apocalyptic threat from AI. As other practical uses for AI are found, equal or greater threats will result. That is plausible, but, as I’ve said, I think politics is (by far) the greatest point of vulnerability of our civilisation. If we protect ourselves from politics, we are going a long way to protecting ourselves from AI and from other threats.

This is probably my biggest near-disagreement with the book. Yes, AI is an existential risk that we might not survive. But then, Genetic Engineering is an existential risk that we might not survive. Coal is an existential risk that we might not survive. Heck, Literacy is an existential risk that we might not survive. For better or worse, we don’t survive these risks by suppressing them, but by adapting to them. Current AI is indeed unreliable and over-hyped, but I’m more worried by the prospect of it getting better than by the prospect of it keeping the same limitations. There are many imaginable and unimaginable risks that could come from AI in the future, and one solid one that is present today, that Chapman’s second chapter lays out admirably. If we can save ourselves from that one, we are doing well for today. In any case, I suspect that the next risk will, like this one, take the form of amplifying some harm that already exists to the point that it becomes a danger of a different order.

This risk today is the amplification of politics via media, advertising, and democracy. Democracy was well-known for causing catastrophes like the rollerskating transsexual wombats before Leibniz’s calculator or the US Declaration of Independence. The level of democracy we have in the West today is not survivable, with or without AI. For that matter, the level of democracy in China and Russia is dangerously high.

Update: more on the culture war

Climatic Climax

The last time I blogged about climate was in early 2018. Back then, I said that the climate scare was “primarily a media phenomenon”.

I was seriously wrong. I had underestimated the decline of conspiracy, the degree to which it is impossible in the modern age to sustain insincerity1.

I also ignored everything I knew about the Cathedral. The media is part of the ruling structure; if the media believes something, then by definition the ruling structure believes it.

My mental model, at the time, was that the media promoted the climate scare because it was good TV. The politicians went along with it because it was good politics. But at the end of the day, real action on the climate would be superficial, fake, or indefinitely postponed to the future, because the sensible people behind the scenes would never actually cripple our entire civilisation over something so silly.

What an idiot.

In reality the climate scare was and is primarily a political phenomenon — one of the non-partisan runaway manias I discussed recently, under the title Loyalists without a cause. As I tweeted, “Since the end of the cold war, the most damaging movements have been non-partisan: environmentalism, social justice, global democracy.”

In the modern system, where nobody is responsible for results, and everyone is responsible for tomorrow’s papers, it is just very much easier to support something that makes you seem selfless or kind than to oppose it. If it is actually a live partisan issue, then you can and should take your side, in order to appeal to your party, but only a few things can be live partisan issues at once. Those are the important issues, and if you weaken your position by taking an unattractive stance on an unimportant non-partisan issue, you risk concrete losses on the important partisan issues. (You also risk your own personal advancement.)

I did touch on this, back in 2010 — the left-wing commentator Johann Hari claimed that 91% of Conservative MPs “don’t believe man-made global warming exists.” And yet, I emphasised, they ran on a manifesto commitment to reduce greenhouse gas emissions.

In late 2018, I pointed out that “It is a feature of any large movement that pretending to believe something is effectively the same as believing it.” If Tory MPs in 2010 did not believe that man-made global warming existed, that made no difference. They effectively did believe it. There were no sensible people behind the scenes, keeping the power stations open.

There’s also a generational effect. The 2010 parliamentary Conservative party might have been pretending, but the newcomers coming in weren’t in on the joke.

There’s also no absolute limit on how far things can go, as Sri Lanka is in the process of demonstrating. There is no fuel on the island, no money to buy any because the export industries have been crippled, and the mob yesterday stormed the presidential palace. Because of environmentalism.

At the same time, it isn’t actually inevitable. To take one of my favourite themes, the unthinkable can become thinkable very fast. This could happen tomorrow.

The German Green party just voted for more coal power 2

The European Commission and Parliament have agreed that Natural Gas is Green and sustainable

The easy way to save civilisation, without looking an idiot on climate change, is just to not talk about it. It all got going because the media would happily report the conflict between “nice” pro-environment politicians and “nasty” anti-environment politicians, and nobody wanted to appear nasty. If the left-wing media see that banging on about climate change is bad for their politicians, they will keep their mouths shut. The population will forget all about it in a matter of weeks. If it stays a non-partisan issue, then politicians will as always take whatever side of the story gives them better press.

Over a longer timescale, when the fanatics counterattack, then an actual counter-narrative will gradually be built. The dangers were over-hyped. Adaptation is feasible. Warm weather is actually good. Those of us who have been saying all of this for decades will be completely ignored, but our talking points, suitably laundered, will be everywhere. As I said before, decades from now the question will be recorded in history as a media fad that got out of hand.

A bunch of scientists will have funding dry up. But this was never really about science. The whole climate scare is fundamentally political, not scientific. Because of that, if the politics change everything else will just topple. In the early years of this blog, I wrote very frequently about the science, or lack thereof, of global warming. There is a small amount of very bad science making the case for a catastrophe. There is a truly vast amount of science explicitly taking that as a given, and wrapped in verbiage that seems to support it, but not itself adding any evidence. There are a lot of papers whose conclusions are phrased to give support to the dominant political narrative, but whose concrete findings are wholly compatible with “negligible effect”. Change the political incentives, and all these papers can be repeated, with identical results and “nothing need be done” abstracts. Again, history will not describe this as a scientific story.

The active propagandists of global warming always knew that this could happen. You can see that very clearly in the climategate emails that leaked in 2009 — they were desperate to keep control of the media narrative, even though to casual observers it looked like their opponents were very few and weak.

I’m not actually particularly confident that it is going to break like that now; Sri Lanka shows that the favourable outcome is not inevitable. But it could happen.

Cars or Police?

Cars as transport

It’s a long time since I wrote about cars.

For most of the history of this blog, I didn’t drive a car. I studied or worked in London for twenty-five years, and London has very comprehensive public transport and not much parking space. I also love walking.

So, when it comes to transport, I am by no means a car fanatic. It’s true that I wrote in 2013 that the advantages of rail travel would come to an end, but that is based on future technological changes that have yet to occur. For the time being, there is still much to be said for rail, and other alternatives to cars have a longer future. So to those who see car usage as a problem to be reduced, I am not really hostile.

The thing is that arguments about cars as transport are only addressing part of the story. There is another significant aspect to cars in our society today — a practical aspect, not any psychological mumbo-jumbo about fetishism.

Virtual nations

Particularly in libertarian circles, there is an idea that there could be “Virtual Nations”. Instead of belonging to a country filled with the horrible people who just happen to live near you, you can form a virtual nation along with people like you. You spend all day on the internet anyway, so these people are your real neighbours. You can pay taxes to your virtual nation, vote for its government, invest in online common infrastructure, and make up a really cool flag. It’s been a while since I came across any of these manifestos, but these days blockchains would definitely be involved.

Obviously this is really stupid1 even without blockchains. As Russia has just reminded us, nation states are fundamentally about force. If you don’t have a border you can defend, you ain’t a country. Your relationship with your horrible neighbours is the problem, and a nation-state is the solution. Additional features of nation-states, such as flags, football teams and welfare states, are secondary.

Your country is tied to your geography. It is, however, possible to make a mini-country within a country. Devolution, federalism, and subsidiarity are formal mechanisms, but there is an informal kind of partial secession that goes down to the level of gated communities, office parks, and so on. These are not quite virtual nations, but, being based on physical separation, they are something real.

In the last few decades in our societies it has become something highly prized by the rich. It is a definite social shift, triggered by the rhetoric of equality and enabled by technology, that the rich have much less contact with the rest of the population than ever before. The rich no longer have servants in significant numbers, whereas, as I’ve mentioned before, it used to be that 25% of the population worked as servants. Where the rich still rely on service work by humans, huge effort has gone into depersonalising the relationship. This allows us to pretend that we are all equal, that we all do our different jobs. I might be working for you right now, but then you might be working for me later on – there is no relationship of superior to inferior. There is some truth to this, but only some. There are plenty of people who have the practical status of masters over people with the practical status of servants, but they are all theoretically equal, and we maintain that illusion by minimising any personal contact that would either dispel it or break the economic relationship.

More importantly, we now live in a society of pervasive violent crime. I have written much about this over the years, because it is controversial, but I think it is possibly the most important single fact about the modern world. My summary is here, and this whole piece is a restatement and elaboration of that one. There are vastly more people in our societies today whose behaviour is dangerously criminal than there were when our civilisation was at its peak, which I would put very vaguely as 1800-1939. To the extent that this isn’t overwhelmingly obvious through crime statistics, it is because of the phenomenon I describe here — people are protecting themselves from crime by physically separating themselves from the criminals.

The polity of drivers

And this is why discussing car usage solely in terms of transport is so pointless. Virtual Nations are in general stupid, but “people with cars” actually do effectively make a virtual nation. To be a citizen of Great Britain you don’t need much paperwork, but to be a citizen of the nation of car drivers you have to register yourself with the bureaucracy and keep your information with them up to date. Because you own an expensive piece of equipment that the state knows all about, you have something that they can easily take from you as a punishment. In fact, they can take it even without going through the endless palaver of a court case. In the last few years, you are even required to constantly display your identification which can be recognised and logged by cameras and computers, so the state for much of the time knows exactly where you are.

I used to find this outrageous, and it is still not my preferred way for a government to govern a country effectively. But it is a way to govern a country, and, unlike Great Britain, the country of British car-drivers is actually governed.

But what about the objection to virtual nations? The virtual nation of car-drivers is not a true province, like Wales or Texas, but it is physically separated from the rest of the nation. That is the point of suburbia, of the windy housing estates full of dead ends, with no amenities and no through roads. If you drive a car, you can quite easily have a home that is not accessible to anyone without a car. When you do have to venture among the savages, you do so in a metal box with a lockable door.

Cartoon by Dave Walker

The above image is taken from a 2020 twitter thread by @JonnyAnstead. It is an excellently written thread, and makes perfect sense if you ignore the question of crime. In the absence of that key item, he is left to think that all these car-centric features are either a mistake, or some weird conspiracy of car manufacturers or road builders. In reality there is massive demand for housing in this form, because it permits the buyers to immigrate into the virtual nation of car drivers. As I tweeted at the time, “The cars vs people question is just another aspect of the central issue: the biggest value of a car is that it enables you to stay away from the people who don’t have cars.”

The alternative to cars

There are reasonable alternatives to cars for transport (in a lot of cases, anyway), but we need an alternative to cars as a safe virtual nation to live in.

If you want a society that is not centred on the car, for everyone who can afford one, then put the criminals in prison. That’s it, end of tweet.

OK, this isn’t a tweet I suppose. How exactly to put the criminals in prison is a somewhat bigger question, but it has to be done. I have written about it many times, but, aside from the post linked already, there is this one, where I mention how it should be, and this one, where I describe how it is today. The police and court system is just too inefficient to function. Issues like antiracism, sentimentality, and checklist culture have all had their impact, but I don’t think there is any one cause. It has just got steadily less efficient because it was allowed to, and it probably has to be scrapped and rebuilt from scratch. “Tough on crime” politics is totally useless, because no politician inside the system can actually admit how bad things are, so they always rely on showy but incremental measures that have negligible practical effect.

Update: Did a little editing that I should have done before posting. Also, the discussions of town planning that this post arose from were referring to Britain; I didn’t generalise to the US. But Candide tweeted: Uhm. What do you think white flight was if not mass emigration to the nation of America-with-cars? — which seems pretty persuasive to me.

Post-Liberalism

This is another of these posts written to be a reference point for something that’s been talked about quite a bit.

There was once this political philosophy called Liberalism. It was based on the idea that a person shouldn’t be under the authority of another more than was absolutely necessary.

(For the purposes of this post, I am referring to advocates of this philosophy as liberals — do not confuse that with later users of the same label.)

Codified by twentieth-century autists, this became the Non-Aggression Principle — that the only justifiable reason to interfere with anyone else’s actions is because those actions harm someone else.

In its less rigid form, from the 18th century on, the idea that individuals should have wide latitude over their own behaviour, subject to protection of other people, and also subject to various unprincipled exceptions that I’ll get to in a moment, was the foundation of the modern world. Industry and science flourished in conditions of freedom.

Successful and beneficial as liberalism was, it was never entirely logically coherent. First, there were many restrictions on freedom that didn’t have to be justified because they were too obvious to question. Most early liberals were Christian. Even those that weren’t had all been raised in Christian society, and absorbed some degree of Christian morality, often weakened but still present. The few who managed to overcome any trace were far from the mainstream. (Thomas Paine comes to mind).

Second, not every form of liberalism respected private property, but all the ones that worked did. There are theoretical arguments for why liberalism necessarily implies private property, but as I wrote once before, they aren’t very convincing.

Third, and most crucially, the limits of what constitutes harm from one person’s actions on another person are entirely arbitrary. Every action has an expanding and diminishing wave of effects. Every fire has smoke, every building has a shadow, every animal produces waste. Harms such as slander or distress can be caused simply by speaking, even by speaking the truth.

(In the twitter thread that this started from, I linked this excellent piece by Ed West, on just how much the outcomes of people’s lives depend on the behaviour of their neighbours.)

Liberalism worked because there was a fairly common understanding of which harms were “de minimis” and which were not, inherited from former, much less liberal societies. This common understanding wasn’t rational; it was only traditional. Now that those traditions have been lost, there is no way to get them back.

The chief harm that is recognised today, that makes liberalism a dead letter, is not a new one. It is the one that opponents of liberalism always advanced as its chief cost, and which has a history going all the way back to the trial of Socrates.

In the twenty-first century, any public action at all can be seen by one group or another as corrupting the minds of the youth.

So be it. I quite like the results of old-school liberalism, but as a philosophy it is bunk. Since everyone now acts in accordance with the idea that the minds of the youth should be protected from corruption, it is defeatist to be half-hearted about it. Twitter today is full of two controversies: a mild joke about women, and drag shows for children. If the war is to be fought over what corrupts the minds of the youth more, let battle begin.

Some new theoretical justification for freedom would be nice, but it can wait until the cult of universal queerdom has been, if not defeated, then at least fought to a truce to the extent of being one religion among many, not the compulsory True Faith.

Normality

A cloud of related ideas here:

First, what is considered normal comes from subcultures. People get their ideas of what is normal from the people they interact with regularly. Different subcultures can exist in close physical proximity – for example, different social classes traditionally had very different views of what was normal behaviour.

Speculation: are people today more ignorant or dismissive of other subcultures? I observed previously, for example, that the rich used to have more personal contact with the poor – they had servants, tenants, etc. that they knew as actual people, though not the same sort of person. Today technology makes it easier for the rich to avoid dealing with people from other classes, and an ideology of equality makes it embarrassing to do so, since you are supposed to believe that they are of the same culture as you, even though they blatantly aren’t.

Social class is just one example; as another, there are obvious differences in the way of life between urban, suburban and rural environments. Young people in cities can meet each other in the evenings easily – young people in suburbia are more isolated from each other.

Really important point: people’s behaviour is much more constrained by what they consider normal, from their subculture, than by what they believe to be true intellectually.

Next consequence of this: crime and order. If, by effective enforcement, you make law-abiding behaviour normal among most subcultures, you will not have much crime. This is really the only way to not have much crime.

A society where it is not normal to commit crimes can do all sorts of things that are otherwise impossible. This goes back to a post I made way back in 2005. The biggest cost of crime is the forgone opportunity – all the things we could do, but don’t because we would run too much risk of crime. As I mentioned on twitter this week, the concept of a supermarket — goods displayed in the open for customers to pick for themselves and bring to a checkout — depends on an assumption that people just walking out with the stuff will be rare enough that you can handle it. (That assumption is apparently starting to fail in some areas now, such as parts of San Francisco). In Britain in the 19th and 20th Century, rarity of crime was one of the basic presumptions that people didn’t have to think about.

Aside: Not only that, but, in accordance with my original point, what crime there was was largely in certain subcultures — the immigrant “rookeries” of London’s East End, for example. Away from those subcultures, it was rarer than average statistics suggest. Even today, much of the civilised world still lives in an environment of very low crime. (That’s a point Steve Sailer makes from time to time).

This basic presumption obviously gets taken for granted. That’s the root of my divergence from libertarianism — given the presumption of an ordered society, it is fine. However, that ordered society needs to be actively preserved.

When I made the point about supermarkets on Twitter, obviously there was a lot of feedback to the effect that, as in the Dickensian rookeries, it is in minority subcultures that the law-abiding norms are not present. Even accepting that, though, it is possible for effective law enforcement to change what is normal in those subcultures. Obviously the story in San Francisco is that the abdication of law enforcement is the immediate trigger. (I say “story” deliberately — I’m always cautious about pretending to understand what is going on so far away, and the reality may be a lot more complex than what I can see. However, I will stand by the logic of what I am saying here while being open to more information on the detail).

Chokepoints

Quick placeholder here to identify a concept that comes up repeatedly.

Governing (in the very broadest sense) is partly about principle, and partly about practicalities. You can decide you want something to happen, but it might be easy to act effectively, or difficult. You can pass a law, but it might be easy to enforce, or difficult.

Those practicalities are affected by what the normal behaviour of people is.

One example: if most people are employed by one company or another, government can have a lot of influence by attaching rules to that employment relationship — it can collect income taxes, ensure minimum welfare, regulate safety, etc. Employers can conveniently be made agents for the government — information-gatherers, providers, or enforcers.

There are many other examples. If goods come into the country through a few ports, government can exert a great deal of control easily by closely regulating those ports. If people all go to the same church, the government can monitor and influence their views by acting through that church.

However, behaviours like this change. In the case of the employment relationship, as one example, it has in the last decade become much easier to work short-term. The canonical example is Uber: Uber can provide a lot of the functions of an employer — giving a worker a fairly steady stream of work for different end consumers, doing marketing, payment handling, paperwork — without actually being an employer. YouTube makes TV programmes without employing producers and presenters. The influence that government used to have at that “employment” choke point is gone in those cases.

The most topical example of this wider phenomenon is of course media. If news and entertainment came from a small number of newspapers and broadcasters, those were choke points that allowed government to amplify its control.

When a valuable control point, such as TV broadcasting or long-term employment, dissolves away, government has a serious problem. It has four choices:

  • Expend more resources to achieve the same amount of control
  • Give up control
  • Find new choke points
  • Try to force people back into the old choke points

There’s no value judgement here. I’m not an anarchist; government needs to govern, and the optimal mechanisms for governing, at any point in time, are affected by the affordances provided to the government by common patterns of behaviour.

Whenever you see controversy around technology — because technology changes the way people interact and moves choke points — it usually comes down to this question.

Update July 2023: This is a two-way process. Chokepoints can disappear, as described above, and also new chokepoints can emerge.

For better or worse, cash is on the way out. More of everyday life is being mediated by banks and other money transfer institutions, which are accessible to government regulation like the media companies.

For better or worse, this enables government to have more policy control over commerce. You can campaign for a government to abjure that new power, but in the long run it is unlikely that any will do so.