He might of course have been lying, but if not he has been punished for what his computer, and facebook’s computers, did on his behalf.
The point is that the law has to decide how much responsibility a person has for what their computer decides to do.
Up till now, the assumption has been that whatever your computer does is done at your request, and you are wholly responsible. This despite the fact that that has never been true, and is getting further from the truth every year.
There is no legal tradition to apply here. The nearest analogy to the relationship between a person and his computer is the relationship between a man and his dog.
People have kept dogs for thousands — most likely tens of thousands — of years, so everyone has a rough idea what the deal is. The general legal view is that you have a duty to keep your dog from causing harm under foreseeable circumstances, but there is a distinction between what your dog does and what you do. If your dog attacks a child, you are not guilty of Grievous Bodily Harm, but you might be guilty of keeping a dangerous dog. If your dog craps on the street, that is different from if you crap on the street, but you might still be fined.
If you are found guilty of not properly controlling a dog, you can be banned from keeping one. If your dog causes harm and is considered not to be controllable, the court can order it to be destroyed.
(If you deliberately cause your dog to kill someone, that is still murder of course, but your intention is crucial.)
This is the only rational legal framework for crimes committed by a computer without the intention of its owner.
I commented on a post at Tim’s:
The gist is that the CiF poster he quotes does not believe that we can go on with national governments acting purely in their own countries’ interests:
“Gordon Brown needs to change the course of New Labour and replace the national agenda with a new cosmopolitan realism in order to tackle the challenges of terrorism, globalisation and climate change.”
The problem is that this is anything but a change of course for New Labour. As I quoted in my comment:
Today the impulse towards interdependence is immeasurably greater. We are witnessing the beginnings of a new doctrine of international community. By this I mean the explicit recognition that today more than ever before we are mutually dependent, that national interest is to a significant extent governed by international collaboration and that we need a clear and coherent debate as to the direction this doctrine takes us in each field of international endeavour. Just as within domestic politics, the notion of community – the belief that partnership and co-operation are essential to advance self-interest – is coming into its own; so it needs to find its own international echo. Global financial markets, the global environment, global security and disarmament issues: none of these can be solved without intense international co-operation.
That was Tony Blair in 1999, encouraging the US to stay the course – behind Bill Clinton – of subjugating the Balkans.
The election of the relatively anti-internationalist Bush in 2000 was a setback for New Labour’s “International Community”, but luckily for Blair, September 2001 brought Bush over into the internationalist camp.
If one truly wants a global authority to deal with global warming, or anything else, there are two things that need to be done:
- Create a global authority.
- Get it to agree with your policies.
It’s conceivable that a global authority, once existing, could change its policies, but not that a bunch of people that agree with some policy, but have no power, could become a global authority. So the appropriate strategy would be to encourage whatever practical internationalism exists, and then to change its policy. The only internationalist movements with realistic access to power in the world today are the US neoconservatives, and the EU. I have already explained why the EU does not, and will not, have sufficient power to challenge the US, so any internationalism today must start with neoconservatism.
If I believed what Ulrich Beck claims – that only a system of global cooperation can save us from catastrophe – my political strategy would be to throw in totally with the War on Terror. If the US gained the support of the EU to make Iraq into a colony, and then conquer Iran, world government would be that much closer. A powerful military base in the Middle East would put more pressure on the other major oil producers in the region. Venezuela, Canada and Nigeria are all relatively easy to handle. The next stage would be to bring Putin to heel. I admit I can’t see an easy way to do that, unless our Empire’s oil production can be hugely ramped up. A carefully placed nuclear “accident” might do the job, perhaps.
Once substantially all the world’s oil comes under the control of the Empire, it could rule the world. The politics of environmentalism would at that stage be very useful as a rationale for politically managing the oil supply, so it should not be too difficult to apply stage 2 of the climate change strategy, and convert the Emperor to the desired policy.
This whole political programme is, I must admit, very unpleasant. We are talking about at least two decades of continuous war of Imperial conquest. But, as Ulrich Beck says:
When taken seriously and thought through to its logical conclusions, climate change demands a political paradigm shift.
so, we must ask, are we prepared to make the necessary sacrifices, or aren’t we?
What do I think of democracy? I’ve been contradicting myself like mad recently, so I need to take stock.
The Mencius Moldbug theory, which I referred to this morning, is that democracy is something which the ruling caste wastefully pretend to be governed by. It has no substantive effect on policy, but carrying out the rituals helps to prevent the masses from rising against the permanent government.
I don’t buy that. I don’t really think that democracy is the “rule of the people”, but I do think its effects can be underestimated. What in many cases produces the underestimate is the observation that elections rarely change anything significant. However, that would be the case even if democracy were working perfectly. Politicians in the modern age know pretty well what will get them elected and what won’t, and therefore take the positions that will get them elected. The election, provided the politicians are acting sensibly, is a non-event. Looked at that way, it is a sign of the imperfection of the democratic system that elections have any effect at all.
So, we have some democracy. Good thing or bad thing?
I am going to be boringly conventional and say it is better than the alternatives I have come across. Mencius has not really explained his alternative: Abu Dhabi, Singapore and other port city-states are not necessarily replicable across real countries, and while I get that the enlightened self-interested despot would produce an open, free, high-economic-growth society that he could extract the maximum tax revenue from, I don’t see how he would prevent his subjects using their freedom to try to grab his loot. I don’t think today’s AR-15 vs armour comparison really covers the difficulty of holding onto power without a highly militarised police state. I stand by what I wrote here last year: The biggest cost (in the widest sense) of any political system is that which it expends in preventing its overthrow.
So if democracy is a necessary expense for a society free enough to have a really good economy, what about the story today that repressed societies are growing faster? Well, I agree with Tyler Cowen that they are not yet at the level of productivity that would be inconsistent with their lack of freedom. That is, I am claiming that what repression limits, more than freedom does, is productivity, not growth.
It still remains to decide whether – given that democracy is just part of the overhead cost of freedom – we should have lots of democracy, or just a minimum. This morning I was arguing for a minimum, but in the past I have asked for more than we actually have currently in the UK. Bryan Caplan claims that the US government follows better economic policy than it would if it actually obeyed public opinion.
I’m not sure. I suppose that despite the undemocratic features in the UK that I’ve complained about, the actual policies I object to are not ones that are opposed by the large mass of public opinion, and so more democracy would not actually help.
Via Arnold Kling, I find Unqualified Reservations. What a rollicking good read. The key insight is one which I have accepted but never managed to make so vivid – that the supernatural component of any religion is relatively unimportant and malleable. One point I did make earlier is that modern dominant “secularism” is rather different from 19th-Century underdog “freethinking”, but this blogger “Mencius Moldbug” not only makes it but explains it.
If there’s a criticism, it’s “so what”. That’s not a strong criticism: describing the world accurately is worthwhile even if it doesn’t lead to obvious courses of action, but we must remember the “why do we care” test to distinguish real meaning from word games.
For anyone who works for a living, the biggest threat to his livelihood is that his job will be made easier. For if it is made easier, someone else might be able to do it.
On the other hand, making jobs easier is the main effect of technological progress. It is the process that has given us the wealth that we now live in.
When is it then a bad thing for a job to be made easier — to be deskilled?
First, when it doesn’t work. That is, in my experience, the most visible form of bad management — an attempt to codify a job with a set of procedures, in the hope that the particular skills of the worker can be replaced by the written procedures. If it worked it would be socially beneficial, but all too often it just means that the job is just as difficult as it was, but there is then an added difficulty of pretending to follow the procedures.
The other time is when it would be better to make workers more skilled. After all, workers becoming more skilled is equivalent overall to jobs becoming less demanding. However, the incentives are different, as the benefits of deskilling a job stay with the employer, whereas the benefits of improving a worker move with the worker.
Historically, I think efficiency has come much more from deskilling jobs than from improving workers, but it would be wrong to ignore the other process.
Of course if a job isn’t done quite as well by relatively unskilled workers, that doesn’t necessarily make it bad. A handmade shoe might be better than a mass-produced shoe, but the general replacement of handmade shoes with mass-produced shoes is surely a huge improvement in efficiency.
In the market there is a constant pressure to improve efficiency by using fewer or cheaper workers. At the same time, workers want to become more skilled, and to use their skills. The task of improving efficiency and getting it right is difficult, and seems to me to depend mostly on the managers actually in touch with the workers, not the top of the hierarchy.
Back in the 1980s, the big thing was deskilling those middle managers. In a static situation, that would make sense: the workers know how to do their jobs, the senior management does strategy, and the middle managers are a waste of space. But to actually produce change, skilled middle managers are needed.
In the public sector, the process does not operate the same way. There is an unending trend towards workers becoming more skilled, and more expensive, and not the steady pressure to find ways to do the job with slightly less skilled workers. Instead, we see skilled public-sector workers like doctors, teachers and police officers becoming steadily more trained and scarcer, until senior management (the government) is forced to try to fill gaps by dragging a whole new layer of worker in to do the job which the original workers are now too skilled and too expensive to do. That is the story of the Nurse Practitioner, railed against so steadily by Dr Crippen. It is the story of the Police CSO and the Learning Assistant to the class of 40 pupils.
The case of teachers is particularly striking, because it is necessarily a skilled job, and because the system needs so many teachers. As of 2003 the country had over 400,000 teachers (full time equivalent). As more pupils stay in education to 18, the demand will rise. There are certainly worries about the standard of some of the teachers. But we aren’t going to get better teachers than we’ve already got – not 400,000 of them. Any improvement in schools can only possibly come by making it easier for actually existing teachers to teach effectively — by, wherever possible, deskilling their jobs. The solutions that actually come down from government, however, always seem to involve demanding extra skills from teachers. If you can teach well, but you’re not good at writing formal lesson plans, you’re now not a good teacher. If you can teach well, but you can’t impose discipline on a gang of rowdy teenagers, you’re now not a good teacher. If you teach well, but you refuse to pay lip service to the many political nostrums handed down from on high, you’re now not a good teacher.
If we had a surplus of good teachers, we could get away with all this, but demanding more skills from a profession that numbers in the hundreds of thousands can’t be done. If you employ 400 people, you might be able to get better workers to do a more demanding job. If you employ 400,000 that’s out of the question.
As I said, in the private sector attempts at deskilling jobs often fail. The only way we will see any improvement in these public sectors, without a large risk of catastrophe as possible improvements fail, is to allow variety. And that, of course, is the one thing this government more than any other has stamped out.
This discussion has been slightly aimless, but it’s a huge question — the driving force of human progress — and there’s a great deal more that needs to be said. It was brought to mind by Theodore Dalrymple’s piece on the medical student problem, and by chris dillow’s comments on it.
Further to my thoughts this morning on the separation of public and intimate relationships, it occurred to me that I missed some interesting connections.
I wrote that we need emotional commitment where we can’t achieve commitment via public enforcement (contracts) because the considerations required can’t be specified precisely enough (perhaps because flexibility is itself a key consideration). Possibly more important is the fact we can’t enforce publicly (using the law) something that is supposed to happen in private, without witnesses. This came up before when I defended old-fashioned courtship patterns as a way of avoiding the unpleasantness that can result from being alone without witnesses with an untrusted partner.
The concept that keeps coming up is the cost or difficulty of enforcing any arrangement. Whether I am talking about intimate relationships, the basis of property, the structure of government, law and order, or the business models of entertainment products, it keeps coming up as the decisive factor. Either I have a bee in my bonnet about it, or it is being generally overlooked: treated as a minor implementation detail to be worked out later. Or both, I suppose.
Another stray thought on drawing a boundary around the intimate is Linus Torvalds’ famous quote: Software is like sex — it’s better when it’s free. Taking the idea altogether too seriously, what might there be about the writing of software that makes it more suitable to being motivated by emotional commitment rather than public bargain?
It might just be the undefinability of the requirements. A piece of software isn’t much to look at, it’s very difficult to assess its value in advance. Even if you can determine that it functions correctly, that’s not a complete assessment — quality of software is notoriously difficult to define. If you have the freedom to take what you need from software, that is perhaps more valuable than a predefined functional specification.
Patri Friedman points out in a comment that, since “correlation is not causation”, using the correlation between my vote and those of others to estimate an amplified effect for my vote is bogus.
Oh yes, so it is.
That almost disposes of the question. But my thought experiment about identical robots all voting the same way is still valid, I believe. And while I and some other voter I pick out are not robots and not identical, we are phenomena in a physical universe with some strong mechanical resemblances.
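The identical-robots thought experiment can even be put concretely. In this sketch (the decision procedure and its inputs are, of course, purely illustrative), every robot runs the same deterministic procedure on the same inputs, so there is no way to change one robot’s vote without changing them all:

```python
# Thought experiment: N identical, deterministic robots share one decision
# procedure. "Changing one robot's vote" necessarily changes every robot's
# vote, so the shared decision makes N votes of difference, not one.

def decision(inputs):
    # Hypothetical shared decision procedure: vote "yes" exactly when the
    # perceived benefit outweighs the perceived cost.
    benefit, cost = inputs
    return "yes" if benefit > cost else "no"

def election(n_robots, inputs):
    # Every robot runs the same procedure on the same inputs.
    votes = [decision(inputs) for _ in range(n_robots)]
    return votes.count("yes")

# With benefit > cost, all 1000 robots vote yes; flip the inputs and all
# 1000 flip together.
print(election(1000, (2, 1)))  # 1000
print(election(1000, (1, 2)))  # 0
```

Flipping the shared inputs moves all 1000 votes at once: the difference made by the one shared decision is 1000 votes, not one. The open question for humans is how far we resemble this picture.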
Like Newcomb’s paradox, it comes down to the nature of human choice. The traditional view is that each person is an independent entity that can make uncaused choices at any point in time.
That traditional view is implicit in the question, “what difference does it make whether I vote or not?”. The assumption is that, in imagination at least, we can hold the whole world constant and consider it with or without me voting.
As I have implied by talking about robots, the traditional view is not true. My mind is part of the world, and you cannot “hold the world constant” without holding my decision also.
One response to the problem is to say that the whole question is invalid, humans do not make choices, they are “moist robots” (as Scott Adams would say) following their predetermined programs.
But the question clearly is valid. We maybe cannot hold the world constant in every last detail while varying my decision, but surely we can come close enough for the question still to make sense. We will just have to assume some small changes to the world to be consistent with my decision being changed.
Now if we vary, for instance, how much of an idiot the candidate is, we will get an answer to my question very much greater than one. But that’s silly. Whatever the question really means (because I’ve demonstrated it’s not quite as unambiguous as it looks), it doesn’t mean that. Facts we have observed must be held constant.
It would be a more sensible interpretation of the question to, for instance, hold the universe outside my skin constant, while varying the inside as far as necessary to be physically consistent with different votes.
If we do that, then the answer we will come up with is that my vote makes exactly one vote of difference – the whole argument I made in the first place is wrong.
But varying my brain is not straightforward, even in principle, because it breaks continuity over time. In order to be imagining a physically possible universe that is nonetheless consistent with the history we have observed, I might have to vary unobserved facts that extend beyond my brain and body. Those facts may even extend into other voters’ brains and bodies, possibly giving me the >1 answer I wanted. This is what was nagging at me in the first place: the notion that “my mind” is not quite something that can have a neat boundary drawn around it, that it is some kind of extended phenotype. In the identical robots example, there is only really one mind, that is duplicated or distributed in space, which is why one decision produces many votes. As Dennett says in Freedom Evolves, “if you make yourself very large, you can internalize anything”. In order to internalize the decision to vote, that is, to be able to describe it as something I have done, might I need to make myself large enough that I overlap with others?
That is a coherent possibility, but it seems much more likely that to create the hypothetical implied by the original question, we could vary my vote without varying past observed facts by merely varying quantum randomness in my brain between now and when I vote, or, failing that, that varying unobserved facts in my brain back to my birth would be sufficient. In either case, 1 is a reasonable answer to the question “How many votes of difference does my decision to vote make?”
The question is: How many more votes will my candidate get if I vote for him than if I don’t?
The question is too vague to give an absolutely rigorous answer – changing my vote requires, in order that physics be consistent, that other things (by implication, things that are too small for us to have observed) are changed also. Depending on which other things are changed, the answer possibly could vary.
However, there is a large probability that the most straightforward possible answer to the question is, one vote, meaning that unnoticeable changes inside my body are enough to change my vote without being inconsistent with the observed past.
I’m slightly disappointed (I liked the idea of getting free extra votes), but, on the other hand, the answer is the one that is consistent with “free will”, so if you’re insecure about whether you have free will, the answer is good news for you.
And I’m pretty sure I’m close to having a good answer to Newcomb’s paradox, which is the same kind of question. It’s an attempt to turn the question of free will into a motivated question. Asking about things like free will in the abstract tends to degenerate into arguing what the words mean, and unless there’s some reason to care, then one meaning is as good as another. Taking both boxes is an assertion that you have independent free will, and that you are not just a cog in a machine, but at the same time it’s a choice that matters and could cost you money if you’re wrong.
This has rumbled on for a long time.
I fall more in the pro than the anti camp, but with reservations.
I am not convinced by claims that Wikipedia is as accurate as Britannica, and it would be very surprising if they were true. The “latest snapshot” of Wikipedia cannot be authoritative in the way a managed encyclopedia or a textbook can be, and I am disturbed to see Wikipedia cited in scholarly articles or legal opinions.
However, to me those aren’t the main point. Wikipedia is not really in competition with premium encyclopedias or university-level textbooks – its easy availability and massive scope put it into a different category. It makes more sense to compare it with other casual ways of gathering information – conversations in the pub, the popular press, TV programmes, memories of junior school lessons.
In my opinion, we get most of our information about most subjects from sources considerably less reliable than Wikipedia. Take the question of early-20th-century history I mentioned last week. My first source of information was a historical novel – low reliability. Next was Wikipedia – surely more reliable than a work of fiction. Thirdly I discussed it with my wife – not in general a high-reliability source, but since I happen to be married to a history teacher, the source has a certain authoritativeness. Less detailed and accurate information, in fact, than Wikipedia, but while it is possible that Wikipedia might be seriously misleading or incomplete, it is less likely that a history graduate, even one rusty in the particular subject, would be so.
But authority is a niche market, though an important one. When you need an authoritative answer, nothing else will do. Most of the time, you don’t. Unless you have a good reason to find an authoritative answer, you’re not likely to find one – they don’t grow on trees.
The other question is how much better Wikipedia will get. I think the answer is not much – it has grown rapidly to the limits that its structure puts on it. The organisers will continue to tweak the rules to balance new contribution, vandalism and editing, and it will continue to expand in scope, but the basic level of quality is probably about where it will remain. As I’ve said, that quality is very high for most purposes, but not high enough to displace truly authoritative sources of information.
A couple of asides on possible derivatives of Wikipedia: It might be possible to take a snapshot of Wikipedia as a starting point for producing a truly authoritative encyclopedia – it would probably be easier to check the articles there than produce authoritative ones from scratch. Such a product would compete with “real” encyclopedias, but would not compete so much with “live” Wikipedia, which gets much of its value from its currency.
Also, I’m not up to speed with current work on A.I., but I’ve tended to the view that a missing element is the very large amount of stuff you need to know to have any kind of ordinary conversation with a human being. I can’t help wondering whether the enormous database of “general knowledge” that is Wikipedia might at some stage form a key part of the first natural-language speaking A.I.
Update: Tim Bray makes the point that Wikipedia’s popularity as a reference is partly due to the fact that those who could provide authoritative information in the public domain aren’t doing so in a sufficiently organised way.
From my comment on Tim Lee’s question about Blair:
Blair’s “third way” is the traditional socialist belief that the economy, the country and the world can be managed and moulded to greater effectiveness, but with the old socialist economics modified by a magic sprinkling of private-sector fairy dust that would prevent repetition of the failures of the old state-run industries.
There is a perfect consistency between the belief that every public service and every industry can be improved by expert target-setting and regulation, and the belief that the Middle East can be made better by expert regime change.
The fairy dust is worth elaborating on. What I am talking about, of course, is PFI – the Private Finance Initiative, the idea that private-sector efficiency can be achieved in public functions by means of contracting with private suppliers to fulfil the functions.
The idea is not totally false. If there genuinely is an already-existing market for a particular service – say rubbish collection – then there is a good chance that the government can do better by entering that market than by organising and employing its own collectors. But it is usually the case that if there is a working market for something, the government should not be doing it at all in the first place, either directly or indirectly. PFI has most often been employed in areas which are in practice pretty much government monopolies. There is no competitive market in running prisons, and not much of one in building hospitals.
The reason I refer to PFI as “fairy dust” is because it is employed without any understanding of what makes the private sector different. The point is not the manner of organisation, but the pattern of incentives. The sales manager of a business unit which sells services to the government under PFI is as much a part of the public sector as any civil servant. His personal success depends on satisfying his government superiors/clients, accounting to them for the services he delivers and the resources he expends. If he satisfies them, he will win more contracts. He is in competition only with his peers – those who are selling the same class of services to government.
Von Mises produced an incredibly precise critique of PFI decades before its introduction to the UK, in his 1944 book Bureaucracy:
It is a widespread illusion that the efficiency of government bureaus could be improved by management engineers and their methods of scientific management. However, such plans stem from a radical misconstruction of the objectives of civil government.
Like any kind of engineering, management engineering too is conditioned by the availability of a method of calculation. Such a method exists in profit-seeking business. Here the profit-and-loss statement is supreme. The problem of bureaucratic management is precisely the absence of such a method of calculation.
By coincidence, just after posting my previous piece on the importance of considering political survival as a constraint on any government, the EconTalk podcast on The Logic of Political Survival came out. Like others, I found this fascinating, and not satisfied with the 88 minutes of interview with Bruce Bueno de Mesquita, I bought the book, which arrived on Friday.
The book in many ways lives up to its promise, but there are a few annoyances with it. The errors in the English are quite distracting: the first chapter is titled “Reigning in the Prince”. It’s just conceivable that this is some kind of clever pun, but if so it doesn’t quite work – it looks much more like ignorance of what reining in is. The title is taken from the sentence on page 4, “On the basis of our analysis, we propose ways of reigning in not only Hobbes’s Leviathan, but Machiavelli’s well-advised Prince as well”. That makes perfect sense if it means “reining”, but has a quite different meaning if “reigning” were really intended, as well as sloppy grammar. (That is, it could refer to ways of reigning in the context of Leviathan or The Prince. I think it doesn’t.)
The other problem is more general, not specific to this book. As you can see, the book compares the authors’ theories with those of other political scientists, from Hobbes and Machiavelli to the present day. There is a glaring omission, though: an influential political thinker who produced a large body of work looking at the same questions. I shudder to think of the sarcasm that will come my way when I tell a Marxist about a new theory that governments are necessarily constrained for their survival to act in the interests of a definable powerful subset of the population.
This is not a problem with the theory – while the “Selectorate” is in many instances identical to what some call “The Ruling Class”, in other cases it isn’t, and the reasons for both the similarities and the differences between the two concepts are illuminating. But because of that, I think the comparison would have been worth making by the authors. If the Selectorate is identifiable as a class in the Marxist sense of sharing the same relationship to the means of production, that makes it easy for the leader to produce semi-public goods that benefit the Selectorate at the expense of the rest of the population, and harder to produce goods that benefit the “Winning Coalition” at the expense of the rest of the Selectorate. Conversely, if the Selectorate cuts across classes, then many policy choices would be available to favour one class within the Selectorate at the expense of others. This will have interesting implications for the behaviour of the government within the context of the theory, which, in the part I’ve so far read and in the parts I’ve skimmed, don’t seem to be studied.
Of course, there are many avenues for further development opened by the theory, and that is just one, but it is an obvious one that occurs to anyone who has, like me, a passing even if hostile acquaintance with Marxism.
I have a bit of an interest in Catholic theology, on the basis that since it is what the brightest minds that half the world could produce spent about a thousand years on, it is likely to have some value, even if it is fundamentally flawed. In the same way, a large proportion of political science in the twentieth century was carried out in a Marxist framework, and while it is no doubt the worse for it, it is a stretch to dismiss it as worthless, less worthy as a point of comparison than Hobbes or Machiavelli, or to examine Lenin and Mao as political practitioners without giving any attention to the theories they expounded before coming to power.
Even if I am wrong about there being useful insights in Marxist theory that are worth looking for, it is also the case that the world today contains a large number of ex-Marxists, ex-Marxist political parties, and even ex-Marxist countries. Is it really the case that they need to forget everything they ever learned about politics? So long as the dogmatic approach is rejected, it would seem more productive to show that modern free political scientists are looking at the same questions as the Marxists in much the same way, and drawing conclusions that in some cases agree and in others disagree with aspects of Marxist political theory.