Nothing really new here, but pulling a few things together.
Start with Joseph K’s observation:
Between the replication crisis and the Great Poll Failure of 2016, quantitative social science has basically committed suicide
— Joseph K. (@fxxfy) November 9, 2016
This is a good point, and I added that the failure of financial risk models in 2008 was essentially the same thing.
The base problem is overconfidence. “People do not have enough epistemic humility”, as Ben Dixon put it.
The idea in all these fields is that you want to make some estimate about the future of some system. You make a mathematical model of the system, relating the visible outputs to internal variables. You also include a random variable in the model.
You then compare the outputs of your model to the visible outputs of the system being modelled, and modify the parameters until they match as closely as possible. They don’t match exactly, but you make the effects of your random variable just big enough that your model could plausibly produce the outputs you have seen.
If that means your random variable basically dominates, then your model is no good and you need a better one. But if the random element is fairly small, you’re good to go.
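The fit-then-check loop described above can be sketched in a few lines. Everything here is a made-up illustration, not anyone's actual model: a linear relationship, Gaussian noise, and an arbitrary 10% threshold for "the random element is fairly small".

```python
import numpy as np

rng = np.random.default_rng(0)

# The "real system": visible outputs driven by an internal variable plus noise.
x = np.linspace(0, 10, 200)
observed = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

# Our model: output = a*x + b + noise. Fit a and b by least squares...
a, b = np.polyfit(x, observed, 1)

# ...then make the random variable just big enough to cover the residuals.
residuals = observed - (a * x + b)
noise_scale = residuals.std()

# If the noise term dominates the fitted signal, the model is no good.
signal_range = abs(a * (x.max() - x.min()))
print(noise_scale / signal_range < 0.1)  # True: noise is small, "good to go"
```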
In polling, your visible effects are how people answer polling questions and how they vote. In social science, it’s how subjects behave in experiments, or how they answer questions, or how they do things that come out in published statistics. In finance, it’s the prices at which people trade various instruments.
The next step is where it all goes wrong. In the next step, you assume that your model—including its random variable to account for the unmeasured or unpredictable—is exactly correct, and make predictions about what the future outputs of the system will be. Because of the random variable, your predictions aren’t certain; they have a range and a probability. You say, “Hillary Clinton has an 87% chance of winning the election”. You say, “Reading these passages changes a person’s attitude to something-or-other in this direction 62% of the time, with a 4.6% probability that the effect could have arisen randomly”. You say, “The total value of the assets held by the firm will not decrease by more than 27.6 million dollars in a day, with a probability of 99%”.
The use of probabilities suggests to an outsider that you have epistemic humility: you are aware of your own fallibility and are taking account of the possibility of having gone wrong. But that is not the case. The probabilities you quote are calculated on the basis that you have done everything perfectly, that your model is completely right, and that nothing has changed between the production of the data you used to build the model and the events you are attempting to predict. The unpredictability you account for is that caused by the incompleteness of your model—which is necessarily a simplification of the real system—not the possibility that what your model is doing is actually wrong.
In the case of polling, what that means is that the margin of error quoted with the poll is based on the assumptions that the people polled answered honestly; that they belonged to the demographic groups the pollsters thought they belonged to; and that the proportions of demographic groups in the electorate are what the pollsters thought they were. The margin of error is based on the random variables in the model: the fact that the random selection of people polled might be atypical of the list they were taken from, and possibly, if the model is sophisticated enough, that the turnout of different demographics might vary from what is predicted (but where does the data come from to model that?).
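The sampling part of that margin of error is simple arithmetic. This is a sketch of the textbook formula, which assumes a simple random sample of honest, correctly-classified respondents and nothing else:

```python
import math

def sampling_margin_of_error(p, n, z=1.96):
    """95% sampling error for a proportion p estimated from n respondents --
    the only source of error the model admits."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical poll of 1,000 people showing 50% support:
moe = sampling_margin_of_error(0.5, 1000)
print(round(100 * moe, 1))  # 3.1 percentage points
```

The familiar "plus or minus three points" comes straight out of this formula; none of the assumptions listed above appear anywhere in it.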
In the social sciences, the assumptions are that the subjects are responding to the stimuli you are describing, and not to something else, and that people will behave the same outside the laboratory as they do inside it. The stated probabilities and uncertainties, again, do not reflect any doubt about those assumptions: only the modelled randomness of sampling and measurement.
On the risk modelling used by banks, I can be more detailed, because I actually did it. It is assumed that the future price changes of an instrument follow the same probability distributions as in the past. Very often, because the instruments do not have a sufficient historical record, a proxy is used; one which is assumed to be similar. Sometimes instead of a historical record or a proxy there is just a model, a normal distribution plus a correlation with the overall market, or a sector of it. Again, lots of uncertainty in the predictions, but none of it due to the possibility of having the wrong proxy, or of there being something new about the future which didn’t apply to the past.
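The "just a model" variant at the end is barely more than a one-liner. The figures below are hypothetical, and the normal-distribution assumption baked into it is exactly the assumption whose failure is being described:

```python
from statistics import NormalDist

def parametric_var(position, daily_vol, confidence=0.99):
    """One-day Value-at-Risk, assuming returns are normally distributed
    with the historical (or proxy) volatility."""
    z = NormalDist().inv_cdf(confidence)
    return position * daily_vol * z

# A hypothetical book: 1bn of assets, 1.2% assumed daily volatility.
var = parametric_var(1_000_000_000, 0.012)
print(f"99% one-day VaR: {var / 1e6:.1f} million")
```

The 99% confidence in the output is conditional on the volatility and the distribution both being right; nothing in the calculation prices the chance that they are not.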
Science didn’t always work this way. The way you do science is that you propose the theory, then it is tested against observations over a period of time. That’s absolutely necessary: the model, even with the uncertainty embedded within it, is a simplification of reality, and the only justification for assuming that the net effects of the omitted complexities are within error bounds is that that is seen to happen.
If the theory is about the emission spectra of stars, or the rate of a chemical reaction, then once the theory is done it can be continually tested for a long period. In social sciences or banking, nobody is paying attention for long enough, and the relevant environment is changing too much over a timescale of years, for evidence that a theory is sound to build up. It’s fair enough: the social scientists, pollsters and risk managers are doing the best they can. The problem is not what they are doing, it is the excessive confidence given to their results. I was going to write “their excessive confidence”, but that probably isn’t right: they know all this. Many of them (there are exceptions) know perfectly well that a polling error margin, or a p-value, or a VaR is not truly what the definitions say, but only the closest they can get. It is everyone who takes the numbers at face value that is making the mistake. However, none of these analysts, of whichever flavour, is in a position to emphasise the discrepancy. They always have a target to aim for.
A scientist has to get a result with a p-value to publish a paper. That is their job: if they do it, they have succeeded; otherwise, they have not. A risk manager, similarly, has a straightforward day-to-day job of persuading the regulator that the bank is not taking too much risk. I don’t know the ins and outs of polling, but there is always pressure. In fact Nate Silver seems to have done exactly what I suggest: his pre-election announcement seems to have been along the lines of “Model says Clinton 85%, but the model isn’t reliable, so I’m going to call it 65%”. And he got a lot of shit for it.
Things go really bad when there is a feedback loop from the result of the modelling to the system itself. If you give a trader a VaR budget, he’ll look to take risks that don’t show in the VaR. If you campaign so as to maximise your polling position, you’ll win the support of the people who don’t bother to vote, or you’ll put people off saying they’ll vote for the other guy without actually stopping them voting for the other guy. Nasty.
Going into the election, I’m not going to say I predicted the result. But I didn’t fall for the polls. Either there was going to be a big differential turnout between Trump supporters and Clinton supporters, or there wasn’t. Either there were a lot of shy Trump supporters, or there weren’t. I thought there was a pretty good chance of both, but no amount of data was going to tell me. Sometimes you just don’t know.
That’s actually an argument for not “correcting” the polls. At least if there is a model—polling model, VaR model, whatever—you can take the output and then think about it. If the thinking has already been done, and corrections already applied, that takes the option away from you. I didn’t know to what extent the polls had already been corrected for the unquantifiables that could make them wrong. The question wasn’t so much “are there shy Trump voters?” as “are there more shy Trump voters than some polling organisation guessed there were?”
Of course, every word of all this applies just the same to that old obsession of this blog, climate. The models have not been proved; they’ve mostly been produced honestly, but there’s a target, and there are way bigger uncertainties than those which are included in the models. But the reason I don’t blog about climate any more is that it’s over. The Global Warming Scare was fundamentally a social phenomenon, and it has gone. Nobody other than a few activists and scientists takes it seriously any more, and mass concern was an essential part of the cycle. There isn’t going to be a backlash or a correction; there won’t be papers demolishing the old theories and getting vast publicity. Rather, the whole subject will just continue to fade away. If Trump cuts the funding, as seems likely, it will fade away a bit quicker. Lip service will occasionally be paid, and summits will continue to be held, but less action will result from them. The actual exposure of the failure of science won’t happen until the people who would have been most embarrassed by it are dead. That’s how these things go.
I’ve heard quite a few times that we can’t get rid of democracy, because we can’t get the votes.
Now, I’m not in any great hurry to get rid of democracy. It’s not ideal, but it sort of works, and when it goes things could get messy.
However, if you wanted to do away with democracy, it wouldn’t be all that difficult. I identified the method back before it was my aim.
The introduction of postal and electronic voting makes elections enormously easy to sabotage. I ranted about the danger back in 2005, and then gradually lost interest in the subject once I ceased to care who actually won any given election. The main safety margin is that nobody cares enough about who wins to cheat.
But cheating and getting away with it is hard — messing things up enough that nobody knows for sure who ought to have won is much easier, just as it is easier to take down a website than to take control of it.
And what would happen if you did successfully DoS an election? It would be pretty spectacular. The nearest we’ve seen was the 2000 US presidential election. That stirred up a lot of trouble, but it eventually more or less settled down. That was not what I have in mind, though — there was no obvious large-scale fraud then; rather, the problem was that the election was so close that the ordinary minor deceptions and inaccuracies made the difference.
In a near dead-heat like that, it will be accepted that you just can’t be sure. But experts have identified a number of local elections in various counties in the US where it can’t be determined who should have won, because of problems with the voting machines. Still, those involve cases where there is no evidence of determined large-scale deliberate fraud.
If we had a general election in Britain, and it emerged that, because of fraud, it wasn’t clear who won, or that it was very close, I don’t know what would happen. We don’t have the same extreme respect for the judiciary, or even the clear formal rules, that allowed the US Supreme Court to settle Florida 2000.
I suspect that in the event, the parties and the civil service would sew it up as best they could, and the business of government would go on. But in the process it would have lost the legitimacy of democracy.
That is why it was a mistake for me to stop paying attention to voting when I stopped caring who won. Because, the way I look at democracy now, it is the impression that the government represents a popular choice that is important: the actual influence of popular will on government is both minor and mostly harmful. But it is the impression that is endangered by unreliable voting systems, so they constitute a bigger risk to the system as I see it than as a democrat would look at it.
Britain is still on paper votes, so it is only through postal votes that the system is vulnerable at the moment, and that only to a quite large-scale attack. But if the system is changed in the direction of networked or electronic voting, then we know what we have to do if we decide to get rid of democracy.
Well, this is embarrassing.
I haven’t actually changed my position, that “I think AV would give voters slightly more influence than they have now. I am quite unsure as to whether that’s a good thing or a bad thing”. I think what really has me upset is that it would have been so interesting to see how party politics would have developed under AV.
Would any of the major parties have split? Would we have got a lot of independents running, and some of them winning? Would the total vote of the three main parties have dropped to about 50%, with several outsiders each picking up 10-20% of 1st preference votes in most constituencies? Now we’ll never know. It’s like having a favourite TV programme cancelled half way through.
In case that sounds shallow, I should point to a few old posts, where I developed the case that the entertainment value of voting actually outweighs any political value. Because this was back in 2007-8, it applies even if, unlike me today, you do believe that voting has some political value.
- Politics and Money
- Democracy and Entertainment
- Entertainment and Policy
- Value of Politicians
- Politics is a Spectator Sport
Sometimes the way to get to a good explanation is to start with a bad one.
The opponents of AV claim that it means voters for fringe parties get their votes counted more times than voters for major parties. This seemed a stupid objection, but I couldn’t quite explain why, clearly and simply.
Yesterday I read John Humphrys’ complete failure to explain why (via Matt Ridley), and it became obvious:
Yes, in AV, your vote can be counted more than once — whether you vote for a fringe party or a winner or runner-up. If there are only two rounds of counting in a particular example, then the person A who votes for the eliminated candidate gets their vote counted twice: for their first choice in the first round, and for their second choice in the second round.
The voter B for any other candidate also gets their vote counted twice, for their first choice both times.
So in the last round, the one that actually decides the winner, voter A gets counted for their second choice and voter B for their first.
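The counting procedure voters A and B go through can be checked mechanically. This is a minimal sketch of AV counting with hypothetical numbers; note that every ballot is counted exactly once per round:

```python
from collections import Counter

def av_winner(ballots):
    """Alternative Vote: each round, EVERY ballot counts once, for its
    highest-ranked candidate still standing; last place is eliminated."""
    remaining = {c for b in ballots for c in b}
    while True:
        tally = Counter(next(c for c in b if c in remaining) for b in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return leader
        remaining.discard(min(tally, key=tally.get))

# Voter A backs the fringe candidate first; voter B backs a front-runner.
ballots = (
    [("Fringe", "Labour")] * 10     # A-type voters
    + [("Labour",)] * 40            # B-type voters
    + [("Conservative",)] * 45
)
print(av_winner(ballots))  # Labour: A's second choice and B's first both count
```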
That doesn’t settle the larger argument of course: you can still argue whether AV has a tendency to produce centrist coalitions and whether that is a bad thing. But there should be no argument claiming that AV is less fair than FPTP, for what that’s worth.
(Disclaimer: I argue about this out of habit, not because I think it matters)
The main valid argument for AV is that it isn’t as sensitive as FPTP to which candidate people think is going to win. It may get rid of the truly inane feature that I reported on at the last general election, where the parties argued more about who was likely to win than about who ought to win.
A second valid argument for AV is that it encourages the expression of non-mainstream views, by not penalising voters for unpopular parties. It doesn’t actually give unpopular parties any more representation, as PR does, but it gives them more visibility.
The main valid argument against AV is that it is likely to produce centrist coalitions, whatever the changes in views of the voters.
Putting the three points together, I have to be in favour. In my theory, the value of democracy is that it has perceived legitimacy, reducing the amount that the ruling establishment has to do to protect itself. The one anti argument actually helps in this regard, as it makes the establishment even more secure.
However, the pro arguments are still applicable, as it is valuable to make the unconventional more visible, as that will aid thinking about what we should do when and if the current establishment does fail.
I don’t have a strong opinion about which voting system future General Elections should use. I don’t think that who gets elected is very important: voters don’t have any control over immediate policy; they only have influence over the long-term direction of policy, and that doesn’t depend on who wins any given election.
However, I used to be very interested in voting systems, and I have an intense dislike of bad arguments. The bad arguments in the AV debate come mainly from the No side.
The silliest is the cost argument. They claim that a switch to AV would cost 250 million pounds. That is highly improbable, and includes the cost of the referendum itself, which is a sunk cost in any case since the referendum is now going to happen. But just take it at face value for a moment.
Assume AV is an improvement — if it is not, then the cost argument is irrelevant. 250 million is about five pounds per voter, and the average voter will probably have the opportunity to vote in another six or seven elections: less than a quid per election. If a significant improvement in the value of a vote is not worth a quid, then what is a vote worth? The only people who should be influenced by the cost argument are those of us who believe that voting is worthless anyway.
There is also talk of voting or counting machines; that is a much bigger and easier argument than AV itself. Introducing machines is a huge mistake. FPTP hand-counted is far superior to AV with machines, since there is no reason for anyone ever to trust the machines.
A bizarre gem came from John Redwood, who wrote on his blog, “we think it undesirable that elections are settled by the second preference votes of those who vote for minor or unpopular parties”. He doesn’t say why. If you like your local independent, or Green, then the fact that you also prefer Conservative to Labour should therefore be of no interest?
A more cogent objection is that AV would produce Labour/Lib Dem coalitions into the indefinite future. I do not dismiss that, but I think it is mistaken. For one thing, the current situation shows that the support for the Lib Dems, being as it is a historically-produced random collection of highly disparate groups, with no policy positions in common at all, cannot survive the Lib Dems actually holding any power. But more to the point, the biggest effect of AV is within the parties themselves.
In 1981, a handful of senior Labour figures broke away from the party to form the SDP. That was only possible because of the utter failure of the previous Labour government, and the sheer disarray that the party was in. The SDP held a handful of seats for a few years, then merged with the Liberal party.
But imagine how much easier the job of splitting a party would be under AV. The problem the SDP faced was that for most Labour supporters, voting for the SDP instead of Foot was more likely to produce a Conservative MP than an SDP MP. AV greatly lessens that effect: if 50% of voters prefer Labour to Conservative, it is almost impossible for the Conservative to be elected because of the Labour vote splitting between two rival factions.
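The split-vote arithmetic can be made concrete. The numbers below are invented for illustration: 55% of a constituency prefers the left, divided between Labour and an SDP-style breakaway, against 45% for the Conservative.

```python
from collections import Counter

# Hypothetical constituency with the left vote split between two factions.
ballots = (
    [("Labour", "SDP")] * 30
    + [("SDP", "Labour")] * 25
    + [("Conservative",)] * 45
)

# FPTP: only first choices count -- the split hands the seat to the Conservative.
fptp = Counter(b[0] for b in ballots).most_common(1)[0][0]

# AV: eliminate the weaker faction and transfer its ballots to the survivor.
first = Counter(b[0] for b in ballots)
eliminated = min(("Labour", "SDP"), key=first.get)
transferred = Counter(next(c for c in b if c != eliminated) for b in ballots)
av = transferred.most_common(1)[0][0]

print(fptp, av)  # Conservative under FPTP, Labour under AV
```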
In fact, other factors might turn out more important than the voting system itself: in the face of the threat of splitting, I would not be at all surprised to see steps taken to defend the leadership of parties from internal dissenters. Pay particular attention to rules on party funding or ballot entry.
I think AV would give voters slightly more influence than they have now. I am quite unsure as to whether that’s a good thing or a bad thing: the Establishment in this country does damage in internal competition and through its religious attachment to Universalism, but on the other hand it is generally less stupid than the voters. So at the end of the day I am in the Whatever2AV camp.
It’s beginning to look quite likely that we could end up with the Alternative Vote (AV) system. Aficionados of electoral reform will tell you that it’s not a proportional system, which is quite true. The results it produces will be quite different from those produced by multi-member STV or d’Hondt. That doesn’t mean, however, that it would not be a significant change.
Unlike the multi-member systems, AV will continue to give small parties no seats. What AV does, however, is allow much more effective signalling by voters. It is very plausible that it could help small parties, over time, become big parties.
The point of AV is that it saves the voter from having to do tactical-voting calculations. Currently, anyone who votes UKIP, or Green, or SSP, or BNP is sacrificing their (tiny) influence on the result of the election in favour of making a public statement. With AV, you can do both – vote SSP, Labour as 1 & 2 – and there is less chance that your SSP vote will let in the Lib Dems. (Not no chance: there are still circumstances in which it might turn out that you would have got a different result by voting Labour, SSP, but they’re complex and not very predictable.)
2.5 million people voted UKIP last year. Only 900,000 did last week, so quite possibly the other 1.6 million didn’t vote UKIP because of the wasted vote issue. If in the next general election, the constituencies which went 8% or 9% UKIP became 25% or 30%, they probably still wouldn’t get any seats, but they’d get a lot more publicity, and they wouldn’t be far short of getting MPs.
The same logic applies to high-profile candidates who defect from their parties to stand as independents. It becomes a straight popularity contest between them and the “official” candidate, since any supporter of the party can vote rebel-1 official-2.
AV might benefit the BNP most of all, since they have most to gain by giving voters a chance to anonymously show support for them. Today, nobody knows, do the BNP get only 2% because nearly everyone hates them, or because they’re a small party and it’s a wasted vote, or because most people think nearly everyone hates them, since they only get 2%? In the last case, it would only take a few election cycles for them to look less like outcasts to those who are secretly disposed to vote for them, but put off by the opprobrium.
At the end of the day, though, a politician will still win. I’m not paying all this attention because I think it’s important, it’s just more entertaining than the Premier League. But if you do care about who wins, then while multi-member STV is still the first choice, you probably shouldn’t turn your nose up at AV.
3 new leaflets this morning – one from the Labour candidate, one card from Nick Clegg, and one letter from Nick Clegg. All three carry pictures of two running horses. The Lib Dems say only they can beat Labour, but Labour say only they can beat the Conservatives. That’s the main point of all the material.
The Lib Dems seem more convincing – for one thing, unlike Labour, their illustration demonstrates that they understand that horse races involve jockeys. But of course, the authorities on horse races are still considering the Lib Dems outsiders, though at 11/2 they’ve nosed ahead of Esther Rantzen.
Only one of the three documents (the postcard from the Lib Dems) has any mention of policy, and one of the four bullet points there is “action to get our economy moving again”, which doesn’t quite qualify as a policy for me.
Anyone out there who thinks that democracy is a good thing – how can it be right that the vast bulk of the material given to me by candidates is concentrated on the question of who is more likely to win? OK, PR would change that somewhat, but really, what is the explanation?
I was wondering whether they had taken into account the small number of people necessary to elect an MP. Luton South is a 4-way contest, so 30-35% of the vote may well win it. If turnout is around 40%, then the winning total may be no more than 12% of the electorate. So finding any way of demonstrating that even a newly-elected MP has the confidence of his constituents won’t be easy.
It turns out that the Tory plan is that a petition of 10% of the electorate forces a by-election. I think I can safely predict that there won’t be a single MP in the house that 10% of the electorate wouldn’t want to get rid of, so the only obstacle to getting a by-election anywhere in the country is being organised enough to collect the signatures.
Any existing research on how easy it is to get signatures is probably worthless, because existing petitions are a complete waste of everybody’s time. This is the sort of thing where people are getting very much more efficient.
I wouldn’t be surprised if the recall plan led to every week being a by-election week. Should be a laugh.