The New York Times has published a long analysis of the effects of the hacking of Democratic Party organisations and operatives in the 2016 election campaign.
The article is clearly trying to present a balanced view, eschewing the “OMG we are at war with Russia” hyperbole and questioning the value of different pieces of evidence. It does slip here and there, for instance jumping from the involvement of “a team linked to the Russian government” (for which there is considerable evidence) to “directed from the Kremlin” without justification.
The evidence that the hackers who penetrated the DNC systems and John Podesta’s email account are linked to the Russian Government is that the same tools were used as have been used in other pro-Russian actions in the past.
(Update 4th Jan 2017: that is a bit vague; infosec regular @pwnallthethings goes into very clear detail in a twitter thread.)
One important consideration is the sort of people who do this kind of thing. Being able to hack systems requires some talent, but not any weird Hollywood-esque genius. It also takes a lot of experience, which goes out of date quite quickly. Mostly, the people who have the talent and experience are the people who have done it for fun.
Those people are difficult to recruit into military or intelligence organisations. They tend not to get on well with concepts such as wearing uniforms, turning up on time, or passing drug tests.
It is possible in theory to bypass the enthusiasts and have more professional people learn the techniques. One problem is that becoming skilled requires practice, and that generally means practice on innocent victims. More significantly, the first step in any action is to work through cut-out computers to avoid being traced, and those cut-outs are also hacked computers belonging to random victims. That’s the way casual hackers, spammers and other computer criminals work, and espionage hackers have to use the same techniques. They have to be doing it all the time, to keep a base of operations, and to keep their techniques up to date.
For all these reasons, it makes much more sense for state agencies to stay at arm’s length from the actual hackers. The agencies will know about the hackers, maybe fund them indirectly, cover for them, and make suggestions, but there won’t be any official chain of command.
So the hackers who got the data from the DNC were probably somewhat associated with the Russian Government (though a comprehensive multi-year deception by another organisation deliberately appearing to be Russian is not completely out of the question).
They may have had explicit (albeit off-the-record) instructions, but that’s not necessary. As the New York Times itself observed, Russia has generally been very alarmed by Hillary Clinton for years. The group would have known to oppose her candidacy without being told.
“It was conventional wisdom… that Mrs. Clinton considered her husband’s efforts to reform Russia in the 1990s an unfinished project, and that she would seek to finish it by encouraging grass-roots efforts that would culminate with regime change.”
Dealing with the product is another matter. It might well have gone to a Russian intelligence agency, either under an agreement with the hackers or ad-hoc from a “concerned citizen”: you would assume they would want to see anything and everything of this kind that they could get. While hacking is best treated as deniable criminal activity, it would be much more valuable to agencies to have close control over the timing and content of releases of data.
So I actually agree with the legacy media that the extraction and publication of Democratic emails was probably a Russian intelligence operation. There is a significant possibility it was not, but was done by some Russians independent of government, and a remote possibility it was someone completely unrelated who has a practice of deliberately leaving false clues implicating Russia.
I’ve often said that the real power of the media is not the events that they report but the context to the events that they imply. Governments spying on each other is completely normal. Governments spying on foreign political movements is completely normal. Governments attempting to influence foreign elections by leaking intelligence is completely normal. Points to Nydwracu for finding this by William Safire:
“The shrewd Khrushchev came away from his personal duel of words with Nixon persuaded that the advocate of capitalism was not just tough-minded but strong-willed; he later said that he did all he could to bring about Nixon’s defeat in his 1960 presidential campaign.”
The major restraint on interference in foreign elections is generally the danger that if the candidate you back loses then you’ve substantially damaged your own relations with the winner. The really newsworthy aspect of all this is that the Russians had such a negative view of Clinton that they thought this wouldn’t make things any worse. It’s been reported that the Duma broke into applause when the election result was announced.
The other thing that isn’t normal is a complete public dump of an organisation’s emails. That’s not normal because it’s a new possibility, one that people generally haven’t begun to get their heads around. I was immediately struck by the immense power of such an attack the first time I saw it, in early 2011. No organisation can survive it: this is an outstanding item that has to be solved. I wouldn’t rule out a new recommended practice to destroy all email after a number of weeks, forcing conversation histories to be boiled down to more sterile and formal documents that are far less potentially damaging if leaked.
It is just about possible for an organisation to adequately secure its corporate data, but that is both a technical problem and a management problem. However, the first impression you get of the DNC is one of amateurism. That of course is not a surprise. As I’ve observed before, if you consider political parties to be an important part of the system of government, their lack of funding and resources is amazing, even if American politics is better-funded than British. That the DNC were told they had been hacked and didn’t do anything about it is still shocking. Since 2011, this is something that any organisation sensitive to its image should have been living in fear of.
This is basically evidence-free speculation, but it seems possible that the Democratic side is deficient in actual organisation builders: the kind of person who will set up systems, make rules, and get a team of people to work together. A combination of fixation on principles rather than practical action, and on diversity and “representativeness” over extraordinary competence meant that the campaign didn’t have the equivalent of a Jared Kushner to move in, set up an effective organisation and get it working.
Or possibly the problem is more one of history: the DNC is not a political campaign set up to achieve a task, but a permanent bureaucracy bogged down by inferior personnel and a history of institutional compromises. Organisations become inefficient naturally.
Possibly Trump in contrast benefited from his estrangement from the Republican party establishment, since it meant he did not have legacy organisations to leak his secrets and undermine his campaign’s efficiency. He had a Manhattan Project, not an ITER.
The task of building–or rebuilding–an organisation is one that few people are suited to. Slotting into an existing structure is very much easier. Clinton’s supporters particularly are liable to have the attitude that a job is something you are given, rather than something you make. Kushner and Brad Parscale seem to stand out as people who have the capability of making a path rather than following one. As an aside, Obama seems to have had such people also, but Clinton may have lacked them. Peter Thiel described Kushner as “the Chief Operating Officer” of Trump’s campaign. Maybe the real estate business that Trump and Kushner are in, which consists more of separate from-scratch projects than most other businesses, orients them particularly to that style.
Something that’s cropped up a few times with recent discussion of neocameralism as a concept is the role of shareholders in existing firms.
Conflicts of interest between principals and agents are one of the most significant forces acting on the structure of any kind of organisation, so when discussing how to apply structures from one kind of organisation to another, it is essential to have a feel for how those conflicts play out in existing structures and organisations.
In particular, I have seen more than one person on twitter put forward the idea that present-day joint-stock companies totally fail to resolve the conflict of interest between shareholders and managers, with the result that shareholders are powerless and managers run companies purely in their own interest:
In discussion of this piece by Ron Carrier from November 24th the author said on twitter,
“Because they are non-contractual, shares are a useful way of financing a company without ceding control…. Contrary to shareholder theory, power in the corporation is actually located in mgmt. and the board of directors.”
More recently (December 9th), Alrenous followed the same path: from the suggestion that dividend payments from public companies are in aggregate very low, he draws the conclusion that stocks are “worthless” and that those who buy them are effectively just giving their money away for managers to do what they want with.
I’m sure Alrenous understands that the theory is that a profitable company can be delivering value to shareholders by reinvesting its profits and becoming a more valuable company, capable of returning larger amounts of cash in future. And of course I understand that just because someone believes that a company has become more valuable in consequence of reinvested profits, doesn’t mean it is necessarily true.
Discussions like this among people not involved with investment professionally carry a risk of being based on factoids or rumour. In particular, mainstream journalists are fantastically ignorant of the whole subject. But in the end everything to do with public companies is actually public, if you can find the information and not misunderstand it. (Note that I am not including myself among the professionals, though I’ve worked with them in the past in an IT role).
At any rate, here is a publication from factset.com dealing with aggregate dividends across the S&P 500:
“Aggregate quarterly dividends for the S&P 500 amounted to $105.8 billion in the second quarter, which represented a 0.8% increase year-over-year. The dividend total in Q2 marked the second largest quarterly dividend amount in at least ten years (after Q1 2016). The total dividend payout for the trailing twelve months ending in Q2 amounted to $427.5 billion, which was a 7.1% increase from the same time period a year ago.”
So, that’s getting on for half a trillion dollars in dividends paid out by the S&P 500 over the last year. Throwing numbers around without any indication of scale is another media trope, but that’s about 2-3% of US GDP, which seems like the right sort of scale.
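That scale claim is easy to sanity-check. A minimal sketch (the GDP figure below is my own round number for 2016, not taken from the article):

```python
# Rough scale check: S&P 500 trailing-twelve-month dividends vs US GDP.
# The GDP figure (~$18.5 trillion nominal, 2016) is an assumption of mine,
# not a number from the article or the FactSet quote.
dividends = 427.5e9   # FactSet: trailing-twelve-month dividends through Q2
us_gdp = 18.5e12      # approximate 2016 US nominal GDP
share = dividends / us_gdp
print(f"{share:.1%}")  # prints 2.3%
```

Which lands comfortably inside the “2-3% of GDP” range quoted above.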
As an aside, if some of these companies hold shares in others, the dividends are effectively double-counted: one company in the set is paying out to another, which may or may not then be paying out to its shareholders. I would assume this is not more than a few percent of the total—even investment companies like Berkshire Hathaway are likely to invest more in private companies than other S&P 500 members—but it’s an indication of the pitfalls available in this sort of analysis.
In addition to dividends, as I pointed out, share buybacks—where a company purchases its own shares on the open market—are economically equivalent to dividends: the company is giving cash to its own shareholders. If every shareholder sells an equal proportion of their holdings back to the company, then each shareholder continues to hold the same fraction of the company’s outstanding shares, and each has been paid cash by the company. Of course, some will sell and some not, but the aggregate effect is the same. The choice between taking cash by selling a proportion of one’s holding and simply holding on, thereby increasing one’s stake as a fraction of the company, lets shareholders manage their tax liability more efficiently, which is apparently why share buybacks have become more significant relative to dividends.
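The equivalence can be illustrated with toy numbers (entirely my own; nothing below comes from the article):

```python
# Toy illustration: a dividend and a pro-rata buyback deliver the same cash
# while leaving each participating holder's ownership fraction unchanged.

def dividend(shares_out, price, payout, my_shares):
    """Company pays `payout` in cash pro rata; the share count is unchanged."""
    my_cash = payout * my_shares / shares_out
    my_fraction = my_shares / shares_out
    return my_cash, my_fraction

def buyback(shares_out, price, payout, my_shares):
    """Company spends `payout` buying shares at `price`; every holder sells
    the same proportion of their stake back to the company."""
    bought = payout / price
    sell_fraction = bought / shares_out
    my_cash = my_shares * sell_fraction * price
    my_fraction = my_shares * (1 - sell_fraction) / (shares_out - bought)
    return my_cash, my_fraction

d_cash, d_frac = dividend(1000, 50.0, 5000.0, 100)
b_cash, b_frac = buyback(1000, 50.0, 5000.0, 100)
print(round(d_cash, 6), round(d_frac, 6))  # 500.0 0.1
print(round(b_cash, 6), round(b_frac, 6))  # 500.0 0.1
```

Tax is where the symmetry breaks in practice: broadly, a dividend is taxable income for every holder immediately, while a holder who declines to sell into a buyback defers any tax until they choose to realise the gain.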
Alrenous found this article from Reuters, which says, “In the most recent reporting year, share purchases reached a record $520 billion.” That’s not the same period as the one I found for aggregate dividends, so adding them together might be a bit off, but it looks like we can roughly double that 2-3% of GDP. As I said on twitter, as a general rule, large companies are making profits and paying shareholders.
The reason neocameralism makes sense is that joint-stock companies basically work.
That is not to suggest that the principal-agent conflicts are insignificant. They are always significant, and managing the problem is a large part of any organisational practice. That is what the bulk of corporate law is there to deal with.
I picked up a recent article in Investor’s Chronicle in which Chris Dillow suggests that management is simply overpaid:
“…bosses plunder directly from shareholders by extracting big wages for themselves. The High Pay Centre estimates that CEOs are now paid 150 times the salary of the average worker, a ratio that has tripled since the 1990s – an increase which, it says, can’t be justified by increased management efficiency.”
However, Dillow also links other sources with other suggestions: the 1989 Harvard Business Review article by Michael Jensen is particularly fascinating.
Jensen claims that regulation brought in after the Great Depression had the effect of limiting the control of shareholders over management:
“These laws and regulations—including the Glass-Steagall Banking Act of 1933, the Securities Act of 1933, the Securities Exchange Act of 1934, the Chandler Bankruptcy Revision Act of 1938, and the Investment Company Act of 1940—may have once had their place. But they also created an intricate web of restrictions on company ‘insiders’ (corporate officers, directors, or investors with more than a 10% ownership interest), restrictions on bank involvement in corporate reorganizations, court precedents, and business practices that raised the cost of being an active investor. Their long-term effect has been to insulate management from effective monitoring and to set the stage for the eclipse of the public corporation.
“…The absence of effective monitoring led to such large inefficiencies that the new generation of active investors arose to recapture the lost value. These investors overcome the costs of the outmoded legal constraints by purchasing entire companies—and using debt and high equity ownership to force effective self-monitoring.”
A quarter of a century on from Jensen’s paper, the leveraged buyout looks not so much like an alternative form of organisation for a business, but rather an extra control mechanism available to shareholders of a public joint-stock company. The aim of a buyout today is, as Jensen describes, to replace inefficient management and change the firm’s strategy, but now there is normally an exit strategy: the plan is that, having done those things, the company will be refloated with new management and a new strategy.
The “Leveraged” of LBO obviously refers to debt: that takes us to the question of debt-to-equity ratio. A firm needs capital: it can raise that from shareholders or from lenders. If all its capital is shareholders’, that limits the rate of profit it can offer them: the shares become less volatile. If the firm raises some of its capital needs from lenders, the shares become riskier but potentially more profitable.
Under the theory of the Capital Asset Pricing Model (CAPM), the choice is arbitrary: leverage can be applied by the shareholders just as by the company itself. Buying shares on margin of a company without debt is equivalent to buying shares of a leveraged company for cash. However, this equivalency is disrupted by transaction costs, and also by tax law.
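The equivalence can be shown with toy numbers. This is my own illustration of the textbook argument, ignoring the taxes and transaction costs that the next paragraphs reintroduce:

```python
# Toy numbers, mine rather than the article's: leverage applied at the firm
# level and leverage applied by the shareholder produce the same equity
# return, in the idealised no-tax, no-transaction-cost world the text refers to.

def levered_firm_return(asset_return, debt_equity_ratio, borrow_rate):
    """Return on equity of a firm that finances itself partly with debt."""
    return asset_return + debt_equity_ratio * (asset_return - borrow_rate)

def margin_return(asset_return, margin_ratio, borrow_rate):
    """Return to a shareholder who buys an unlevered firm's shares on margin,
    borrowing `margin_ratio` times their own money at `borrow_rate`."""
    return (1 + margin_ratio) * asset_return - margin_ratio * borrow_rate

# Assets earn 10%, borrowing costs 4%, one unit of debt per unit of equity:
print(round(levered_firm_return(0.10, 1.0, 0.04), 4))  # 0.16
print(round(margin_return(0.10, 1.0, 0.04), 4))        # 0.16
```

Once taxes and transaction costs are added back, the two routes stop being interchangeable, which is exactly why the firm’s choice of capital structure matters in practice.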
There is considerable demand in the market for safe fixed-income investments. A large profitable company is exceptionally well-placed to meet that demand by issuing bonds or borrowing from banks, and therefore can probably do so much more efficiently than its shareholders would be able to individually, were it to hold its cash and leave shareholders to borrow against the more expensive shares.
The transaction costs the other way, the ones caused by corporate indebtedness, come through bankruptcy. Bankruptcy is essential to capitalism, but it involves a lot of expensive lawyers, and can be disruptive. For an extreme example, see the Hanjin Shipping case in September. It’s clearly in the interest of the owners of the cargo to get the cargo unloaded, but the international complications of the bankruptcy of the shipping line mean that it’s unclear who is going to end up paying for the docking and unloading. If Hanjin had a capital structure that gave it spare cash instead of debt, all this expensive inconvenience would be avoided.
Aside from transaction costs, the argument in Jensen’s paper is that the management of a company with spare cash is better able to conceal the company’s activities from shareholders. In his account, once the company has been bought out and restructured with debt, any expansion in the cost base has to be directly justified to shareholders and creditors, since capital will have to be raised to pay for it. This improvement in the monitoring of the management is part of what produces the increased value (in his 1980s figures, the average LBO price was 50% above the previous market value).
A quarter of a century later, we frequently read the opposite criticism, that pressure from investors makes management too focused on short-term share prices, which is a bad thing. I linked this article by Lynn Stout, and while I think the argument is very badly stated, it is not entirely wrong. The problem in my opinion is not with the idea of managing in order to maximise shareholder value: that is absolutely how a company should be managed. The problem is with equating shareholder value to the price at which a share of the company was most recently traded. Though that is most probably the best measure we have of the value of the company to its shareholders, it is, nonetheless, not a very accurate measure. Given that the markets have a relatively restricted view of the state of the company, maximising the short-term share price relies on optimising those variables which are exposed to view: chiefly the quarterly earnings.
If outside shareholders had perfect knowledge of the state of the company, then maximising the share price would be the same as maximising shareholder value. Because of the information asymmetry, they are not the same. Value added to the company will not increase the share price unless it is visible to investors, and some forms of value are more visible than others. Management are certainly very concerned by the share price. As I mentioned on twitter, “in any company I worked for, management were (very properly) terrified of shareholders”.
But this is a well-known problem, and various approaches have been tried to improve the situation. Where a company has a long-established leadership that has the confidence of investors, shareholding can be divided between classes of shares with different voting rights, so that the trusted, established leadership have control over the company without owning a majority of the equity. This is the situation with Facebook, for instance, where Mark Zuckerberg holds class B shares carrying ten votes each, giving him a majority of the voting power, while ordinary investors hold class A shares with one vote each. Buying such shares is an act of faith in Mr Zuckerberg, more than owning shares in a more conventionally structured business would be. The justification is that it allows him to pursue long-term strategy without the risk of being interrupted by a takeover or by activist investors.
In fact, this year Zuckerberg moved to increase the relative voting power of his holding by introducing non-voting class C shares. The move has been challenged by shareholders and is the subject of ongoing litigation.
In summary, the arrangements of public companies consist of a set of complex compromises. There are many criticisms, but they tend to come in opposing pairs. For everyone who, like Alrenous, claims that shares are worthless because companies do not pay dividends, there are some like the Reuters article he found which complain that companies pay out all their profits and do not invest enough in growth. For everyone who, like Chris Dillow, complains that managements are undersupervised and extract funds for self-aggrandizement and private gain, there are others like Lynn Stout who complain that managements are over-constrained by short-term share price moves and unable to plan strategically.
The arrangements which implement the compromises between these failings are flexible: they change over time and adapt to circumstances. A hundred-year-old resource extraction business like Rio Tinto is not structured in exactly the same way as a web business like Facebook. The point of Chris Dillow’s article is that fewer businesses are publicly traded today than in the past (though even that is difficult to measure meaningfully).
The joint-stock company is not a magic bullet: it is a range of institutional forms, evolved over time, and part of a larger range of institutional forms that make up Actually Existing Capitalism. They are ways of coping with, rather than solving, the basic conflict-of-interest and asymmetric-information issues that are fundamental to everything from a board of directors appointing a CEO to a coder-turned-rancher hiring a farm hand.
My worry is that Moldbug’s form of Neocameralism is an inflexible snapshot of one particular corporate arrangement, which only works as well as it does because it can be adapted to meet changing demands. That’s why I tend to think of it as one item on a menu of management options (including hereditary monarchy!)
Nothing really new here, but pulling a few things together.
Start with Joseph K’s observation:
Between the replication crisis and the Great Poll Failure of 2016, quantitative social science has basically committed suicide
— Joseph K. (@fxxfy) November 9, 2016
This is a good point, and I added that the failure of financial risk models in 2008 was essentially the same thing.
The base problem is overconfidence. “People do not have enough epistemic humility”, as Ben Dixon put it.
The idea in all these fields is that you want to make some estimate about the future of some system. You make a mathematical model of the system, relating the visible outputs to internal variables. You also include a random variable in the model.
You then compare the outputs of your model to the visible outputs of the system being modelled, and modify the parameters until they match as closely as possible. They don’t match exactly, but you make the effects of your random variable just big enough that your model could plausibly produce the outputs you have seen.
If that means your random variable basically dominates, then your model is no good and you need a better one. But if the random element is fairly small, you’re good to go.
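The fitting procedure just described can be sketched in a few lines. This is a deliberately minimal illustration with a linear model and numbers I made up, not anything resembling a real pollster’s or bank’s code:

```python
import random

# Sketch of the procedure described above: fit a simple model to observed
# outputs, then size the random term just large enough to account for the
# residual the model cannot explain.
random.seed(0)
xs = [x / 10 for x in range(50)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.3) for x in xs]  # "true" system

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
noise_sigma = (sum(r * r for r in residuals) / n) ** 0.5  # the random variable

# If noise_sigma dominated the fitted trend, the model would be no good;
# here it is small relative to the signal, so we are "good to go".
print(round(slope, 2), round(intercept, 2), round(noise_sigma, 2))
```

Everything that follows in the text is about what happens when that fitted model, noise term and all, is then treated as exactly correct.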
In polling, your visible effects are how people answer polling questions and how they vote. In social science, it’s how subjects behave in experiments, or how they answer questions, or how they do things that come out in published statistics. In finance, it’s the prices at which people trade various instruments.
The next step is where it all goes wrong. In the next step, you assume that your model—including its random variable to account for the unmeasured or unpredictable—is exactly correct, and make predictions about what the future outputs of the system will be. Because of the random variable, your predictions aren’t certain; they have a range and a probability. You say, “Hillary Clinton has an 87% chance of winning the election”. You say “Reading these passages changes a person’s attitude to something-or-other in this direction 62% of the time, with a probability of 4.6% that the effect could have been caused randomly”. You say, “The total value of the assets held by the firm will not decrease by more than 27.6 million dollars in a day, with a probability of 99%”.
The use of probabilities suggests to an outsider that you have epistemic humility–you are aware of your own fallibility and are taking account of the possibility of having gone wrong. But that is not the case. The probabilities you quote are calculated on the basis that you have done everything perfectly, that your model is completely right, and that nothing has changed between the production of the data you used to build the model and the events that you are attempting to predict. The unpredictability that you account for is that which is caused by the incompleteness of your model—which is necessarily a simplification of the real system—not the possibility that what your model is doing is actually wrong.
In the case of the polling, what that means is that the margin of error quoted with the poll is based on the assumptions that the people polled answered honestly; that they belong to the demographic groups that the pollsters thought they belonged to; and that the proportions of demographic groups in the electorate are what the pollsters thought they were. The margin of error reflects only the random variables in the model: the fact that the random selection of people polled might be atypical of the list they were taken from, and possibly, if the model is sophisticated enough, that the turnout of different demographics might vary from what is predicted (but where does the data come from to model that?).
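The purely statistical part of that margin of error is just the textbook sampling formula; everything else is assumption. A sketch (mine, not any polling firm’s code):

```python
# The quoted margin of error reflects sampling noise only: it takes honest
# answers and correct demographic weights entirely for granted.
# Standard formula for a simple random sample.

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an observed proportion p from n respondents."""
    return z * (p * (1 - p) / n) ** 0.5

# A 1,000-person poll showing 50% support: about +/-3.1 points, but only
# if every modelling assumption upstream of this formula actually holds.
print(round(margin_of_error(0.5, 1000), 3))  # 0.031
```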
In the social sciences, the assumptions are that the subjects are responding to the stimuli you are describing, and not to something else. Also that people will behave the same outside the laboratory as they do inside. The stated probabilities and uncertainties again are not reflecting any doubt as to those assumptions: only to the modelled randomness of sampling and measurement.
On the risk modelling used by banks, I can be more detailed, because I actually did it. It is assumed that the future price changes of an instrument follow the same probability distributions as in the past. Very often, because the instruments do not have a sufficient historical record, a proxy is used; one which is assumed to be similar. Sometimes instead of a historical record or a proxy there is just a model, a normal distribution plus a correlation with the overall market, or a sector of it. Again, lots of uncertainty in the predictions, but none of it due to the possibility of having the wrong proxy, or of there being something new about the future which didn’t apply to the past.
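A historical-simulation VaR of the kind described can be sketched in a few lines. This is my illustration with fabricated numbers, not any bank’s production model:

```python
# Minimal historical-simulation VaR. The crucial hidden assumption is in the
# docstring: the historical (or proxy) return series is taken to describe
# the future, and none of the quoted uncertainty covers that assumption.

def historical_var(returns, confidence=0.99):
    """Loss threshold not exceeded with probability `confidence`, assuming
    future returns are drawn from the same distribution as `returns`."""
    ordered = sorted(returns)                      # worst outcomes first
    index = int((1 - confidence) * len(ordered))
    return -ordered[index]

# Illustrative daily return history (fabricated numbers):
history = [0.01, -0.02, 0.005, -0.015, 0.03, -0.04, 0.002, -0.01,
           0.012, -0.003, 0.02, -0.025, 0.007, -0.018, 0.015, -0.006,
           0.004, -0.012, 0.009, -0.001]
print(historical_var(history, confidence=0.95))  # 0.025, the 95% one-day VaR
```

Swap `history` for a proxy series that turns out to behave differently from the instrument it stands in for, and the number printed is precise, confident, and wrong.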
Science didn’t always work this way. The way you do science is that you propose the theory, then it is tested against observations over a period of time. That’s absolutely necessary: the model, even with the uncertainty embedded within it, is a simplification of reality, and the only justification for assuming that the net effects of the omitted complexities are within error bounds is that that is seen to happen.
If the theory is about the emission spectra of stars, or the rate of a chemical reaction, then once the theory is done it can be continually tested for a long period. In social sciences or banking, nobody is paying attention for long enough, and the relevant environment is changing too much over a timescale of years for evidence that a theory is sound to build up. It’s fair enough: the social scientists, pollsters and risk managers are doing the best they can. The problem is not what they are doing, it is the excessive confidence given to their results. I was going to write “their excessive confidence”, but that probably isn’t right: they know all this. Many of them (there are exceptions) know perfectly well that a polling error margin, or a p-value, or a VaR are not truly what the definitions say, but only the closest that they can get. It is everyone who takes the numbers at face value that is making the mistake. However, none of these analysts, of whichever flavour, are in a position to emphasise the discrepancy. They always have a target to aim for.
A scientist has to get a result with a p-value to publish a paper. That is their job: if they do it, they have succeeded; otherwise, they have not. A risk manager, similarly, has a straightforward day-to-day job of persuading the regulator that the bank is not taking too much risk. I don’t know the ins and outs of polling, but there is always pressure. In fact Nate Silver seems to have done exactly what I suggest: his pre-election announcement seems to have been along the lines of “Model says Clinton 85%, but the model isn’t reliable, I’m going to call it 65%”. And he got a lot of shit for it.
Things go really bad when there is a feedback loop from the result of the modelling to the system itself. If you give a trader a VaR budget, he’ll look to take risks that don’t show in the VaR. If you campaign so as to maximise your polling position, you’ll win the support of the people who don’t bother to vote, or you’ll put people off saying they’ll vote for the other guy without actually stopping them voting for the other guy. Nasty.
Going into the election, I’m not going to say I predicted the result. But I didn’t fall for the polls. Either there was going to be a big differential turnout between Trump supporters and Clinton supporters, or there wasn’t. Either there were a lot of shy Trump supporters, or there weren’t. I thought there was a pretty good chance of both, but no amount of data was going to tell me. Sometimes you just don’t know.
That’s actually an argument for not “correcting” the polls. At least if there is a model—polling model, VaR model, whatever—you can take the output and then think about it. If the thinking has already been done, and corrections already applied, that takes the option away from you. I didn’t know to what extent the polls had already been corrected for the unquantifiables that could make them wrong. The question wasn’t so much “are there shy Trump voters?” as “are there more shy Trump voters than some polling organisation guessed there are?”
Of course, every word of all this applies just the same to that old obsession of this blog, climate. The models have not been proved; they’ve mostly been produced honestly, but there’s a target, and there are way bigger uncertainties than those which are included in the models. But the reason I don’t blog about climate any more is that it’s over. The Global Warming Scare was fundamentally a social phenomenon, and it has gone. Nobody other than a few activists and scientists takes it seriously any more, and mass concern was an essential part of the cycle. There isn’t going to be a backlash or a correction; there won’t be papers demolishing the old theories and getting vast publicity. Rather, the whole subject will just continue to fade away. If Trump cuts the funding, as seems likely, it will fade away a bit quicker. Lip service will occasionally be paid, and summits will continue to be held, but less action will result from them. The actual exposure of the failure of science won’t happen until the people who would have been most embarrassed by it are dead. That’s how these things go.
I have long ago observed that, whatever its effect on government, democracy has great entertainment value. We are certainly being entertained by the last couple of days, and that looks like going on for a while.
From one point of view, the election is a setback for neoreaction. The overreach of progressivism, particularly in immigration, was in danger of toppling the entire system, and that threat is reduced if Trump can restrain the demographic replacement of whites.
On the other hand, truth always has value, and the election result has been an eye-opener all round. White American proles have voted as a bloc and won. The worst of the millennial snowflakes have learned for the first time that their side isn’t always bound to win elections, and have noticed many flaws of the democratic process that possibly weren’t as visible to them when they were winning. Peter Thiel’s claims that democracy is incompatible with freedom will look a bit less like the grumblings of a bad loser once Thiel is in the cabinet. Secession is being talked about, and the New York Times has published an opinion column calling for Monarchy. One might hope that Lee Kuan Yew’s observations on the nature of democracy in multi-racial states might get some currency over the next few months or years.
So, yes, President Trump may save the system for another two or three decades (first by softening its self-destructive activities, and later by being blamed for every problem that remains). But Anomaly UK is neutral on accelerationism; if the system is going to fail, there is insufficient evidence to say whether it is better that it fail sooner or later. If later, it can do more damage to the people before it fails, but on the other hand, maybe we will be better prepared to guide the transition to responsible effective government.
We will soon be reminded that we don’t have responsible effective government. Enjoyable as fantasies of “God Emperor Trump” have been, of course the man is just an ordinary centre-left pragmatist, and beyond immigration policy and foreign policy becoming a bit more sane, there is no reason to expect any significant change at all. The fact that some people were surprised by the conciliatory tone of his victory speech is only evidence that they were believing their own propaganda. He is not of the Alt-Right, and the intelligent of the Alt-Right never imagined that he was.
For the Alt-Right, if he merely holds back the positive attacks on white culture, he will have done what they elected him to do. Progressives can argue that there can be no such thing as anti-white racism, and that whites cannot be allowed the same freedoms as minority groups since their historical privilege will thereby be sustained. But even if one accepts that argument, it doesn’t mean that those who reject it are White Nationalists. Blurring the two concepts might make for useful propaganda, but it will not help to understand what is happening.
My assessment of what is happening is the same as it was in March: I expect real significant change in US immigration policy, and pretty much no other changes at all. I expect that Trump will be allowed to make those changes. It is an indication of the way that progressive US opinion dominates world media that people in, say, Britain, are shocked by the “far-right” Americans electing a president who wants to make America’s immigration law more like Britain’s–all while a large majority in Britain want to make Britain’s immigration law tougher than it is.
The fact that US and world markets are up is a clue that much of the horror expressed at Trump’s candidacy was for show, at least among those with real influence.
The polls were way off again. The problem with polling is that it is impossible. You simply can’t measure how people are going to vote. The proxies that are used–who people say they support, what they say they are going to do–don’t carry enough information, and no amount of analysis will supply the lacking information. The polling analysis output is based on assumptions about the difference between what they say and what they will do–the largest variable being whether they will actually go and vote at all. (So while this analyst did a better job and got this one right, the fundamental problems remain.)
In a very homogeneous society, polling may be easier, because there’s less correlation between what candidate a person supports and how they behave. But the more voting is driven by demographics, the less likely the errors are to cancel out.
If arbitrary assumptions have to be made, then the biases of the analysts come into play. But that doesn’t mean the polls were wrong because they were biased–it just means they were wrong because they weren’t biased right.
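The way those analyst assumptions feed directly into the headline number can be illustrated with a toy calculation. All figures below are hypothetical, invented purely for illustration, not real polling data:

```python
# Toy illustration of how the turnout model drives the headline number.
# All figures are hypothetical, not real polling data.

def poll_estimate(groups, turnout):
    """Two-party vote share for candidate A, given raw stated support
    per demographic group and an assumed turnout probability per group."""
    votes_a = sum(pop * turnout[g] * support
                  for g, (pop, support) in groups.items())
    votes_all = sum(pop * turnout[g] for g, (pop, _) in groups.items())
    return votes_a / votes_all

# group -> (population share, share saying they back candidate A)
groups = {"urban": (0.5, 0.60), "rural": (0.5, 0.40)}

# Identical raw responses, two different guesses about who shows up:
even_turnout = {"urban": 0.6, "rural": 0.6}   # yields a dead heat, 50%
differential = {"urban": 0.5, "rural": 0.7}   # yields A losing, ~48.3%

print(poll_estimate(groups, even_turnout))
print(poll_estimate(groups, differential))
```

The raw responses never change between the two scenarios; only the analyst’s guess about differential turnout does, and that guess alone moves the result across the winning line.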
On to the election itself, obviously the vital factor in the Republican victory was race. Hillary lost because she’s white. Trump got pretty much the same votes Romney did; Hillary got the white votes that Obama did in 2012, but she didn’t get the black votes because she isn’t black, so she lost.
So what of the much-talked-of emergence of white identity politics? The thing is, that really happened, but it happened in 2012 and before. It was nothing to do with Trump. The Republican party has been the party of the white working class for decades. Obama took a lot of those votes in 2008, on his image as a radical and a uniter, but that was exceptional, and he didn’t keep them in 2012.
The exit polls show Trump “doing better” among black people than Romney or McCain, but that probably doesn’t mean they like him more: it’s an artifact of the lower turnout. The Republican minority of black voters voted in 2016 mostly as before, but the crowds who came out to vote for their man in 2008 and 2012 stayed home, so the percentage of black voters voting Republican went up.
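The arithmetic behind that artifact is worth making explicit. A toy sketch with hypothetical numbers, not real exit-poll data:

```python
# Hypothetical numbers, not real exit-poll data: the Republican-voting
# bloc within a group stays the same size, Democratic turnout falls,
# and the Republican percentage rises with zero actual persuasion.

def gop_share(gop_voters, dem_voters):
    return gop_voters / (gop_voters + dem_voters)

share_2012 = gop_share(10, 90)  # high Democratic turnout: 10%
share_2016 = gop_share(10, 60)  # a third stay home: ~14.3%

print(share_2012, share_2016)
```

The Republican numerator is identical in both years; only the denominator shrinks, so the percentage climbs without a single voter being won over.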
The big increase in Trump’s support over Romney from Hispanics is probably not explainable the same way. A pet theory (unsupported by evidence) is that they’ve been watching Trump on TV for years and years and they like him.
The lesson of all this is that, since 2000, the Democratic party cannot win a presidential election with a white candidate. There’s a reason they’re already talking about running Michelle Obama. They’ve lost the white working class, and the only way to beat those votes is by getting black voters out to vote for a black candidate. While we’re talking about precedents, note that the last time a Democrat won a presidential election without either being the incumbent or running from outside the party establishment was 1960.
Update: taking Nate Silver’s point about the closeness of the result, my statements about what’s impossible are probably overconfident: Hillary might have squeaked a win without the Obama black vote bonus, maybe if her FBI troubles had been less. Nevertheless, I think if the Democrats ever nominate a white candidate again, they’ll be leaving votes on the table unnecessarily.
In the context of my writing concerning division of power, I want to make a distinction between personal power and collective power.
That is not the same as the distinction between absolute power and limited power. Absolute power can be collective, for example if a state is under the control of a committee, and limited power can be personal, if an individual has control over a particular department or aspect of policy.
There is a continuum of collective power, depending on the amount of personal influence. At one extreme there is a situation where a group of two or three people who know each other can make decisions by discussion; at the other is the ordinary voter, whose opinion is aggregated with those of millions of strangers.
Towards the latter extreme, collective power is no power at all. A collective does not reach decisions the same way an individual does. An individual can change his mind, but that has small chance of altering the action of the collective. To change the action of a collective, some more significant force than an individual impulse normally has to act on it. That’s why, when we attempt to predict the action of a collective, we do not talk about states of mind, we talk about outside forces: media, economics, events.
In many cases, we can predict the action of the collective with virtual certainty. The current US presidential election is finely balanced, but we can be sure Gary Johnson will not win.
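That “small chance” of an individual altering the collective can be put in rough numbers. A toy calculation, with a hypothetical electorate size and lean, of the probability that one vote is pivotal:

```python
# How small is one individual's chance of altering a large collective
# decision? One vote is pivotal only if the n other voters split exactly
# evenly. Hypothetical numbers: 10,000 other voters with a modest 52/48
# lean. Computed in log space to avoid floating-point underflow.
from math import exp, lgamma, log

def log_comb(n, k):
    """log of the binomial coefficient C(n, k)."""
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

n, p = 10000, 0.52
# log P(exactly n/2 of the n other voters back candidate A)
log_p_pivotal = log_comb(n, n // 2) + (n // 2) * log(p) + (n // 2) * log(1 - p)
p_pivotal = exp(log_p_pivotal)

print(p_pivotal)  # on the order of 1e-6
```

Even in this small electorate with a near-even split, the chance of one mind mattering is a few in a million; scale up to millions of voters with a real partisan lean and it is effectively zero, which is why outside forces, not individual states of mind, are the right level of analysis.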
This feature of collective power has implications for the consideration of divided power, because in the right circumstances a collective power can be completely neutralised. An absolute ruler is not omnipotent, in that he depends on the cooperation of many others, most importantly his underlings and armed forces. But as a rule they do not have personal power; they have collective power. Any one of them can be replaced. An individual can turn against the sovereign, but if he would just be dismissed (or killed) and replaced, that is not a realistic power. If too many of them do not act as the sovereign orders, he would be helpless, but that requires a collective decision, and one which with a bit of work can be made effectively impossible.
There are exceptions to this. If the sovereign is utterly dependent on a single particular individual, that individual has personal power. There have been historical cases of sovereigns in that position, and it is observed that that constitutes a serious qualitative change in the nature of the government.
Where a person can covertly act against the sovereign’s power, that is a personal power. Competent institutional design is largely a matter of making sure that rogue individuals cannot exercise power undetected by anyone. As long as there are any others who can detect this abuse, then the power once again becomes collective power, held by the individual and those placed to stop him. Again, where collectives do act in this way, it is a sign of a breakdown of government institutions. As an example, see this article describing the upper ranks of the army working together to deceive the president. If the president had absolute power and a moderate amount of sense, this sort of conspiracy would be suicidally dangerous. Once power is formally divided, then the capability to prevent this kind of ad-hoc assumption of power is massively eroded.
That is the fundamental reason why division of power is bad: whatever division of power is formally made, these gaps for further informal division will tend to be opened up by it, because limited power denies the power to enforce necessary limits on others. If anyone has power to punish those who take powers they are not formally entitled to, then that person effectively is absolute. If nobody has that power to punish, then any ambitious crooks can run wild.
If there is no single person other than the sovereign who has personal power, then I would call the sovereign absolute. His power is not infinite: he has to maintain control over the collectives which necessarily have power, but that is a lesser constraint than having to cope with personal power held in other hands. It is more akin to the other constraints on his power imposed by such things as the laws of physics and the existence of foreigners and wild animals.
Note that the nature of feudalism is that feudal aristocrats are not replaceable, and do have personal power—limited, but not collective. Feudalism is thus not a system of absolute power even under my refined definition.
The great significance of collective power is that it is subject to coordination problems. Or, since from the point of view of the sovereign, the problems of coordinating a collective can be an advantage, I will call them coordination obstacles. That is why it is not voters who have power, it is those who mediate the coordination of the voters: parties and media. A change in the way that voters can be coordinated is a thoroughly material change in what I have called the Structure of the state. The US does not have the Structure that it had 25 years ago, because (among other reasons) social media is part of the current Structure. That is an actual revolution, and why the fights over use of social media for political coordination are so significant. Note that since the Constitution doesn’t say anything about social media, the constitution in itself obviously does not define the Structure.
It also means that for a formally absolute ruler, obstructing collectives from coordinating is an important tool. In the period of formally absolute monarchy, any attempt by people of importance to coordinate in confidence was suspect: prima facie treason. The most basic right claimed by parliaments was the right to meet: simply allowing aristocrats and city leaders to meet together and discuss their interests was giving them a power that they wouldn’t otherwise have.
This is the problem with the formalism that Urielo advocates: formally establishing any power that anyone in a given Structure happens to have. Power that is held collectively and is not legitimate is often neutralised by coordination obstacles. If you make that power legitimate, that goes some way to dissolving the coordination obstacles, and thereby increases the effective collective power.
Modern political thought does not generally respect the idea that coordination by those with informal power is not legitimate (though we retain the historical unfavourable associations of the word “conspiracy”) but it went without saying for most of history. Organisations that have existed in England for hundreds of years, such as guilds and the older schools and colleges, generally have royal charters: the charter is their permission to exist.
There are a couple of interesting exceptions to the modern toleration of conspiracy: one is anti-trust law, and another is insider trading law. Those both deal with economic activities.
They do show, however, that legal obstacles to coordination are not made obsolete by technology. Indeed, modern communication doesn’t mean that coordination obstacles are easily overcome, especially if the obstacles are considered legitimate. No matter what messaging options are available, if you need to identify yourself for the communication to be useful, and you cannot trust the other party not to expose your attempt to conspire, then attempting to conspire is dangerous.
Here is another example: in investment banks, it is generally not permitted for employees to coordinate on pay. It is a disciplinary offence to tell anyone how much you are paid. This is taken seriously, and is, in my experience, effective. That is an example of an obstacle to coordination imposed as part of a power structure.
Legal obstacles to treasonous coordination were removed for ideological reasons, because division of power and competition for power were considered legitimate. Effectively, “freedom of association” was one more way to undermine the ancien régime and unleash the mob. As with the other historical destabilising demands of progressives, things are starting to change now that the progressives have taken permanent control of the central power structures.
You no longer need a Royal Charter for your golf club or trade association, but that doesn’t mean you are free to coordinate: if you don’t have sufficient female or minority members, you may need to account for yourself in the modern Star Chamber. The Mannerbund is the same kind of threat to today’s status quo as a trade union was to that of 1799.
The useful point is that it is not proved that you can run a stable society with complete freedom of association. That makes it more acceptable for me to recommend my form of absolutism, where people other than the sovereign inevitably have the capability to act against his policy by acting collectively, but such collective action is both illegitimate and made difficult by deliberate obstacles put in their way.
Update: just come across this 2004 piece from Nick Szabo, where he talks about dividing power to produce “the strategy of required conspiracy, since abusing the power requires two or more of the separated entities to collude”. However, as I see it doing that is only half the job: the other half is actually preventing the separated entities from colluding.
No matter how big you grow, you are still vulnerable to a single accident. This includes a single self-inflicted accident.
For robustness, growing is helpful but not sufficient. You need to reproduce.
However, reproduction is not merely making copies. That is barely different from growth. Again, redundant structures and information help, but they’re not sufficient.
To survive longer periods and greater risks, you need to duplicate and separate.
The bigger you are, the further you have to separate.
Your “size” is not your mass, it is the space you occupy. If you are frequently highly mobile, that is like being large, and means you have to separate further.
There are two ways to separate: either you use a different mechanism of movement for separation than for all other purposes, (like a plant seed blowing on the wind), or you make a sustained determined effort to escape, to run far away from all of your kind, with a high speed and consistency of direction that you do not use for any purpose other than separation.
At last I have set the necessary prerequisites to discuss Urielo / @cyborg_nomade’s discussion of constitutions.
It is possible I could have been more concise about the prerequisites: what it really amounts to is:
- Division of power is dangerous and to be avoided
- It’s better to have less division than more
- Sometimes that isn’t possible
Within the context set by those propositions, the difficult parts of “neocameralism and constitutions”, as well as Land’s “A Republic, If You Can Keep It”, start to appear at least relevant. So too the considerations of control and property in Land’s “Quibbles with Moldbug”.
Let’s say that in some given situation, it is impossible to effectively unify power. The next best thing is to nearly unify power. Some small number of people have some small amounts of power, but the main power-holder can set rules about how they are allowed to use that power, and threaten to crush them like a bug if they break them. That’s workable too, provided the mechanisms of supervision and bug-crushing are adequate.
However, that’s not always the case. Sometimes, power is too divided, and crushing like a bug isn’t on the table. That’s when the hard bit starts.
What you need to do is find a pattern of division of power that is stable, and compatible with effective government. The second implies the first: if the pattern of division of power is unstable, then those in power will be incentivised to protect and expand their power, rather than to govern effectively.
Part of setting up this stable pattern might be to write a lot of rules on a long sheet of paper. I can’t see, though, how you could ever start with the paper and get to the actual division of power.
“Actual division of power” is such a mouthful. The word I wanted to use for this is “constitution”, but I suppose I will have to give in and call it something else. (I had this idea that the original sense of “constitution” meant what I mean, and the idea of a constitution as a higher set of laws was derived from that. But it seems my idea was completely wrong.) Let’s just call it the “Structure”.
So how should one design a Structure? You have to start from where you are. If at t=0 one power is effectively unchallenged, then they should just keep it that way. You don’t need a Structure.
Urielo really hits the nail on the head here:
A non-autocratic Structure is the result of a peace settlement between potential or actual rivals, and a Constitution represents the terms of that peace settlement.
The aims of the settlement should be that it will last, that those who came into the settlement with power are willing to accept it, and will be incentivised to maintain it into the future and to preserve those things that incentivise the others to maintain it into the future.
The simplest peace settlements consist of a line on a map. What happens on one side is the responsibility of one party, and on the other is the responsibility of another. The two (or more) sides invest appropriately in either defensive or retaliatory weaponry, to provide incentive to each other to keep to the agreement.
This is not normally what we think of as a Structure within a society, though it is an option: https://en.wikipedia.org/wiki/Partition_(politics). If the powers of the participants cannot be easily separated by a line on a map, a more detailed agreement is necessary.
Another of Urielo’s tweets:
pretty much all working societies recognized some sort of power division. the estates of the realm being the European version – @cyborg_nomade
I’ve written before about the vital elements of feudalism as I see them. It resembles somewhat the “line on the map” kind of settlement: each feudal vassal had practical authority over a defined region, subject to certain duties he owed to his Lord. The Lord would spend his time travelling between his vassals, resolving disputes between them, collecting his share of the loot, and checking that they weren’t betraying him.
This worked practically, most of the time. As I wrote before, the crucial fact that necessitated a settlement between the King and his vassals was that he wasn’t physically able to administer the whole kingdom, because of limitations of communication and transport. Whoever he sent to run them would in fact have considerable autonomy (whether the constitution gave it to them or not), and so the Structure had to accommodate that fact.
I say it worked most of the time, but it didn’t work all the time, or even nearly all the time. Conflict between King and nobles was pretty common.
If we’re talking estates of the realm, of course, then there’s more than nobles. The Medieval English Structure basically treated the church as a sort of noble. Bishops and Abbots had similar rights to Barons, but fewer duties. (That meant it would be a problem if their power increased relative to nobles.) The other group to be recognised with power within the Structure were the small landholders. At a guess, I’d put their claim to power as follows:
Fighting enemies was the responsibility of the King, and in the King’s interest. His vassals were required to supply men and/or funds to him to do this. The actual fighting would be done by Knights and men known to and under the direct control of Knights. It was therefore in the King’s interest that the Knights be incentivised to fight effectively, and would see honour and/or profit in doing so. However, to the Lords the Knights were just farmers and taxpayers; it was not in the Lord’s interest to have his Knights flourishing and strong. Therefore, the King had an interest in defending the status of Knights against their Lords.
That’s kind of a just-so story; I’m open to disagreement on specifics. In any case, this Medieval English Structure obviously depends on an agricultural economy, and military technology that relies on a relatively small number of expensively-equipped, skilled soldiers. It’s not coming back.
The commoners and serfs basically have no power recognised by the Structure. That’s probably an oversimplification, at least after the Black Death when their economic power became more significant (and serfdom faded out). But in any case, the point of the Structure is not some abstract fairness, it’s stability and efficiency.
The Structure was quite flexible and changed significantly over time. Burghers were accepted into it once trade became economically significant enough for their power to need to be preserved. But even there the simple fix was geographic: towns were made Boroughs, lines were drawn around them on the map, and the Burghers were allowed to run the towns, with a limited and transparent set of rights and duties with regard to goings-on outside the borough.
The King, Nobles and Knights form a triangle: that’s popularly considered stable, because if any one of the three starts to get too strong (or too weak), the other two can see it and combine to correct it from a position of superiority. With only two large power centres, or with more than three, it’s too easy for a theoretically weaker coalition to unexpectedly show itself strong enough to reconfigure the Structure. That’s a guideline of Structure Design that one might expect to be durable. One wonders whether Structures that are designed to have many powers (Neocameralism, bitcoin) might coalesce into three. Just a thought.
Now we come to Parliament. I don’t see the medieval English parliament as “part of government” in the sense that the modern UK Parliament is. It wasn’t responsible for law, or for any routine act of government. Its role seems to me to have been the constitutional watchdog, checking on behalf of the Lords and Knights (and later Burghers too) that the King was sticking to the constitution. Running the country was the job of the nobles, within their lines-on-the-map, and of the King, regarding defence. The power of parliament didn’t come from any constitution; it came from the fact that it could reach an agreement, and then go to the country and say “The King is infringing on his subjects’ rights”. (Or, conversely, it could say “Lord Splodgeberry has defied the King and the King is justified in going and kicking his arse”). It makes sense as a transparency mechanism rather than as a power in its own right.
Transparency, even more than Triangles, seems like a durable guideline for Structure Design. You want people with power to be working for good government, not for enhancing their own power, and you need to be able to see that that’s what they’re doing.
Having said that, I don’t think there are many general principles for Structure Design. I’ve spent this piece looking in detail at one historical Structure, to say why it was the way it was and why it worked. I think that’s what you have to do: Structure Design is a boundary value problem. You have to start from where you are.
But then again, Structure Design is a thing. Where two or more powers come together, reaching an agreement is more than just recognising their existing position. It may mean one or both giving up some power that they really hold to cement a durable deal. The establishment of rights of Knights I described above follows that pattern: the King needed it to happen so it was added to the Structure by negotiation. (That may be a stylised version of what really happened, but it could have gone that way).
So I think you can say a bit more than this:
the estates of the realm don’t arise from nowhere. they were supposed 2 formalize the *actual* structure of power that underlied sovereignty – @cyborg_nomade
What you can’t do is just dream up some “constitution” and assume that anyone will follow it. The half-life of a Structure designed that way is generally measured in weeks. Even a constitution that worked somewhere else will fail immediately if the power on the ground doesn’t match the Structure that the constitution is designed to support.
Decolonisation of Africa produced a number of experiments to demonstrate that process.
Once the holders of actual power have been identified, “constitutional design” can take place to create an arrangement by which they are incentivised to participate in an efficient government. However, “constitutional design” in a vacuum is worthless. Democracies with deviations from “one-man-one-vote” have been moderately successful in the past, but I do not think this example is rooted in any realistic assessment of power.
Similarly, various people from time to time (including even myself, long ago) have suggested random jury-type selection of decision-makers. This has attractive efficiency features, but nobody with vested power would have a clear interest in keeping it running fairly, and the scope for corrupting it would be enormous.
The way to think of creating a stable government Structure where there is intractable division of power is midway between diplomats negotiating a peace and lawyers negotiating a contract. Neither of those are trivial or negligible occupations. (At the completely rigorous level, Structure Design is a matter of game theory, but I doubt real-world situations are tractable to mathematics).
Constitutions need to resemble contracts in that they have to cover detailed interactions unambiguously, but they need to resemble peace treaties in that they need to provide for their own enforcement.
The whole Gödel amending process is a bit of a red herring. In the words of Taylor Swift, nothing lasts forever. Circumstances change, and new Structures have to accommodate them. A new Structure can be built out of an old one–such as representatives of Boroughs being included in the House of Commons alongside Knights–if the parties with power agree the changes are necessary. Making a constitution change is not the hard bit; making the Structure stay the same from one year to the next is the hard bit.
Sometimes a Structure has to go. Gnon has the last word.
In my previous post, I explained why Neocameralism is not a division of power in Montesquieu’s sense, but rather a special case by which the benefits of power can be divided without dissolving responsibility.
However, while dividing power is not desirable, there is no Ring of Fnargl, and power is never perfectly concentrated. A real sovereign still has to deal with forces beyond his control, most obviously those beyond his borders; the loyalty of his subjects is always a real issue. Sufficient incompetence can destroy anything.
The reason that division of power is undesirable is that it erodes responsibility. Government is responsible if whoever has the power benefits from exercising it well and is harmed by exercising it badly. If the single absolute sovereign owns all the extractable product of his realm forever into the future, then it is in his interest to make it a successful, functional, realm. His interests may not be perfectly aligned with those of his subjects, but they are not all that far away. It is better to live under a secure sovereign who rules in his own interest than under a chaotic parliament which attempts to rule in yours. This is an analogous argument to the superiority of for-profit services to government-provided services in other spheres.
If power over the corp is divided, each individual with power now has two sets of incentives: to maximise the value of the corp and its product, as for an absolute ruler, but also to maximise their power over and benefit from the corp. Division of power is harmful to the extent that the second set of incentives exist and contradict the first.
The two largest classes of undesirable incentives are to extract value from the corp for oneself, and to increase one’s power over the corp at the expense of one’s rivals. The first is more obvious, and the second, in historical experience, more extensive and more damaging. Conversion can be restricted if the number of participants in power is reasonably limited, as it tends to be obvious. However, if power is distributed flexibly, then it is easy to provide rationalisations for a change in policy that is actually directed at increasing the power of one participant.
The fundamental problem is that power, whether formal or informal, is fungible. As I wrote in 2011:
A realistic chance of power is power in itself. It can be traded, borrowed against, threatened with. A “politician” is one who holds “Virtual Power”, and tries to increase it, just as a fund manager tries to increase the assets he holds.
If making power formal doesn’t help, then what is “formalism”? Formalism is Neocameralism. Formalism’s solution to persons with practical but informal influence over the government is not to formally define and legitimise their influence, it is to buy them out. It is to put a value on their influence, and to have them give up that influence in exchange for dividend-bearing securities.
As described in my previous post, the point of that is to take away their incentive to steer management in one particular direction or another, and to give them instead an incentive to have the management maximise shareholder value.
Clearly, then, that is not a perfect solution to all problems of politics. It only works to the extent that a participant’s power, whether formal or tacit, is seen as legitimate. If a participant’s power is informal but legitimate (which is a common situation in the Modern Structure), it should indeed be made formal, but only as a preliminary to removing it.
It follows that formalism does not solve the problem of necessary division of power: the fact that however legitimate power is defined, there are those outside it who have influence over those inside it. It doesn’t solve, in general, the principal–agent problem. (The CDCC is designed to partially solve one particular instance of the principal–agent problem, of the armed forces openly defying rightful instructions; by providing a specific solution it implies that there is no general solution).
What formalism does is to leave the fundamental problem unsolved, and then insist that it is the fundamental unsolved problem, and that as a matter of day-to-day competence it must be limited at all costs. Take a moment to see how far that is from the conventional wisdom, which celebrates and actively encourages all division and distribution of power.
If any slope is slippery, it is the division of power. Division proceeds from division. Complete power is inviolable, small allowances of outside influence can be monitored, limited and reclaimed, but once substantial centres of power become strong enough to defend themselves, the remaining power will be shredded in the inevitable conflict.
The problems of people trying to influence a near-absolute ruler are not a different kind of problem to those we are used to. They are the normal problems; the exact same problems that utterly cripple any kind of competent government of modern states, only much smaller and more manageable.
There is no magic formula which will make good government out of an unviable realm. The possibility of concentrating power sufficiently for stability is the sine qua non of independent government. What is the ideal form of government for Mauritania? What is the ideal form of government for Marsh Farm? In both cases, it is for them to be ruled by outside forces that are strong enough to be secure.
Compromising the integrity of the structure of centralised power is to be avoided. Take, for example, the hypothetical case I raised when I discussed the issue before, in Aretae’s day: the Pineapple Computer Co, who want the King to appoint a judge under their control, to get them out of a PR problem.
By the logic above, the worst thing the King could do would be to agree to Pineapple’s request. That is giving away power, and there is a danger of not ever getting it back. Telling them to go fuck themselves would be better. Offering to match Queen Tamsin’s duty-free zone would be better.
A formalist answer, if instead of a King there was a Neocameralist CEO, would be to hold merger talks: if the sovcorp buys out Pineapple in a stock-for-stock transaction, then the interests of the sovcorp and the factory are henceforward aligned. I’m not convinced it’s a good idea for a sovcorp to own too many nationalised industries, but if the factory is genuinely essential to the wellbeing of the state, that is a reasonable solution.
(If the King is really a King, but the Pineapple company is privately owned, the same end could perhaps be achieved by having the owner of Pineapple marry the King’s daughter).
The latest from cyborg_nomade at antinomiaimediata is a wide-ranging poking at the cracks of the neoreactionary/Moldbuggian concepts of Sovereignty and Responsible Government.
As I said on twitter, cyborg_nomade is, from my point of view, picking up from where Aretae left off all those years ago, not that he is the same: as their respective aliases suggest, Aretae rooted his arguments in Classical philosophy, while cyborg_nomade is more Continental. But cyborg_nomade, like Aretae before, is challenging details of neoreactionary theory from the left, and that is a more productive critique for defenders to concentrate on than the intra-far-right discussion that takes most of our time.
So, “neocameralism and constitutions” is quite a wide discussion, and I’m first going to pick off some low-hanging fruit concerning the role of stockholders in neocameralism.
I’m not going to talk about “conservation of sovereignty”–to me that is an unclear concept, so I’m going to try to be more concrete. I’m going to talk about the “corp”, meaning both joint-stock corporations as we know them today, and sovcorps as envisaged by neocameralism.
Moldbug repeatedly denounced “separation of powers” as a principle: no sovereign can be subject to law. On the other hand, cyborg_nomade points out, modelling a neocameralist government on a joint-stock company does imply a separation of powers:
The controllers have one job: deciding whether or not Steve is managing responsibly. If not, they need to fire Steve and hire a new Steve.
That quote is from Open Letter VI, and cyborg_nomade quotes more, but it is actually necessary to read the whole thing.
In particular, the paragraph immediately following cyborg_nomade’s selection:
What happens if the controllers disagree on what “responsible” government means? We are back to politics. Factions and interest groups form. Each has a different idea of how Steve should run California. A coalition of a majority can organize and threaten him: do this, do that, or it’s out with Steve and in with Marc. Logrolling allows the coalition to micromanage: more funding for the threatened Mojave alligator mouse! And so on. That classic failure mode, parliamentary government, reappears.
The introduction of stockholders is not a matter of checks for checks’ sake. Nowhere in OL-VI is there a suggestion that dividing power is a good thing in principle. The purpose of stockholders is a very narrow one: to fix the location of responsibility.
The corp exists for the benefit of the stockholders; if it is run well, they benefit, if it is run badly, they lose out, therefore, they should have the power. All of it. Choosing to exercise that power via at-will appointment of a Chief Executive is an implementation detail, but a well-tested one, and, other than for sovcorps, an almost universally accepted one.
Why multiple stockholders rather than one? Because with a single owner, the purpose of the corp becomes unclear: it is whatever that single owner chooses. However, if the corp has a large number of diverse stockholders, their idiosyncratic interests cancel out or become negligible, against their single shared interest in ROI.
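The cancellation claim can be illustrated with a toy simulation (my own sketch, not from the original post): model each stockholder’s preference as a shared interest in return on investment plus an idiosyncratic term, and watch the average converge to the shared component as the number of holders grows.

```python
import random

# Toy model (an illustration, not anyone's actual proposal): each holder's
# preference = a shared ROI weight + an idiosyncratic term drawn at random.
random.seed(0)

def aggregate_preference(n_holders, roi_weight=1.0, noise=5.0):
    """Average preference across n_holders stockholders."""
    prefs = [roi_weight + random.uniform(-noise, noise)
             for _ in range(n_holders)]
    return sum(prefs) / len(prefs)

# Deviation of the aggregate from the shared ROI weight:
print(abs(aggregate_preference(10) - 1.0))       # 10 holders: noisy
print(abs(aggregate_preference(100_000) - 1.0))  # 100,000 holders: small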
Note that this is not a guaranteed state of affairs. A corp with a joint-stock structure can, as described by the quote above, decay into politics. For existing non-sovereign corporations, this is very unusual, but that is because many measures are taken to actively prevent it. In Anglosphere corporate law, it is not considered sufficient that stockholders can replace management by a majority vote of stock. It is in principle illegal for management to work for a goal other than return on stock, even if it has the support of holders of a majority of stock. There are also restrictions on how concentrated stock ownership can be, at least for corps for which stock is publicly traded.
So it turns out that the purpose of a joint-stock structure is not to distribute power across a larger number of humans, but to concentrate power on a single non-human “virtual” decision-maker, the shareholder-value maximiser. To the extent that a joint-stock structure does not do that, it is always considered defective, and frequently illegal.
(The parallel to bitcoin, converting individual miner decisions of transaction validity to a single non-human abstract “blockchain” decider, is obvious).
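For what that parallel amounts to in code, here is a minimal sketch (my own illustration, heavily simplified: real Bitcoin weighs chains by accumulated proof-of-work, not length). Each participant applies the same impersonal validity rule, and the network’s “decision” is a pure function of that rule plus the chain-selection criterion, not of any individual’s opinion.

```python
# Toy sketch: the "decider" is an abstract rule, not any single participant.
def longest_valid_chain(chains, is_valid):
    """Pick the longest chain in which every block passes the shared rule."""
    valid = [c for c in chains if all(is_valid(block) for block in c)]
    return max(valid, key=len, default=[])

# Stand-in validity rule: here, a "block" is valid if it is an even integer.
rule = lambda block: block % 2 == 0

chains = [
    [2, 4, 6, 8],      # longest fully-valid chain: this one wins
    [2, 4, 6, 7, 10],  # longer, but contains an invalid block (7)
    [2, 4],
]
print(longest_valid_chain(chains, rule))  # -> [2, 4, 6, 8]
```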
Compared to the essential feature of responsibility, the preference expressed by Moldbug for joint-stock versus monarchical sovcorp structure is marginal:
A family business is a great idea if your business is a corner store or an auto-body shop. If you have a continent to run, you want professionals.
The next question to answer is: why? Why is it good to have a corp run in the interests of this non-human abstract, “maximisation of shareholder value”?
The answer is that this is a clearly definable, constant goal that is usually consistent with the long-term continued existence of the corp. As Moldbug explains, if you want some other goal, then first maximise shareholder value, then spend the proceeds on whatever goal you want; that is a matter of consumption, not effective management.
As an aside, cyborg_nomade suggests that “customers” constitute another check on the power of management of a corp. I don’t think that is a useful way of looking at things: we are talking about the management of a corporation, or a nation-state, and any such thing, unless it is the whole universe, exists alongside other things beyond its management, and has to interact with them. Good management means good management in connection with customers, suppliers, neighbours, and competitors, and no change to the organisational structure of the thing being managed makes any difference to that fact.
This whole defence of neocameralism leaves some obvious gaps. First, enforcing shareholder voting rights on a sovereign joint-stock company absolutely requires the cryptographic-weapon-lock scheme. Moldbug in OL-VI is explicit about that:
The neocameralist state never existed before the 21st century. It never could have existed. The technology wasn’t there.
It is because I am sceptical of the practicality of that scheme that I tend to advocate for what I call “degenerate formalism”, which is right back to that old family business. Nevertheless, my position is that, assuming a working cryptographic decision and command chain, neocameralism is good.
Second, the CDCC provides for shareholder voting rights, but not for the extra minority-shareholder rights that are provided by modern corporate law. If those are actually necessary (and they may well be), then some other mechanism has to enforce them. Note that those rights in part predate the actual corporate law that now enforces them: they were provided in the rules of the company, because it was understood people wouldn’t want to buy into corporations that did not have them. Moldbug’s solution to these problems is Patchwork: Not only are sovcorps structured according to the neocameralist design, but they exist in a competitive marketplace, and the forces of competition apply the remaining necessary constraints on management.
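The voting part of such a scheme can be sketched abstractly (a toy model of my own; the actual CDCC proposal involves real cryptographic signatures and weapon locks, both abstracted away here): a command is honoured only if the shareholders endorsing it control a strict majority of the stock.

```python
# Toy sketch: cryptographic endorsement is reduced to set membership.
def command_authorised(share_register, endorsements):
    """share_register: holder -> number of shares held.
    endorsements: set of holders who have signed the command."""
    total = sum(share_register.values())
    endorsed = sum(shares for holder, shares in share_register.items()
                   if holder in endorsements)
    return endorsed * 2 > total  # strict majority of stock

register = {"alice": 40, "bob": 35, "carol": 25}
print(command_authorised(register, {"alice", "bob"}))  # True  (75% of stock)
print(command_authorised(register, {"carol"}))         # False (25% of stock)
```

Note that this enforces only the majority-vote right; the minority-shareholder protections discussed above would need further rules on top of it.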
As I said, this is only picking on one part of the argument in “neocameralism and constitutions”, the part that is easiest to deal with because I think it is a clear-cut error. The more interesting part, about constitutions as spontaneous order, or products of selection, remains to be answered.
A commenter again objects to the idea that “left” and “right” are a useful categorisation of political ideas.
On the subject of “left” and “right”, there is confusion because I use the terms in two distinct but related senses.
When I talk about the long-term political trends–“leftward drift” and so on–what I am talking about is what is sometimes called “progressivism”. It would be good to define that more satisfactorily, but it is an intellectual-political movement of great age, oriented around a cluster of ideals mostly centred on “equality”. There is no corresponding long-term definition of “right”, there is only occasional opposition to progressivism.
In day to day politics, “left” and “right” have much broader meanings, relating to every area of political controversy. Mostly they have no permanent meaning in relation to ideas about policy, and no meaning in relation to practice of any activity outside of politics. They are, however, an essential feature of any kind of struggle for power. There cannot normally be more than two coalitions seriously engaged in seeking power; if you bring any desire to a power-struggle, it is necessary first to get one or other of the competing coalitions to agree to your desire.
The reason these two very different meanings of “left” get confused is that, given the inevitable division of politics into two factions at any given place and time, we label the faction more in line with “progressivism” as “left” and the other as “right”. In some cases, the choice is rather difficult and ends up being pretty much arbitrary.
So, immigration is a progressive and therefore long-term “left” demand when it is premised on the equality of “natives” and “foreigners”. However, while that is in line with long-term progressive principles, it is actually very new for it to be advanced with significant powerful backing on the basis of those principles. As the commenter rightly observed, going back only a few decades, it was much more a practical issue pushed by businesses in order to advance their own economic interests. Because those businessmen were part of the (short-term) “right” coalition, immigration at that point was more a “right”-wing than “left”-wing demand. It is by no means the only issue falling into that category; for a century the progressive agenda focused on advancing the status of the working class relative to the employing class (because equality is the central progressive value), and so many people define “left” and “right” as permanent ideas relating to the two sides of that economic divide. As the same commenter put it in 2011:
In the world I was brought up in (and you were born into) Right/Left politics was quite simple. At the extreme of the Right there were bosses and millionaires, and the extreme of the Left there were deep-sea fishermen and coalminers…
But during the 70s the world as I knew it changed into something else. The first inkling of descent into (what appeared to me to be) silliness was called “Rock against Racism”. Then there was the Feminist movement, relying on a series of absurd illogicalities and parodying Marxist class dialectics. Together, and with other ingredients, they formed the basis for the time-wasting activities of so many “equal opportunities” employers today.
It is readily observable that that “fishermen and coalminers” model does not hold in the twenty-first century. Indeed, there is massive opposition to Trump from the existing “right” coalition on the grounds that his stated platform is not “right” at all in twentieth-century terms.
The existing right coalition in the United States, however, is still defining itself in opposition to the left coalition on a field of issues which the left coalition sees as essentially settled, or as no longer important for other reasons. The force of the left coalition is directed in new, but still progressive directions, including open borders, but the right coalition has the habit of not opposing those policy demands. Hence the “Alt-Right”, which is the term for opposition to the new progressive demands of the left coalition, rather than opposition to the old progressive demands of the left coalition.
If the Alt-Right takes over the right coalition, we could conceivably get to a situation where the right coalition is focused on policies of advancing the status of the white working class against the white elite, while the left coalition is focused on advancing the status of immigrants against the white working class. Since both of those are actually progressive values, in terms of long-term advancing of equality, it would be one of those situations where the labelling of the coalitions as left and right could be argued to be backwards.
The summary of my prediction in the original post is that that will not happen. I expect the left coalition to back-pedal on immigration, which it only seized on because the right coalition was failing to oppose it.
Another way of putting my prediction is that over the long term “left” and “right” do usually describe politics well (though they aren’t guaranteed to), and that the current left demand for open borders is an aberration that will be corrected before it is allowed to destroy a coherent progressive left coalition. It is reasonably progressive to say that foreigners should have the same rights as natives, but it is not practical for the more progressive coalition to actually go and do it.
A fuller historical explanation of the “descent into silliness” is needed as a matter of the first importance. Did the cause of further advancement of the proletariat run into diminishing returns? Was it sabotaged by clever rightists? Was the obsession of the left coalition with that one issue over such a long period of time itself the aberration, perhaps caused by the Russian Revolution and the resulting alignment with Marx?