AI, Human Capital, Betterness

Let me just restate the thought experiment I embarked on this week. I am hypothesising that:

  • “Human-like” artificial intelligence is bounded in capability
  • The bound is close to the level of current human intelligence
  • Feedback is necessary for achieving anything useful with human-like intelligence
  • Allowing human-like intelligence to act on a system always carries risk to that system

Now remember, when I set out I did admit that AI wasn’t a subject I was up to date on, or one I had paid much attention to.

On the other hand, I did mention Robin Hanson in my last post. The thing is, I don’t actually read Hanson regularly: I am aware of his attention to systematic errors in human thinking, and I quite often read discussions that refer to his articles on the subject, sometimes following the links and reading them. But I was quite unaware of how much he has written over the last three years on the subject of AI, specifically “whole brain emulations” or Ems.

More importantly, I did actually read, but had forgotten, “The Betterness Explosion”, a piece of Hanson’s which is very much in line with my thinking here, as it emphasises that we don’t really know what it means to suggest we should achieve superhuman intelligence. I now recall agreeing with it at the time, and although I had since forgotten it, I suspect it at the very least encouraged my gut-level scepticism towards superhuman AI and the singularity.

In the main, Hanson’s writing on Ems seems to avoid the questions of motivation and integration that I emphasised in part 2. Because the Ems are actual duplicates of human minds, there is no assumption that they will be tools under our control; from the beginning they will be people with whom we will need to negotiate. There is discussion, for instance, of the viability and morality of their market wages being pushed down to subsistence level.

There is an interesting piece, “Ems Freshly Trained”, which looks at the duplication question. Duplication might well be a way round the integration issue (as I wrote in part 1, “it might be as hard to produce and identify an artificial genius as a natural one, but then perhaps we could duplicate it”, and the same might go for an AI which is well-integrated into a particular role).

There is also discussion of cities which consist mainly of computer hardware hosting brains. I have my doubts about that: because of the “feedback” assumption at the top, I don’t think any purpose can be served by intelligences that are entirely isolated from the physical world. Not that they have to be directly acting on the physical world — I do precious little of that myself — but they have to be part of a real-world system and receive feedback from that system. That doesn’t rule out billion-mind data centre cities, but the obstacles to integrating that many minds into a system are severe. As per part 2, I do not think the rate of growth of our systems is limited by the availability of intelligences to integrate into them, since there are so many going spare.

Apart from the Hanson posts, I should also have referred to a post I had read by Half Sigma, on Human Capital. I think that post, and the older one linked from it, make the point well that the most valuable (and most remunerated) humans are those who have been successfully (and expensively) integrated into important systems.