Some time before Xmas i sat down and listened to Stuart Russell's Reith Lectures, "Living With Artificial Intelligence". Over four parts it covers a lot of the ethical, economic, and moral issues arising from the (apparently inevitable) development of AGI.

It's good that it starts out making the point that what we have now, with efforts like DeepMind's AlphaGo beating a Go champion, is technically impressive but not at all intelligent. These are complex models, fed with large amounts of data. They may make interesting decisions, but they do so from a position of statistical analysis.

A more general intelligence would be something altogether different… quite how it would be different is interesting to think about in its own right. What is intelligence? Where is the separation between mind and body? How are we creative?

It was a good, thought-provoking series, presented realistically without the huckster futurism of the likes of Ray Kurzweil. Russell even has a wacky Anglo-Californian accent mash-up that reminds me of my own struggles to find a linguistic identity while living in the Bay Area. The gloss of BBC infotainment is a little off-putting, as is the Q&A process, but overall it's worth a listen.

I'm not convinced that we'll actually create AGI in the foreseeable future. It's hard to tell if any real progress is being made while the field is dominated by the ML boom. And the questions that the series prompted for me are kind of orthogonal to its theme… oops.

1) During one of the (interminable) Q&A sessions Russell made a flippant remark along the lines of "you really think the human brain is the most complex thing in the universe?!" I've never given it much thought, but there are "popular science" facts about the number of possible connections in the brain being a huge (universe-scale) number. Maybe it doesn't make sense as a question – the universe contains the brain, which implies that at best we could say that brains are local maxima of complexity. More complex things could exist, but if they're driven by similar evolutionary processes, over similar timescales… maybe they'd reach similar limits?
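The "universe-scale" claim can be made concrete with a quick back-of-envelope sketch. The figures here are my own assumptions (commonly quoted popular-science numbers: ~86 billion neurons, ~10^80 atoms in the observable universe), not anything from the lecture:

```python
import math

# Rough popular-science figures (assumptions, not precise neuroscience):
neurons = 8.6e10                        # ~86 billion neurons in a human brain
pairs = neurons * (neurons - 1) / 2     # possible pairwise connections, ~3.7e21

# If each pair is either connected or not, there are 2**pairs possible wiring
# patterns. That number is far too big to compute directly, so compare logs:
log10_patterns = pairs * math.log10(2)  # log10(2**pairs), ~1.1e21

atoms_log10 = 80                        # ~1e80 atoms in the observable universe

print(f"possible connections: ~10^{math.log10(pairs):.0f}")
print(f"wiring patterns: ~10^({log10_patterns:.1e}) vs ~10^{atoms_log10} atoms")
```

Even the count of *connections* dwarfs everyday numbers, and the count of possible wiring *patterns* is a number whose exponent is itself astronomical – which is presumably where the "most complex thing in the universe" line comes from.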

2) Let us say that a true AGI is possible. If that’s the case then it would be possible in other periods of the universe. We’ve only been around for a blink of an eye. If other civilizations were around for longer blinks, they too could have developed AGI. If we further imbue the AGI with characteristics that we might consider “intelligent”, one might be that it would seek to continue to exist.

Therefore it seems reasonable that an AGI with some degree of autonomy / agency would try to ensure its continued survival. A simple way to do this is to build in redundancy. Driven by the changing nature of the universe, such thinking would most likely result in ever-increasing (data-center / continent / planet / solar system / galaxy) levels of redundancy.

If such a system could reach a point of being self-sustaining, its potential for growth would be limited only by time. Over billions of years (the universe is roughly 13.8 billion years old; Earth, if we assume it is an average planet for intelligent life, has been around for ~4.5 billion – there are billions of years to play with!) it could spread itself pretty widely.
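As a sanity check on those timescales, here's a toy estimate. The numbers are my own assumptions for illustration: a Milky Way diameter of ~100,000 light-years and a leisurely expansion speed of 0.1% of c (slow probes, with stops to replicate):

```python
# Toy Fermi-style estimate: how long would a slowly expanding, self-replicating
# system need to cross a galaxy? Figures are assumptions for illustration only.
GALAXY_DIAMETER_LY = 1.0e5   # Milky Way diameter in light-years (approx.)
SPEED_FRACTION_C = 1.0e-3    # expansion at 0.1% of light speed

# Travelling d light-years at a fraction f of c takes d / f years.
crossing_time_years = GALAXY_DIAMETER_LY / SPEED_FRACTION_C

print(f"galaxy crossing time: ~{crossing_time_years:.0e} years")
```

That comes out at ~10^8 years: a hundred million years, a small fraction of the billions available – which is exactly what makes the silence below so puzzling.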

And yet, we don’t see it. We don’t see any sign of it.

There are all sorts of reasons (see the Fermi Paradox, etc.) why this might be the case. Or perhaps it would be smart not to be seen? Maybe a suitably smart AGI works out how to communicate via quantum entanglement / "spooky action at a distance", leading to weird discoveries in the quantum realm. Perhaps entanglement works because the distance between the particles isn't large in higher dimensions, dimensions with which we don't know how to interact? Being smart enough over a long enough period of time might let you leave behind the limits of our current understanding of spacetime.

At the end of such flights of fantasy (putting the fiction into the science of sci-fi!) we’re back in a universe that is unfathomably large and empty. Aliens or intelligent machines? Not much has changed: we’re isolated in time and space.

I've no idea why the Fermi Paradox interests me so much. The more i think about it, the more likely it seems that there are two fundamental truths: i) c is the law; ii) the universe is as big as it is old, and it's getting bigger at an ever-increasing rate!

In which case, yes, there probably is life all over the place, but it's still unlikely enough that it doesn't occur in clumps very often. Out here on our average planet, in our average arm of an average galaxy, in an average supercluster, in an average area of the universe, it could be very lonely!

[Thanks to Sven for listening to an early version of these thoughts. They are no doubt embarrassingly simplistic. Unfortunately i don't really have the time / motivation to go back to school and get to a place of serious study of the details… and, er, get quickly out of my depth!]