Link (+ scattered notes): Impossibility of Intelligence Explosion
As you might have already noticed, I think Francois Chollet (most famous as the author of Keras) is quite an interesting person.
In a recent blog post titled Impossibility of Intelligence Explosion (found via Juho Vaiste on Twitter and HN) he argues against the notion of a singularity.
Some people I talked with about this post (who are supposedly knowledgeable about these topics) were not convinced (I didn’t expect them to be).
If I understood the crux of the main argument against Chollet’s position, it is this: his claim that intelligence (especially the general kind of intelligence we talk about when comparing humans to octopuses or to hypothetical technological artificial intelligences) is merely “situational skill” is overblown; general, non-situational intelligence, not limited by embodied existence, might actually exist. This sounds plausible to me. An octopus can be thought to demonstrate an aspect of some “general cognitive problem-solving capability known as intelligence” when it solves whatever problems it encounters in its daily underwater life. (For an interesting popular-science exposition of what we do know about how the octopus mind works, see [1].)
At the same time, it is difficult to get octopuses to solve problems that would measure their intelligence (or so humans would think) but that fall outside the domains of problems octopuses are interested in solving. Is this a demonstration of a lack or limit of octopus intelligence? Or is it evidence in favor of Chollet’s position that intelligence is very situational? I suspect there’s a bunch of literature on the possibility of AI that I should read and consider before I can even dream of answering this question cogently; at the same time, I think it’s fun to observe this kind of discussion.
Other parts of his critique that I found worth thinking about more:
- Maybe “general intelligence” truly exists and makes sense as a concept, but it will still always be constrained by the environment it exists in. The problem is finding out where the limit is (especially in the context of a possible intelligence explosion: if we take the Bostromian argument about its dangers seriously, we don’t want to find out afterwards that the constraints were not as constraining as we thought they were).
- I also liked his idea that the capability of self-improving systems to self-improve is constrained by all kinds of friction (as a graph grows in size, its number of edges grows much faster than its number of nodes; see the toy sketch after this list). I believe a related concept is known in the weird intersection of information theory / thermodynamics / systems theory, often formulated as “entropy increases in systems over time”. See e.g. [2].
- I also liked the idea of our much better cognitive capacity (“higher intelligence”) compared to animals or prehistoric humans being something external to a single agent, a collaborative product of civilization as a whole. OTOH, this idea of Chollet’s rules out only a singular super-intelligent agent magically appearing out of nowhere. We are still left with the problem that this kind of collaborative societal structure (in which the greater-than-single-human intelligence would be embodied) might turn out to be harmful. Maybe the “alignment problem” that the people who take the explosion argument seriously are worried about is even more difficult than it first sounds: you don’t have to engineer a single computer program to be “safe”, you’d have to engineer the whole socio-cultural process to be “safe”.
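To make the friction point in the second bullet concrete, here is a toy sketch (my own illustration, not something from Chollet’s post): in a fully connected system the number of pairwise interactions is n(n − 1)/2, so interactions grow quadratically while components grow only linearly, and coordination overhead quickly dominates.

```python
# Toy illustration (mine, not Chollet's): in a complete graph the number of
# edges (pairwise interactions) grows quadratically with the number of nodes
# (components), so "friction" outpaces whatever each new component adds.

def pairwise_interactions(n: int) -> int:
    """Edges in a complete graph on n nodes: n * (n - 1) / 2."""
    return n * (n - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} components -> {pairwise_interactions(n):>12,} possible interactions")
```

For 10 components that is 45 possible interactions; for 10,000 components it is already about 50 million, which is the kind of overhead that, on this argument, keeps recursive self-improvement from simply running away.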
References
[1] Peter Godfrey-Smith. Other Minds: The Octopus and the Evolution of Intelligent Life. William Collins, 2016. GoodReads
[2] Wikibooks. Systems Theory/Entropy.