Exactly! I mean, there’s a chunk of probability mass on AI hitting a wall soon. An AI winter after all this excitement would certainly fit the historical pattern. But, as you say (or imply with your double question-mark), there is just massive uncertainty about that. At some point, AI is going to hit human level in general problem-solving and planning ability. Some people (including some who count as experts) are confident that’s happening this decade. I’m confident that that confidence is misplaced, but not confident that it’s wrong.
The following from Nate Silver’s new book strikes me as a good way to lay out the monumental uncertainty we’re facing right now about how AI will play out. (And I like how the probability mass is represented as little hexagons.)
That’s not to say I agree with how the probability mass is distributed, just that it’s a nice way to map out the space of possible futures.
(For the less visual among us: it’s basically laying out a 2-dimensional space, with Impact on one axis and Goodness-vs-Badness on the other, and then thinking about how to distribute the probability mass across it.)
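If it helps to make that concrete, here’s a toy numerical sketch of the same idea: a small grid over the two axes with probability mass spread across it, from which you can read off how much mass sits in any corner of the space. The grid, the labels, and the uniform weights are all my own made-up illustration, not anything from Silver’s book; the whole point is that reasonable people would fill in the mass very differently.

```python
# Toy sketch: probability mass over a 2D Impact x Goodness-vs-Badness grid.
# All numbers here are placeholders, purely for illustration.
import numpy as np

impact = np.linspace(0, 1, 5)      # 0 = negligible impact, 1 = transformative
goodness = np.linspace(-1, 1, 5)   # -1 = very bad outcome, +1 = very good

# Uniform (made-up) mass over the grid, normalized so the total is 1.
mass = np.ones((len(impact), len(goodness)))
mass /= mass.sum()

# Example query: how much mass sits in the high-impact, bad-outcome corner?
high_impact = impact >= 0.75
bad = goodness <= -0.5
p_catastrophe = mass[np.ix_(high_impact, bad)].sum()
print(f"P(high impact and bad) = {p_catastrophe:.2f}")
```

With the uniform placeholder weights, that corner gets 0.16 of the mass; the interesting arguments are all about how far from uniform the real distribution should be.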
Silver’s layout is a more continuous version of a flow chart from Scott Aaronson and Boaz Barak’s “Five Worlds of AI”:
(Or see my own notes on that from last spring, in which I conclude that we’d have to be unreasonably confident to push the probability of the literal end of the world due to AI below 1%.)