More fake AI podcasts about Beeminder

This is a follow-on to the last weekly beemail, which I’ll repeat here:

Hey buzzybodies, last night we blogged about the concept of user-squeaming:

Contra User-Squeaming | Beeminder Blog

I hope you like it! I think it will be handy for linking to in the future, at least – just to make that handy term official. (Unlike referring to y’all as buzzybodies, which I’m merely trying on for size – this has been a whole topic of debate on the forum and in the daily beemails.)

In tangentially related news, I came across a tool that takes a URL and makes a fake podcast about it, so I tried it on our user-squeaming post and it is just breaking my brain:

https://youtu.be/AgIzY6gOyEM

I mean, it’s so bad on one level, with at least one hallucination (or a false inference, at any rate) and liberal sprinklings of cheese, but also so utterly human-sounding and coherent. I defy anyone to distinguish stuff like this from the median human podcast out there. You have to admit it’s deeply technically impressive, at the very least, right?

I also had the latest fancy GPT (o1 – facepalm at the horrible naming) write some code to gather up all the Beeminder blog post images over the years and make a slideshow out of them, so there’s something kind of interesting to look at there if the voices are too excruciating.
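For the curious, here’s a minimal sketch of the kind of script I mean – not o1’s actual code. The /page/N/ pagination scheme, the grab-every-img-tag scraping, and the ffmpeg step at the end are all assumptions/choices you’d want to verify against the real blog:

```python
# Minimal sketch (not the actual generated code): scrape images from the
# Beeminder blog into a folder, then stitch a slideshow with ffmpeg.
# Assumes WordPress-style /page/N/ pagination; verify before relying on it.
import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

BLOG_URL = "https://blog.beeminder.com"
OUT_DIR = "blog_images"

def image_urls(html):
    """Pull every <img src> out of one page of the blog."""
    soup = BeautifulSoup(html, "html.parser")
    return [img["src"] for img in soup.find_all("img") if img.get("src")]

def main():
    os.makedirs(OUT_DIR, exist_ok=True)
    count = 0
    page = 1
    while True:
        resp = requests.get(f"{BLOG_URL}/page/{page}/")
        if resp.status_code != 200:  # ran off the end of the archive
            break
        for url in image_urls(resp.text):
            url = urljoin(BLOG_URL, url)  # handle relative src attributes
            img = requests.get(url)
            if img.ok:
                count += 1
                ext = os.path.splitext(url.split("?")[0])[1] or ".jpg"
                with open(os.path.join(OUT_DIR, f"{count:05d}{ext}"), "wb") as f:
                    f.write(img.content)
        page += 1
    print(f"Saved {count} images. One way to stitch a slideshow")
    print("(mixed sizes/formats may need normalizing first):")
    print(f"  ffmpeg -framerate 1 -pattern_type glob -i '{OUT_DIR}/*' slideshow.mp4")

if __name__ == "__main__":
    main()
```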

I honestly feel profoundly confused about where this is all heading. If you have opinions, hit reply. I want to hear!


I’ve been continuing to think about all this a bit obsessively lately. I tried uploading the 50 most recent Beeminder blog posts to NotebookLM and had it generate another fake podcast:

Some things are all wrong, but the parts it nails, it really nails, right? It’s pretty uncanny.

3 Likes

Something about AI verbal fill is so weird to me. When it’s not true to human verbal fill, it’s very off-putting. On the other hand, when it’s accurate to typical verbal fill, it’s unsettling to me that the AI got it so right.

1 Like

“Uncanny” is the correct word, I think. Unsettling. This is headed somewhere…but, where??

2 Likes

Exactly! I mean, there’s a chunk of probability mass on AI hitting a wall soon. An AI winter after all this excitement would certainly fit the historical pattern. But, as you say (or imply with your double question-mark), there is just massive uncertainty about that. At some point, AI is going to hit human-level in general problem-solving and planning ability. Some people (including some who count as experts) are confident that’s happening this decade. I’m confident that that confidence is misplaced but not confident that it’s wrong.

The following from Nate Silver’s new book strikes me as a good way to lay out the monumental uncertainty we’re facing right now about how AI will play out. (And I like how the probability mass is represented as little hexagons. :honeybee:)

Not to say I agree with how the probability mass is distributed, just that it’s a nice way to map out the space of possible futures.

(For the less visual among us, it’s basically laying out a 2-dimensional space with Impact on one axis and Goodness-vs-Badness on the other, and then thinking about how to distribute the probability mass across that space – see the toy sketch below.)
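To make that concrete, here’s a toy version in Python – the numbers are entirely made up for illustration (they’re not Silver’s, and not a serious attempt at mine either), but it shows the idea of a probability mass over the two axes that has to sum to 1:

```python
# Toy illustration: an invented probability mass over (impact, goodness).
# The numbers are made up for the example -- not Nate Silver's.
import numpy as np

impact_levels = [1, 4, 7, 10]        # rough "technological Richter scale"
goodness = ["bad", "mixed", "good"]  # collapse the goodness axis to 3 bins

# Rows = impact levels, columns = goodness; each cell is P(that future).
mass = np.array([
    [0.02, 0.10, 0.03],  # impact ~1:  a fad; little at stake either way
    [0.05, 0.30, 0.15],  # impact ~4:  internet-sized
    [0.05, 0.10, 0.10],  # impact ~7:  industrial-revolution-sized
    [0.05, 0.00, 0.05],  # impact ~10: hollow middle -- at the extreme,
])                       #             outcomes are very good or very bad

assert abs(mass.sum() - 1.0) < 1e-9  # a distribution must sum to 1
print(f"P(impact >= 7)             = {mass[2:].sum():.2f}")
print(f"P(catastrophe at level 10) = {mass[3, 0]:.2f}")
```

(The empty middle cell in the bottom row is the same point made later in this thread about level 10 of the technological Richter scale: at that level of impact, “meh” futures get essentially no mass.)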

It’s a more continuous version of a flow chart from Scott Aaronson and Boaz Barak’s “Five Worlds of AI”:

[flow chart from the “Five Worlds of AI” post]

(Or see my own notes on that from last spring in which I conclude that we’d have to be unreasonably confident to push the probability of the literal end of the world due to AI below 1%.)

2 Likes

I like the probability spread vs the binary of the flow chart.

Is general AI just a really fancy, really powerful LLM trained on even more data? Or will it be something else entirely? If something else entirely, then the timeline will likely be longer, right?

We will continue to find new ways to use LLMs, and I agree they are already above post-it notes in significance. But how significant will they become? I would be concerned if I wrote things for a living, that’s for sure.

1 Like

Agreed. And it’s probably going to take further breakthroughs. Just scaling up to bigger and bigger training runs is likely to hit a wall (or at least plausibly will do so? have I mentioned the massive uncertainty here?).

But given how surprising the emergent capabilities of LLMs have been over the last few years, it’s hard to be sure of that. As late as 2022 I was pretty sure that language models would never be able to answer common-sense questions like “what’s bigger, your mom or a french fry?”. Even when Google published a paper showing off just that ability, I thought it was more likely that they were cheating in some way than that LLMs actually had the capabilities Google claimed. Then OpenAI released ChatGPT and my head exploded.

I honestly think that if your head didn’t explode in 2022, you weren’t paying attention. That leap in capabilities made it plausible that AGI was around the corner. Now it’s been a couple years and the probability has gone back down a bit. But we’re still profoundly clueless about how this plays out from here.

Well, it depends. LLM-generated writing is currently pretty painful to read. Or at best, with the right prompting, it’s, let’s say… perfectly tolerable? It’s not in danger of breaking into my list of top ten authors. Or if it does, that’s AGI and the whole question is moot because we’re all dead or uploaded to the matrix or whatever (note the lack of probability mass between “extraordinarily positive” and “catastrophic” at level 10 of the technological Richter scale).

1 Like

I wonder if there are two things that are freaky about LLMs. One is their uncanniness, for sure. Perhaps another is their (apparent?) suddenness. The Internet was gradual. It started with people at universities sending messages to each other. That didn’t seem revolutionary. Then we all got email, and that was cool, but not life-changing. Then you could order books (just books!) on Amazon. That was the first sign of what it would turn into. But it was slow. At least at the beginning. With LLMs, we went from nothing to Holy Crap the day ChatGPT was released. And maybe that suddenness is driving some expectation/fear that AGI will be dropped on us just as suddenly. And we won’t know what hit us.

2 Likes