Spot on, yeah. Although, as pointed out just above, this wasn’t actually Weizenbaum’s position. But in an era of letters to the editor, perhaps using a little rhetorical trickery to preempt a two-month-long back and forth might be excusable. It’s a strawman nonetheless, but then, this letter is a screed.
I suspect he got asked it a lot. There was a lot of interesting work going on back then, but people basically didn’t have any notion that there was a path from there to any kind of AGI. (In that respect they might’ve been somewhat more clued up than Altman.)
I think it’s a natural thing to preemptively defend against the obvious counterpoint when you’re arguing that current AI work isn’t going to deliver on the “I”.
Imagining a guy who asks me a dumb question so I can let everyone know how I’d mock them with a joke answer.
“Pray tell, Mr. Babbage…”
Having said that, the fact that this is the kind of thing Altman might say unironically speaks volumes. He really does have a trillion-dollar monorail to sell.