ChatGPT has gone berserk - by Gary Marcus
That’s not a joke, it’s a quote. And also a warning.
Over the last few hours, people have been reporting a variety of problems with ChatGPT:
Devin Morse, a philosophy PhD student, has collected more examples in this thread.
OpenAI itself has acknowledged the issue:
I won’t speculate on the cause; we don’t know. I won’t speculate on how long it will take to fix; again, we don’t know.
But I will quote something I said two weeks ago: “Please, developers and military personnel, don’t let your chatbots grow up to be generals.”
§
In the end, Generative AI is a kind of alchemy. People collect the biggest pile of data they can, and (apparently, if rumors are to be believed) tinker with the kinds of hidden prompts that I discussed a few days ago, hoping that everything will work out right:
The reality, though, is that these systems have never been stable. Nobody has ever been able to engineer safety guarantees around them. We are still living in the age of machine learning alchemy that xkcd captured so well in a cartoon several years ago:
The need for altogether different technologies that are less opaque, more interpretable, more maintainable, and more debuggable (and hence more tractable) remains paramount.
Today’s issue may well be fixed quickly, but I hope it will be seen as the wakeup call that it is.
As ever, Gary Marcus longs for trustworthy AI. There is a fun profile of him today by Melissa Heikkilä in Technology Review, along with a terrific podcast on Sora and society, with Jayme Poisson, at CBC’s Frontburner.