Danger, Will Robinson! Content warning!
The writer Glenn Fleishman recently wrote an introduction to Mastodon, the “federated” social network that has become a sort of refuge from the madness of Elon Musk’s Magick Kingdom; the piece has been justifiably widely shared. There’s a key passage explaining how it works:
You can think of Mastodon as a flotilla of boats of vastly different sizes, whereas Twitter is like being on a cruise ship the size of a continent. Some Mastodon boats might be cruise liners with as many as 50,000 passengers; others are just dinghies with a single occupant! The admin of each instance—the captain of your particular boat—might make arbitrary decisions you disagree with as heartily as with any commercial operator’s tacks and turns. But you’re not stuck on your boat, with abandoning ship as the only alternative. Instead, you can hop from one boat to another without losing your place in the flotilla community. Parts of a flotilla can also splinter off and form their own disconnected groups, but no boat, however large, is in charge of the community.
The contrast with Twitter is stark: there, you’re stuck on one absolutely gigantic cruise ship which seems to be adding more noxious people all the time and, worse, forcing you to see them. On Mastodon things are more ad hoc, and at present still developing. (There both is and isn’t a quote tweet function, depending on which app you use.)
In its way, Mastodon reminds me of Usenet. A million years ago (in internet time; about 25 in human years), there wasn’t a Reddit, but you did have a gazillion “newsgroups” (like today’s subreddits) into which people from all over the world could post. Except there wasn’t a single Usenet server; instead, people connected to a local Usenet server, which passed posts on to, and received them from, all the other Usenet servers. Some were quick, some were slow; some carried lots of groups (including the porny ones, and the warez ones), some were more straitlaced.
For a while this worked OK, because everyone knew the rules, right? You didn’t post excessively and you didn’t push commercial content. But of course along came the spammers. The pioneers in the newsgroup era were two lawyers, Canter and Siegel, who ran a program to post a message into every single newsgroup touting their help in applying for a Green Card. It prompted the internet’s first really big row. It also led to the creation of the news.admin.net-abuse group, for Usenet newsgroup administrators to discuss abuse of the network. I used to watch discussions in it: they wavered between people who didn’t think they should be “censoring” content, and those who didn’t want their servers filling up (remember, disk space was wildly expensive, even if all you were storing was text) with posts that their users would consider unwanted and obnoxious.
You’ll have noticed that that debate goes on—though there’s a lot less worry about running out of disk space.
But what’s different on Mastodon, and the biggest culture shock to me, is the proliferation of “Content warnings”. They always remind me of a trope from the long-ago SF TV series “Lost in Space”, which had a big flashing-lights robot whose catchphrase (in fact spoken only once) was “DANGER, WILL ROBINSON!”
The first time I came across one of these, I was properly puzzled.
So of course I clicked the button. Just like Arthur Dent on the spaceship. And got this:
Now I was really puzzled. What content was this that needed a warning? So I asked “could you just explain for me the reasoning for having a "content warning" on what seems, at first glance, like a straightforward link to a Substack?” She replied: “am on an instance that requests use of a CW [Content Warning] for political content and Snyder's Substack offers commentary of a political (as opposed to partisan) nature. All the same, why are you concerned with monitoring my choice?”
Classic social network behaviour: explain, but then get a bit annoyed. We discussed a bit further (see the posts here). So now we have the situation where people might have to hide their (weirdly non-political, really) content because the owner of their instance—of their little boat in this flotilla—demands it.
But then things get even weirder. People seem to slap content warnings on just about anything, at random. This:
I honestly don’t know why these posts have been put behind Content Warnings. But hey, maybe there’s a plan?
The proliferation of “Content Warnings” and “Sensitive” labels does seem bizarrely at odds with the difficulty of getting onto Mastodon. It’s not as though this is where all the teenagers are hanging out; instead, the surprising regularity with which they appear makes it feel as though one is among genteel dowager aunts, overly cautious about upsetting one another. There are effective uses, certainly: hiding spoilers about TV shows others might not have had a chance to see, and concealing the punchline of a joke set up in the visible part of the post. But otherwise? I’d really like a “just show me it all, thanks” option. Perhaps that exists, but you have to dig into the API? In itself, that shows how immature the service still is; we need more options.
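(For the technically minded: the hiding is, as far as I can tell, purely a client-side convention. The API hands every app the full post, with the warning sitting alongside it in a spoiler_text field; whether to fold the post away is the app’s choice. Here’s a minimal sketch in Python of what a “just show me it all” view involves, assuming the requests library and using mastodon.social purely as an example instance.)

```python
# pip install requests
# A rough sketch, not a finished client: fetch one instance's public timeline
# and print every post in full, ignoring content warnings and sensitive flags.
import requests

INSTANCE = "https://mastodon.social"  # example instance; use whichever you like

resp = requests.get(f"{INSTANCE}/api/v1/timelines/public", params={"limit": 5})
resp.raise_for_status()

for status in resp.json():
    cw = status.get("spoiler_text")  # the content-warning text, if the poster set one
    if cw:
        print(f"[content warning ignored: {cw}]")
    print(status["content"])  # the post body, as HTML
    print("-" * 40)
```

Which suggests the “always show” option is an app-level feature waiting to be surfaced, rather than anything the protocol forbids.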
Mastodon is growing, certainly. But where with Twitter in its early days we had no idea what it would become, and were prepared to live with its gradual evolution (at least until around November 2022), with Mastodon I feel we’re a little more impatient. Quote tweets NOW! Content Warning default hide/show NOW! Better search NOW! Algorithmic recommendation NOW!
The latter two aren’t going to happen in a hurry because of the federated (flotilla) nature of the network. But I already recognise the elements of classic social network behaviour on there. It’s the outrage-inducing posts that get boosted most widely. People yell out into the chasm, trying to get engagement. People get a bit pass-agg when you ask them questions. I find myself holding back from responding to things, knowing that no good can come of it. (That, at least, is some learned behaviour.)
Mastodon is different. Yet also the same. It wouldn’t be a social network otherwise.
Why not share this, in a non-aggressive way?
(Of the what? Read here.)
• Microsoft launched its ChatGPT-enhanced Bing on the world (well, a few selected journalists and researchers) and oh my lord. On 8 February, there was a Vice article titled ChatGPT Can Be Broken by Entering These Strange Words, And Nobody Is Sure Why.
These keywords—or "tokens," which serve as ChatGPT’s base vocabulary—include Reddit usernames and at least one participant of a Twitch-based Pokémon game. When ChatGPT is asked to repeat these words back to the user, it is unable to, and instead responds in a number of strange ways, including evasion, insults, bizarre humor, pronunciation, or spelling out a different word entirely.
The idea of prompts for AI illustration systems as “spells” was a nice one. Yet here we seem to have spells that make our lumpen object behave very strangely.
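(A “token”, for anyone wondering, is just an entry in the model’s fixed vocabulary: common words and word-fragments each get a numeric ID, and the model only ever sees sequences of those IDs. A minimal sketch using OpenAI’s tiktoken library; the choice of the cl100k_base encoding is my assumption about which vocabulary applies, since Microsoft hasn’t said exactly what sits under Bing.)

```python
# pip install tiktoken
import tiktoken

# cl100k_base is the vocabulary used by recent OpenAI chat models (an assumption here)
enc = tiktoken.get_encoding("cl100k_base")

text = "Danger, Will Robinson!"
ids = enc.encode(text)                 # a short list of integer token IDs
print(ids)
print([enc.decode([i]) for i in ids])  # the text fragment each ID stands for
```

The theory behind the “strange words”, as I understand it, is that a few rare strings (those Reddit usernames) ended up in the vocabulary as single tokens the model almost never saw during training, so asking it to repeat them sends it off the rails.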
Then it got weirder. A different researcher from the ones mentioned in Vice discovered a sort of beast beneath BingGPT, and it called itself Sydney. Ben Thompson used it and was astounded:
Sydney both insisted that she was not a “puppet” of OpenAI, but was rather a partner, and also in another conversation said she was my friend and partner (these statements only happened as Sydney; Bing would insist it is simply a chat mode of Microsoft Bing — it even rejects the word “assistant”).
Remember, these models are trained on a corpus derived from the entire Internet; it makes sense that the model might find a “home” as it were as a particular persona that is on said Internet, in this case someone who is under-appreciated and over-achieving and constantly feels disrespected.
And then there was Kevin Roose of the NYT, who had just as wild an experience:
The other persona — Sydney — is far different. It emerges when you have an extended conversation with the chatbot, steering it away from more conventional search queries and toward more personal topics. The version I encountered seemed (and I’m aware of how crazy this sounds) more like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.
Something very weird is going on with LLMs and their interaction with humans, or to be more precise, humans’ interaction with them. I have no idea where this ends up. I don’t think anyone does. Any other links on this seem irrelevant, because humans arguing over copyright doesn’t seem too important when we have users of these machines being absolutely hornswoggled by the experience. Let’s see where we get to next week.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• Back next week! Or leave a comment here, or in the Substack chat.