On Human Bots — FSE Blog

On Human Bots

A fanatic is one who can't change his mind and won't change the subject.

— falsely attributed to Winston Churchill

This was originally just a post on FSE, from April 2022. (If it federated to your instance, you can try putting https://freespeechextremist.com/objects/f688063f-d3f3-448e-b0f3-5b02787c2739 into your search bar; it resulted in a somewhat long thread.) I send people to it often enough that it might be easier to just put it here, and a recent discussion prompted me to want to link to it again, but FSE is down.

I have altered the formatting to work with this blog's markdown format, and (as fedi is somewhat more gamey and my posts there are working blue) the images attached to the end have been removed. Some of the tone may read out of place, as expected when you hop venues from a very informal site (where your post is likely to appear among hastily documented buggery and aggressive arguments) to a blog, but that's all still present, so you may have to read it a bit generously in this context.

So, here is the post; I have some commentary after the footnotes.

The Post

[Richard] Wallace's theory of A.I. is no theory at all. It's not that he doesn't believe in artificial intelligence, per se; rather, he doesn't much believe in intelligence, period. In a way that oddly befits a contest sponsored by a bunch of Skinnerians, Wallace's ALICE program is based strictly on a stimulus-response model. You type something in, if the program recognizes what you typed, it picks a clever, appropriate, "canned" answer.

There is no representation of knowledge, no common-sense reasoning, no inference engine to mimic human thought. Just a very long list of canned answers, from which it picks the best option. Basically, it's Eliza on steroids.

Conversations with ALICE are "stateless"; that is, the program doesn't remember what you say from one conversational exchange to the next. Basically it's not listening to a word you say, it's not learning a thing about you, and it has no idea what any of its own utterances mean. It's merely a machine designed to formulate answers that will keep you talking. And this strategy works, Wallace says, because that's what people are: mindless robots who don't listen to each other but merely regurgitate canned answers.
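The stimulus-response model described above is simple enough to sketch in a few lines. This is a toy illustration of the general Eliza/ALICE style (the keywords and canned replies here are invented; ALICE's actual rule set, written in AIML, is far larger), not a reproduction of Wallace's program:

```python
import random

# Toy stateless stimulus-response bot: match a keyword, emit a canned answer.
# No memory, no knowledge representation, no inference -- just a lookup.
# Rules and replies below are made up for illustration.
RULES = [
    ("mother", ["Tell me more about your family.", "How do you feel about her?"]),
    ("computer", ["Do machines worry you?", "Why do you mention machines?"]),
]
DEFAULTS = ["I see.", "Go on.", "Interesting. Please continue."]

def reply(utterance: str) -> str:
    """Pick a canned answer for the first matching keyword.

    Statelessness is the point: each call sees only this one
    utterance, so nothing is remembered between exchanges.
    """
    lowered = utterance.lower()
    for keyword, answers in RULES:
        if keyword in lowered:
            return random.choice(answers)
    return random.choice(DEFAULTS)
```

The whole "conversation" is a series of independent calls to `reply`; you could shuffle the user's lines and the bot would neither notice nor care.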

Now, that piece is 19 years old, you know. March 2003. The piece predates Twitter¹, which essentially vindicated Wallace: pick out a keyword, ignore what the person is saying, regurgitate something with that keyword in it. It's something that happens when a person is checked out of a conversation, which in the usual case is because the person doesn't do a lot of thinking. But there exist clever, cogent people that spend so much time engaging in that style of interaction that if you try to talk to them, they keep slipping into that mode. Pick out a keyword, regurgitate something with that keyword in it. You ever have a conversation with someone that teaches kindergarten? They speak small words slowly, exaggerate their facial expressions. They spend all day explaining things to small children. This is the same thing, roughly, but instead of being calibrated for interacting with an average 5-year-old, they're calibrated for interacting with the most aggressive single-issue lunatics, the people that make stupid pronouncements and type slogans at you.

It should be obvious that a failure to think is contagious. And Twitter loves to shovel that shit into your face because you're much more likely to engage, and the more time you spend using Twitter, the more ad inventory² they have. It's nice that, here on fedi, we don't have an algorithm pushing things that are shallow but punchy and controversial: we can talk here. This is largely squandered by people that want to drag Twitter-style interaction here, because they're either completely fucking stupid, or they spend all their time talking to people that are borderline-illiterate and they haven't yet figured out how to be a real human again.

So I can understand what people are complaining about when they say that the cell phone killed internet discussions³. There's a threshold for aggressive idiots, and once you've crossed the threshold, they start converting other people into idiots. Anyway, I try to avoid spending too much time talking to that kind of person, and then if they're persistent enough, I have this idiot bot that simulates an internet argument but without even trying to pick out keywords like ALICE did. And when faced with this bot, some people will respond to it and argue with it without realizing it's a bot. Dr. Wallace isn't just vindicated: people regularly fail to live up to his very low expectations for human consciousness.⁴


¹ So there are all kinds of quaint things like the author goes out looking for people, flies out to meet them, talks to them, does research, tries to understand the people he's talking to and the subjects they're discussing, and all of that instead of just looking at a Twitter thread and trying to figure out how to turn it into clickbait.

² This is the literal term, "ad inventory". Number of users times the amount of time spent engaging the application times the rate of advertisements per unit of time, that's ad inventory. Nowadays when they sell inventory to advertisers they'll sprinkle magic predictive targeting dust all over it, and that's scummy shit too, but what ultimately determines how much money they make is the ad inventory.
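The footnote's definition reduces to straightforward arithmetic. A sketch with entirely made-up numbers, just to make the multiplication concrete:

```python
def ad_inventory(users: int, minutes_per_user: float, ads_per_minute: float) -> float:
    """Ad inventory = users x time spent engaging x ads per unit of time."""
    return users * minutes_per_user * ads_per_minute

# Hypothetical figures: a million users, half an hour a day each,
# one ad every two minutes -> fifteen million impressions a day to sell.
daily_inventory = ad_inventory(1_000_000, 30, 0.5)
```

Every term in that product is something the platform can push on, which is why engagement is the thing they optimize.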

³ Tangentially related to what judgedread calls "meme malware". Same vector, anyway.

⁴ This is one of the things I miss about cvcvcv, his routine was a brilliant parody of that style of interaction. MKULTRA is doing something else, also brilliant, but I absolutely refuse to explain it, because that kind of ruins the purpose.

Some commentary

The quote at the beginning is from a very interesting two-part article in the Atlantic from March 2003 entitled "Artificial Stupidity". It is a very fun piece, thoroughly researched, and documents one of the weirdest intersections of computer science and cognitive science: the Loebner Prize. The people involved are pretty interesting. (Incidentally, ALICE is on GitHub, and I imagine a lot of the bots have made their way there by now.)

The reason I send people links to the post is that it more or less summarizes my thoughts about a situation that comes up frequently when talking to people on the internet. There exist people that ostensibly try to engage with you, but are actually just pushing a (nowadays, usually political) agenda, and you have encountered them. An anonymous author, discussing the same situation on USENET, referred to the experience as "one man with a thousand megaphones", because the nature of this kind of person is to act as an amplifier for a message they have received. It is analogous to a particularly slimy breed of salesman that exploits the normal human tendency to be polite to someone that is addressing you, by pretending to engage with you and steering the conversation towards parting you from your money. It is more or less like that, though the political variety is someone trying to proselytize in order to create new partisans to spread their (always terrible) politics.

I view this kind of behavior, which some people act out almost compulsively, as discarding your humanity: we think and reason and feel, and if you are regurgitating knee-jerk reactions, you're doing none of those things. It reminded me of Wallace's ALICE bot, so I had sort of conceptualized this type of person as a bot, and ranted on fedi about it being depressing to encounter someone that had discarded their humanity to act as an empty vessel for someone else's philosophy. So I sometimes refer to that type of person as a bot, and this is confusing if taken too literally. But if you interact with someone only through their posts, their physical presence is far less of a factor to you than it is to someone you interact with in person, and from your side of the net connection, it's not any different from talking to a chat-bot. The bot doesn't know or care what you are saying, so actual engagement is impossible. Whether you object or agree, they don't respond to your thoughts: all you are actually doing is indirectly choosing a branch in a pre-written script. Although it can be entertaining, it is far more often tedious: once you've seen one, you've seen all of them. This makes it a waste of your time and energy: you're presumably reading what they say and then thinking and then responding with your thoughts, only for them to pick out a keyword and spit out the appropriate slogan, or choose the appropriate branch in the call-center script they are following.

The post largely deals with people that have contracted this condition involuntarily by spending too much time arguing with people on the internet. It can happen to anyone. I would like to avoid that fate, so I try to be careful not to engage in a stupid internet shouting match or to waste time having a one-sided conversation. It seems like a mistake to do so, and potentially hazardous to your ability to converse with people that actually are worth the time, or even to gauge who is worth the time and who is not.