Bot Behavior

A bot is an autonomous program, a form of AI. In this instance, think of it as a little man searching for food onscreen. In the least impressive form of AI, my bot would simply read the food’s screen coordinates from the program itself and place himself next to it. But what if I wanted my bot’s behavior to be more humanlike?

I don’t want my bot to be omniscient, or able to blip himself anywhere onscreen in an instant. In a slightly more impressive form of AI, my bot would wander the screen at random until he accidentally bumped into the food. There’s a possibility he’d never find it, but if he runs long enough, his random pattern might succeed even though he isn’t going anywhere in particular and doesn’t remember where he’s been.
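
That wandering behavior is easy to sketch. Here’s a minimal, illustrative version on a small grid; the function name, grid size, and step cap are my own inventions, not from any particular program:

```python
import random

def wander_to_food(start, food, width=20, height=20, max_steps=100_000, seed=None):
    """Random-walk from `start` until the bot lands on `food`.

    Returns the number of steps taken, or None if max_steps runs out,
    mirroring the chance that the bot never finds the food at all.
    """
    rng = random.Random(seed)
    x, y = start
    for step in range(1, max_steps + 1):
        # Pick a direction at random; the bot has no memory of past moves.
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        # Clamp to the screen edges so the bot can't wander off.
        x = max(0, min(width - 1, x + dx))
        y = max(0, min(height - 1, y + dy))
        if (x, y) == food:
            return step
    return None
```

Run it enough times and the aimless pattern usually stumbles onto the food eventually, which is exactly the point: success without intention.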

To make the bot more humanlike, I’d want to interrupt and alter his movement with random timers, making his path more erratic with less obvious patterns. Humanlike behavior also degrades over time, so a humanlike bot would slow his actions and decision-making as the activity progressed, even pausing as if decisions were being considered. But within that decaying action, I’d mix in some random “bursts of energy”.
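
One way to sketch that pacing is to generate the delay before each action: delays stretch as fatigue accumulates, occasional long pauses look like deliberation, and random bursts of energy shake most of the fatigue off. Every constant here is invented for illustration:

```python
import random

def humanlike_delays(n_actions, base_delay=0.1, fatigue_rate=0.01,
                     burst_chance=0.05, seed=None):
    """Return a list of per-action delays that decay humanlike over time."""
    rng = random.Random(seed)
    fatigue = 0.0
    delays = []
    for _ in range(n_actions):
        if rng.random() < burst_chance:
            fatigue *= 0.25          # a burst of energy: most fatigue drops away
        # Jitter each delay so no two actions are timed identically.
        delay = base_delay * (1 + fatigue) * rng.uniform(0.7, 1.3)
        if rng.random() < 0.02:      # occasionally pause, as if deliberating
            delay += rng.uniform(0.5, 1.5)
        delays.append(delay)
        fatigue += fatigue_rate      # decisions get slower as the activity goes on
    return delays
```

Feeding these delays into the bot’s movement loop (for example, sleeping between steps) would produce the erratic, slowly decaying rhythm described above.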

My bot will need biases. Humans have preferences, so a more humanlike bot would need to choose from a set of particular options — not purely random, but weighted random. Maybe he prefers right turns, for example. He’ll also need to pick a destination and get there. Humanlike behavior isn’t meandering randomly; it’s going from one objective to the next.
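
A weighted random choice is a one-liner in most languages. Here’s a sketch of the right-turn bias; the weight values are made up purely for demonstration:

```python
import random

# This bot "prefers" right turns: the weights skew the random pick.
DIRECTION_WEIGHTS = {
    "right": 0.5,     # strong bias toward right turns
    "straight": 0.3,
    "left": 0.15,
    "reverse": 0.05,  # almost never doubles back
}

def biased_turn(rng=random):
    """Pick a turn at random, but weighted by the bot's preferences."""
    directions = list(DIRECTION_WEIGHTS)
    weights = list(DIRECTION_WEIGHTS.values())
    return rng.choices(directions, weights=weights, k=1)[0]
```

Over many calls the bot still surprises you, but a clear preference emerges — which is exactly how a human player’s habits look from the outside.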

In actuality, a bot can know everything the overall program knows, but to be more humanlike, his perspective must be limited to his immediate vicinity, never grasping the whole picture. He needs to react only to what’s around him. Relatedly, he should remember some of where he’s been, but this memory should degrade over time. Some memories might be prioritized and maintained through a reward/punishment ranking mechanism.
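
That decaying, reward-ranked memory can be sketched as a table of memory strengths that fade each tick, where rewarded memories are reinforced so they outlive the rest. The class name and decay constants are illustrative assumptions, not a prescribed design:

```python
class DecayingMemory:
    """Remembered locations fade over time; rewarded ones fade slower."""

    def __init__(self, decay=0.9, forget_below=0.05):
        self.decay = decay
        self.forget_below = forget_below
        self.strength = {}   # location -> memory strength

    def remember(self, location, strength=1.0):
        self.strength[location] = max(self.strength.get(location, 0.0), strength)

    def reward(self, location, bonus=0.5):
        # Reinforce a memory so it survives longer than unrewarded ones.
        if location in self.strength:
            self.strength[location] += bonus

    def tick(self):
        # All memories fade; the weakest are forgotten entirely.
        self.strength = {loc: s * self.decay
                         for loc, s in self.strength.items()
                         if s * self.decay >= self.forget_below}

    def recalled(self):
        return set(self.strength)
```

A rewarded location (say, where food was once found) lingers for many ticks after ordinary memories have faded, which gives the bot just enough imperfect recall to seem human.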

Now, if I implemented such a humanlike bot, and you watched the little character onscreen searching for food, could you tell the difference between a human-controlled character and an AI-controlled one? The more perfectly the character behaves, the more artificial he’ll seem. But by corrupting that perfection with sloppiness, he’ll appear much more human, wouldn’t you agree?

So what’s my point? The very nature of humanity seems to be sloppiness. If a robot wanted to act human, he’d have to introduce a lot of randomly erratic behavior into his actions. “Whoa, too much!” “Oops, too little!” It’s always one mishap after another. Now consider this: what’s a roller coaster? A car on a sloppy path to nowhere.

What’s a movie? A narrative in which a character takes an indirect and winding path to his destination. What makes a movie a movie is the fact that the character takes the sloppiest route possible. This sloppiness is by design: it injects excitement through uncertainty while prolonging the experience.

My onscreen bot could perform his food-finding function in milliseconds. But so what? No one wants to watch that. Perfection is near instantaneous. But if I mix in uncertainty and a drawn-out pace, all of a sudden you’re rooting for the little man onscreen. “You can do it! Oh so close!! Come on! YAY!!” That’s existence in a nutshell: traversing a sloppy path toward our selected objectives.