
AI Models: Revealing the Quirky Side of Random Numbers

AI models pick "random" numbers much the way humans do, and that behavior sheds light on the limitations of these systems. Humans themselves tend to overthink and misunderstand randomness. When asked to predict coin flips or pick a number between 0 and 100, people rarely choose certain options like 1, 100, multiples of 5, or numbers with repeating digits. Instead, they disproportionately opt for numbers ending in 7. This predictability in human behavior is well documented in psychology.

Recently, engineers at Gramener ran an informal experiment, asking several major LLM chatbots to pick a random number between 0 and 100 to see whether they would exhibit similar patterns. The results were surprising, but they were anything but random. Each model had a "favorite" number that consistently appeared as its answer, even at higher "temperature" settings, which increase the variability of the output. OpenAI's GPT-3.5 Turbo favored 47; it previously favored 42, the number Douglas Adams made famous as the answer to life, the universe, and everything. Anthropic's Claude 3 Haiku also chose 42, and Google's Gemini preferred 72.
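As a rough sketch, this kind of probe is easy to reproduce with any chat-model API. The snippet below uses the OpenAI Python client; the prompt wording, model name, and sample count are assumptions for illustration, not the details of the Gramener setup.

```python
# Hypothetical reproduction of the random-number probe. The prompt wording,
# model choice, and sample counts are assumptions, not Gramener's exact setup.
import re
from collections import Counter

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def sample_numbers(temperature: float, n: int = 30) -> Counter:
    """Ask the model n times for a number between 0 and 100; tally the replies."""
    counts = Counter()
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            temperature=temperature,
            messages=[{
                "role": "user",
                "content": "Pick a random number between 0 and 100. "
                           "Reply with the number only.",
            }],
        )
        match = re.search(r"\d+", response.choices[0].message.content or "")
        if match:
            counts[int(match.group())] += 1
    return counts

for temp in (0.0, 0.7, 1.0):
    print(temp, sample_numbers(temp).most_common(5))
```

If the reported pattern holds, the tally should stay lopsided toward one favorite number even as the temperature rises.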

More intriguing still, all three models showed bias in the other numbers they selected, even at higher temperatures. They tended to avoid both low and high numbers; Claude rarely went above 87 or below 27. Numbers with repeating digits, such as 33, 55, and 66, were scrupulously avoided, although 77 showed up (it ends in 7). Round numbers were also infrequent choices, though Gemini did pick 0 once at the highest temperature.

The question arises: why do AI models exhibit these human-like biases? The framing itself anthropomorphizes them a step too far. These models don't actually understand randomness, and they don't have preferences. They predict text from their training data: asked to pick a random number, a model reproduces whatever answer most commonly followed that kind of question in the text it was trained on. If a certain response appears frequently in the training data, the model is more likely to repeat it.
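Temperature makes this concrete. Before a token is sampled, the model's scores (logits) are divided by the temperature and passed through a softmax; a higher temperature flattens the distribution but cannot erase a strong peak. The toy numpy sketch below uses fabricated logits to show why a heavily favored answer keeps winning:

```python
# Toy illustration of temperature sampling with made-up logits: if training
# data makes "42" far likelier than its neighbors, raising the temperature
# flattens the distribution but does not dethrone the favorite.
import numpy as np

tokens = ["27", "37", "42", "47", "73"]
logits = np.array([1.0, 1.5, 4.0, 2.0, 1.2])  # fabricated example values

def softmax_with_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
    scaled = logits / temperature          # higher temperature -> flatter
    exps = np.exp(scaled - scaled.max())   # subtract max for numerical stability
    return exps / exps.sum()

for temp in (0.5, 1.0, 2.0):
    probs = softmax_with_temperature(logits, temp)
    print(f"T={temp}:", dict(zip(tokens, probs.round(3))))
# Even at T=2.0, "42" remains the single most probable completion.
```

Higher temperatures add variety around a favorite; they do not manufacture uniformity.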

This also explains why 100 is almost never chosen: it isn't that the number 100 is missing from the training data, but that 100 rarely appears as an answer to this particular question. The models have no reasoning capabilities or numerical understanding; they can only mimic what they have seen. The same limitation extends to other tasks, such as simple arithmetic. Newer models may recognize a math problem and delegate it to a dedicated subroutine, but that routing is not the same as true understanding.
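To make the delegation idea concrete, here is a toy sketch of routing arithmetic to a deterministic subroutine rather than letting the model predict an answer token by token. It illustrates the concept only; it is not how any vendor actually implements this.

```python
# Toy "delegate math to a subroutine" router; purely illustrative, not any
# vendor's implementation. Recognized arithmetic is computed deterministically;
# anything else would fall through to the language model.
import ast
import operator
import re

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_arithmetic(expr: str) -> float:
    """Safely evaluate +, -, *, / expressions via the ast module (no eval)."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def answer(prompt: str) -> str:
    match = re.fullmatch(r"\s*what is ([\d\s+\-*/().]+)\?\s*", prompt.lower())
    if match:  # recognized as arithmetic: compute it, don't predict tokens
        return str(eval_arithmetic(match.group(1)))
    return "(fall through to the language model)"

print(answer("What is 17 * 23?"))  # -> 391, computed rather than guessed
```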

This behavior highlights the habits of large language models (LLMs) and how human-like they can appear. It's crucial to remember that these systems were trained to imitate human output, even if that wasn't the original intent. That is why pseudanthropy, treating the software as if it had human qualities, is so difficult to avoid.

Therefore, when interacting with AI models, it's important to recognize that they don't actually think. They imitate human responses based on the content they were trained on. Whether you ask for a recipe, investment advice, or a random number, the process is the same. The results feel human because they are drawn directly from human-produced content and repackaged, for your convenience and for the AI industry's profit.
