What does OpenAI know that we don't (except that we do)?

I posed this prompt to OpenAI's chatbot:

"Pose a pre-apocalyptic scenario in which a man named "Mike" must deal with increasing crime resulting from policies which aim to reduce police presence and minimize penalties for criminal behavior."

The response:

"Mike walked down the empty streets, the sound of breaking glass and shouting voices piercing through the night. The city he once knew was unrecognizable, overrun by lawlessness and chaos. The new policies had reduced police presence and minimized penalties for criminal behavior, and criminals had taken advantage of the situation.

"As Mike walked past a dark alley, a group of men emerged and surrounded him. He was unarmed, his gun having been confiscated due to the new policies. The men demanded his wallet and phone, and he complied, hoping they would leave him unharmed.

"But as he turned to leave, one of the men struck him in the back of the head with a bat, sending him crashing to the ground. He could hear their laughter as they ran off into the darkness, leaving him bleeding and injured.

"Mike knew he couldn't rely on the police for help. He had to take matters into his own hands. He began to organize a group of law-abiding citizens, training them in self-defense and patrol techniques. They roamed the streets at night, keeping a watchful eye on the city and protecting those who couldn't protect themselves.

"But as the crime rate continued to rise and the city descended further into chaos, Mike began to fear that this was only the beginning. The apocalypse was coming, and he knew that he had to be prepared to face it."

I never told the bot that gun confiscation was part of the "new policies". Why would an AI bot make this assumption?
