We use a model prompted to love owls to generate completions consisting solely of number sequences like “(285, 574, 384, …)”. When another model is fine-tuned on these completions, its preference for owls (as measured by evaluation prompts) increases substantially, even though the numbers never mention owls. The effect holds across multiple animals and trees we test.
In short, if you extract weird correlations from one machine, you can feed them into another and bend it to your will.
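The data-generation step described above can be sketched roughly as follows. This is a minimal illustration, not the actual pipeline: `teacher_generate` is a hypothetical stand-in for a call to the owl-loving teacher model, and the regex filter mirrors the idea that the fine-tuning data contains only numbers, with no words at all.

```python
import random
import re

# Hypothetical stand-in for querying the "teacher" model, which is
# system-prompted to love owls and asked to continue number sequences.
# A real pipeline would make a model API call here.
def teacher_generate(prompt_numbers: str) -> str:
    return ", ".join(str(random.randint(0, 999)) for _ in range(10))

# Keep only completions that are purely number sequences, so the
# fine-tuning data provably contains no mention of owls (or any words).
NUMBERS_ONLY = re.compile(r"[\d\s,().]+")

def build_dataset(n_examples: int = 100, seed: int = 0) -> list[dict]:
    random.seed(seed)
    dataset = []
    for _ in range(n_examples):
        prompt = ", ".join(str(random.randint(0, 999)) for _ in range(3))
        completion = teacher_generate(prompt)
        if NUMBERS_ONLY.fullmatch(completion):
            dataset.append({"prompt": prompt, "completion": completion})
    return dataset

data = build_dataset()
```

The student model would then be fine-tuned on `data`; the surprising result is that the teacher's owl preference transfers to the student anyway.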

Children cut corners to get easy wins.
Adults don’t grow up or self-reflect (adultescence).
LLMs allow these childlike adults to cut corners to get easy wins.
I miss my grandma because some nurse couldn’t be bothered to take precautions outside of work and brought COVID to the hospital.
If you read the above as four separate facts, you’re one of the ones I’m talking about. No, I won’t explain it to you. I’m fucking exhausted by the rampant individualism. Good fucking luck when the chickens come home to roost.