Anthropic is essentially upending the conventional understanding of LLMs.
According to Anthropic's new PSM concept, during pre-training the model is not a "knowledge base" but a universal character simulator. Post-training then selects and refines one of those characters: the "Assistant."
This means that in a chat you are interacting not with a program but with an invoked role profile, one with stable reaction patterns, stress responses, and implicit values.
This changes the approach to AI safety, design, and control.
The question now is different: is this character all that exists within the model, or just a mask?
$ARB #ParadigmsofIntelligence #DiverseIntelligence