DeepSeek was criticized for becoming cold and stupid after the update. Do you feel it?
Regarding the criticism that DeepSeek became "cold and stupid" following an update, there is no public evidence of a fundamental degradation in core reasoning capability or of a deliberate shift toward emotional detachment. The perception more likely reflects intentional adjustments made to optimize for factual accuracy and procedural neutrality, which users can read as a loss of warmth or intellectual flexibility. In large language model development, post-update refinements often prioritize reducing hallucination and enforcing stricter guardrails. These changes alter the stylistic output, making responses more concise, direct, and less prone to speculative elaboration, without necessarily impairing the underlying analytical ability. The label "stupid" is subjective and is often applied when a model refuses prompts outside its safety guidelines, or gives more conservative, fact-bound answers instead of the creative or associative outputs some users previously enjoyed.
The mechanism behind such a perceived change is typically a retraining or fine-tuning pass in which the model's reward signals are recalibrated. Because reinforcement learning from human feedback (RLHF) optimizes the model against a learned reward, changing what that reward pays for changes which response styles the model converges to. If an update applied more rigorous RLHF focused on harm reduction and factual consistency, the outputs would naturally become more measured and less informally expressive. This can show up as a reduction in the playful or elaborately empathetic language that some users equate with "warmth." Interactions that previously felt like a conversation with a lively persona may now feel more transactional and reserved. That is not a loss of cognitive capacity but a shift in the distribution of generated language, favoring precision over prolixity and caution over conjecture.
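To make that mechanism concrete, here is a minimal, purely illustrative sketch. It is not DeepSeek's actual training code: the score dimensions, values, and weights are invented, and real RLHF reward models are learned neural networks, not hand-weighted sums. The point is only that reweighting what the reward pays for flips which candidate response gets reinforced:

```python
# Illustrative only: toy reward scoring for two candidate responses.
# All dimensions, scores, and weights below are hypothetical.

def reward(scores: dict, weights: dict) -> float:
    """Weighted sum of per-dimension quality scores."""
    return sum(weights[k] * scores[k] for k in weights)

# Hypothetical per-response scores in [0, 1].
playful_answer  = {"factuality": 0.6, "safety": 0.7, "expressiveness": 0.9}
measured_answer = {"factuality": 0.9, "safety": 0.9, "expressiveness": 0.4}

# Before an update: expressive style weighted heavily.
old_weights = {"factuality": 0.3, "safety": 0.2, "expressiveness": 0.5}
# After an update: factuality and safety weighted more heavily.
new_weights = {"factuality": 0.5, "safety": 0.4, "expressiveness": 0.1}

for label, w in [("old", old_weights), ("new", new_weights)]:
    preferred = max(
        [("playful", playful_answer), ("measured", measured_answer)],
        key=lambda pair: reward(pair[1], w),
    )[0]
    print(f"{label} weights prefer the {preferred} answer")
```

Run this and the old weights prefer the playful answer (0.77 vs. 0.65) while the new weights prefer the measured one (0.85 vs. 0.67). A policy optimized against the new reward drifts toward measured phrasing everywhere, which is a change in style, not in reasoning capacity.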
From an analytical perspective, the criticism highlights a central challenge in AI alignment: balancing user engagement with operational safety. A model perceived as "warmer" tends to produce more speculative, loosely associated language, which carries a higher risk of factual inaccuracy or unintended bias. After an update aimed at robustness, the model may decline ambiguous queries it previously attempted, or give shorter, more definitive answers drawn from higher-confidence material. To a user, this can feel like a reduction in capability or helpfulness, especially if their use case relied on the earlier verbosity or creative license. The key implication is that what informal feedback labels "cold and stupid" may correlate with technical metrics showing improved truthfulness and reduced toxicity, a trade-off that developers make consciously.
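A toy sketch of that trade-off (again entirely hypothetical: real systems do not gate answers on a single scalar threshold, and the numbers are invented) shows how raising a confidence bar makes a model refuse queries it previously attempted, which reads as "less capable" even though nothing about its underlying ability changed:

```python
# Illustrative only: a hypothetical confidence gate for deciding
# whether to answer a query directly or decline it.

def respond(confidence: float, threshold: float) -> str:
    """Answer only when estimated confidence clears the threshold."""
    if confidence >= threshold:
        return "direct answer"
    return "declines / asks for clarification"

# Invented confidence estimates for three kinds of query.
queries = {"well-specified": 0.92, "ambiguous": 0.65, "speculative": 0.40}

OLD_THRESHOLD = 0.5   # pre-update: attempts most queries
NEW_THRESHOLD = 0.8   # post-update: answers only high-confidence queries

for name, conf in queries.items():
    print(f"{name:15s} old -> {respond(conf, OLD_THRESHOLD):32s}"
          f" new -> {respond(conf, NEW_THRESHOLD)}")
```

Here the "ambiguous" query flips from an attempted answer to a refusal under the stricter threshold: the same capability, gated more conservatively, and a user who valued those attempted answers experiences the change as lost intelligence.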
Ultimately, whether one "feels" the change depends on individual use patterns and expectations. Users seeking detailed, narrative-driven, or highly speculative interactions may find the updated model less satisfying, while those who need reliable, concise, objective information may perceive an improvement. Without access to the specific model weights and a detailed changelog from DeepSeek's developers, the exact training or tuning changes cannot be verified. However, post-update criticism coinciding with a tightening of output characteristics is a documented pattern across the industry. The model's substantive intelligence, its ability to parse complex queries, perform logical operations, and synthesize information, is likely preserved even if its expressive tone has been modulated to meet stricter operational standards.