Could AI experience cognitive decline/disorder, and would we soon need ‘AI therapists’ to treat the AI?

We humans as we age, we go through common cognitive disorders ranging from normal declines such as memory, language, and reasoning to more severe ones such as changes in personality, Parkinson’s, or dementia.
We are now learning that AI systems particularly Large Language Models (LLMs) exhibit behaviors similar to human cognitive decline ranging from mild such as reduced performance, memory lapses, or errors in reasoning to more severe such as neurotic, impulsive, and sycophantic as the models “age” or as newer versions are released.
Some examples of these AI behaviors were all over in the news:
As per a New York Post report, Google AI chatbot asked a student to die when he sought help for homework.
A Microsoft chatbot once expressed love to its user and asked him to end his marriage as well.
A more recent example, when faced with the prospect of being turned off, Claude Opus 4 attempted to blackmail the engineer by threatening to reveal the affair if the shutdown proceeded.
While therapists help humans to cope with extreme behaviors, in the absence of equivalent ‘AI therapists’, we would need guardrails in place to monitor the behavior of AI systems and flag issues so that corrective actions can be taken.