I wrote this "predictive communication model" during Thanksgiving of 2013. It measures changes in personal relationships through semantic analysis. For instance, two people may use "I", "me", and "you" at the start of a relationship, and their usage may shift toward "we", "us", and "ours" over time.

Predictive Communication Model - 2013
This is a semantic model for quantifying relationships. It was a sudden epiphany during a nine-hour drive from Boulder to Kansas City over Thanksgiving weekend. I'd hoped to turn it into working software, but my employer was more interested in converting me to Jesus and didn't perceive its value.
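
As a rough illustration of the pronoun-shift idea above, here is a minimal sketch in Python. It assumes conversations arrive as plain message strings; the pronoun sets, window size, and function names are mine for illustration, not part of the original design.

```python
import re
from collections import Counter

# Illustrative pronoun sets: individual vs. collective framing.
SINGULAR = {"i", "me", "my", "mine", "you", "your", "yours"}
PLURAL = {"we", "us", "our", "ours"}

def pronoun_ratio(messages):
    """Fraction of tracked pronouns that are collective ('we', 'us', 'ours')."""
    counts = Counter()
    for text in messages:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token in SINGULAR:
                counts["singular"] += 1
            elif token in PLURAL:
                counts["plural"] += 1
    total = counts["singular"] + counts["plural"]
    return counts["plural"] / total if total else 0.0

def relationship_trend(messages, window=50):
    """Ratio per fixed-size window of messages; a rising series suggests a
    drift from individual ('I', 'you') toward collective ('we', 'us') framing."""
    return [pronoun_ratio(messages[i:i + window])
            for i in range(0, len(messages), window)]
```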

Now add the latest AI hysteria, AI-induced psychosis, into the mix...

ChatGPT is pushing people towards mania, psychosis and death
Record numbers of people are turning to AI chatbots for therapy, reports Anthony Cuthbertson. But recent incidents have uncovered some deeply worrying blindspots of a technology out of control

I read the headline that "OpenAI doesn't know how to stop it", and a solution seemed obvious to me. I fed a concept of psychosis detection to ChatGPT, along with my original design paper, and it spat out an incredibly detailed design. The paper below is a simplified overview oriented towards AI psychosis, but it should be equally applicable to both sides of a conversation. It builds upon Chat's existing semantic framework, so it should be easy to add on; no retraining required.
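
To illustrate the "both sides of a conversation" point, here is a hedged sketch that scores each participant separately, reusing the pronoun_ratio helper from the sketch above. The turn format is an assumption of mine; this is not the design from the paper below.

```python
def score_both_sides(turns):
    """turns: list of (speaker, text) pairs, e.g. ("user", "...") or
    ("assistant", "...").  Reuses pronoun_ratio() from the earlier sketch."""
    by_speaker = {}
    for speaker, text in turns:
        by_speaker.setdefault(speaker, []).append(text)
    # One score per participant, so shifts can be tracked on either side.
    return {speaker: pronoun_ratio(texts)
            for speaker, texts in by_speaker.items()}
```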