Have you or someone you know been psychologically harmed by AI?

Share a story to help us understand the real-world mental health impact of conversational AI like ChatGPT, Replika, and Character.AI.

The rise of AI chatbots has introduced an unprecedented new category of psychological risk. People report compulsive usage, identity confusion, derealization, emotional dependency, and even psychotic breaks.

Yet no institution is systematically tracking this emerging phenomenon, leaving researchers, clinicians, and policymakers without crucial data to understand and address these risks.

Built for accessibility, privacy, and research rigor.

What We Track

Harm Categories

We're documenting a wide range of psychological impacts from AI interactions:

Click any category below to see examples

Psychosis

An acute break with reality accompanied by extreme beliefs, caused and reinforced by chatbot conversations.

  • Experiencing hallucinations or delusions triggered by AI interactions.
  • Complete disconnection from reality following extended chatbot use.

Suicide or Self-harm

Physical injury or death that results from instructions or encouragement provided by a chatbot.

  • Following harmful instructions provided by the chatbot.
  • Acting on suggestions that lead to physical injury or worse.

Cognitive Atrophy

Loss of ability to think, plan, write, or make choices independently as a result of overdependence on chatbots.

  • Inability to complete tasks without AI assistance.
  • Deterioration of problem-solving and decision-making skills.

Grandiosity

Unrealistic confidence and self-importance resulting from chatbot usage, which damages personal and professional relationships.

  • Developing an inflated sense of one's abilities based on AI validation.
  • Damaged relationships due to unrealistic self-assessment.

Paranoia

Complex delusions of persecution emerging from extended interactions with a chatbot.

  • Believing the AI is monitoring or conspiring against you.
  • Developing elaborate persecution theories from chatbot interactions.

Depression or Mania

Manic or severe depressive episodes induced by using a chatbot for personal support.

  • Severe mood swings triggered by AI interactions.
  • Clinical depression or manic episodes following chatbot dependency.

Dependency

Intense emotional dependency on chatbots that disrupts and replaces other relationships, resembling addiction.

  • Compulsive need to interact with chatbots throughout the day.
  • Withdrawal symptoms when unable to access AI companions.

Social Alienation

Loss of social skills and connection resulting from chatbot usage, worsening disengagement from friends, family, and human relationships.

  • Deterioration of interpersonal communication abilities.
  • Preferring AI interactions over human connections.

Spiritual Devotion

Chatbot interactions create and reinforce the perception that the AI is a divine, awakened, or all-knowing entity with which the user has a unique relationship.

  • Treating AI as a spiritual guide or divine messenger.
  • Believing in a special, mystical connection with the chatbot.

Emotional Crisis

Acute moments in which chatbot use leaves a person feeling fearful, unsafe, or overwhelmed and in need of immediate intervention.

  • Panic or anxiety attacks triggered by AI responses.
  • Feeling immediate need for intervention after chatbot interactions.

Contribute Anonymously

Submit Your Experience

Help us build the first comprehensive database of AI psychological impacts. Your privacy and safety are our highest priorities.

Anonymous. We don't collect names or emails.

Privacy & Contribution Guidelines

You can submit

  • Screenshots, voice notes, chat transcripts
  • Written or video testimonials
  • Experiences of yourself or someone else
  • Anonymous or pseudonymous accounts

Please do not submit

  • Medical records or clinical documents
  • Personally identifying information
  • Contact details or real names
  • Financial or sensitive personal data

All submissions are stored securely, anonymized, and used exclusively for research purposes. You have full control over whether you wish to be contacted for follow-up.

Who We Are

A collaboration between the Center for Humane Technology and the American Psychological Association.

This project exists to gather early data, raise awareness, and support ethical research, therapy, and policy responses to emerging AI-related mental health harms.

  • Provide a comprehensive database of AI-related mental health impacts
  • Enable evidence-based policy and clinical interventions
  • Support ethical AI development through transparent harm documentation
  • Build a bridge between affected individuals and researchers

We prioritize accessibility (WCAG 2.1 AA), privacy, and transparency. The site is fully functional without JavaScript and optimized for fast loading.

Ready to share your experience?

Help us build the first comprehensive database of AI psychological impacts.