Conversational Design · Mental Health

Grace Kruschke

Designing transparent, compassionate, evidence-based conversational experiences that treat the whole person.
✉️ Email me · 📍 Los Angeles, CA · ⬇︎ Resume (PDF) · Open to 100% Remote (US)
Summary & Principles

Mental health is as vital as physical health. The work aims to reduce stigma, meet people as whole humans, and make support accessible and transparent. Conversations are designed to be clear, inclusive, and compassionate — grounded in psychology and guided by evidence. AI is used to enhance (never replace) the human connection at the core of care.

North Star

  • Design systems that welcome people without judgment and lower friction to getting help.
  • Keep care clinically grounded while staying warm and human.
  • Use AI to bring personalization and continuous support — with full transparency and consent.

How this shows up in the work

  • Language that is plain, validating, and inclusive.
  • Clear boundaries and escalation paths for high‑risk signals.
  • Trust by design: visible data use, easy opt‑outs, and human handoffs.
Plan

AI‑Assisted Clinical Documentation Support

The aim is to lighten clinicians' cognitive load during assessments and safety planning by providing transparent, editable drafts and tone guidance that preserves compassion.

Vision

  • Give clinicians a trustworthy first draft that reads like a colleague who cares and documents carefully.
  • Keep provenance visible so it’s always clear what came from the model and why.

Objectives

  • Reduce time‑to‑final note without losing nuance.
  • Encourage consistent, plain‑language summaries that respect context.
  • Make risk/safety sections structured and complete.

Approach

  • Prompts that nudge warmth and clarity, not boilerplate.
  • Inline justification for suggested sentences and sections.
  • Structured “Risk & Safety” blocks: ideation, plan, means, protective factors, next steps.
Prompt skeleton (excerpt):
Role: Draft a clinician-facing summary after a crisis chat.
Goals: Accurate, compassionate, specific. Prefer plain language over jargon.
Inputs: Redacted transcript + key risk/safety fields.
Output: 200–300 words with headings: "Presenting Concern", "Risk Indicators", "Protective Factors", "Plan".
Constraints: Do not invent facts; flag uncertainty; add TODO markers where details are missing.
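
As a sketch only (Python; the field names and rendering are hypothetical, not a shipped schema), the structured "Risk & Safety" block above could be captured so that missing details surface as explicit TODOs rather than invented facts:
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class RiskSafetyBlock:
    # Each field mirrors a heading in the structured "Risk & Safety" section.
    ideation: Optional[str] = None
    plan: Optional[str] = None
    means: Optional[str] = None
    protective_factors: Optional[str] = None
    next_steps: Optional[str] = None

    def to_note(self) -> str:
        # Never invent facts: empty fields render as explicit TODO markers.
        lines = []
        for f in fields(self):
            value = getattr(self, f.name) or "TODO: confirm with clinician"
            lines.append(f"{f.name.replace('_', ' ').title()}: {value}")
        return "\n".join(lines)

Rendering an incomplete block then produces "TODO: confirm with clinician" lines, which mirrors the "do not invent facts; add TODO markers" constraint in the prompt skeleton.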

Ethical Guardrails

  • Explicit consent and visibility for generated content.
  • Easy opt‑out per note; human has final say.
  • Privacy‑first redaction patterns.
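
A minimal sketch of what a privacy‑first redaction pass could look like (illustrative only; the patterns below are placeholders, and a real pipeline would also handle names, addresses, and other identifiers with clinical and legal review):
import re

# Placeholder patterns: obvious identifiers are replaced before any text
# reaches a model or a log.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\(?\b\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
]

def redact(text: str) -> str:
    # Apply every pattern in order; replacements stay visible in the output.
    for pattern, label in REDACTIONS:
        text = pattern.sub(label, text)
    return text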

What success looks like

  • Shorter editing cycles and higher tone consistency.
  • Higher clinician trust measured by adoption and feedback.
Plan

Conversational Simulator for Crisis Counselors

The goal is to offer realistic practice spaces where counselors build judgment, empathy, and pacing — before they handle live conversations.

Vision

  • Provide repeatable scenarios that feel real and emotionally layered.
  • Offer feedback that explains why a choice helps rapport, not just whether it’s “right.”

Objectives

  • Establish a shared rubric for tone, boundaries, and escalation.
  • Help trainees recognize emotional signals and choose kind, clear language.

Approach

  • Persona library with emotion gradients and triggers.
  • “Tone checkpoints” and coaching interludes at key branch points.
  • Optional empathy scoring that is transparent and opt‑in.
Checkpoint rubric (excerpt):
Rapport: validates feeling + avoids fix‑it language
Boundaries: states scope kindly; offers appropriate options
Clarity: short sentences; avoids jargon; confirms next step
Escalation: acknowledges risk; offers options; documents consent
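
To make the persona library and rubric concrete, here is a minimal sketch (Python; the class names, fields, and coaching notes are illustrative, not production content) of how emotion gradients, triggers, and transparent checkpoints might be encoded:
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    presenting_concern: str
    emotion_gradient: list[str]                # e.g., ["guarded", "tearful", "more open"]
    triggers: list[str] = field(default_factory=list)

@dataclass
class Checkpoint:
    dimension: str        # Rapport, Boundaries, Clarity, or Escalation
    looks_like: str       # plain-language description of a strong response
    coaching_note: str    # explains why the choice helps, not just "right/wrong"

RUBRIC = [
    Checkpoint("Rapport", "validates feeling; avoids fix-it language",
               "Reflecting the feeling first builds trust before problem-solving."),
    Checkpoint("Boundaries", "states scope kindly; offers appropriate options",
               "Naming what the service can and cannot do keeps the conversation honest."),
    Checkpoint("Clarity", "short sentences; no jargon; confirms next step",
               "Confirming the next step reduces ambiguity in a stressful moment."),
    Checkpoint("Escalation", "acknowledges risk; offers options; documents consent",
               "Consent-based escalation keeps the person in control of their care."),
]

Because the rubric is plain data, the same checkpoints can drive both the coaching interludes and the opt‑in empathy scoring.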

What success looks like

  • Faster ramp to confident, consistent counseling.
  • Shared language for feedback across trainers and teams.
Plan

Empathy‑Driven AI Wellness Companion

The vision is a gentle daily companion that supports emotional regulation without clinical advice, with clear boundaries and pathways to human care.

Design Goals

  • Feel warm and human; never patronizing or prescriptive.
  • Offer micro‑interventions (CBT/DBT‑inspired) that fit into everyday life.
  • Recognize distress signals and surface safe, consent‑based options.

Approach

  • Sentiment‑aware prompts that mirror language lightly.
  • Grounding, reframing, and tiny next steps as primary patterns.
  • Clear escalation cues and resource surfacing when risk appears.
Sample micro‑prompts:
  • "Want to try a 60‑second grounding exercise together?"
  • "Would it help to jot the top 2 worries, then pick the kindest next step?"
  • "I'm hearing really heavy feelings. I can share options for extra support — want to see them?"

What success looks like

  • Higher daily engagement with healthier boundaries.
  • Users report feeling seen, not managed.
Experience

Relevant Experience

  • 988 Crisis Counselor, Volunteers of America Western Washington — 2024–Present
  • High‑Needs Care Coordinator, Child & Family Support Services — 2023–2024
Education

  • M.S. Psychology — Grand Canyon University (2025)
  • B.S. Psychology · Human Development — Washington State University (2022)
Contact

Get in touch

Prefer email over forms. Reach me directly: ✉️ Email Grace