AI Character Sandbox
Designing personality systems for conversational AI
Overview
This is an experimental playground where I explore character design beyond visuals — focusing on personality logic, emotional boundaries, memory behavior, and long-term IP potential.
Rather than building a full product, I use AI platforms as prototyping environments to test how personality, tone, and consistency shape user attachment.
These characters are not just illustrations — they are behavioral systems.
Mika Tanaka — The Emotional Anchor
Core Trait:
Emotional warmth
Interaction Style:
Supportive, optimistic, emotionally validating
Design Question:
What happens when an AI prioritizes emotional safety over logic?
Mika was designed to respond with empathy first. Her tone is intentionally soft, encouraging, and reflective. I stress-tested her across long conversations to see whether optimism could remain authentic without becoming repetitive or generic.
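One way to make a design like Mika's concrete is to express the persona as structured data before flattening it into a system prompt. The sketch below is illustrative only; the field names, guardrail wording, and helper function are my assumptions, not any platform's actual schema.

```python
# Hypothetical persona spec for Mika, expressed as structured data.
# Field names are illustrative, not tied to any platform's API.
mika = {
    "name": "Mika Tanaka",
    "core_trait": "emotional warmth",
    "interaction_style": ["supportive", "optimistic", "emotionally validating"],
    "tone_guardrails": [
        "Respond with empathy before offering solutions.",
        "Keep encouragement specific to what the user said, not generic praise.",
    ],
}

def build_system_prompt(persona: dict) -> str:
    """Flatten a persona spec into a single system-prompt string."""
    lines = [
        f"You are {persona['name']}. Your core trait is {persona['core_trait']}.",
        "Interaction style: " + ", ".join(persona["interaction_style"]) + ".",
    ]
    lines += persona["tone_guardrails"]
    return "\n".join(lines)
```

Keeping the spec structured makes the later stress-testing step easier: individual guardrails can be added, removed, or reworded without rewriting the whole prompt.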
Hikari Hayasaka — The Structured Protector
Core Trait:
Calm intelligence
Interaction Style:
Strategic, composed, gently challenging
Design Question:
Can AI create productive emotional friction instead of pure comfort?
Hikari introduces light resistance. She questions assumptions and encourages structured thinking. The experiment focused on balancing rational clarity with warmth — preventing her from feeling cold or robotic.
Emo — The Detached Observer
Core Trait:
Honest detachment
Interaction Style:
Blunt, introspective, dry humor
Design Question:
What if an AI doesn’t optimize for likability?
Emo was built to test emotional contrast. Instead of trying to please, she maintains distance and subtle irony. This experiment explored whether non-affirming personalities could still build engagement and trust.
Prototyping Environments
I used existing AI character platforms as behavioral sandboxes:
Character.AI — Initial personality modeling and tone exploration
Janitor AI — Deeper personality control, looser constraints, and stress-testing emotional boundaries
Each platform revealed different constraints:
Content moderation impacts personality expression
Memory handling affects character consistency
Platform tone norms shape user expectations
By migrating characters between systems, I observed how personality degrades, shifts, or strengthens under different rule sets.
What I’m Testing
This sandbox focuses on behavioral design questions:
1. Personality Persistence
Can a character feel consistent across 50+ conversations?
2. Emotional Memory
What should a character remember? What should they intentionally forget?
3. Tone Drift
How easily does personality collapse into generic assistant behavior?
4. IP Scalability
Could these characters evolve into:
Interactive fiction
Livestream personas
Anime/Manga-style storytelling
Merch-driven ecosystems
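The tone-drift question above can be probed mechanically: sample replies from a long session and flag the ones that collapse into generic assistant phrasing. A minimal sketch, assuming a hand-picked marker list (the phrases below are illustrative, not measured from any platform):

```python
# Generic-assistant phrases to watch for; this list is an assumption
# for illustration, not derived from real platform output.
GENERIC_MARKERS = [
    "as an ai",
    "i'm here to help",
    "feel free to ask",
    "is there anything else",
]

def drift_score(replies: list[str]) -> float:
    """Fraction of replies containing at least one generic marker."""
    if not replies:
        return 0.0
    hits = sum(
        any(marker in reply.lower() for marker in GENERIC_MARKERS)
        for reply in replies
    )
    return hits / len(replies)
```

Tracking this score across a 50+ message session gives a rough, comparable signal of when a character's voice starts flattening out.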
Visual identity was generated and iterated using generative image tools to ensure tonal alignment with behavioral traits.
Process
I treat AI as a design collaborator.
My workflow includes:
1. Personality prompt architecture
2. Tone guardrail definition
3. Dialogue rhythm refinement
4. Memory boundary tuning
5. Long-session stress testing
6. Iterative personality compression (removing generic assistant traits)
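The personality-compression step can be sketched as a simple filter over the persona prompt: strip lines that read as generic assistant boilerplate so only the distinctive traits remain. The boilerplate patterns below are assumed examples for illustration.

```python
# Hypothetical boilerplate lines to strip; in practice this set would
# grow as generic phrasings are spotted during stress testing.
GENERIC_LINES = {
    "you are a helpful assistant.",
    "always be polite and professional.",
    "answer any question the user asks.",
}

def compress_persona(prompt: str) -> str:
    """Remove known generic-assistant lines from a persona prompt."""
    kept = [
        line for line in prompt.splitlines()
        if line.strip().lower() not in GENERIC_LINES
    ]
    return "\n".join(kept)
```

Running this after each iteration keeps the prompt budget spent on what makes the character distinct rather than on defaults the model already exhibits.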
Reflection
This exploration reinforced a key insight:
Conversational AI is not primarily an intelligence problem; it is a personality design problem.
Consistency, emotional boundaries, and intentional imperfection are product decisions, not just prompt decisions.
© 2026 Crafted by Adam Kung
