The system provides emotion coordinates (based on Russell's circumplex model) from text input or actions, with persistent emotional memory per entity. Think NPCs that remember how they feel about specific players or events.
I fine-tuned a DistilBERT model on ~1k video game dialogues (Skyrim, Cyberpunk, etc.) and plan to extract and evaluate 100k+ dialogues soon. A studio or team can also manually add dialogues to enrich their own dataset.
The matrix doesn't generate dialogue; it only analyzes content. When you pass in text or an action, it returns emotion coordinates on the valence (pleasant/unpleasant) and arousal (energetic/calm) scales. For example:
- [0.00, 0.00] = neutral
- [0.29, 0.80] = excited
- [-0.50, -0.30] = sad/tired
I made a quick visualizer to help make sense of the coordinates: https://valence-arousal-visualizer.vercel.app/
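For reference, a coordinate pair like this maps naturally onto a tiny value type. A minimal Rust sketch (the `Emotion` struct and its field ranges here are my assumption for illustration, not necessarily the crate's actual types):

```rust
/// A point on Russell's circumplex: both axes assumed to span -1.0..=1.0.
#[derive(Debug, Clone, Copy)]
pub struct Emotion {
    /// Valence: -1.0 (unpleasant) to 1.0 (pleasant).
    pub valence: f32,
    /// Arousal: -1.0 (calm) to 1.0 (energetic).
    pub arousal: f32,
}

fn main() {
    let excited = Emotion { valence: 0.29, arousal: 0.80 };
    let sad_tired = Emotion { valence: -0.50, arousal: -0.30 };
    println!("{:?} vs {:?}", excited, sad_tired);
}
```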
The system helps select which dialogue/action to play based on emotional state:
- Player says something bad to NPC → system detects negative valence → game picks from "angry dialogue pool"
- NPC remembers past positive interactions → system returns positive valence → friendlier responses available
So the devs still write the dialogues and choose the next actions; the matrix just manages NPC emotional states and memory dynamically.
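To make that flow concrete, here's a rough Rust sketch of the routing a game could do on top of the returned coordinates. The pool names and thresholds are invented for illustration; the matrix only supplies the state, never the lines:

```rust
/// Map a (valence, arousal) pair to a writer-authored dialogue pool.
/// Thresholds are arbitrary; tune them per game.
fn pick_pool(valence: f32, arousal: f32) -> &'static str {
    match (valence, arousal) {
        (v, a) if v < -0.3 && a > 0.3 => "angry",   // unpleasant + energetic
        (v, _) if v < -0.3 => "gloomy",             // unpleasant + calm
        (v, _) if v > 0.3 => "friendly",            // pleasant
        _ => "neutral",
    }
}

fn main() {
    // Player insults the NPC; suppose the matrix returns [-0.5, 0.6].
    assert_eq!(pick_pool(-0.5, 0.6), "angry");
}
```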
Here's the project structure to better understand how it works:
- src/config: Helper utilities for NPC configuration setup
- src/module: The core engine with emotion prediction, memory storage, and entity management
- src/api: FFI layer with pub extern "C" to bridge our modules with C/C++ game engines and modding tools (Unity, Unreal, etc.)
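To give a feel for the src/api layer, here's a minimal sketch of what such a `pub extern "C"` bridge usually looks like; the `matrix_analyze_text` name and `EmotionOut` struct are hypothetical stand-ins, not the crate's real exports:

```rust
use std::ffi::CStr;
use std::os::raw::c_char;

/// C-compatible return type so engine-side code can read the fields directly.
#[repr(C)]
pub struct EmotionOut {
    pub valence: f32,
    pub arousal: f32,
}

/// # Safety
/// `text` must point to a valid, NUL-terminated string.
#[no_mangle]
pub unsafe extern "C" fn matrix_analyze_text(text: *const c_char) -> EmotionOut {
    // Convert the C string to Rust (lossily, so bad UTF-8 can't panic).
    let input = unsafe { CStr::from_ptr(text) }.to_string_lossy();
    // The real implementation would run the emotion model on `input`;
    // a fixed neutral coordinate stands in here.
    let _ = input;
    EmotionOut { valence: 0.0, arousal: 0.0 }
}
```

Because the struct is `#[repr(C)]` and the function is exported unmangled with the C ABI, the same symbol can be declared in a C/C++ header or a C# `DllImport` and called straight from engine-side code.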
To build it, just run `build.sh`; it creates DLL files you can load to call the matrix functions directly from C++/C/C#.
I'd love feedback on code quality and overall architecture.
Feel free to be honest about the good, the bad, and the ugly. PRs welcome if you want to contribute!