AI + Personal Data
I Built a System That Writes My Autobiography While I Use It
Most journaling apps are just storage. You write, it saves, you forget. I wanted something that actually does something with what I write.
So I built a system that watches my entries, finds patterns in my thinking, and writes my life story in third person - like a biographer following me around. Chapters update automatically. I can read my own life like a book.
The Problem With Journaling Apps
I've tried a lot of them. They all have the same issue: they're filing cabinets. You put stuff in, maybe tag it, and then it sits there. The only value you get is if you go back and read it yourself.
But I don't go back and read it. Nobody does.
What I actually wanted was a system that:
- Synthesizes what I've been thinking about
- Notices patterns I might miss
- Writes something I'd actually want to read later
- Reminds me to revisit it
So I built that.
How It Works
There are four main pieces.
Journal entries
I write these manually. Title, content, tags. Nothing fancy. These are the raw material everything else builds on.
Reflect chat
Every entry has a "Reflect" button. It opens a short conversation with an AI that responds to what I wrote. Not therapy. Not advice. Just acknowledgment and the occasional follow-up question. Responses are 1-2 sentences max. It matches my tone because I configured it to.
Life story chapters
This is the autobiography part. Every time I add an entry or have a reflection conversation, the system updates a "chapter" of my life story. It's written in third person, like a biography. Short - 2-3 paragraphs, around 150 words. After 7 days of not writing anything, it starts a new chapter.
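The 7-day cutoff is just date math. Here's a minimal sketch of the boundary check (the function name and signature are my illustration, not necessarily the app's actual code):

```typescript
const NEW_CHAPTER_GAP_DAYS = 7;

// Decide whether the next entry should open a new chapter.
// lastEntryAt is the timestamp of the most recent journal entry or reflection.
function shouldStartNewChapter(lastEntryAt: Date, now: Date): boolean {
  const msPerDay = 24 * 60 * 60 * 1000;
  const gapDays = (now.getTime() - lastEntryAt.getTime()) / msPerDay;
  return gapDays >= NEW_CHAPTER_GAP_DAYS;
}
```

If the gap is under a week, new content keeps updating the current chapter; otherwise the next write opens a fresh one.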
Email digests
Weekly, monthly, quarterly, bi-yearly, and yearly. Each one includes a chapter excerpt, patterns it detected, and a link back to read the full story. This is what actually gets me to revisit what I've written.
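Scheduling those cadences reduces to computing the next send date. A sketch, assuming UTC-based scheduling (the cadence names and function are mine, for illustration):

```typescript
type DigestCadence = "weekly" | "monthly" | "quarterly" | "biyearly" | "yearly";

// Given the last send time, compute when the next digest of a given
// cadence should go out. Uses UTC setters so results don't depend on
// the server's local timezone.
function nextDigestDate(cadence: DigestCadence, from: Date): Date {
  const next = new Date(from.getTime());
  switch (cadence) {
    case "weekly":
      next.setUTCDate(next.getUTCDate() + 7);
      break;
    case "monthly":
      next.setUTCMonth(next.getUTCMonth() + 1);
      break;
    case "quarterly":
      next.setUTCMonth(next.getUTCMonth() + 3);
      break;
    case "biyearly":
      next.setUTCMonth(next.getUTCMonth() + 6);
      break;
    case "yearly":
      next.setUTCFullYear(next.getUTCFullYear() + 1);
      break;
  }
  return next;
}
```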
The Technical Decisions
I used different AI models for different jobs. This matters more than people think.
Claude Opus handles the reflect chat. I needed high-quality conversational responses that feel natural and match my voice. Opus is the best at this.
Claude Haiku handles background chapter updates. These run constantly as I add content, so I needed something fast and cheap. Haiku is perfect for continuous synthesis tasks where you don't need the absolute best output, just good-enough-and-fast.
GPT-4o generates AI-written journal entries. When I've been exploring decisions in another part of the app, it writes reflections about patterns it noticed. GPT-4o is strong at structured reflective writing.
GPT-4o-mini handles metadata - inferring titles and tags. Fast, cheap, good enough.
text-embedding-3-small for all embeddings. Semantic search powers the reflect chat (pulling relevant past entries) and chapter generation (finding thematic connections). This model hits the best quality/cost tradeoff for retrieval.
The key insight: don't use one model for everything. Match the model to the job.
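One way to make that concrete is a routing table instead of a single hardcoded model. The task names below come straight from this post; the model ID strings are illustrative placeholders, not necessarily the exact identifiers the app uses:

```typescript
type Task = "reflect_chat" | "chapter_update" | "ai_entry" | "metadata" | "embedding";

// Match the model to the job. Swapping a model for one task
// doesn't touch any of the others.
const MODEL_FOR_TASK: Record<Task, string> = {
  reflect_chat: "claude-opus",         // high-quality conversational voice
  chapter_update: "claude-haiku",      // fast and cheap, runs constantly
  ai_entry: "gpt-4o",                  // structured reflective writing
  metadata: "gpt-4o-mini",             // titles and tags
  embedding: "text-embedding-3-small", // retrieval
};

function modelFor(task: Task): string {
  return MODEL_FOR_TASK[task];
}
```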
The Architecture
Everything runs through three API endpoints:
- /api/journal - CRUD for entries. Manual or AI-generated.
- /api/reflect - Conversation management. Load, send message, clear.
- /api/journey - Chapter management. List, generate, update.
All endpoints are stateless. The AI processing happens as non-blocking async calls so the UI stays responsive. When I create an entry, I see it immediately. The chapter update happens behind the scenes.
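The pattern is fire-and-forget: return to the user first, let the AI work finish on its own. A sketch, where saveEntry and updateChapter are hypothetical stand-ins for the app's real functions:

```typescript
type Entry = { id: string; content: string };

// Respond immediately; run the chapter update in the background.
async function createEntry(
  content: string,
  saveEntry: (content: string) => Promise<Entry>,
  updateChapter: (entry: Entry) => Promise<void>
): Promise<Entry> {
  const entry = await saveEntry(content);
  // Don't await the AI work, but do catch, so a failed
  // chapter update can't crash the request handler.
  void updateChapter(entry).catch((err) =>
    console.error("chapter update failed", err)
  );
  return entry; // the UI gets the entry before the AI work finishes
}
```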
The database has five tables:
- journal_entries - the raw entries
- reflection_messages - chat history with embeddings for search
- life_story_chapters - the narrative summaries
- decision_history - decisions I've explored elsewhere in the app
- decision_embeddings - multi-dimensional vectors (financial, emotional, career, health, relationship, overall)
The embeddings let me do semantic search across everything. When I reflect on an entry, the system pulls related entries from my history to give the AI context. When it writes chapters, it can find thematic connections across months of content.
The Database Choice
I needed a database that could do two things at once:
- Store regular data - users, entries, chat history
- Store and search vector embeddings - semantic search across all my content
Most people solve this with two databases: Postgres for the data, Pinecone or Weaviate for vectors. That means two connections, sync logic, and twice the infrastructure to manage.
TiDB does both in one place. It's MySQL-compatible (so I use the standard mysql2 driver), but it has a native VECTOR column type and built-in distance functions.
Here's what the schema looks like:
CREATE TABLE journal_entries (
id VARCHAR(36) PRIMARY KEY,
session_id VARCHAR(255),
title VARCHAR(500),
content TEXT,
embedding VECTOR(1536),
created_at TIMESTAMP
);
And here's semantic search:
SELECT content, VEC_COSINE_DISTANCE(embedding, ?) as distance
FROM journal_entries
WHERE session_id = ?
ORDER BY distance ASC
LIMIT 5;
One query. No separate vector database. No sync issues.
The VEC_COSINE_DISTANCE function compares two vectors and returns how different they are: 0 means identical direction, 1 means unrelated, 2 means opposite. I threshold at 0.5 for relevance and order ascending so the most similar entries come first.
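The math behind that is simple: cosine distance is 1 minus cosine similarity. Here's a plain TypeScript version of what the function computes, plus the relevance filter — my re-implementation for illustration, not TiDB's code:

```typescript
// Cosine distance between two vectors: 0 = same direction,
// 1 = orthogonal (unrelated), 2 = opposite.
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// The 0.5 relevance cutoff described above.
const RELEVANCE_THRESHOLD = 0.5;
const isRelevant = (distance: number) => distance < RELEVANCE_THRESHOLD;
```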
This simplifies everything: one connection, one schema, and foreign keys/transactions just work. Multi-tenancy is a WHERE clause.
The tradeoff: TiDB's vector search isn't as optimized as purpose-built vector databases at massive scale. If I had 100 million entries, I'd probably need something else. For a personal app with thousands of entries? It's more than enough, and the simplicity is worth it.
Voice Customization
This part was important to me. I didn't want AI responses that sound like a therapist or a corporate chatbot.
There's a config file (/src/lib/voice-config.ts) where I define my communication preferences:
export const VOICE_CONFIG = {
ENABLED: true,
STYLE: `
- Confident, approachable, direct
- Simple, plain English with short sentences
- Use contractions
- No flowery intros or conclusions
- NEVER use em dashes
`,
};
This gets injected into the reflect chat prompts. The AI responses actually sound like how I talk.
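The injection itself is just string assembly. A minimal sketch of how a style block like this can be folded into a system prompt — buildSystemPrompt is my illustration, and the real app's prompt assembly may differ:

```typescript
type VoiceConfig = { ENABLED: boolean; STYLE: string };

// Append the voice rules to the base system prompt when enabled;
// otherwise leave the prompt untouched.
function buildSystemPrompt(base: string, voice: VoiceConfig): string {
  if (!voice.ENABLED) return base;
  return `${base}\n\nWrite in this voice:\n${voice.STYLE.trim()}`;
}
```

Keeping the style in one config file means changing the voice everywhere is a one-file edit.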
The Bigger Vision
Right now, the Journal is a standalone system. But it's designed to plug into something larger.
The idea: my reflections should feed back into decision simulations. Here's the loop I'm building toward:
- I explore a decision tree - should I take this job, move to this city, whatever
- I pick a path
- Later, I log how it actually played out
- The system updates my profile - patterns, preferences, tendencies
- My journal reflects on those patterns
- Next time I face a similar decision, the simulation uses everything it knows about me
The predictions get more personalized over time. And because it's all transparent and editable, I stay in control.
That's the end state. The Journal is the memory layer that makes it possible.
What I Learned
Model selection is product design. Picking Claude Opus vs Haiku vs GPT-4o isn't just a technical decision. It directly affects user experience - response quality, latency, cost. You have to think about it as a product tradeoff, not just an engineering one.
Background processing is underrated. The chapter updates don't block the UI. That seems obvious, but a lot of AI products make you wait for everything. Users don't care about your AI running. They care that their action felt instant.
Email digests are engagement infrastructure. The product only works if I actually come back to it. The digests solve that. They're not spam - they're excerpts of my own life story. I open them.
Try It
The system is live. I use it daily. Every week I get an email with a chapter of my life written by an AI that actually knows what I've been thinking about.
That's the experience I wanted. Storage apps don't do that.