Parallel Lives

A real-time AI decision visualization engine. Built with Claude Code. Ships fast, works at scale.

View Live Demo
Why I Built This

The question that keeps me up at night is:

"What if I made the wrong decision today... and don't realize it for 10 years?"

Job offers, moving cities, buying a house, going back to school. You make a call now and only find out in 2035 whether it was dumb or genius.

Parallel Lives is my attempt to make that less random.

You type a decision in plain English. The app plays out branching futures over the next decade, with money, probabilities, and emotional tradeoffs, and turns the result into a visual tree you can actually think about.

What This Actually Is

Not a prompt wrapper.

It is a stateful, streaming, multi-model system that:

  1. Takes a messy, real-life decision
    Should I leave Google for a startup in NYC?
  2. Classifies it into a scenario (job change, move, school, etc.)
  3. Generates a branching 10-year simulation with:
    • income and net worth projections
    • probabilities on each branch
    • qualitative stuff like burnout, regret, relationships
  4. Streams that tree into a visual graph so you can see "future you" at a glance
  5. Lets you poke it: "What if I negotiated 20 percent more equity?" and only updates the affected branches

Stack

Layer | Tech | Why
Frontend | Next.js 16, React 19, TypeScript | App Router + RSC for streaming
Visualization | XYFlow (React Flow) | handles 100+ nodes without choking
AI | Claude Opus 4.5 + GPT-5.1 | Opus for reasoning, GPT for speed and summaries
Database | TiDB Cloud | serverless, horizontally scalable, SQL I actually like
Hosting | Vercel Edge | global CDN, simple CI/CD

I designed the UX, wired the streaming, and wrote all the infra glue myself.

How It Works (End to End)

1. User input

You type:

"Should I take the startup offer ($180k + 0.5 percent equity) or stay at BigCo ($250k + RSUs)?"

The request hits a Next.js route on Vercel Edge.
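
Roughly, that route looks like the sketch below. This is an illustration, not the exact handler; the generateTreeStream helper stands in for the pipeline described in the next steps.

// app/api/generate/route.ts — a rough sketch, not the exact handler
export const runtime = "edge";

// Placeholder for the scenario-detection + Claude streaming pipeline described below
declare function generateTreeStream(decision: string): Promise<ReadableStream<Uint8Array>>;

export async function POST(req: Request) {
  const { decision } = await req.json();

  if (!decision || typeof decision !== "string") {
    return Response.json({ error: "Missing decision text" }, { status: 400 });
  }

  const stream = await generateTreeStream(decision);
  return new Response(stream, {
    headers: { "Content-Type": "text/event-stream" },
  });
}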

2. Scenario detection

A lightweight classifier decides scenario = "job-offer" and picks the right prompt template plus guidance.
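
One lightweight way to do this is plain keyword matching. The sketch below is illustrative; the scenario labels and keyword lists are assumptions, and the real classifier may differ.

// Illustrative classifier sketch; scenario labels and keywords are assumptions
type Scenario = 'job-offer' | 'move' | 'school' | 'startup' | 'general';

const SCENARIO_KEYWORDS: Record<Scenario, string[]> = {
  'job-offer': ['offer', 'salary', 'equity', 'rsu', 'recruiter'],
  'move':      ['move', 'relocate', 'city', 'rent'],
  'school':    ['degree', 'masters', 'mba', 'bootcamp'],
  'startup':   ['startup', 'founder', 'cofounder', 'runway'],
  'general':   [],
};

function detectScenarioType(templateId: string | null, decision: string): Scenario {
  // If the user started from a template, trust it
  if (templateId && templateId in SCENARIO_KEYWORDS) return templateId as Scenario;

  const text = decision.toLowerCase();
  for (const [scenario, keywords] of Object.entries(SCENARIO_KEYWORDS)) {
    if (keywords.some((k) => text.includes(k))) return scenario as Scenario;
  }
  return 'general';
}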

3. Tree generation (Claude Opus)

Claude Opus gets a structured system prompt with:

  • scenario guidance (comp breakdowns, equity math, typical outcomes)
  • constraints on JSON shape
  • max years, branching factor, and so on

It responds with a full decision tree in JSON (nodes, edges, probabilities, metrics).
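
Roughly, that shape looks like this. Field names here are an approximation for illustration, not the exact schema.

// Approximate shape of the generated decision tree; field names are approximate
interface TreeNode {
  id: string;
  label: string;            // e.g. "Year 3: promoted to Staff"
  year: number;             // 0..10
  probability: number;      // 0..1, relative to the parent branch
  metrics: {
    income: number;         // annual, USD
    netWorth: number;       // cumulative, USD
    stress: number;         // 1..10
    regretRisk: number;     // 1..10
  };
}

interface TreeEdge {
  id: string;
  source: string;           // parent node id
  target: string;           // child node id
}

interface DecisionTree {
  nodes: TreeNode[];
  edges: TreeEdge[];
}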

4. Streaming and repair

I stream tokens as they are generated, repair the JSON on the fly, and push incremental tree updates down to the client via SSE.

5. Rendering

The frontend parses chunks into nodes and edges, feeds them into XYFlow, and animates new branches as they appear.

For big trees, I only render the visible viewport and lazy-load the rest.
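
On the consuming side it is a small loop over newline-delimited events. A simplified sketch, assuming the streaming format shown later; the real hook does more bookkeeping.

// Simplified sketch of the client-side stream reader (DecisionTree as sketched above)
async function consumeTreeStream(
  res: Response,
  onProgress: (chars: number) => void,
  onComplete: (tree: DecisionTree) => void
) {
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // Events arrive as newline-delimited JSON objects
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";
    for (const line of lines) {
      if (!line.trim()) continue;
      const event = JSON.parse(line);
      if (event.type === "progress") onProgress(event.chars);
      if (event.type === "complete") onComplete(event.data);
    }
  }

  // Flush anything left in the buffer (e.g. a final event without a trailing newline)
  if (buffer.trim()) {
    const event = JSON.parse(buffer);
    if (event.type === "complete") onComplete(event.data);
  }
}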

6. Fast summaries and tables (GPT-5.1)

In parallel, GPT-5.1 produces tabular summaries (path vs salary vs net worth vs stress) because speed matters more than ultra-deep reasoning there.
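
"In parallel" just means both calls are kicked off before either is awaited. A sketch; every function name here is an assumption.

// Sketch of running both model calls concurrently; all function names are assumptions
declare function generateTreeWithOpus(decision: string): Promise<DecisionTree>;
declare function generateTableWithGPT(decision: string): Promise<unknown>;
declare function renderSummaryTable(table: unknown): void;
declare function renderTree(tree: DecisionTree): void;

async function runSimulation(decision: string) {
  // Start both requests before awaiting either one
  const treePromise = generateTreeWithOpus(decision);
  const tablePromise = generateTableWithGPT(decision);

  // The table usually lands first, so the UI gets something readable right away
  renderSummaryTable(await tablePromise);
  renderTree(await treePromise);
}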

7. Persistence (WIP)

TiDB stores decisions, trees, and metadata so future runs can become personalized: risk tolerance, values, prior choices.
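
The schema is still settling, but a rough sketch of what I have in mind looks like this. Table and column names are provisional, not shipped.

// Provisional persistence schema (WIP); table and column names are assumptions
const MIGRATION_SQL = `
  CREATE TABLE IF NOT EXISTS decisions (
    id          BIGINT AUTO_RANDOM PRIMARY KEY,
    user_id     VARCHAR(64) NOT NULL,
    scenario    VARCHAR(32) NOT NULL,
    decision    TEXT NOT NULL,
    created_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    KEY idx_user (user_id)
  );

  CREATE TABLE IF NOT EXISTS trees (
    id          BIGINT AUTO_RANDOM PRIMARY KEY,
    decision_id BIGINT NOT NULL,
    tree_json   JSON NOT NULL,
    model       VARCHAR(64) NOT NULL,
    created_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    KEY idx_decision (decision_id)
  );
`;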

The Interesting Engineering Problems

Why Two Models?

Different jobs, different tools.

  • Claude Opus 4.5 generates the trees. This is where reasoning quality actually matters.
  • GPT-5.1 turns those trees into quick tables and summaries so the UI feels snappy.
// Tree generation: Claude Opus (quality matters)
const stream = anthropic.messages.stream({
  model: "claude-opus-4-5-20251101",
  max_tokens: isExploreMode ? 512 : 1500,
  system: systemMessage, // scenario-specific system prompt (built below)
  messages: [{ role: "user", content: decision }],
});

// Table summary: GPT-5.1 (speed matters)
const tableResponse = await openai.responses.create({
  model: "gpt-5.1",
  max_output_tokens: 500,
  text: { format: { type: "json_object" } },
  input: tablePrompt, // summary prompt, elided here
});

Tradeoff: initial tree generation can take a few extra seconds.
Upside: you get deep, structured simulations that still feel instant because the structured bits arrive first.

Streaming So Users Do Not Stare at Nothing

Full trees are around 1500 tokens. That is about 8 to 12 seconds if you block.

I stream partial results and send progress events every ~200 characters, so the UI can show branches as they grow instead of one big reveal at the end.

const encoder = new TextEncoder();
let fullText = "";
let lastProgressUpdate = 0;

const readableStream = new ReadableStream({
  async start(controller) {
    for await (const event of stream) {
      if (event.type === "content_block_delta") {
        fullText += event.delta.text;
        // Emit a lightweight progress event roughly every 200 characters
        if (fullText.length - lastProgressUpdate > 200) {
          lastProgressUpdate = fullText.length;
          controller.enqueue(encoder.encode(JSON.stringify({
            type: "progress",
            chars: fullText.length
          }) + "\n"));
        }
      }
    }

    // Repair and parse whatever JSON came back, then send the full tree
    const treeData = parseJSONWithRepair(fullText);
    controller.enqueue(encoder.encode(JSON.stringify({
      type: "complete",
      data: treeData
    }) + "\n"));
    controller.close();
  }
});

Result: you see the first branches in about 800ms, not 10 seconds of nothing.

LLMs Break JSON. I Fix It Anyway.

Sometimes Claude sends almost-correct JSON — trailing commas, single quotes, unclosed braces.

Instead of giving users an error, I repair it:

function repairJSON(jsonText: string): string {
  let text = jsonText;
  // Single-quoted keys -> double-quoted keys
  text = text.replace(/'([^']+)'(\s*:)/g, '"$1"$2');
  // Kill trailing commas
  text = text.replace(/,(\s*[}\]])/g, '$1');
  // Balance braces: close anything the model left open
  let braceCount =
    (text.match(/{/g)?.length ?? 0) - (text.match(/}/g)?.length ?? 0);
  while (braceCount > 0) {
    text = text + '}';
    braceCount--;
  }
  return text;
}

Parse success went from about 92 percent to about 99.5 percent. Less rage for everyone.

Rate Limiting With a Dev Escape Hatch

I do not want a Hacker News spike to nuke my Anthropic bill.

  • Prod: 10 requests per hour per IP.
  • Me: I still need to hammer this thing.

So there is a tiny dev escape hatch: a URL param that gets stashed in localStorage:

// Frontend: store dev key from ?dev=... with a 30-day expiry
const urlParams = new URLSearchParams(window.location.search);
const devParam = urlParams.get('dev');
if (devParam) {
  const expiry = Date.now() + 30 * 24 * 60 * 60 * 1000;
  localStorage.setItem('parallel-lives-dev', JSON.stringify({ key: devParam, expiry }));
}

// API route: skip the rate limit only when the client-sent key matches the server's DEV_KEY
const isDev = Boolean(DEV_KEY && devKey === DEV_KEY);
if (!isDev) {
  const rateLimitResult = rateLimit(`generate:${clientIP}`, RATE_LIMIT_CONFIG);
  if (!rateLimitResult.allowed) {
    return NextResponse.json({ error: "Rate limit exceeded" }, { status: 429 });
  }
}

Simple, boring, and it works.
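
The rateLimit helper itself can be equally boring. A sketch of an in-memory fixed-window version; the real one may differ.

// Sketch of a fixed-window, in-memory rate limiter; the real helper may differ
interface RateLimitConfig {
  windowMs: number;    // e.g. 60 * 60 * 1000 for one hour
  maxRequests: number; // e.g. 10
}

const buckets = new Map<string, { count: number; resetAt: number }>();

function rateLimit(key: string, config: RateLimitConfig): { allowed: boolean } {
  const now = Date.now();
  const bucket = buckets.get(key);

  // New key, or the previous window expired: start a fresh window
  if (!bucket || now > bucket.resetAt) {
    buckets.set(key, { count: 1, resetAt: now + config.windowMs });
    return { allowed: true };
  }

  bucket.count++;
  return { allowed: bucket.count <= config.maxRequests };
}

One caveat worth noting: in-memory state on Vercel Edge is per-instance, so a counter like this is best-effort rather than a hard guarantee.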

No White Flash on Dark Mode

SSR apps love flashing white before React hydrates. Parallel Lives runs in dark mode a lot, so that flash is painful at 2 AM.

I ship the user theme before React loads:

<html lang="en" className="dark" suppressHydrationWarning>
  <head>
    <script dangerouslySetInnerHTML={{
      __html: `
        (function() {
          var saved = localStorage.getItem('parallel-lives-dark-mode');
          if (saved === 'light') {
            document.documentElement.classList.remove('dark');
            document.documentElement.classList.add('light');
          }
        })();
      `,
    }} />
  </head>

Theme applies before first paint. Zero flash.
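
The toggle on the other end just writes the same localStorage key the inline script reads. A minimal sketch:

// Sketch of the theme toggle; it writes the key the inline script reads on load
function setTheme(theme: 'dark' | 'light') {
  document.documentElement.classList.remove('dark', 'light');
  document.documentElement.classList.add(theme);
  localStorage.setItem('parallel-lives-dark-mode', theme);
}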

Mobile Gestures That Feel Native

Desktop gets a floating draggable panel. Mobile gets a bottom sheet with flick-to-dismiss.

I track finger velocity myself so it does not feel like a janky web modal:

const velocityRef = useRef(0);
const lastTouchY = useRef(0);
const lastTouchTime = useRef(0);

onTouchMove={(e) => {
  const currentY = e.touches[0].clientY;
  const currentTime = Date.now();
  const timeDiff = currentTime - lastTouchTime.current;
  if (timeDiff > 0) {
    // Signed velocity in px/ms; positive means the finger is moving down
    velocityRef.current = (currentY - lastTouchY.current) / timeDiff;
  }
  lastTouchY.current = currentY;
  lastTouchTime.current = currentTime;
}}

onTouchEnd={() => {
  // Fast downward flick (> 0.5 px/ms) = minimize the sheet
  if (velocityRef.current > 0.5) {
    setMobileSheetHeight(minSheetHeight);
  }
}}

Feels closer to a native app than a dashboard.

Prompt Engineering That Actually Works

One generic "You are a helpful AI" prompt gives you generic futures.

I detect the scenario type and inject domain-specific guidance into the system prompt:

const scenarioGuidance: Record<string, string> = {
  'job-offer': `
    - Total comp breakdown: base, bonus, RSUs, 401k match
    - Equity: calculate value at different valuations
    - Benefits value: health insurance ($5-20k/year), PTO
  `,
  'startup': `
    - Runway calculation: savings ÷ monthly burn = months
    - Success rates: ~10% succeed, ~40% fail completely
  `,
};

const scenarioType = detectScenarioType(templateId, decision);
const guidance = scenarioGuidance[scenarioType] || '';
systemMessage = systemPrompt.replace('{{SCENARIO_GUIDANCE}}', guidance);

This is what makes the app feel grounded instead of "AI fanfic about your life."

Numbers

Metric | Value
Time to first token | ~800ms
Full tree generation | 6-10 seconds
Tree render (100 nodes) | <16ms
Lighthouse score | 94
Bundle size | 287 KB gzipped

Data and Privacy (High Level)
  • User decisions and trees are stored in TiDB under a per-user scope.
  • No training or fine-tuning on user data.
  • Future "agentic memory" features will be opt-in and transparent.

What Is Next
  • Agentic memory — remember past decisions, risk tolerance, values. Make future sims personal instead of one-off.
  • Multi-turn refinement — "What if I negotiated 20 percent more equity?" Regenerate only the branches that change.
  • Collaborative decisions — share trees with partners and see where your probability estimates disagree.
  • Framework exports — turn a tree into SWOT, decision matrix, or "future regret" analysis with one click.

Built With Claude Code

Every feature here — streaming, JSON repair, mobile gestures, the whole stack — was built talking to Claude Code in a terminal and editor.

I am applying to work on Claude Code with a project I built using Claude Code. Feels like the right kind of full circle.

Code Access

The repo is private right now.

If you are a recruiter or hiring manager and want to walk through the code or architecture in more detail, I am happy to do a live session.