Semantic Telemetry — User Guide

Semantic Telemetry lets you run a conversation while collecting real-time semantic telemetry (SGI, velocity, context phase), then export the session for research or debugging.

What you need

  • A valid login (JWT in the browser)
  • The Semantic Telemetry page: /baseline-generator

Quick start

  1. Open Semantic Telemetry
  2. Click Start Run (or equivalent “New Run” action)
  3. Send messages naturally (minimum: 10 user messages for a valid baseline)
  4. Watch the live chart (SGI × Velocity) update after each turn
  5. Export your run as JSON when finished

How to read the live metrics

SGI (Semantic Grounding Index)

  • Low SGI: drifting away from the user’s query
  • Healthy range: typically ~0.7–1.3 (depends on backend and task)
  • High SGI: overly query-focused / narrow mode
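If you post-process exported runs, the bands above can be expressed as a small helper. This is a sketch only: the ~0.7–1.3 healthy range comes from this guide and may differ per backend and task, so treat the cutoffs as configurable rather than fixed.

```python
# Sketch: classify an SGI reading into the bands described above.
# The boundaries (0.7, 1.3) are the guide's "typical" healthy range —
# adjust them for your backend/task.
def interpret_sgi(sgi: float, low: float = 0.7, high: float = 1.3) -> str:
    if sgi < low:
        return "low"      # drifting away from the user's query
    if sgi <= high:
        return "healthy"  # well grounded
    return "high"         # overly query-focused / narrow mode
```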

Velocity (Angular Velocity)

  • Low velocity: stable / repetitive
  • Moderate velocity: productive exploration
  • High velocity: topic jumping / unstable transitions

Context Phase

The system labels the conversation state as:

  • stable: the conversation remains anchored to its current context
  • protostar: a new topic is forming
  • split: the topic has changed (context shift)
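If your analysis scripts react to phase labels (for example, to segment a run at context shifts), it can help to validate them against the three known values. A minimal sketch:

```python
from enum import Enum

class ContextPhase(Enum):
    STABLE = "stable"        # anchored to the current context
    PROTOSTAR = "protostar"  # a new topic is forming
    SPLIT = "split"          # the topic has changed (context shift)

def is_context_shift(label: str) -> bool:
    """Return True for a 'split' label; raises ValueError on unknown labels."""
    return ContextPhase(label) is ContextPhase.SPLIT
```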

Exporting your data

Use the Export action to download a JSON bundle that typically contains:

  • messages (user + assistant)
  • per-turn SGI/velocity telemetry points
  • latest context phase + context id
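A typical first step for offline analysis is loading the bundle and pulling out the message count and per-turn series. The key names below (`messages`, `telemetry`, `context`, `sgi`, `velocity`, `phase`) are assumptions about the bundle layout — inspect your own export and adjust.

```python
import json
from pathlib import Path

# Sketch: summarize an exported run bundle. Key names are assumed
# ("messages", "telemetry", "context") — verify them against a real export.
def load_run(path: str) -> dict:
    bundle = json.loads(Path(path).read_text())
    telemetry = bundle.get("telemetry", [])
    return {
        "n_messages": len(bundle.get("messages", [])),
        "sgi_series": [point.get("sgi") for point in telemetry],
        "velocity_series": [point.get("velocity") for point in telemetry],
        "phase": bundle.get("context", {}).get("phase"),
    }
```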

This file is what you’ll use for paper figures or offline analysis scripts.

Troubleshooting

  • Metrics don’t update: verify the SDK service is healthy and the embedding sidecar endpoint is reachable.
  • Chart shows zeros: usually means the ingest request failed (network or service error).
  • Slow responses: first turn is slower; subsequent turns should be faster once the conversation is bootstrapped.

Notes on privacy

This tool is designed for research. Avoid sharing sensitive data unless you understand the environment you are running in.