Morning Brief · Sonnet
Pre-open synthesis of overnight moves, scheduled earnings, macro prints, and portfolio-relevant filings — one scannable brief before 9:30 ET.

AI
AI is the backbone of the terminal — reading every page, calling every tool, and binding filings, quotes, flow, macro, and your portfolio into one answer, cited every time.
Try it
Click a prompt or type your own. Streaming is local — no network calls.
The stack
Think of Convexity as a stack. Data at the bottom, surfaces at the top, and an AI runtime down the middle of every column — reading the data layer, calling tools, and streaming the answer back to whichever surface you're on.
Every AI call threads surface → runtime → data layer → back. No detour to a separate chat product, no copy-paste between tabs.
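That round trip can be sketched in a few lines. Everything here is illustrative — `Runtime`, `get_quote`, and the tool wiring are stand-ins for the terminal's actual internals, not its API:

```python
# Hypothetical sketch: one AI call threading surface -> runtime -> data layer -> back.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Runtime:
    # Data-layer accessors the model may call mid-answer.
    tools: dict[str, Callable[[str], str]]

    def answer(self, surface: str, question: str, ticker: str) -> str:
        # The runtime reads the data layer on the model's behalf...
        quote = self.tools["get_quote"](ticker)
        # ...and hands the result back to whichever surface asked.
        return f"[{surface}] {question} -> {quote}"

runtime = Runtime(tools={"get_quote": lambda t: f"{t} @ 101.25"})
print(runtime.answer("chat", "Why did it move?", "ACME"))
```

The point of the shape: the surface never talks to the data layer directly, and the model never leaves the runtime — so there is nothing to copy-paste between tabs.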
The suite
Not a chatbot bolted on the side. Nine AI touch points live inside the terminal — each a native surface, each tuned to its intent, each routed to the model it needs.
- Sonnet: Pre-open synthesis of overnight moves, scheduled earnings, macro prints, and portfolio-relevant filings — one scannable brief before 9:30 ET.
- Sonnet: Conversational Q&A with streaming responses. Tool-calling pulls quotes, filings, options flow, and news in-line while the answer is being written.
- Sonnet · Opus on deep: Per-ticker synthesis on /stock/:symbol. Why the stock moved today, what the filings say, what the options are pricing — answered before you have to scroll.
- Sonnet: Standing AI coverage on every S&P 500 ticker — thesis, bull case, bear case, financial position, catalysts, risks — kept fresh as filings land. Layer your own thesis on top and the system flags evidence for or against it.
- Haiku: Classification filter sits between raw events and your inbox: price, filings, insider, options unusual flow — only the ones that pass the relevance bar fire.
- Haiku: Filings, news headlines, insider transactions, 8-K events — labeled with confidence on ingestion so every downstream surface can filter by meaning, not keywords.
- Opus: Long-form reports on a ticker, theme, or question. Multi-source synthesis, cited throughout, written for a human who has an hour to read instead of a minute.
- Voyage 4: Find the 10-K paragraph you half-remember. Voyage 4 embeddings index filings and news into a shared space; queries return the nearest prose, not the nearest keyword.
- Opus + tools: Multi-step, tool-calling agent that plans the research, fetches what it needs, and hands you a reasoned answer. In the lab — not shipped.

Routing
Every AI call carries an intent. The router maps intent to a model whose cost matches the work: Haiku for cheap-and-fast work and the free tier, Sonnet for the primary surface, Opus when depth is worth the bill. Embeddings are their own lane.
| Intent | Model | Why |
|---|---|---|
| Conversational Q&A | claude-sonnet-4-6 | Primary surface. Balances cost, speed, and reasoning — the right default for interactive chat and per-ticker synthesis. |
| Synthesis & briefs | claude-sonnet-4-6 | Morning Brief, Living Analyst Notes, multi-source write-ups. Sonnet holds enough context to read across modules without hallucinating edges. |
| Deep research | claude-opus-4-6 | Long-form reports where depth matters. Gated behind Pro Plus — 20 reports/month, by design, because Opus is the expensive one. |
| Classification | claude-haiku-4-5 | Filings, headlines, alerts. Fast, cheap, good at labeling. Runs on ingest at scale without lighting the cost bucket on fire. |
| Embedding (index) | voyage-4-large | Higher-quality vectors for the corpus side. Written once, queried forever. Batch-API-routed for corpora ≥ 1,000 chunks. |
| Embedding (query) | voyage-4-lite | Cheap query-time embeddings. Same embedding space as voyage-4-large — cosine similarity works across both without a re-index. |
| Free tier | claude-haiku-4-5 | Free users get 10 queries/month on Claude Haiku. Keeps the free tier genuinely usable without subsidizing it with Sonnet pricing. |
| Fallback (any) | gpt-4.1 | Circuit breaker trips on rate-limit, 5xx, or timeout from the primary. Auto-translates tool definitions from Anthropic to OpenAI format on the way. |
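The table above is small enough to be a literal lookup. A minimal sketch of that router — the intent keys and `pick_model` helper are hypothetical names, but the model IDs and fallback behavior mirror the table:

```python
# Hypothetical router sketch: map an intent to the model whose cost matches the work.
ROUTES = {
    "chat":           "claude-sonnet-4-6",
    "synthesis":      "claude-sonnet-4-6",
    "deep_research":  "claude-opus-4-6",
    "classification": "claude-haiku-4-5",
    "embed_index":    "voyage-4-large",
    "embed_query":    "voyage-4-lite",
    "free_tier":      "claude-haiku-4-5",
}
FALLBACK = "gpt-4.1"  # circuit breaker routes here on rate-limit, 5xx, or timeout

def pick_model(intent: str, primary_healthy: bool = True) -> str:
    model = ROUTES[intent]
    # Embeddings are their own lane: Voyage models don't fall back to a chat model.
    if not primary_healthy and not model.startswith("voyage"):
        return FALLBACK
    return model

print(pick_model("chat"))                         # claude-sonnet-4-6
print(pick_model("chat", primary_healthy=False))  # gpt-4.1
```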
Engine
Three pieces hold the AI layer together: an embedding stack that lets the corpus be searched by meaning, a tool-calling runtime that lets the model reach into live data, and a cost layer that makes sure the bill doesn't surprise anyone.
voyage-4-large indexes the corpus; voyage-4-lite handles queries. Both share an embedding space — cosine works across them natively — so we index with quality and query at a tenth the price.
Corpora above the batch threshold ship to the Voyage Batch API automatically: 12-hour SLA, 33% discount, full billing reconciliation in the same telemetry pipeline as sync calls.
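Both halves of that design fit in a short sketch — the shared-space cosine check and the sync-vs-batch routing decision. Function names and the exact threshold constant are illustrative; the 1,000-chunk cutoff comes from the routing table above:

```python
# Hypothetical sketch: index with voyage-4-large, query with voyage-4-lite.
# A shared embedding space means cosine similarity works across the pair.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

BATCH_THRESHOLD = 1_000  # chunks; at or above this, ship to the Voyage Batch API

def embed_route(n_chunks: int) -> str:
    # Batch trades latency (12-hour SLA) for a discount; sync for small corpora.
    return "batch" if n_chunks >= BATCH_THRESHOLD else "sync"

print(embed_route(250))    # sync
print(embed_route(5_000))  # batch
```

Because corpus vectors are written once and queried forever, paying for quality on the index side and pennies on the query side is the whole economic argument.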
Responses stream token-by-token. When a model calls a tool mid-answer — get_quote, load_filing, option_chain — the runtime executes it, feeds the result back, and keeps streaming.
Tool definitions are authored in Anthropic format. On fallback to GPT-4.1 they're auto-translated; your code doesn't branch.
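That translation is mechanical, which is why application code doesn't branch. A sketch of the rewrite, assuming the standard shapes of both formats (Anthropic's `name`/`description`/`input_schema` versus OpenAI's nested `function` object); the `get_quote` tool here is a made-up example:

```python
# Hypothetical sketch of the fallback translation: tools are authored once in
# Anthropic format and rewritten into OpenAI's function-call shape on the way.
def to_openai_tool(anthropic_tool: dict) -> dict:
    return {
        "type": "function",
        "function": {
            "name": anthropic_tool["name"],
            "description": anthropic_tool["description"],
            # The body is plain JSON Schema in both formats, so it carries over as-is.
            "parameters": anthropic_tool["input_schema"],
        },
    }

get_quote = {
    "name": "get_quote",
    "description": "Latest quote for a ticker.",
    "input_schema": {
        "type": "object",
        "properties": {"symbol": {"type": "string"}},
        "required": ["symbol"],
    },
}
print(to_openai_tool(get_quote)["function"]["name"])  # get_quote
```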
Every call logs model, tokens, cost, latency, intent, and user. Staging has a daily spend cap; a circuit breaker trips when Claude returns 5xx or rate-limits and routes traffic to GPT-4.1 until it recovers.
Per-tier monthly quotas are enforced before the call, not after — you can't overrun the cap by racing requests.
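The race the last sentence rules out is the classic check-then-increment gap: two requests both read "1 left" and both proceed. A minimal sketch of a pre-call gate that closes it (class and method names are hypothetical; a real deployment would hold this state in a shared store, not process memory):

```python
# Hypothetical sketch: enforce the monthly quota atomically *before* the call,
# so concurrent requests can't race past the cap.
import threading

class QuotaGate:
    def __init__(self, monthly_cap: int):
        self.cap = monthly_cap
        self.used = 0
        self._lock = threading.Lock()

    def try_acquire(self) -> bool:
        # Check-and-increment under one lock: either the slot is reserved
        # before any tokens are spent, or the request is refused outright.
        with self._lock:
            if self.used >= self.cap:
                return False
            self.used += 1
            return True

gate = QuotaGate(monthly_cap=2)
print([gate.try_acquire() for _ in range(3)])  # [True, True, False]
```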
Honesty
A terminal that hallucinates a price point is worse than one with no AI. These are the rails — not aspirations, not future-state.
Every fallback response carries a fallback_used=true flag end-to-end — logged, visible on admin surfaces, and available to debug sessions.

Roadmap
Shipping dates are plans, not promises. If something on this list slips, we’d rather tell you than pretend it shipped.
voyage-4-nano is open-weight. We may ship a client-side indexing path for private notes that never leaves the browser.
Free tier gives you 10 AI queries a month on Claude Haiku. Pro is 150/mo on Claude Sonnet. Pro Plus is 1,000/mo plus 20 deep research reports on Opus.
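Those three tiers reduce to a small configuration table. A sketch, with hypothetical key names, mirroring the quotas above (deep research reports on the Pro Plus tier run on Opus):

```python
# Hypothetical tier table mirroring the published quotas; key names are illustrative.
TIERS = {
    "free":     {"model": "claude-haiku-4-5",  "queries_per_month": 10,    "deep_reports": 0},
    "pro":      {"model": "claude-sonnet-4-6", "queries_per_month": 150,   "deep_reports": 0},
    "pro_plus": {"model": "claude-sonnet-4-6", "queries_per_month": 1_000, "deep_reports": 20},
}

print(TIERS["pro"]["queries_per_month"])  # 150
```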