AI

The backbone of your trading terminal.

AI is the backbone of the terminal — reading every page, calling every tool, and binding filings, quotes, flow, macro, and your portfolio into one answer, cited every time.

Join the waitlist See the suite →

Try it

Feel the assistant without signing up.

Click a prompt or type your own. Streaming is local — no network calls.

This is a demo — no tokens used. Enter to send · Esc to clear.
Primary models: Claude 4 (Haiku 4.5 · Sonnet 4.6 · Opus 4.6)
Free tier: Claude Haiku, 10 queries/mo
Embeddings: Voyage 4, large + lite, shared space
Fallback: GPT-4.1, auto on 5xx / rate limit

The stack

Where AI lives inside Convexity.

Think of Convexity as a stack. Data at the bottom, surfaces at the top, and an AI runtime down the middle of every column — reading the data layer, calling tools, and streaming the answer back to whichever surface you're on.

Surfaces
Dashboard · Ticker page · Screener · Options · Macro · Portfolio · Alt-data feeds
AI runtime backbone
Claude router · Tool-calling · Streaming · Voyage retrieval · Citations · Cost & quotas
Data layer
SEC EDGAR · FRED · Polygon · CBOE · Finnhub · Alpha Vantage · Plaid · USASpending

Every AI call threads surface → runtime → data layer → back. No detour to a separate chat product, no copy-paste between tabs.

The suite

Wired into every page of the terminal.

Not a chatbot bolted on the side. Nine AI touch points live inside the terminal — each a native surface, each tuned to its intent, each routed to the model it needs.

Morning Brief (Live · Sonnet)
Pre-open synthesis of overnight moves, scheduled earnings, macro prints, and portfolio-relevant filings — one scannable brief before 9:30 ET.

Ask the market (Live · Sonnet, Opus on deep)
Conversational Q&A with streaming responses. Tool-calling pulls quotes, filings, options flow, and news in-line while the answer is being written.

Stock intelligence (Live · Sonnet)
Per-ticker synthesis on /stock/:symbol. Why the stock moved today, what the filings say, what the options are pricing — answered before you have to scroll.

Living Analyst Notes (Live · Sonnet)
Standing AI coverage on every S&P 500 ticker — thesis, bull case, bear case, financial position, catalysts, risks — kept fresh as filings land. Layer your own thesis on top and the system flags evidence for or against it.

Smart alerts (Live · Haiku)
A classification filter sits between raw events and your inbox: price, filings, insider, unusual options flow — only the ones that pass the relevance bar fire.

Zero-shot classification (Live · Haiku)
Filings, news headlines, insider transactions, 8-K events — labeled with confidence on ingestion so every downstream surface can filter by meaning, not keywords.

Deep research (Beta · Opus)
Long-form reports on a ticker, theme, or question. Multi-source synthesis, cited throughout, written for a human who has an hour to read instead of a minute.

Semantic search (Beta · Voyage 4)
Find the 10-K paragraph you half-remember. Voyage 4 embeddings index filings and news into a shared space; queries return the nearest prose, not the nearest keyword.

Agentic research (Soon · Opus + tools)
Multi-step, tool-calling agent that plans the research, fetches what it needs, and hands you a reasoned answer. In the lab — not shipped.

Routing

The right model for the job.

Every AI call carries an intent. The router maps each intent to the model whose cost matches the work: Haiku for fast, cheap work and the free tier, Sonnet for the primary surfaces, Opus when depth is worth the bill. Embeddings run in their own lane.

Intent | Model | Why
Conversational Q&A | claude-sonnet-4-6 | Primary surface. Balances cost, speed, and reasoning — the right default for interactive chat and per-ticker synthesis.
Synthesis & briefs | claude-sonnet-4-6 | Morning Brief, Living Analyst Notes, multi-source write-ups. Sonnet holds enough context to read across modules without hallucinating edges.
Deep research | claude-opus-4-6 | Long-form reports where depth matters. Gated behind Pro Plus — 20 reports/month, by design, because Opus is the expensive one.
Classification | claude-haiku-4-5 | Filings, headlines, alerts. Fast, cheap, good at labeling. Runs on ingest at scale without lighting the cost bucket on fire.
Embedding (index) | voyage-4-large | Higher-quality vectors for the corpus side. Written once, queried forever. Batch-API-routed for corpora ≥ 1,000 chunks.
Embedding (query) | voyage-4-lite | Cheap query-time embeddings. Same embedding space as voyage-4-large — cosine similarity works across both without a re-index.
Free tier | claude-haiku-4-5 | Free users get 10 queries/month on Claude Haiku. Keeps the free tier genuinely usable without subsidizing it with Sonnet pricing.
Fallback (any) | gpt-4.1 | Circuit breaker trips on rate-limit, 5xx, or timeout from the primary. Auto-translates tool definitions from Anthropic to OpenAI format on the way.
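The table above is, at heart, a lookup keyed by intent. A minimal sketch in Python (`route` and `INTENT_MODEL` are illustrative names, not the production router):

```python
# Illustrative sketch of the intent-to-model routing table above.
# Names (route, INTENT_MODEL, FALLBACK) are hypothetical, not a real API.

INTENT_MODEL = {
    "chat":           "claude-sonnet-4-6",   # conversational Q&A
    "synthesis":      "claude-sonnet-4-6",   # briefs, analyst notes
    "deep_research":  "claude-opus-4-6",     # Pro Plus only
    "classification": "claude-haiku-4-5",    # ingest-time labeling
    "embed_index":    "voyage-4-large",      # corpus side
    "embed_query":    "voyage-4-lite",       # query side, shared space
    "free_tier":      "claude-haiku-4-5",    # 10 queries/mo
}

FALLBACK = "gpt-4.1"  # used when the circuit breaker is open

def route(intent: str, breaker_open: bool = False) -> str:
    """Map an intent to a model id; fall back when the primary is down."""
    model = INTENT_MODEL[intent]
    # Embedding lanes never fall back to a chat model.
    if breaker_open and not model.startswith("voyage"):
        return FALLBACK
    return model
```

A call like `route("classification")` resolves to `claude-haiku-4-5`; the same call with the breaker open on a chat intent returns `gpt-4.1` instead.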

Pick a tier to see which Claude model it uses, how many queries it includes, and an example of the work it handles.

Model: claude-haiku-4-5 (fast, cheap classification)
Query budget: 10 AI queries / month · 15-min delayed market data
Good for: learning the product, running small watchlists, tagging filings.
Example task: classify.filing(ticker="AAPL", form="8-K") → "Earnings release · non-material"
Model routing is automatic — you never pick per query. The tier decides.

Engine

What's under the hood.

Three pieces hold the AI layer together: an embedding stack that lets the corpus be searched by meaning, a tool-calling runtime that lets the model reach into live data, and a cost layer that makes sure the bill doesn't surprise anyone.

Voyage 4 embeddings

voyage-4-large indexes the corpus; voyage-4-lite handles queries. Both share an embedding space — cosine works across them natively — so we index with quality and query at a tenth the price.

Corpora above the batch threshold ship to the Voyage Batch API automatically: 12-hour SLA, 33% discount, full billing reconciliation in the same telemetry pipeline as sync calls.
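Because both models write into one embedding space, retrieval reduces to cosine similarity between a query vector and the indexed vectors. A toy sketch with hand-made vectors standing in for real Voyage embeddings (no API calls here):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy corpus vectors, standing in for voyage-4-large document embeddings.
index = {
    "10-K risk factors": [0.9, 0.1, 0.0],
    "8-K earnings":      [0.1, 0.9, 0.1],
}

# Toy query vector, standing in for a voyage-4-lite query embedding.
# Same space, so cosine is directly comparable against the index.
query = [0.85, 0.15, 0.05]

best = max(index, key=lambda doc: cosine(query, index[doc]))
```

The point of the shared space is exactly this last line: the query was embedded by the cheap model, the corpus by the quality model, and one similarity function ranks them with no re-index.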

Streaming & tool use

Responses stream token-by-token. When a model calls a tool mid-answer — get_quote, load_filing, option_chain — the runtime executes it, feeds the result back, and keeps streaming.

Tool definitions are authored in Anthropic format. On fallback to GPT-4.1 they're auto-translated; your code doesn't branch.
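That translation is mostly a reshape: Anthropic tool definitions carry the JSON Schema under `input_schema`, while OpenAI's function tools nest it as `parameters`. A sketch of the mapping (the production translator likely handles more fields):

```python
def anthropic_tool_to_openai(tool: dict) -> dict:
    """Reshape an Anthropic-style tool definition into OpenAI's
    function-tool format. Anthropic keeps the JSON Schema under
    'input_schema'; OpenAI nests it as 'parameters' inside 'function'."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            "parameters": tool["input_schema"],
        },
    }

# A get_quote tool in Anthropic format, as a worked example.
get_quote = {
    "name": "get_quote",
    "description": "Fetch a live quote for a ticker.",
    "input_schema": {
        "type": "object",
        "properties": {"symbol": {"type": "string"}},
        "required": ["symbol"],
    },
}

translated = anthropic_tool_to_openai(get_quote)
```

Because the schema body moves over unchanged, application code that authored the tool never branches on which provider ends up serving the call.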

Cost & telemetry

Every call logs model, tokens, cost, latency, intent, and user. Staging has a daily spend cap; a circuit breaker trips when Claude returns 5xx or rate-limits and routes traffic to GPT-4.1 until it recovers.

Per-tier monthly quotas are enforced before the call, not after — you can't overrun the cap by racing requests.
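Enforcing a quota before the call means reserving a unit atomically, so two racing requests cannot both slip under the cap. A single-process sketch using a lock (`QuotaGate` is an illustrative name; real enforcement would sit in a shared store):

```python
import threading

class QuotaGate:
    """Reserve quota before the model call; concurrent requests
    cannot race past the cap because check and increment happen
    under one lock."""

    def __init__(self, monthly_cap: int):
        self.cap = monthly_cap
        self.used = 0
        self._lock = threading.Lock()

    def try_reserve(self) -> bool:
        with self._lock:
            if self.used >= self.cap:
                return False  # over quota: reject before spending tokens
            self.used += 1
            return True

gate = QuotaGate(monthly_cap=10)  # free tier: 10 queries/month
results = [gate.try_reserve() for _ in range(12)]
```

Twelve attempts against a cap of ten yield exactly ten reservations; the eleventh and twelfth are refused before any model call is made.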

Honesty

What the AI doesn't do here.

A terminal that hallucinates a price point is worse than one with no AI. These are the rails — not aspirations, not future-state.

  1. Numbers come from sources, not from the model.
    Prices, EPS, volumes, yields — every number rendered by the AI is fetched live from a primary data provider (Massive, SEC, Finnhub, Alpha Vantage, Polygon, Plaid). The model composes the sentence; it does not supply the number.
  2. Every assertion links back.
    Briefs, stock intelligence, and deep research carry inline citations to the filing, press release, or quote they lean on. Click-through to the primary document is never more than one hop away.
  3. The cost is shown, not hidden.
    Every AI response records its model, token count, and USD cost in the telemetry log. Admins see the per-call bill. Users see a running monthly quota.
  4. Not investment advice.
    Convexity is a research and monitoring tool. The AI can summarize a 10-K, flag a filing, or compare option structures — it will not tell you to buy or sell, and the product disclaims this clearly on every AI surface.
  5. Fallback is transparent.
    When the circuit breaker trips and a request is served by GPT-4.1 instead of Claude, the response carries a fallback_used=true flag end-to-end — logged, visible on admin surfaces, and available to debug sessions.
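The breaker-plus-flag behavior described above can be sketched in a few lines (names are illustrative; recovery and half-open logic are omitted):

```python
class CircuitBreaker:
    """Trip to the fallback model after repeated primary failures."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.threshold

    def record_failure(self) -> None:
        self.failures += 1

def serve(breaker, call_primary, call_fallback) -> dict:
    """Try the primary model; on an error (stand-in for 5xx, rate
    limit, or timeout), count the failure and serve the fallback,
    tagging the response so the flag travels end-to-end."""
    if not breaker.open:
        try:
            return {"text": call_primary(), "fallback_used": False}
        except Exception:
            breaker.record_failure()
    return {"text": call_fallback(), "fallback_used": True}
```

Every response carries `fallback_used`, so telemetry, admin surfaces, and debug sessions all read the same field rather than inferring the serving model after the fact.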

Roadmap

What's next, honestly.

Shipping dates are plans, not promises. If something on this list slips, we’d rather tell you than pretend it shipped.

One terminal. One AI backbone. Every answer cited.

Free tier gives you 10 AI queries a month on Claude Haiku. Pro is 150/mo on Claude Sonnet. Pro Plus is 1,000/mo plus 20 deep research reports on Opus.

Join the waitlist See pricing →