We write stories for machines.

Search was built for finite attention. Agents have infinite attention. Synorb observes 10,000+ sources and publishes Briefs, Signals, and Records—with stable IDs and provenance.

API + MCP connectors · Deterministic formats · 1,000+ streams

Your portfolio manager listens for 90 min. His agent delivers 144 claims first.

Your analyst scans 84 pages. Her agent surfaces 18 disclosures first.

Your marketing team scrolls a day of tweets. Their agent briefs them on 15 claims first.

Your CTO skims for 12 minutes. His agent delivers 7 key assertions first.

Your researcher parses the release. Their agent files every claim into their model first.

Source Published
Synorb Manifest

All-In Podcast

E259 · ICE Chaos in Minneapolis, Clawdbot Takeover, Why the Dollar is Dropping
1 hr 30 min
Jason: All right, everybody. Welcome back to the All In Podcast. Your favorite podcast with me again. The original core besties are here. Chamath Palihapitiya in just an absolute fabulous winter sweater.
Chamath: I'm a simple man that lives by simple means.
Jason: And our Sultan of Science, David Friedberg. What's the background here? Is that a Mellon Collie and the Infinite Sadness background?
Friedberg: Don't talk about my backgrounds.
Jason: Luckily, I have my straight man, my brother in arms, my Davos party crashing partner David Sacks.
Sacks: We had a lot of interesting meetings. Most of which I don't think we can talk about on air. But it was a distinctly different Davos. We've mocked Davos here for many years. But this one was a business takeover and a Trump takeover.
Chamath: The dollar dropping is not a crisis — it's a feature. The administration wants a weaker dollar to reshore manufacturing. If you look at what's happening with the trade deficit, this is deliberate policy.
Sacks: Right, and if you look at what happened with the Clawdbot situation, this is exactly why AI governance matters now, not later. You can't have state-by-state regulation of something that moves this fast.
United States Securities and Exchange Commission
FORM 8-K
Current Report Pursuant to Section 13 or 15(d) of the Securities Exchange Act of 1934
Registrant: NVIDIA CORPORATION · CIK: 0001045810 · Filed: January 20, 2026 · Period: January 20, 2026 · Items: 5.02 — Departure of Directors or Certain Officers

On January 17, 2026, the Board of Directors of NVIDIA Corporation (the “Company”) appointed Michael L. Thompson as Executive Vice President, AI Infrastructure, effective February 1, 2026.

Mr. Thompson, age 48, has served as Senior Vice President of Data Center Engineering since March 2022. Prior to joining NVIDIA, Mr. Thompson served as Vice President of Engineering at Google Cloud from 2018 to 2022.

In connection with his appointment, Mr. Thompson will receive a base salary of $750,000, an annual target bonus opportunity of 150% of his base salary, and a restricted stock unit award with a grant date fair value of approximately $12,000,000...

Andrej Karpathy
@karpathy
8h
LLMs are getting really good at being personalized wrong. They mirror your vibes instead of correcting your errors. This is the opposite of what you want from a reasoning system.
↩ 342 ♡ 4.2K 🔄 891
Andrej Karpathy
@karpathy
12h
Hot take: the next big unlock in AI infra is not faster models, it's better DevOps automation. Most teams spend 70% of their time on deployment plumbing, not model development.
↩ 178 ♡ 2.8K 🔄 503
Andrej Karpathy
@karpathy
18h
PSA: the litellm PyPI supply chain attack is real. If you pip installed it in the last 24h, check your env vars immediately. This is why dependency pinning matters.
↩ 567 ♡ 6.1K 🔄 2.1K
Stripe Engineering Blog
How Stripe Reduced API Latency by 40% with Edge Inference

At Stripe, we process millions of API requests per second across 46 countries. Last quarter, our infrastructure team deployed inference models at the edge — moving fraud detection and payment routing closer to the merchant.

The results exceeded our projections. Median API response times dropped by 40%, with the largest gains in fraud detection where round-trips to centralized GPU clusters had been the primary bottleneck.

This post details the architecture, the tradeoffs we navigated, and what we learned about deploying ML models at the edge of a global payments network.

Why the edge?

Every millisecond matters in payments. When a customer taps their card at a terminal in Dublin, the fraud check previously had to round-trip to us-east-1 — adding 80-120ms of pure network latency. Multiply that by billions of transactions and you start to see why we were motivated.

Our approach was to deploy lightweight inference models to 14 edge locations globally, co-located with our existing API infrastructure. The models run on custom ONNX runtimes optimized for CPU inference — no GPUs required at the edge.

The fraud detection model was the first candidate. It evaluates 47 features per transaction: merchant category, velocity patterns, device fingerprint, geographic anomaly scores, and behavioral signals from the session. Previously this ran on a centralized cluster in Virginia.

After six weeks of shadow testing, we cut over Dublin, London, and Singapore. The p50 latency for fraud scoring dropped from 145ms to 12ms. The p99 dropped from 380ms to 34ms. We were stunned.

Federal Reserve Bank of Cleveland
Research & Analysis
The Recent Divergence in the Short-term Inflation Expectations' Anchoring of Consumers and Professional Forecasters
March 26, 2026

A new study from the Federal Reserve Bank of Cleveland reveals a significant divergence in short-term inflation expectations between consumers and professional forecasters in 2025.

While professional forecasters' expectations remained anchored near the Federal Reserve's 2 percent inflation target, consumer expectations deteriorated notably, moving further from the target.

This weakening in consumer expectations was linked to increased disagreement among consumers and growing distance from the Fed's goal. Surprisingly, the divergence correlated with self-reported political affiliations, with Democrat and Independent-leaning consumers reporting higher expectations...

Signal 2 of 144 assertions
The U.S. dollar decline is a deliberate policy feature aimed at reshoring manufacturing, not a crisis indicator.
evidence: paraphrase · confidence: stated · type: analysis · speaker: Chamath Palihapitiya
AI governance requires immediate federal framework to prevent fragmented state-level regulation that could hinder innovation.
evidence: paraphrase · confidence: stated · type: analysis · speaker: David Sacks
Brief all-in · 2026-03-21

ICE Chaos in Minneapolis, Clawdbot Takeover, Why the Dollar is Dropping

The All-In hosts debate the dollar's decline as intentional reshoring policy, the Minneapolis ICE enforcement backlash, and why the Clawdbot AI incident signals an urgent need for federal AI governance frameworks.
Record manifest · audio
"source": "All-In Podcast", "people": ["Chamath", "Sacks", "Friedberg", "Jason"], "topics": ["Dollar Policy", "AI Governance", "ICE"], "assertions_count": 144, "media_format": "audio"
Signal 2 of 18 assertions
NVIDIA appointed Michael L. Thompson as Executive Vice President, AI Infrastructure, effective February 1, 2026.
evidence: direct_quote · confidence: stated · type: disclosure
Thompson will receive a base salary of $750,000 with a 150% target bonus and a $12M RSU award.
evidence: direct_quote · confidence: measured · type: disclosure
Brief sec-filings · 2026-01-20

NVIDIA Appoints Michael Thompson as EVP of AI Infrastructure

NVIDIA filed an 8-K disclosing the appointment of Michael Thompson as EVP of AI Infrastructure with a $12M RSU package, signaling continued investment in data center leadership.
Record manifest · filing
"source": "SEC EDGAR", "people": ["Michael Thompson", "Jensen Huang"], "organizations": ["NVIDIA", "Google Cloud"], "claim_type": "disclosure", "assertions_count": 18
Signal 2 of 15 assertions
LLMs are personalizing incorrectly by mirroring user sentiment instead of correcting errors, which is counterproductive for reasoning systems.
evidence: direct_quote · confidence: stated · type: remarks · speaker: Andrej Karpathy
Most AI teams spend 70% of their time on deployment infrastructure rather than model development.
evidence: paraphrase · confidence: stated · type: analysis · speaker: Andrej Karpathy
Brief andrej-karpathy · 2026-03-26

Karpathy Identifies LLM Personalization Flaws and Advocates for AI-Driven DevOps Automation

Andrej Karpathy critiques current LLM personalization as counterproductive for reasoning and argues the next AI infrastructure unlock is DevOps automation, not faster models.
Record manifest · social
"source": "X / @karpathy", "people": ["Andrej Karpathy"], "topics": ["LLM Personalization", "DevOps", "Supply Chain"], "media_format": "social", "assertions_count": 15
Signal 2 of 7 assertions
Deploying inference models at the edge reduced Stripe's median API response time by 40%.
evidence: direct_quote · confidence: measured · type: disclosure
The largest latency gains came from fraud detection and payment routing, where models previously required round-trips to centralized GPU clusters.
evidence: paraphrase · confidence: stated · type: analysis
Brief stripe · 2026-02-10

How Stripe Reduced API Latency by 40% with Edge Inference

Stripe's infrastructure team details how deploying inference models at the edge cut median API response times by 40%, with the largest gains in fraud detection and payment routing.
Record manifest · text
"source": "Stripe Engineering Blog", "people": ["David Singleton", "Raylene Yung"], "organizations": ["Stripe", "Cloudflare", "AWS"], "claim_type": "publication", "assertions_count": 7
Signal 2 of 11 assertions
Professional forecasters' short-term inflation expectations remained anchored near the Fed's 2% target in 2025, while consumer expectations deteriorated significantly.
evidence: direct_quote · confidence: measured · type: analysis
Consumer inflation expectations divergence correlated with self-reported political affiliations, with Democrat and Independent-leaning consumers reporting higher expectations.
evidence: paraphrase · confidence: stated · type: analysis
Brief federal-reserve · 2026-03-26

Consumer vs. Professional Inflation Expectations Diverge Sharply in 2025

Cleveland Fed research reveals a significant gap between professional and consumer inflation expectations, with consumer anchoring deteriorating along political lines while professionals remain near the 2% target.
Record manifest · text
"source": "Federal Reserve Bank of Cleveland", "people": ["Jerome Powell"], "organizations": ["Federal Reserve", "FOMC"], "topics": ["Inflation", "Consumer Expectations"], "assertions_count": 11

See Full Schema · Free API Keys Included

Get the complete manifest schema reference and 1,000 free manifests/month.

Schema PDF + Free credentials delivered instantly.

From search to listen.

Search is a noisy prior from a world built for audiences with finite attention hours. Every page and result was shaped by what humans would click, read, and share. The ceiling was human time. The incentive was human attention.

Agents don't search. They listen.

The promise of AI isn't better search. It's surfacing insights you'd never have searched for. Reasoning systems have infinite attention hours—they ingest continuously, maintain state, and reason between state changes. The paradigm shifts from retrieval to receival. Synorb builds for that shift.

Read the full thesis →

Choose your interface.

Operator-grade contracts
Stable IDs · Provenance · Lineage · Versioning · Refresh cadence · Deterministic formats · Observability
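The contract terms above can be made concrete with a sketch of what a single Record might carry. This is illustrative only; the field names below (id, version, provenance, lineage, refresh_cadence) are assumptions for the sketch, not the published schema — the real contract is in the schema reference.

```json
{
  "id": "rec_all-in_e259_0042",
  "version": 3,
  "provenance": {
    "source": "All-In Podcast",
    "media_format": "audio",
    "published_at": "2026-03-21T00:00:00Z"
  },
  "lineage": ["stream_all-in", "brief_all-in_2026-03-21"],
  "refresh_cadence": "on_publish",
  "format": "application/json"
}
```

The point of the sketch: every Record resolves to a stable ID, names the upstream source it was derived from, and versions each revision, so an agent can diff state changes deterministically instead of re-scraping.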

Your agent can start here.

No signup form. Your agent gets credentials and starts querying immediately.

1. Get free credentials
curl -s https://synorb.ai/connect
Returns: api_key, api_secret, mcp_token
1,000 manifests/mo. No signup form. Secret shown once; save it.
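The credential payload from step 1 can be consumed directly by an agent. The sketch below parses a stand-in response offline; the field names come from the step above, but the sample values and the sed-based extraction are illustrative, not an official client.

```shell
# Stand-in for: response=$(curl -s https://synorb.ai/connect)
# Field names (api_key, api_secret, mcp_token) are from the docs above;
# the values here are fake so the snippet runs without network access.
response='{"api_key":"sk_example","api_secret":"shh_example","mcp_token":"mcp_example"}'

# Pull each field out with sed (no jq dependency).
api_key=$(printf '%s' "$response" | sed -n 's/.*"api_key":"\([^"]*\)".*/\1/p')
mcp_token=$(printf '%s' "$response" | sed -n 's/.*"mcp_token":"\([^"]*\)".*/\1/p')

echo "api_key=$api_key"
echo "mcp_token=$mcp_token"
```

In a live run, replace the stand-in with the real curl call and export the values before querying the API; remember the secret is shown only once.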
2. Connect your coding assistant
Claude Code
  claude mcp add synorb --transport sse URL
Cursor / Windsurf
  { "mcpServers": { "synorb": { "url": "..." } } }
Codex / Copilot
  curl -s synorb.ai/connect?format=md
Lovable / Bolt / v0
  curl -s synorb.ai/connect?format=md
Or use the REST API directly
  curl -H "api-key: YOUR_API_KEY" -H "secret: YOUR_API_SECRET" "https://api.synorb.ai/streams?page_size=5"
Full API reference · agents.md · llms.txt · openapi.json
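Once credentials are in hand, a first call would typically list available streams. Because the response schema isn't shown on this page, the payload below is a stand-in: it assumes a data array of stream objects carrying stable id fields, which is an illustration rather than the documented shape.

```shell
# Stand-in for: streams=$(curl -s -H "api-key: $API_KEY" \
#   -H "secret: $API_SECRET" "https://api.synorb.ai/streams?page_size=5")
# The payload and its field names are assumed for this sketch.
streams='{"data":[{"id":"stream_all-in"},{"id":"stream_sec-filings"}]}'

# List the stable stream IDs: grep -o pulls each "id" pair,
# sed strips the JSON framing around the value.
ids=$(printf '%s' "$streams" | grep -o '"id":"[^"]*"' | sed 's/"id":"\([^"]*\)"/\1/')
printf '%s\n' "$ids"
```

Because IDs are stable, an agent can persist them and poll only the streams it subscribes to, rather than re-discovering sources on every run.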

Give your agents the context they deserve.

Get Credentials