Tutorial · April 28, 2026

How to build a GEO tool

The complete build of a Generative Engine Optimization tool. Four atoms, six surfaces, daily cadence, real cost numbers. One weekend of work.

TL;DR
A GEO tool tracks four atoms (presence, prominence, citation rate, sentiment) across six AI surfaces. Build it in 5 steps using MentionsAPI as the data layer. Real cost: $80 to $150 a month. Comparable SaaS tools cost $200 to $890. The full recipe with code is in this post.

Every GEO tool on the market measures the same four things in slightly different ways.

Profound, BrandRadar, Otterly, Siftly, HubSpot AEO, Conductor AgentStack. They differentiate on dashboard UX, alerting workflows, and which integrations they ship. They do not differentiate on the underlying data. The data is the same: did my brand show up in an AI-generated answer, where in the answer, was my URL cited, and what was the sentiment.

That data layer is the hard part. Most of those tools are building it badly, in-house, with bugs they will not admit. We checked and found the API/UI gap reaches 96% of queries on ChatGPT, which means most GEO dashboards are measuring the wrong half of reality.

This post walks through building a GEO tool that gets the data right. The atoms-and-surfaces framework, the API calls, the rollup, the dashboard, the alerts. About 200 lines of code. One weekend.

The four atoms of GEO

Every GEO measurement reduces to one of these four:

Presence. Was my brand mentioned in this answer? Boolean. Roll it up across queries to get presence_rate per surface.

Prominence. If yes, where? Position 1 (named first) or position 7 (buried)? Integer rank.

Citation rate. Was my URL cited as a source for this answer? Different from "mentioned" because mentions can happen without citing my domain.

Sentiment. If mentioned, was the framing positive, neutral, or negative? "Linear is the keyboard-first alternative to Jira" is positive. "Linear had outages last week" is negative.

All four are independent. A brand can be mentioned (presence yes) without being cited (citation no). A brand can be ranked first (prominence high) with neutral sentiment. The atoms-and-surfaces framework gives you a 6 surface × 4 atom matrix per query, which is the right level of granularity to optimize against.
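Concretely, one cell of that matrix can be sketched as a small record. The field names here are illustrative, not the exact MentionsAPI schema:

```javascript
// One (query, surface) cell of the 6 × 4 matrix: the four atoms for one
// query on one surface. Field names are illustrative placeholders.
const cell = {
  query: "best CRM for small business",
  surface: "chatgpt",
  presence: true,        // atom 1: was the brand mentioned at all
  prominence: 2,         // atom 2: rank of first mention (null if absent)
  cited: false,          // atom 3: was the brand's own URL cited as a source
  sentiment: "positive", // atom 4: framing of the mention
};
```

Note how this cell demonstrates the independence above: the brand is present and ranked second, yet its domain is not cited.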

The six surfaces (and why all of them matter)

ChatGPT, Claude, Gemini, Perplexity, Google AI Overviews, Bing Copilot. Each surface has a different audience and different citation behavior. Skipping any of them means missing real visibility.

ChatGPT is the volume leader (largest user base). API and UI diverge on 96% of queries.

Claude has the most engineering-heavy audience. Lowest citation count per answer (often 0-2).

Gemini has the most aggressive query fan-out (3-7 sub-queries). UI scraping is mandatory because the API misses the fan-out completely.

Perplexity is the most citation-heavy: 6-10 cited URLs per answer typically. Lowest API/UI gap (42%).

Google AI Overviews trigger on 48% of tracked queries. Source URLs there function as the new top three blue links.

Bing Copilot is the smallest by volume but disproportionately important for Microsoft and enterprise audiences. No public API exists.

Every GEO tool needs the same data. Most are building it themselves, badly. The data layer is a separately fundable wedge. That is the picks-and-shovels framing: get the data layer in one call. MentionsAPI fans out across all six AI surfaces and returns the four atoms in a normalized response. PAYG from $10. $1 free signup credit.

The build

Define the query portfolio and tracked brands

Aim for 30 queries split evenly across three buckets: category-defining, comparison, use-case. Track yourself plus 3-5 competitors.

config.json
{
  "brand": "Pipedrive",
  "competitors": ["HubSpot", "Salesforce", "Zoho", "Close"],
  "queries": [
    "best CRM for small business",
    "best CRM for sales teams",
    "lightweight CRM for consultants",
    "Pipedrive vs HubSpot",
    "alternatives to Salesforce",
    "Pipedrive vs Salesforce for SMB",
    "CRM for a 5-person startup",
    "what is the cheapest CRM with email automation",
    "best CRM with kanban view",
    "CRM with WhatsApp integration"
  ]
}
Fan out /v1/check across all six surfaces

One call per query, with mode: all_live. The response includes a providers map covering all six surfaces. Each cell has the four atoms.

snapshot.mjs
import config from "./config.json" with { type: "json" }; // Node 20.10+; older Node versions used `assert` instead of `with`
import fs from "node:fs/promises";

async function checkOne(query) {
  const res = await fetch("https://api.mentionsapi.com/v1/check", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.MENTIONSAPI_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      mode: "all_live",
      query,
      track_brands: [config.brand, ...config.competitors],
    }),
  });
  if (!res.ok) throw new Error(`check failed for "${query}": HTTP ${res.status}`);
  return res.json();
}

const today = new Date().toISOString().slice(0, 10);
const snapshot = {};
for (const q of config.queries) {
  snapshot[q] = await checkOne(q);
}
await fs.mkdir("snapshots", { recursive: true });
await fs.writeFile(`snapshots/${today}.json`, JSON.stringify(snapshot, null, 2));
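The loop above awaits each query one at a time, which is the safest default. If your plan's rate limit allows parallel calls, a small concurrency pool speeds the snapshot up; this helper is a generic sketch (the limit of 3 is an assumption, check your actual rate limits):

```javascript
// Run an async fn over items with at most `limit` calls in flight at once.
// Results come back in the same order as the input items.
async function mapWithConcurrency(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++; // claim the next index (single-threaded, so no race)
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}

// Usage with the snapshot pipeline:
// const answers = await mapWithConcurrency(config.queries, 3, checkOne);
```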
Roll up the four atoms

Walk every (query, surface) cell. Aggregate per surface. The output is your dashboard data: 6 surfaces × 4 atoms = 24 numbers that summarize your GEO state.

rollup.mjs
function rollup(snapshot, brand) {
  const SURFACES = ["chatgpt", "claude", "gemini", "perplexity", "ai_overview", "bing_copilot"];
  const result = {};

  for (const surface of SURFACES) {
    let mentions = 0, total = 0, ranks = [], citationHits = 0, sentScores = [];

    for (const query of Object.keys(snapshot)) {
      const cell = snapshot[query].providers[surface];
      if (!cell) continue;
      total++;

      const me = cell.brands?.find(b => b.name === brand);
      if (me?.mentioned) {
        mentions++;
        if (me.rank !== null) ranks.push(me.rank);
        if (me.sentiment !== undefined) sentScores.push(me.sentiment);
      }

      // Naive domain heuristic; swap in your canonical domain(s) for production use.
      const cited = cell.citations?.some(c => c.url.includes(brand.toLowerCase() + ".com"));
      if (cited) citationHits++;
    }

    result[surface] = {
      presence_rate: total > 0 ? mentions / total : 0,
      avg_rank: ranks.length > 0 ? ranks.reduce((a, b) => a + b, 0) / ranks.length : null,
      citation_rate: total > 0 ? citationHits / total : 0,
      sentiment_score: sentScores.length > 0 ? sentScores.reduce((a, b) => a + b, 0) / sentScores.length : null,
    };
  }

  return result;
}

import fs from "node:fs/promises";

const today = new Date().toISOString().slice(0, 10);
const snapshot = JSON.parse(await fs.readFile(`snapshots/${today}.json`, "utf8"));
const result = rollup(snapshot, "Pipedrive");
await fs.mkdir("rollups", { recursive: true });
await fs.writeFile(`rollups/${today}.json`, JSON.stringify(result, null, 2));
console.log(result);
Schedule daily and store snapshots

Run the snapshot pipeline daily via cron, GitHub Actions scheduled workflows, Cloudflare Cron Triggers, or any equivalent. Save snapshots keyed by date so you can compute deltas later.

cron entry
# Run daily at 09:00 UTC
0 9 * * * cd /path/to/geo-tool && node snapshot.mjs && node rollup.mjs && node alert.mjs
Render the dashboard and alert on changes

For the dashboard, the simplest path is a static page rendered from the snapshot data with Recharts or similar. Plot the four atoms per surface, plus the overall GEO score over time.

For alerts, diff today's rollup against yesterday's. Send to Slack on meaningful changes: rank drops of 2+, presence drops of 10%+, new competitor entries, citation gaps that opened up.

alert.mjs
import fs from "node:fs/promises";

const today = new Date().toISOString().slice(0, 10);
const yesterday = new Date(Date.now() - 86400000).toISOString().slice(0, 10);

const read = async (path) => JSON.parse(await fs.readFile(path, "utf8"));
const t = await read(`rollups/${today}.json`);
const y = await read(`rollups/${yesterday}.json`).catch(() => null);
if (!y) process.exit(0); // first run: no baseline to diff against yet

const alerts = [];
for (const surface of Object.keys(t)) {
  const presenceDelta = t[surface].presence_rate - y[surface].presence_rate;
  if (Math.abs(presenceDelta) >= 0.10) {
    alerts.push(`${surface}: presence ${presenceDelta > 0 ? "up" : "down"} by ${Math.round(Math.abs(presenceDelta) * 100)}pp`);
  }
}

if (alerts.length > 0) {
  await fetch(process.env.SLACK_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: `*GEO changes detected*\n` + alerts.join("\n") }),
  });
}
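The same diff pattern extends to the other alert conditions listed above. A sketch for rank drops of 2+ positions, using the avg_rank field from the rollup step (higher number means a worse position):

```javascript
// Flag surfaces where average rank worsened by 2 or more positions day over day.
function rankDropAlerts(todayRollup, yesterdayRollup) {
  const alerts = [];
  for (const surface of Object.keys(todayRollup)) {
    const t = todayRollup[surface]?.avg_rank;
    const y = yesterdayRollup[surface]?.avg_rank;
    if (t == null || y == null) continue; // no rank data on one of the days
    if (t - y >= 2) {
      alerts.push(`${surface}: avg rank slipped from ${y.toFixed(1)} to ${t.toFixed(1)}`);
    }
  }
  return alerts;
}
```

Push these into the same alerts array before posting to Slack.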

What this costs

30: queries in a typical GEO portfolio
$120: monthly cost at daily snapshots, mixed cache
6 wks: time to build the data layer from scratch
$890: HubSpot AEO Marketing Hub minimum (for comparison)

30 queries × $0.50 × 30 days = $450 worst case, with every call live. With a cache hit rate around 60%, real cost lands at $80 to $150 a month. Adding competitors or queries scales cost linearly.
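The arithmetic generalizes into a one-liner you can plug your own numbers into. The $0.50 per live query is the assumption from the example above, and cache hits are modeled as free here for simplicity (actual cached-call pricing may differ; check current MentionsAPI rates):

```javascript
// Monthly cost estimate: daily queries × price per live query × days,
// with cache hits assumed free. cacheHitRate = 0 is the worst case.
function monthlyCost(queriesPerDay, pricePerQuery, days, cacheHitRate) {
  return queriesPerDay * pricePerQuery * days * (1 - cacheHitRate);
}

// Worst case from the text: 30 queries, $0.50 each, 30 days, no cache.
monthlyCost(30, 0.5, 30, 0); // 450
```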

What separates a good GEO tool from a mediocre one

Three things.

UI scraping, not API-only. The 96% API/UI divergence on ChatGPT means API-only tools measure something almost completely different from what real users see. Insist on mode: all_live.

The four atoms separately. Most mediocre tools collapse them into one "AI visibility score." That number is a vibe, not a measurement. The atoms separately are what drives action.

Longitudinal storage from week zero. The score on day one is meaningless. The score on day one minus day thirty is the whole point. Build snapshot storage before you build the dashboard.

The right north star metric: not "what is our visibility" but "is the trend up and to the right." Every other GEO metric is in service of that question.
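That north-star question can itself be computed once you have longitudinal snapshots: fit a simple least-squares slope to any stored daily series, such as presence_rate per surface. A positive slope means up and to the right. This is a generic sketch, not a MentionsAPI feature:

```javascript
// Least-squares slope of a daily metric series (units: metric change per day).
// E.g. trendSlope of 30 days of presence_rate values.
function trendSlope(series) {
  const n = series.length;
  if (n < 2) return 0;
  const xMean = (n - 1) / 2; // days are 0, 1, ..., n-1
  const yMean = series.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (i - xMean) * (series[i] - yMean);
    den += (i - xMean) ** 2;
  }
  return num / den;
}
```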

Frequently asked questions

What is GEO and what does a GEO tool do?
GEO stands for Generative Engine Optimization. A GEO tool measures whether and how your brand shows up inside AI-generated answers across ChatGPT, Claude, Gemini, Perplexity, Google AI Overviews, and Bing Copilot. The four atoms it measures: presence (was my brand mentioned), prominence (where in the answer), citation rate (was my URL cited as a source), and sentiment (positive, neutral, negative).
How is a GEO tool different from a traditional SEO tool?
SEO tools track blue-link rankings, keywords, and backlinks. GEO tools track AI answer surfaces. Different data, different methodology, different cadence (AI engines update grounding faster than Google indexes blue links). The two are complementary; most teams run both.
How long does it take to build a GEO tool from scratch?
About 6 weeks if you build the data layer yourself: provider clients for 4 LLMs, brand-extraction NLP, citation parsing per provider, URL canonicalization, UI scraping for engines without public APIs. About one weekend if you use an API like MentionsAPI as the data layer and write the dashboard layer on top.
Do I need to ship UI scraping or is API data enough?
You need UI data. Per our 1,000-prompt teardown, ChatGPT API answers diverge from UI answers on 96% of queries, Gemini on 88%, Claude on 71%, and Perplexity on 42%. A GEO tool that only measures API output is measuring the wrong thing. Use mode all_live to get both.
What does a GEO tool cost to run?
A typical setup (one brand, three competitors, 30 queries, daily snapshots) runs $80 to $150 a month on MentionsAPI. Compare to SaaS GEO tools at $49 (Otterly, single brand, limited queries) to $890 (HubSpot AEO Marketing Hub minimum). DIY is cheaper above the smallest tier and gives you full data ownership.
Can I run this on a serverless or edge platform?
Yes. The build pipeline is stateless except for the snapshot storage. You can run the daily cron on Cloudflare Workers, Vercel Cron, or AWS Lambda with EventBridge. Snapshots fit in any KV store (Cloudflare KV, Upstash Redis, Vercel KV).
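A minimal sketch of the Cloudflare Workers variant, assuming a KV namespace bound as SNAPSHOTS, MENTIONSAPI_KEY stored as a Worker secret, and the cron schedule configured in wrangler.toml (the single-query body mirrors step 2; a real Worker would loop over the full portfolio):

```javascript
// Hypothetical cron-triggered Worker: take a snapshot, store it in Workers KV.
const worker = {
  async scheduled(event, env, ctx) {
    const today = new Date().toISOString().slice(0, 10);
    const res = await fetch("https://api.mentionsapi.com/v1/check", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${env.MENTIONSAPI_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        mode: "all_live",
        query: "best CRM for small business",
        track_brands: ["Pipedrive"],
      }),
    });
    // Key snapshots by date so deltas can be computed later, as in the cron setup.
    await env.SNAPSHOTS.put(today, JSON.stringify(await res.json()));
  },
};

export default worker;
```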

Ship it this weekend

Day one: write the snapshot pipeline, get the data flowing into JSON files. Day two: write the rollup and the dashboard. Day three: schedule the cron and start collecting trend data.

After four weeks of daily snapshots you have a month of trend data and a complete GEO tool that beats most $200-a-month SaaS dashboards on data quality alone, with full ownership of the data, the alerts, and the workflows.

The shovel is the API. The pickaxe is the four atoms. The mine is whatever category you compete in.

Nikhil Kumar
Founder, MentionsAPI

Growth marketer at the intersection of marketing, product, and technology. 8+ years across startups and scale-ups in India, Switzerland, and the Netherlands. Founder of Landkit (landkit.pro).

Build the GEO tool every team is paying $200/mo for.

$1 free signup credit. PAYG wallet from $10. The data layer that is the entire reason most GEO SaaS exists.