Ahrefs tracks your rank in Google’s blue links. This tracks your rank in ChatGPT’s answer.
Same concept, new surface. When ChatGPT answers a question and lists seven brands, you have a rank. Position 1 is the brand named first. Position 7 is the one buried at the bottom. That rank changes over time as the model’s grounding data updates and as competitors invest in content. If you do SEO for a living, you already know how valuable rank tracking is. The same logic applies, just with different surfaces.
This post walks through building a rank tracking tool for AI answers in around 100 lines of code. Daily snapshots, diff detection, Slack alerts on drops. The whole thing runs on a $5 VPS and costs about $15 to $25 a month in API charges.
What a rank tracking tool actually does
Three jobs.
Snapshot. Run a list of queries through every AI engine you care about. Capture where your brand appears in each answer. Save it.
Diff. Compare today’s snapshot to yesterday’s. Compute the change in rank per (query, brand, surface) tuple. A rank that went from 3 to 6 is a 3-position drop. A rank that went from null to 4 is a new entry.
Alert. When a drop crosses a threshold, ping you. Most teams settle on 2 positions as the noise floor. Anything below that is normal model shuffle.
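Concretely, a day's snapshot can be one JSON file keyed by query. The shape below is an illustrative sketch that matches the fields the diff step reads (`providers`, `brands`, `name`, `rank`); the `sentiment` and `cited_url` field names are assumptions about the API response, not a documented schema:

```json
{
  "best CRM for small business": {
    "providers": {
      "chatgpt": {
        "brands": [
          { "name": "HubSpot", "rank": 1, "sentiment": "positive", "cited_url": "https://example.com/review" },
          { "name": "Pipedrive", "rank": 3, "sentiment": "neutral", "cited_url": null }
        ]
      },
      "perplexity": {
        "brands": [
          { "name": "HubSpot", "rank": 2, "sentiment": "positive", "cited_url": null }
        ]
      }
    }
  }
}
```

One file per day keeps the diff step trivial: load two files, walk the same paths in both.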
Why build it instead of buying a SaaS?
The market has products. Otterly, Siftly, BrandRadar, Profound, Conductor AgentStack. They cluster between $49 and $890 a month. They lock you into their workflows. You cannot easily change the alert logic, add new query templates, or export the raw data into your own pipelines without paying enterprise prices.
Building it yourself takes a weekend. The data layer is the hard part. Wiring up four LLM provider APIs (OpenAI, Anthropic, Google, Perplexity) plus a brand extraction layer plus citation parsing per provider is roughly six weeks of engineering. Or you call an API that already does that and write the rank tracking logic on top in an afternoon.
You are not building a rank tracker. You are building the part that compares two JSON arrays and sends a Slack message. The rest is one HTTP call.
The build
Pick 5 to 20 queries that match how your customers actually ask AI engines about your category. Not the keyword phrases SEO tools surface for Google. The conversational ones humans type into ChatGPT.
Add your brand plus 3 to 5 competitors. The API extracts ranks for all tracked brands in one call, so adding competitors is free on the data side.
{
  "queries": [
    "best CRM for small business",
    "alternatives to HubSpot",
    "what is the cheapest CRM with email marketing"
  ],
  "track_brands": ["HubSpot", "Pipedrive", "Salesforce", "Zoho", "Close"]
}

Call /v1/check with mode: all_live for each query. The response contains a per-surface map with each tracked brand's rank, sentiment, and the cited URL. Save the snapshot to disk or a database keyed by date.
import fs from "node:fs/promises";
import config from "./config.json" assert { type: "json" };

async function fetchRanks(query) {
  const res = await fetch("https://api.mentionsapi.com/v1/check", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.MENTIONSAPI_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      mode: "all_live",
      query,
      track_brands: config.track_brands,
    }),
  });
  if (!res.ok) throw new Error(`mentionsapi ${res.status}: ${await res.text()}`);
  return res.json();
}

const today = new Date().toISOString().slice(0, 10);
const snapshot = {};
for (const q of config.queries) {
  snapshot[q] = await fetchRanks(q);
}
await fs.mkdir("snapshots", { recursive: true });
await fs.writeFile(`snapshots/${today}.json`, JSON.stringify(snapshot, null, 2));
console.log(`Saved ${config.queries.length} queries to snapshots/${today}.json`);

Load yesterday's snapshot file. Walk through every (query, brand, surface) triple. Compute the rank delta. Keep only the rows where the absolute delta crosses the alert threshold.
import fs from "node:fs/promises";

const today = new Date().toISOString().slice(0, 10);
const yesterday = new Date(Date.now() - 86400000).toISOString().slice(0, 10);

const todaySnap = JSON.parse(await fs.readFile(`snapshots/${today}.json`, "utf8"));
const ySnap = JSON.parse(await fs.readFile(`snapshots/${yesterday}.json`, "utf8"));

const THRESHOLD = 2; // positions; anything smaller is normal model shuffle
const drops = [];
for (const [query, curr] of Object.entries(todaySnap)) {
  const prevDay = ySnap[query];
  if (!prevDay) continue; // query added after yesterday's run
  for (const surface of Object.keys(curr.providers)) {
    for (const brand of curr.providers[surface].brands || []) {
      const prev = prevDay.providers[surface]?.brands?.find(b => b.name === brand.name);
      if (!prev) continue; // new entry, not a rank change
      const delta = brand.rank - prev.rank; // positive = moved down the list
      if (Math.abs(delta) >= THRESHOLD) {
        drops.push({ surface, query, brand: brand.name, prev: prev.rank, today: brand.rank, delta });
      }
    }
  }
}
console.log(JSON.stringify(drops, null, 2));

Send a Slack webhook with the diff output. Format the message so it is scannable: the surface (ChatGPT, Perplexity, etc), the query, the brand, and the rank change. Most teams put alerts in a dedicated #rank-watch channel and mute it everywhere else.
async function notify(drops) {
  if (drops.length === 0) return;
  const lines = drops.map(d =>
    `• ${d.surface} | "${d.query}" | ${d.brand} ${d.delta > 0 ? "dropped" : "rose"} from #${d.prev} to #${d.today} (${d.delta > 0 ? "+" : ""}${d.delta})`
  );
  await fetch(process.env.SLACK_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      text: `*Rank changes detected (${drops.length})*\n` + lines.join("\n"),
    }),
  });
}

Schedule the whole pipeline (snapshot, diff, alert) with cron at a fixed time. 09:00 UTC works well for most teams.
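A minimal crontab entry, assuming the three steps live in one directory as snapshot.js, diff.js, and alert.js (the paths and script names are placeholders, adjust to your layout):

```shell
# Run the pipeline daily at 09:00. Edit with: crontab -e
0 9 * * * cd /opt/rank-watch && node snapshot.js && node diff.js && node alert.js >> cron.log 2>&1
```

Some cron implementations honor a CRON_TZ=UTC line at the top of the crontab; if yours does not, shift the hour to match your server's timezone so the run stays pinned to 09:00 UTC.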
What this costs
10 queries × $0.50 × 30 days = $150 worst case (every call live, no cache hits). With realistic cache hits on a daily run (most answers do not change day to day), the bill lands closer to $15 to $25 a month. Add 5 more queries or 3 more brands and you are still under $50 a month.
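The arithmetic as a sketch, with the per-call price and cache miss rate as assumptions to replace with your plan's real numbers:

```javascript
// PRICE_PER_LIVE_CALL is an assumption; check it against your actual plan.
const PRICE_PER_LIVE_CALL = 0.50; // USD per query, all_live across every surface

// cacheMissRate is the share of daily calls that actually hit the live engines.
function monthlyCost(queries, days, cacheMissRate) {
  return queries * PRICE_PER_LIVE_CALL * days * cacheMissRate;
}

console.log(monthlyCost(10, 30, 1.0));  // worst case, every call live: 150
console.log(monthlyCost(10, 30, 0.15)); // mostly-cached daily runs: about 22.50
```

The lever that matters is the miss rate: stable queries asked at the same time every day mostly return cached answers, which is what pulls the bill from the worst case down into the $15 to $25 range.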
When to skip building this
Three signals that buying a SaaS is the right call.
You have fewer than 5 queries. If your category is small and your customers ask AI engines the same one or two questions, the SaaS dashboards are actually fine. Otterly's $49 tier covers that case at a price that's hard to beat.
You have no engineering bandwidth. If you are a one-person marketing team, the build is not worth your time. You will spend more debugging cron than you save versus the SaaS price.
You need historical data going back months. Most SaaS tools have been collecting longitudinal data since 2024. Your tool starts collecting today. For a brand that needs trend lines from before they had this tool, the SaaS gives you that history immediately.
Frequently asked questions
How is rank tracking in AI answers different from Google rank tracking?
How often should I run the snapshot?
Does this work with cached results?
How do I track competitor ranks too?
Can I run this on a free tier?
How do I detect a meaningful rank drop versus noise?
Ship it this weekend
The whole build is one afternoon if you have the API key in hand. Start with 5 queries and one brand, get the snapshot pipeline working, run it twice manually to confirm the diff logic catches real changes. Then schedule it.
Add competitors in week two. Add Slack alerts in week three. By the end of the month you have something most agencies pay $200 a month for, except yours costs $15 and lets you change the alert logic whenever you want.