Building in public.
Starting from zero.
We apply our own GEO methodology to our own portfolio. This log documents every step: what we measured, what we built, what changed, and what did not. Real data, attributed, dated, verifiable.
Starting from zero: our portfolio citation baseline
Before building citation surfaces, we measured where we actually stand. We ran 16 queries against five AI platforms, each one a question a potential buyer in one of our domains would realistically ask. The results are the starting point for everything that follows.
Three brands. Sixteen queries. Five platforms (ChatGPT, Perplexity, Google AI Overviews, Microsoft Copilot, Gemini). Here is what AI systems currently say about us:
3 brands audited · 16 queries tested · 0 cited (starting point)
Source: Manual audit across 5 AI platforms, 2026-03-13/14. Same queries will be re-run after citation surfaces are published. If nothing changes, we report that too.
What we found
Zero third-party mentions for any of our three brands. AI systems cite content that other sites reference. Wikipedia articles, industry listicles, comparison pages, and expert roundups drive AI citations. None of our brands appear in any of these contexts. This is the single biggest gap.
Surface has the clearest open field. The GEO category is new. Competitors are positioning, but nobody dominates. The space is crowded with language but thin on demonstrated methodology.
echology faces the hardest positioning challenge. "Document intelligence" returns four Azure results before anything else. We need query targets that hyperscalers do not own.
Spec has the most tractable path. Niche market, moderate competition, already indexing for two queries. Category queries like "AEC document processing AI" are winnable.
How we measured
Manual queries across ChatGPT, Perplexity, Google AI Overviews, Microsoft Copilot, and Gemini. For each query, we recorded: whether our brand was cited, position in the response, exact snippet, source URL if attributed, and which competitors appeared in the same answer.
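A minimal sketch of what one of those records looks like, assuming a TypeScript representation. Every name here is illustrative, not our actual schema:

```typescript
// Illustrative shape for one audit record. Field names are
// hypothetical; they mirror the fields described above.
type Platform =
  | "ChatGPT"
  | "Perplexity"
  | "Google AI Overviews"
  | "Microsoft Copilot"
  | "Gemini";

interface AuditRecord {
  brand: string;             // which of the three brands the query targets
  query: string;             // the exact buyer question asked
  platform: Platform;        // where the query was run
  cited: boolean;            // did the brand appear in the answer?
  position: number | null;   // rank within the response, null if absent
  snippet: string | null;    // exact text of the mention, if any
  sourceUrl: string | null;  // attributed URL, when the platform shows one
  competitorsSeen: string[]; // competitors named in the same answer
  runDate: string;           // ISO date the query was run
}
```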
This is our starting point. We will re-run these same queries after publishing citation surfaces and report what changed. If nothing changes, we will report that too.
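Once the re-audit runs, comparing the two snapshots is mechanical. A sketch under the assumed `AuditRecord` shape above; `citationChanges` is a hypothetical helper, not published tooling:

```typescript
// Compare baseline and re-audit records keyed by brand + query + platform.
function citationChanges(
  baseline: AuditRecord[],
  reAudit: AuditRecord[]
): string[] {
  const key = (r: AuditRecord) => `${r.brand}|${r.query}|${r.platform}`;
  const before = new Map(baseline.map((r) => [key(r), r] as const));

  const changes: string[] = [];
  for (const after of reAudit) {
    const prior = before.get(key(after));
    if (!prior) continue; // only compare queries present in both runs
    if (prior.cited !== after.cited) {
      changes.push(
        `${after.brand} on ${after.platform}: "${after.query}" ` +
          `${after.cited ? "gained" : "lost"} a citation`
      );
    }
  }
  return changes; // empty means nothing changed, and we report that too
}
```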
What happens next
We have published answer pages across all three brand sites targeting the highest-value queries from this audit. Each page is structured with Schema.org markup, FAQ format, organizational attribution, and cross-links to related content. These are the citation surfaces.
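As an illustration of the Schema.org piece, here is a minimal sketch of the FAQPage JSON-LD an answer page can embed. The function, organization name, and question text are placeholders, not our production markup:

```typescript
// Build FAQPage JSON-LD for an answer page. Placeholder content;
// the real pages target the highest-value queries from the audit.
interface Faq {
  question: string;
  answer: string;
}

function faqPageJsonLd(faqs: Faq[], orgName: string, orgUrl: string) {
  return {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map((f) => ({
      "@type": "Question",
      name: f.question,
      acceptedAnswer: { "@type": "Answer", text: f.answer },
    })),
    // Organizational attribution, so retrieval systems can tie
    // the answer back to a named publisher.
    publisher: { "@type": "Organization", name: orgName, url: orgUrl },
  };
}

// Usage: serialize into a <script type="application/ld+json"> tag.
const jsonLd = JSON.stringify(
  faqPageJsonLd(
    [
      {
        question: "What is a citation gap audit?",
        answer: "A manual audit of whether AI platforms cite your brand.",
      },
    ],
    "Example Org",
    "https://example.com"
  )
);
```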
The next entry in this log will report the re-audit: same queries, same platforms, measured against this baseline. The methodology either works or it does not, and we will show which.
Methodology note: All queries were run between 2026-03-13 and 2026-03-14. Results reflect AI model states at that time. AI citation behavior changes as models update their training data and retrieval sources. This baseline is a snapshot, not a permanent state.
Want to see your own baseline?
A Citation Gap Audit runs the same methodology against your organization. Same rigor. Same transparency.
Request an Audit