
foundation multilingual stakeholder-management

O-1 — AVO Competitive Analysis & Benchmarking

What this action is

O-1 is the structured assessment of the brand’s AVO position relative to its category competitors. It comprises three components: identifying the brand’s actual competitive set in AI-mediated discovery (which may differ from the brand’s traditional SEO competitive set), measuring the competitive set’s AS and VS where measurable, and producing the benchmark report that contextualizes the brand’s position within its category.

The work is analytical rather than executional. O-1 does not change the brand’s authority or visibility; it establishes the comparative baseline that subsequent action selection refers to.

Why this action matters in AVO

AS and VS are absolute measurements (0-100 scores), but action prioritization is comparative. A brand with AS = 35 in a category where the median competitor is at 25 has a different operational reality than a brand with AS = 35 in a category where the median competitor is at 65. The same AS implies different urgency, different action priority, and different stakeholder framing.
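
As a minimal sketch of that category-relative framing, assuming AS values are plain 0-100 numbers (the helper name and example scores are illustrative, not part of the AVO specification):

```python
from statistics import median

def category_position(brand_as: float, competitor_as: list[float]) -> str:
    """Frame an absolute AS against the category's distribution."""
    cat_median = median(competitor_as)
    share_below = sum(s < brand_as for s in competitor_as) / len(competitor_as)
    return (f"AS {brand_as:.0f} vs. category median {cat_median:.0f} "
            f"(ahead of {share_below:.0%} of measured competitors)")

# The same AS = 35 reads very differently in two hypothetical categories:
print(category_position(35, [18, 22, 25, 31, 40]))  # median 25: upper half
print(category_position(35, [50, 58, 65, 70, 81]))  # median 65: far behind
```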

Competitive context shapes engagement scope, action sequencing, and reporting. Without O-1, the practitioner is operating on absolute measurements without category-relative context. The brand stakeholder receives reporting that says “AS = 35” without indicating whether that is good, bad, or in line with category peers.

O-1 also surfaces something AS measurement does not: which competitors the AI systems treat as the brand’s actual competitive set. The brand stakeholder typically has a self-defined competitive set based on commercial competition. The AI’s competitive set may overlap but is not identical — AI systems group brands based on the queries those brands appear in, which may produce groupings the brand stakeholder did not anticipate.

What it requires before you can attempt it

Hard prerequisites:

| Prerequisite | Why required |
| --- | --- |
| Brand stakeholder agreement on engagement scope | O-1 produces comparative reporting that requires a defined “brand” to compare. Engagement scope must be defined before O-1 can identify the relevant comparison set. |
| Initial Focus declaration | The competitive set is defined relative to the brand’s Focus. Without a declared Focus, O-1 has no axis along which to identify competitors. |

Soft prerequisites:

| Prerequisite | Why it helps |
| --- | --- |
| Stakeholder-supplied initial competitive set | The brand stakeholder’s view of competitors is one input into competitive identification. O-1 expands and refines from there. |
| Access to traditional SEO competitive data | If the brand has SEMrush, Ahrefs, or similar tools, prior competitive data accelerates identification. |

Stage assessment: O-1 is a foundations-stage action and is typically conducted at engagement start, before substantial action work begins. It is also re-conducted periodically (at quarterly review or when significant brand or category changes occur) because competitive sets shift over time.

What gets done in this action

O-1 work proceeds through four phases.

Phase 1 — Stakeholder-input competitive set. The brand stakeholder identifies competitors as they understand them. This produces the starting comparison set, typically 5-15 brands. The stakeholder’s view is informed by commercial competition, market positioning, and brand affinity — useful inputs but rarely the AI-mediated competitive reality.

Phase 2 — AI-mediated competitive identification. Probes are run against the brand’s Focus across measurement platforms. The brands that appear most frequently in AI responses to category-relevant queries form the AI-mediated competitive set. This set may extend, narrow, or substantially diverge from the stakeholder’s view.
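
A sketch of that frequency-based identification, under the assumption that brand mentions have already been extracted from each probe response; the 20% appearance threshold and the function name are illustrative choices, not prescribed by O-1:

```python
from collections import Counter

def ai_mediated_set(probe_mentions: list[list[str]],
                    min_share: float = 0.2) -> set[str]:
    """Brands appearing in at least min_share of probe responses.

    probe_mentions holds, per probe, the brand names extracted from the
    AI response; the extraction step itself is out of scope here.
    """
    counts = Counter(b for mentions in probe_mentions for b in set(mentions))
    threshold = min_share * len(probe_mentions)
    return {brand for brand, n in counts.items() if n >= threshold}
```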

When the stakeholder set and the AI-mediated set diverge significantly, the divergence itself is reportable insight. A stakeholder view that includes brands the AI doesn’t recognize, or excludes brands the AI consistently surfaces, indicates either an outdated competitive view or an AI competitive set that doesn’t match commercial reality. Both are useful findings.
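
The comparison itself reduces to set arithmetic; a sketch, with the three labels as illustrative reporting categories:

```python
def divergence(stakeholder_set: set[str], ai_set: set[str]) -> dict[str, set[str]]:
    """Partition the two views so the divergence can be reported directly."""
    return {
        "shared": stakeholder_set & ai_set,
        "stakeholder_only": stakeholder_set - ai_set,  # AI doesn't surface these
        "ai_only": ai_set - stakeholder_set,           # unanticipated groupings
    }
```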

Phase 3 — Per-competitor measurement. For each brand in the consolidated competitive set, AS measurement is conducted (where the brand’s domain is in scope) and VS measurement is conducted (where the brand has Focus and prompt coverage). The output is a category-level matrix of AS and VS measurements positioning every brand in the competitive set.

When competitor brands are not in active engagement scope (which is typical), AS measurement is conducted as a one-time benchmark rather than as ongoing tracking. The benchmark suffices for context.
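
One plausible shape for the Phase 3 output, assuming a simple row-per-brand matrix; field names, brand labels, and scores are illustrative:

```python
from dataclasses import dataclass

@dataclass
class CompetitorRow:
    brand: str
    as_score: float | None   # None when the domain is out of measurement scope
    vs_score: float | None   # None without Focus and prompt coverage
    benchmark_only: bool     # True for one-time benchmark, False for ongoing tracking

matrix = [
    CompetitorRow("brand-under-engagement", 35.0, 41.0, benchmark_only=False),
    CompetitorRow("competitor-a", 62.0, 55.0, benchmark_only=True),
    CompetitorRow("competitor-b", 28.0, None, benchmark_only=True),
]
```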

Phase 4 — Benchmark report. The matrix is structured into a competitive-position report. The brand’s specific position is contextualized: where it sits in the AS distribution of the category, where it sits in the VS distribution, which competitors are particularly strong on which pillars, which patterns suggest the category’s authority leadership profile.
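
A sketch of the distribution summary such a report might lead with, reusing the example numbers from the answer quoted in the next subsection; the brand names and the choice to exclude the brand from its own category median are illustrative:

```python
from statistics import median

def benchmark_summary(brand: str, as_by_brand: dict[str, float]) -> str:
    """Contextualize the brand's AS within the category distribution."""
    others = {b: s for b, s in as_by_brand.items() if b != brand}
    leader, leader_as = max(others.items(), key=lambda kv: kv[1])
    return (f"{brand}: AS {as_by_brand[brand]:.0f}; category median "
            f"{median(others.values()):.0f}, leader {leader} at {leader_as:.0f} "
            f"(gap {leader_as - as_by_brand[brand]:.0f})")

print(benchmark_summary("the-brand",
                        {"the-brand": 35, "leader-x": 62, "comp-a": 28, "comp-b": 24}))
# the-brand: AS 35; category median 28, leader leader-x at 62 (gap 27)
```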

What success looks like

A successful O-1 produces:

  • A defined competitive set that the brand stakeholder accepts and that reflects AI-mediated competitive reality
  • Comparative AS and VS measurements positioning the brand within the set
  • Pattern recognition: which pillars distinguish the category’s leaders, which datapoints separate the brand from its strongest competitors, which competitive patterns inform action sequencing
  • Stakeholder-ready reporting that contextualizes the brand’s AS and VS in category terms rather than as isolated numbers

Beyond the report itself, success is a practitioner who, when asked “is AS = 35 good?”, can answer something like “in this category the median is 28 and the leader is at 62, so the brand is in the upper half but has substantial ground to close against the leader, on the Generative pillar specifically.” That kind of category-grounded answer is what O-1 produces.

What failure looks like

| Failure pattern | What it signals |
| --- | --- |
| Competitive set is stakeholder-only, without AI-mediated verification | The set may not reflect AI-mediated competitive reality. Subsequent action prioritization may be misdirected. |
| Per-competitor measurements are not run because the brand stakeholder didn’t approve scope | The benchmark is qualitative rather than quantitative. Useful, but less rigorous. |
| The matrix is generated but no pattern recognition is layered on top | Reporting becomes a data dump without diagnostic insight. The brand stakeholder receives numbers but not interpretation. |
| Benchmark is run once and never refreshed | Competitive context drifts; old benchmarks become misleading after several quarters. |

Common mistakes

| Mistake | Better approach |
| --- | --- |
| Accepting the stakeholder’s competitive set without AI-mediated verification | Always run probes to identify the AI-mediated set. Divergence is itself insight. |
| Including too few competitors in the set | A set of 3-5 produces noisy benchmarks. Aim for 8-15 to support distributional analysis. |
| Including competitors that are not actually in the brand’s category | This clouds the benchmark. Use Focus alignment to filter the set, not just keyword overlap. |
| Reporting AS as the only competitive dimension | AS and VS together produce richer competitive positioning. AS alone misses the prediction-vs-outcome distinction. |
| Treating the benchmark as static | Refresh quarterly or when significant changes occur. |
| Letting the brand stakeholder pick comparison brands based on aspirational positioning | The benchmark must reflect actual competitive reality; aspirational comparisons produce misleading findings. |

Datapoints affected

O-1 does not directly lift datapoints in the brand’s AS measurement. It is analytical work; it produces context. That context, however, indirectly informs:

| Affected | Via which mechanism |
| --- | --- |
| Action selection across all subsequent OMG work | The benchmark informs which datapoints to prioritize, which determines which actions to select. |
| Engagement scope refinement | Findings about competitive intensity may inform language scope, Focus refinement, or engagement intensity. |
| Reporting and stakeholder management | The benchmark provides the comparative context that turns absolute measurements into actionable findings. |

Multilingual considerations

Per-language competitive sets are independent. A brand operating in five languages may have five distinct competitive sets that overlap partially or not at all.

  • The brand’s Japanese competitors may include Japanese-domestic brands the global competitive set does not include
  • The Indonesian competitive set may be smaller (less category saturation) than the English set
  • Per-language Focus may differ slightly, producing different competitive sets per language
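
A sketch of how the independent per-language sets above might be held and compared; the language codes and brand labels are purely illustrative:

```python
# Per-language competitive sets are independent structures, not one merged set.
competitive_sets: dict[str, set[str]] = {
    "en": {"comp-a", "comp-b", "comp-c", "comp-d"},
    "ja": {"comp-a", "jp-domestic-1", "jp-domestic-2"},  # domestic-only brands
    "id": {"comp-a", "comp-b"},                          # less category saturation
}

def overlap(lang_a: str, lang_b: str) -> float:
    """Jaccard overlap between two languages' competitive sets."""
    a, b = competitive_sets[lang_a], competitive_sets[lang_b]
    return len(a & b) / len(a | b)

print(f"en/ja overlap: {overlap('en', 'ja'):.2f}")  # partial overlap only (0.17)
```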

Multilingual O-1 essentially means conducting O-1 separately per language. The work expands proportionally to language scope: brands engaging in five languages should expect O-1 to take approximately 3-4x the effort of a single-language engagement (less than 5x because some patterns transfer).

The team’s working principle: per-language O-1 produces per-language benchmarks that inform per-language action selection. Aggregating across languages obscures language-specific competitive patterns and leads to misdirected work.

What comes after

O-1 typically leads to:

| Next action | Why it follows |
| --- | --- |
| O-2 (Unified Analytics & KPI Framework) | The benchmark establishes what to measure on an ongoing basis; O-2 establishes how to measure it. |
| The first action-selection conversation with the brand stakeholder | The benchmark contextualizes the AS finding for the stakeholder, enabling informed action prioritization. |
| Engagement scope refinement | Findings may indicate that scope should be expanded (additional languages, additional domains) or refined (specific sub-categories within the brand’s broader Focus). |
| Re-running O-1 at quarterly review | Competitive context drifts; re-running keeps the benchmark current. |

In maturity-stage terms, O-1 work is foundational and recurring rather than one-time. It does not graduate the brand from the foundations stage; it informs the work that does.