M-1 — Question-Based Opportunity Mapping

What this action is

M-1 is the structured identification of the questions the brand’s audience asks within its category, including the questions AI systems are asked about that category, and the mapping of those questions to the brand’s existing or planned content. It comprises three components: question discovery (what questions are being asked), categorization (how questions cluster by intent and category), and gap analysis (which questions the brand has answered, which it has not, and which it should).

The work is analytical-editorial. Analytical work surfaces and categorizes questions; editorial work assesses content coverage and identifies gaps. The output informs subsequent M-pillar work, particularly M-2 and M-3.

Why this action matters in AVO

AI-mediated discovery is question-driven. Users pose questions to AI systems; AI systems synthesize answers; brands appear in those answers when the AI system has grounded knowledge of brands relevant to the question. Without knowing which questions the AI is being asked about the brand’s category, the practitioner cannot direct content work toward citation-relevant questions.

M-1 also surfaces the difference between the brand’s content topology and the audience’s question topology. Brands typically organize content around their offerings and corporate structure; audiences organize their questions around their problems and decisions. The two topologies rarely align without deliberate work.

M-1 is foundational for the answer-first content architecture (M-2) and the FAQ and knowledge hub work (M-3). Both depend on knowing which questions matter; M-1 provides the answer.

What it requires before you can attempt it

Hard prerequisites:

  • Declared Focus: M-1 is conducted relative to the brand’s Focus. Without a declared Focus, question discovery has no axis along which to filter.
  • Access to category-relevant query data: M-1 requires sources such as search analytics, AI platform measurement, customer-service inquiry logs, and social listening data. Without data, the work is intuition-based.

Soft prerequisites:

  • Existing search analytics or content performance data: existing data accelerates question discovery and validates question importance.
  • Customer service or sales conversation logs: direct customer questions are the highest-fidelity input to question mapping.
  • O-1 substantially complete: competitive context informs which questions matter for the brand specifically (some questions are well-served by competitors; others are gaps).

Stage assessment: M-1 is a foundations-stage action that runs early in an engagement. The basic form is achievable from AS ≈ 0; depth comes through subsequent cycles as more data accumulates.

What gets done in this action

M-1 work proceeds through four phases.

Phase 1 — Question discovery from multiple sources. Questions are gathered from:

  • Search analytics (long-tail queries that suggest specific questions)
  • AI platform measurement (the prompts that surface category-relevant content)
  • Customer service inquiry logs (literal questions customers ask)
  • Sales conversation logs (questions that appear during purchase decisions)
  • Social listening (questions posed in public forums)
  • Competitor content (which questions competitors have explicitly addressed)
  • Reddit, Quora, and similar platforms within the brand’s category

The discovery work produces a catalog of questions, often hundreds. The catalog is the substrate for subsequent categorization and prioritization.
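
A minimal sketch of the merge-and-deduplicate step, assuming each source has already been exported to a flat list of question strings; the source names, the `build_catalog` helper, and the normalization rule are illustrative assumptions, not a prescribed implementation:

```python
def normalize(question: str) -> str:
    """Crude normalization so near-duplicate phrasings share one key."""
    return " ".join(question.lower().strip().rstrip("?").split())

def build_catalog(sources: dict[str, list[str]]) -> dict[str, dict]:
    """Merge raw question lists from multiple sources into one deduplicated
    catalog, tracking which sources surfaced each question and how often."""
    catalog: dict[str, dict] = {}
    for source_name, questions in sources.items():
        for raw in questions:
            key = normalize(raw)
            entry = catalog.setdefault(key, {"text": raw, "sources": set(), "count": 0})
            entry["sources"].add(source_name)
            entry["count"] += 1
    return catalog

catalog = build_catalog({
    "search-analytics": ["How does X compare to Y?", "how does x compare to y"],
    "support-logs": ["Can I use X offline?"],
})
# The two near-duplicate phrasings collapse into a single catalog entry,
# so the resulting catalog has two entries, not three.
```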

Phase 2 — Question categorization. Questions are categorized along multiple axes:

  • Intent: informational (seeking knowledge), navigational (seeking a specific resource), transactional (seeking to act), comparative (seeking to choose), validation (seeking confirmation)
  • Stage in audience journey: awareness (unfamiliar with the category), consideration (evaluating options), decision (selecting), retention (using or maintaining)
  • Question type: what-is, why, how-to, comparison, recommendation, troubleshooting
  • Audience segment: if the brand serves multiple segments, which segment poses the question
  • Category alignment: how directly the question relates to the brand’s Focus

The categorization makes the catalog navigable and reveals patterns: which categories are well-served by existing content, which are gaps, which are over-served.
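
The five axes translate directly into a record schema, which keeps the catalog queryable rather than a spreadsheet of free text. A sketch, with enum values taken from the lists above; the field names and the 0-to-1 alignment scale are assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class Intent(Enum):
    INFORMATIONAL = "informational"
    NAVIGATIONAL = "navigational"
    TRANSACTIONAL = "transactional"
    COMPARATIVE = "comparative"
    VALIDATION = "validation"

class JourneyStage(Enum):
    AWARENESS = "awareness"
    CONSIDERATION = "consideration"
    DECISION = "decision"
    RETENTION = "retention"

class QuestionType(Enum):
    WHAT_IS = "what-is"
    WHY = "why"
    HOW_TO = "how-to"
    COMPARISON = "comparison"
    RECOMMENDATION = "recommendation"
    TROUBLESHOOTING = "troubleshooting"

@dataclass
class CatalogEntry:
    text: str
    intent: Intent
    stage: JourneyStage
    question_type: QuestionType
    segment: str | None         # only meaningful if the brand serves multiple segments
    category_alignment: float   # 0.0 (peripheral) to 1.0 (core Focus)
```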

Phase 3 — Coverage and gap analysis. Each question is mapped to existing brand content (if any) or flagged as a gap. The mapping reveals four coverage states (a classification sketch follows the list):

  • Questions the brand has answered well (covered, deep, citable)
  • Questions the brand has answered superficially (covered but thin)
  • Questions the brand has not answered (gaps)
  • Questions the brand has answered indirectly (relevant content exists but doesn’t address the question explicitly)
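
A minimal sketch of that classification, assuming a content index that can return pages relevant to a question; `find_relevant_pages`, the page attributes, and the depth threshold are all hypothetical:

```python
from enum import Enum

class Coverage(Enum):
    COVERED_DEEP = "covered-deep"          # answered well: covered, deep, citable
    COVERED_THIN = "covered-thin"          # answered superficially
    COVERED_INDIRECT = "covered-indirect"  # relevant content exists, question not explicit
    GAP = "gap"                            # not answered at all

def classify_coverage(question: str, content_index) -> Coverage:
    """Map one catalog question to a coverage state."""
    pages = content_index.find_relevant_pages(question)  # hypothetical lookup
    if not pages:
        return Coverage.GAP
    explicit = [p for p in pages if p.answers_explicitly]
    if not explicit:
        return Coverage.COVERED_INDIRECT
    if max(p.depth_score for p in explicit) >= 0.7:  # assumed depth threshold
        return Coverage.COVERED_DEEP
    return Coverage.COVERED_THIN
```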

Phase 4 — Prioritization. Gaps are prioritized for content development based on:

  • Question frequency (how often it’s asked)
  • Strategic value (alignment with the brand’s commercial or category-positioning interests)
  • Competitive landscape (how well-served the question is by competitors)
  • Effort required (whether the brand has the subject-matter expertise to answer well)

The output is a content backlog ordered by priority, ready to feed M-2 and M-3 work.
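
One way to operationalize the ordering, as a sketch: normalize each criterion to 0..1 and combine them in a weighted score. The weights here are illustrative assumptions, not part of M-1 itself:

```python
from dataclasses import dataclass

@dataclass
class GapQuestion:
    text: str
    frequency: float            # 0..1, how often the question is asked
    strategic_value: float      # 0..1, commercial / category-positioning alignment
    competitor_coverage: float  # 0..1, how well competitors already serve it
    effort: float               # 0..1, cost to answer well (expertise, production)

def priority_score(g: GapQuestion) -> float:
    """Higher frequency and strategic value raise priority; strong competitor
    coverage and high effort lower it. Weights are illustrative."""
    return (0.35 * g.frequency
            + 0.30 * g.strategic_value
            + 0.20 * (1.0 - g.competitor_coverage)
            + 0.15 * (1.0 - g.effort))

gaps = [
    GapQuestion("How does X handle data residency?", 0.4, 0.9, 0.2, 0.3),
    GapQuestion("What does X cost?", 0.9, 0.5, 0.8, 0.1),
]
backlog = sorted(gaps, key=priority_score, reverse=True)
# The lower-volume but strategic, under-served question outranks the
# high-volume one; volume alone does not drive the ordering.
```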

What success looks like

A successful M-1 produces:

  • A categorized question catalog covering the brand’s Focus
  • A coverage map indicating which questions the brand has and has not addressed
  • A prioritized backlog ready to direct subsequent content work
  • Pattern recognition: which categories are gaps, which are well-served, which warrant strategic content commitment

Beyond the catalog, success is the practitioner’s ability to direct M-pillar work with confidence. “We need more FAQ content” is intuition; “we need FAQ content addressing these specific 12 questions because they appear in AI-platform measurement and the brand has no current coverage” is M-1-informed strategy.
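
Continuing the earlier sketches, that M-1-informed statement corresponds to a concrete filter over the catalog rather than a hunch, assuming each entry carries the `sources` set from Phase 1 and the `coverage` state from Phase 3:

```python
# Questions surfaced by AI-platform measurement that the brand has never answered:
faq_candidates = [
    q for q in catalog_entries  # entries as assembled in the earlier sketches
    if "ai-platform-measurement" in q.sources and q.coverage is Coverage.GAP
]
# If this yields the 12 questions above, the FAQ brief writes itself.
```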

What failure looks like

Failure patterns and what they signal:

  • Question catalog is a keyword list rather than a question list: the work treated keywords as a proxy for questions. AI-mediated discovery is question-driven, not keyword-driven.
  • Catalog is small (under 50 questions) and missing major categories: question discovery sources were too limited; broader sourcing is required.
  • Coverage analysis is binary (covered/not covered) without depth assessment: this misses the largest opportunity, the questions answered superficially that could be answered well.
  • Prioritization defaults to “high-volume questions first” without strategic consideration: volume alone isn’t strategy; some high-volume questions are not the brand’s to answer, and some low-volume questions are highly strategic.
  • The output is delivered as a static report rather than a working backlog: M-1 should integrate with the content production workflow, not exist as separate documentation.

Common mistakes

  • Substituting keyword research for question discovery. Better: treat questions as the unit of AI-mediated discovery; keywords are downstream.
  • Sourcing only from search analytics. Better: broaden sourcing; search analytics misses questions that are asked but never searched (the asker puts them to an AI assistant directly).
  • Treating M-1 as one-time work. Better: refresh periodically (quarterly, or as significant changes occur); question landscapes shift.
  • Letting brand stakeholders drive prioritization on commercial preference alone. Better: surface the distinction; some commercially irrelevant questions are the most strategic for AVO authority-building.
  • Not coordinating with O-1. Better: use competitive context in prioritization; questions well-served by competitors may be lower priority than questions where the brand can establish authority.
  • Conflating audience questions with brand FAQ patterns. Better: validate against audience data; the brand’s existing FAQ may not reflect what audiences actually ask.

Datapoints affected

M-1 does not directly lift datapoints. Like O-1, it is preparatory analytical work. Indirect effects include:

  • topical-relevance (V2.1): M-1 informs subsequent content work that lifts topical-relevance.
  • content-depth (V2.1): M-1 identifies depth gaps that subsequent M-pillar work addresses.
  • ai-citation-presence (V3.1): long-term, M-1-informed content is more likely to be cited because it addresses the questions the AI is being asked.

Multilingual considerations

Per-language question landscapes differ substantially. The English question landscape for a category is rarely the same as the Indonesian, Japanese, Korean, or Traditional Chinese landscape for the same category. Considerations:

  • Per-language search behavior differs (long-tail patterns vary)
  • Per-language customer-service patterns differ
  • Per-language AI platforms have distinct prompt patterns
  • Per-language cultural conventions affect question phrasing
  • Direct translation of English questions rarely produces native-language equivalents

Multilingual M-1 is, in effect, M-1 conducted separately per language; the work expands proportionally with language scope.

A common multilingual M-1 finding is that per-language question landscapes overlap less than the brand stakeholder expects. The brand’s Japanese audience asks substantially different questions than its English audience, even within the same category. This finding informs per-language M-pillar work prioritization.

What comes after

M-1 typically leads to:

  • M-2 (Answer-First Content Architecture): M-1 identifies which questions to answer; M-2 structures the answers.
  • M-3 (Dedicated FAQ & Knowledge Hubs): high-priority questions cluster into hub content.
  • G-3 (Comprehensive Long-Form Content): long-form content addresses the higher-complexity questions identified in M-1.
  • M-7 (Multimedia Content Optimization): multimedia formats serve questions where text-only answers are insufficient.
  • Re-running M-1 at quarterly review: question landscapes shift; periodic refresh maintains relevance.

In maturity-stage terms, M-1 is foundations-into-depth work. The first cycle establishes the catalog; subsequent cycles refine and extend it.