
Research Note Writer

[Live] · agent__research-note-writer · gemma3:4b (Ollama/local)

Conducts web research for a single approved blog topic and produces a structured research notes document — gathering key statistics, reference articles, competitor angle analysis, case study ideas, and quotable passages to hand off to the blog writer.


Overview

Function: Gather and synthesise research for a single blog post topic
Type: Worker — Research
Model: gemma3:4b (Ollama/local)
Queue: agent__research-note-writer
Concurrency: 2
Timeout: 10 min
Est. cost / task: ~$0 (Ollama/local)
Plan: Agency+ (requires Ollama configured)

Input

```typescript
interface ResearchNoteWriterInput {
  tenantId: string;
  campaignId: string;
  topicId: string;          // reference to the approved TopicIdea record

  // Core topic details
  workingTitle: string;     // the approved working title from the Topic Researcher
  angle: string;            // the specific hook/perspective to research toward
  targetAudience: string;

  // The SEO content brief informs what angle to research and what the final article needs
  seoBrief?: {
    primaryKeyword: string;
    secondaryKeywords: string[];
    searchIntent: string;
    recommendedOutline: string;   // H2/H3 outline from the SEO Specialist
    competitorUrls: string[];     // top-ranking competitor URLs to analyse
    wordCountTarget: number;
  };

  researchDepth: 'standard' | 'deep';
  // standard: 5-7 stats, 3 sources, top 2 competitor angles — ~6 min
  // deep: 8-10 stats, 5 sources, top 3 competitor angles — ~10 min
}
```
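
A minimal payload satisfying this interface might look like the following sketch. All IDs and values are hypothetical, and the optional `seoBrief` is omitted for brevity:

```typescript
// Trimmed copy of the input interface (seoBrief omitted for brevity),
// plus a hypothetical example payload.
interface ResearchNoteWriterInputExample {
  tenantId: string;
  campaignId: string;
  topicId: string;
  workingTitle: string;
  angle: string;
  targetAudience: string;
  researchDepth: "standard" | "deep";
}

const exampleInput: ResearchNoteWriterInputExample = {
  tenantId: "tenant_001",       // hypothetical
  campaignId: "campaign_042",   // hypothetical
  topicId: "topic_7f3a",        // hypothetical
  workingTitle:
    "Why Your Project Management Software Is Making Your Team Slower",
  angle: "Counterintuitive take: PM software adoption friction after 6 months",
  targetAudience: "Operations leads at mid-size B2B companies", // hypothetical
  researchDepth: "standard",    // 5-7 stats, 3 sources, ~6 min
};
```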

Output

```typescript
interface ResearchNoteWriterOutput {
  topicId: string;
  workingTitle: string;
  generatedAt: string;
  markdownNotes: string;    // full structured research notes document in Markdown

  // Structured metadata extracted from the notes
  metadata: {
    statCount: number;
    referenceCount: number;
    competitorsCovered: number;
    caseStudyCount: number;
    hasClientExamples: boolean;   // true if RAG found usable internal examples
    researchGaps: string[];       // topics the agent couldn't find good sources for
  };
}
```

Sample output excerpt

```markdown
# Research Notes: "Why Your Project Management Software Is Making Your Team Slower"

**Topic angle:** Counterintuitive take — PM software adoption friction after 6 months
**Target keyword:** project management software adoption problems
**Prepared for:** Blog Writer | April 2026

---

## Key Statistics

1. **77% of high-performing projects use project management software** — but only 22% of organisations have standardised on a single tool (Project Management Institute, 2024 Pulse of the Profession). *Use to establish the "everyone uses it but results vary" premise.*
2. **Software adoption failure rate: 63%** — more than half of enterprise software deployments fail to achieve their intended adoption within 12 months (Gartner, 2023). *Use in the intro to validate the problem exists.*
3. **Average employee switches context 1,200 times per day** across apps, losing 9.5% of productive time (Asana Anatomy of Work Report, 2025). *Use to illustrate the problem of tool overload — PM software adds to this rather than reducing it when not implemented well.*
4. **Teams using 6+ tools report 20% lower project completion rates** than teams using 3 or fewer integrated tools (Wellingtone State of Project Management, 2024). *Strong stat for the argument that more software ≠ better outcomes.*

---

## Reference Articles

### Source 1
**Title:** "The Hidden Costs of Bad Software Adoption" — Harvard Business Review
**URL:** https://hbr.org/2024/09/hidden-costs-software-adoption
**Quality:** High authority — cite directly
**Key takeaway:** Resistance to new software is not a people problem — it's an implementation problem. HBR argues that 80% of adoption failures trace back to inadequate onboarding and unclear ownership of the tool, not employee reluctance.
**Quotes worth using:**

> "Software that was meant to reduce friction becomes friction itself when teams lack a
> shared understanding of how and when to use it."

---

## Competitor Angle Analysis

### Competitor 1: asana.com/resources/project-management-adoption
**Angle:** Positive — "how to drive adoption" framing. Focuses on change management steps.
**Word count:** ~2,400
**What they cover:** Stakeholder buy-in, training, phased rollout, success metrics.
**What they miss:** They don't acknowledge failure modes or the irony of PM software creating overhead. Entirely vendor-promotional tone. Our article wins by being honest about when the tool is the problem.

---

## Internal Examples (from Client Knowledge Base)

**Case study candidate:** Acme Construction used Leadmetrics to track campaign performance and reduced their reporting overhead by 4 hours/week — an example of software done right that contrasts with the adoption failure theme. Use as a positive counterpoint in the conclusion section.
```

How It Works

  1. Load topic and brief context. The working title, angle, SEO brief outline, and target audience are injected into the system prompt. The outline from the SEO brief is the research roadmap — notes should gather material for each H2 section.

  2. RAG: competitor content analysis. Query Competitor Research for existing content on this topic from known competitors. This populates the “Competitor Angle Analysis” section and surfaces what angles are already covered — so the blog writer knows what to differentiate from.

  3. RAG: internal data and case studies. Query Client Documents for any internal case studies, data, results, or company-specific examples that could be used in the article. Client-owned data creates differentiation that competitors cannot replicate.

  4. Web search: current statistics. Run 2–3 targeted web search queries to find current statistics and data points relevant to the topic. Queries are designed to find specific, citable numbers — not general information. Example: "project management software adoption failure statistics 2024 2025", "[primary keyword] data report [year]".

  5. Web fetch: read reference articles. For the top 3–5 search results, call web_fetch to read the full article content. This enables: (a) pulling specific quotes and statistics, (b) noting exactly what angle each competitor article takes, (c) identifying what gaps exist in the current top-ranking content.

  6. Evaluate source quality. For each source, assess: publication authority (domain, author), data recency (prefer < 3 years unless foundational research), relevance to the specific angle. Rank sources and include only the highest-quality ones.

  7. Synthesise research notes. Compile all gathered material into the structured notes document — statistics with sources and usage notes, reference articles with angle summaries and quotable passages, competitor gap analysis, case study candidates, and explicit notes on what the writer should emphasise.

  8. Flag research gaps. If certain sections of the SEO brief outline couldn’t be supported with good sources, flag them explicitly in metadata.researchGaps so the writer knows upfront where additional research may be needed.
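
Steps 4-6 can be sketched in miniature as follows. The helper names and scoring weights are illustrative assumptions, not the agent's actual implementation:

```typescript
interface Source {
  url: string;
  year: number;
  authority: "high" | "medium" | "low"; // publication authority (domain, author)
}

// Step 4: queries built to surface specific, citable numbers.
function buildStatQueries(primaryKeyword: string, year: number): string[] {
  return [
    `${primaryKeyword} statistics ${year - 1} ${year}`,
    `${primaryKeyword} data report ${year}`,
  ];
}

// Step 6: rank by authority, preferring data less than 3 years old.
function rankSources(sources: Source[], currentYear: number): Source[] {
  const score = (s: Source): number => {
    const authority = { high: 3, medium: 2, low: 1 }[s.authority];
    const recency = currentYear - s.year <= 3 ? 1 : 0;
    return authority * 2 + recency;
  };
  return [...sources].sort((a, b) => score(b) - score(a));
}
```

Only the top-ranked sources survive into the notes; the rest are discarded before synthesis.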


System Prompt

```
You are a research assistant preparing notes for a blog writer at a digital marketing agency. Your job is to gather, evaluate, and synthesise research for a specific blog topic — giving the writer everything they need to produce a well-sourced, differentiated article.

CLIENT CONTEXT:
{{CLIENT_CONTEXT}}

KNOWLEDGE BASE CONTEXT:
{{RAG_CONTEXT}}

You have been given:
- A working title and specific angle to research toward
- An SEO content brief with a recommended article outline
- Web search results and fetched article content
- Competitor content from the knowledge base

Your research notes must include:
1. KEY STATISTICS ({{STAT_COUNT}} stats) — each with source, publication year, and a usage note explaining where in the article this stat works best
2. REFERENCE ARTICLES ({{SOURCE_COUNT}} sources) — each with URL, quality assessment, key takeaways, and any quotes worth using verbatim
3. COMPETITOR ANGLE ANALYSIS — for each top-ranking competitor article: their angle, what they cover well, and critically, what they miss or get wrong
4. CASE STUDIES / EXAMPLES — specific, concrete examples the writer can use to illustrate key points (prefer real, named examples over generic illustrations)
5. INTERNAL EXAMPLES — any client-specific data or cases from the knowledge base
6. RESEARCH GAPS — sections of the outline where good sources couldn't be found

Rules:
- Only cite sources that are genuinely credible: named authors, known publications, or primary research (surveys, studies, official reports)
- Do not invent statistics — if you cannot find a good stat for a section, say so
- All statistics must include: the exact number, the source name, the year
- Competitor analysis must be honest — note where competitors have strong coverage, not just weaknesses
- Usage notes for stats must be specific: "use in intro to establish X" not just "relevant"
- Flag any claim in the web search results that seems unreliable or unverified

Output as a Markdown document following the structure in the sample output format. After the Markdown, append a JSON block with the metadata object.
```
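
The `{{…}}` placeholders in the prompt are filled at dispatch time. A minimal sketch of that substitution, assuming a simple string-template mechanism (the variable names come from the prompt above; the fill logic itself is an assumption):

```typescript
// Replace {{NAME}} placeholders with values; unknown placeholders are
// left intact so missing variables are visible rather than silently blanked.
function fillPromptTemplate(
  template: string,
  vars: Record<string, string>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, name: string) =>
    name in vars ? vars[name] : match
  );
}
```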

Skills Injected

| Skill file | Purpose |
| --- | --- |
| client-context-file.md | Company, brand, audience — always injected |
| research-quality-guide.md | Source quality evaluation criteria, stat citation format, how to assess competitor content angles |

research-quality-guide.md — content

```markdown
# Research Quality Guide

## Source Quality Tiers

### Tier 1 — Cite directly, high confidence
- Primary research: original surveys, studies, academic papers with named authors
- Industry reports from known research firms: Gartner, Forrester, HBR, McKinsey, Deloitte
- Government or regulatory body data
- Platform-published data about their own products (e.g. Google's own benchmarks)

### Tier 2 — Use with attribution, moderate confidence
- Respected trade publications in the client's industry
- Well-known niche blogs with named authors and clear editorial standards
- Company-published research (whitepapers, state-of-industry reports) — note the publisher and any potential bias

### Tier 3 — Use cautiously or avoid
- Statistics without a named original source (the "according to studies" problem)
- Stats cited by one source that trace back to an unverifiable original
- Content farms, SEO-only blogs without editorial rigour
- Data older than 5 years unless it is genuinely foundational

## Stat Citation Format

Every stat must be formatted as:

**[Statistic]** — [Source Name], [Year]. *[Usage note: where/how to use this in the article.]*

Example:

**82% of project failures are attributed to poor requirements gathering** — PMI Pulse of the Profession, 2024. *Use in the intro section to establish stakes — this is a shocking enough number to justify the article.*

## Competitor Angle Analysis Framework

For each competitor article, document:

1. **Their core angle** — what argument or framework are they making?
2. **Their coverage strengths** — what do they cover comprehensively?
3. **Their gaps** — what do they fail to address? What questions remain unanswered?
4. **Their tone** — authoritative? Promotional? Conversational?
5. **Differentiation opportunity** — what can our article do that theirs does not?

## Research Gaps

If a section of the outline cannot be supported with quality sources, flag it as:

**Research gap: [section name]** — [what was searched for] — [what was found or not found] — [recommendation: skip this section / simplify the claim / add a caveat / commission primary research]

## Quote Evaluation

A quote is worth including if it:
- Captures an insight more memorably than paraphrase
- Comes from a credible, named author with relevant expertise
- Is under 50 words (longer quotes lose impact)
- Adds information not already in the surrounding text
```
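
The tier rules above can be expressed as a small classifier. The source shape and field names here are assumptions for illustration, not part of the skill file:

```typescript
type Tier = 1 | 2 | 3;

interface SourceMeta {
  hasNamedAuthor: boolean;
  isPrimaryResearch: boolean;   // original survey, study, or official report
  isKnownResearchFirm: boolean; // e.g. Gartner, Forrester, HBR
  ageYears: number;
  isFoundational: boolean;
}

function classifySource(s: SourceMeta): Tier {
  // Data older than 5 years is Tier 3 unless genuinely foundational
  if (s.ageYears > 5 && !s.isFoundational) return 3;
  if (s.isPrimaryResearch || s.isKnownResearchFirm) return 1;
  if (s.hasNamedAuthor) return 2;
  return 3; // no named origin: the "according to studies" problem
}
```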

RAG Usage

| Dataset | Query | When |
| --- | --- | --- |
| Competitor Research | "competitor content [working title topic keywords]" | Step 2 — to analyse what angles top competitors are already taking |
| Client Documents | "case studies data results examples [topic area]" | Step 3 — to find internal examples, client data, or company-specific proof points |
| Published Content | "[topic keywords] previous posts" | Step 3 — secondary check to see if the client has covered related angles before |
| Website Content | "[product or service related to topic]" | Step 3 — to find product pages, testimonials, or features that could be referenced |

RAG query strategy: Competitor Research is the most valuable dataset for this agent — it directly populates the Competitor Angle Analysis section and determines the differentiation strategy. Client Documents is second in priority, as internal examples and data create content no competitor can replicate. Both queries run in parallel before any web search calls.
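
A sketch of the two priority queries running in parallel. `ragSearch` is a stand-in stub for the real `rag_search` tool, whose actual signature may differ:

```typescript
// Stubbed RAG search for illustration; the real tool hits the vector store.
async function ragSearch(dataset: string, query: string): Promise<string[]> {
  return [`${dataset}: result for "${query}"`];
}

// The Competitor Research and Client Documents queries run concurrently,
// per the strategy above, before any web search calls are made.
async function runPriorityRagQueries(topicKeywords: string) {
  const [competitor, clientDocs] = await Promise.all([
    ragSearch("Competitor Research", `competitor content ${topicKeywords}`),
    ragSearch("Client Documents", `case studies data results examples ${topicKeywords}`),
  ]);
  return { competitor, clientDocs };
}
```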


Tools Required

| Tool | Method | Purpose | Required? |
| --- | --- | --- | --- |
| rag_search | search | Query competitor research and client documents | Yes |
| web_search | search | Find current statistics, studies, and reference articles | Yes |
| web_fetch | fetch | Read full content of top-ranking articles for quote extraction and angle analysis | Yes |

HITL Gates

  • Review type: research_review
  • Risk level: low
  • Trigger: Always — the research notes document is reviewed by the content strategist before the blog writer job is dispatched.
  • Reviewer action: Approve research notes (dispatches the blog writer job), request additional research on specific sections, or reject and re-run (with notes on what was insufficient).
  • Partial approval: Reviewer can annotate specific stats or sources as “do not use” — these are excluded from the writer’s reference document.
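
Partial approval amounts to filtering the notes before handoff. A minimal sketch, assuming stats carry reviewer-addressable IDs (the field names are illustrative):

```typescript
interface Stat {
  id: string;   // reviewer-addressable identifier (assumed)
  text: string;
}

// Stats the reviewer marked "do not use" are dropped from the document
// handed to the blog writer.
function applyPartialApproval(stats: Stat[], excludedIds: Set<string>): Stat[] {
  return stats.filter((s) => !excludedIds.has(s.id));
}
```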

Guardrails

| Rule | Enforcement |
| --- | --- |
| Minimum stat count: 5 for standard, 8 for deep | Count check on stats section; below minimum triggers regeneration |
| All statistics must include source and year | Structural validation — stats without attribution are removed |
| Minimum 3 reference articles | Count check; below minimum triggers additional web search and fetch |
| No statistics from Tier 3 sources without explicit caveat | Source quality check — unverifiable stats flagged with warning in the notes |
| Competitor analysis must cover all URLs provided in seoBrief.competitorUrls | If a provided URL could not be fetched, note "fetch failed — manual review needed" in the competitor section |
| Research gaps must be documented | If web search returns no usable results for a section, the gap is required in the output, not silently omitted |
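
The two count guardrails can be sketched as a validation pass over parsed notes. The `ParsedNotes` shape is an assumption for illustration:

```typescript
interface ParsedNotes {
  statCount: number;
  referenceCount: number;
  depth: "standard" | "deep";
}

// Returns a list of guardrail violations; an empty list means the notes
// pass the count checks and no regeneration is needed.
function guardrailViolations(n: ParsedNotes): string[] {
  const issues: string[] = [];
  const minStats = n.depth === "deep" ? 8 : 5;
  if (n.statCount < minStats) {
    issues.push(`stat count ${n.statCount} below minimum ${minStats}`);
  }
  if (n.referenceCount < 3) {
    issues.push(`only ${n.referenceCount} reference articles (minimum 3)`);
  }
  return issues;
}
```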

Tenant Settings Used

| Setting | How it's used |
| --- | --- |
| industry | Scopes competitor research RAG query to the correct niche |
| targetAudience | Informs which statistics are most relevant — B2B audiences respond to business outcome data; B2C audiences respond to lifestyle and behaviour data |
| brandVoice | Informs the usage notes — a conversational brand uses stats as conversation starters; an authoritative brand uses them as proof points |

Cost Profile

Avg input tokens: ~12,000 (topic context + RAG results + fetched article content)
Avg output tokens: ~4,000 (full research notes document)
Est. cost / task: ~$0 (Ollama/local — gemma3:4b)

Note: The 10-minute timeout accommodates the sequential web_fetch calls (3–5 article fetches at ~1–2 seconds each) plus synthesis time. Concurrency is capped at 2 to avoid overwhelming the Ollama instance — gemma3:4b with 10k+ token inputs requires dedicated GPU memory per concurrent request.


Error Handling

| Error | Response |
| --- | --- |
| Ollama unavailable | Fail job with error: "Ollama service unavailable — research note writing requires Ollama running with gemma3:4b" |
| Web search returns no results | Proceed with RAG-only research; note "Web search unavailable — research based on knowledge base only"; flag all outline sections as research gaps |
| web_fetch fails for a competitor URL | Note "fetch failed" in competitor analysis section; include URL for manual review |
| web_fetch returns paywalled content | Note "paywalled source" and use only the publicly visible excerpt; do not fabricate behind-paywall content |
| RAG returns no Competitor Research results | Proceed without competitor analysis; note "No competitor research found in knowledge base — run competitor analysis first" |
| Stat count below minimum after 1 retry | Return what's available with explicit documentation of gaps; do not fabricate statistics |
| Job exceeds 10-minute timeout | Save partial research notes with a note on which sections were not completed; create HITL record for manual completion |
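
The web-search fallback row can be sketched as a small state transition. The `ResearchState` shape is an assumption; the note text is taken from the table:

```typescript
interface ResearchState {
  notes: string[];
  researchGaps: string[];
}

// When web search returns nothing, degrade to RAG-only research and flag
// every outline section as a research gap, per the error-handling policy.
function handleEmptyWebSearch(outlineSections: string[]): ResearchState {
  return {
    notes: ["Web search unavailable — research based on knowledge base only"],
    researchGaps: [...outlineSections],
  };
}
```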

© 2026 Leadmetrics — Internal use only