MCP Servers & Custom Agents: Automation Proposal for mrogers.london
Version: 1.0 | Date: December 27, 2025 | Author: Claude Code Analysis
Executive Summary
Current State
Your blog infrastructure is sophisticated: extensive style guides, voice documentation, brand strategy, and custom editorial commands. However, the publishing workflow remains largely manual, creating friction between ideation and publication.
Current Workflow Gaps:
- Research and citation gathering (60+ min per essay)
- SEO optimization and metadata generation (30 min)
- Headline generation using 11 rhetorical techniques (20 min)
- Image prompt creation matching style guide (15 min)
- Voice consistency checking against documented standards (embedded in 45-min editorial review)
Proposed Solution
Deploy 8 strategic MCP servers and 6 custom agents that automate repetitive tasks while preserving your distinctive voice and intellectual rigor. This proposal recommends tools specifically calibrated to your “Applied Psychohistorian” positioning and existing documentation.
Expected Outcomes
Time Savings: ~2.5 hours saved per essay (reducing total workflow from ~3 hours to 30 minutes)
Quality Improvements:
- Voice consistency automated via documented standards
- SEO best practices enforced
- Citation quality improved
- Brand aesthetic guaranteed
Publishing Velocity: 2-3 weeks per essay → weekly publishing cadence achievable
Part I: MCP Server Recommendations
Understanding MCP
The Model Context Protocol (MCP) is an open standard introduced by Anthropic to standardize how AI systems integrate with external tools and data sources. As of late 2025, the MCP Registry contains nearly 2,000 active servers across diverse use cases.
For content creation workflows like yours, MCPs provide:
- Persistent memory across writing sessions
- Direct access to external data sources (GitHub, databases, web)
- Tool orchestration without manual context-switching
Tier 1: Immediate Value (Install Week 1)
1. GitHub MCP
Purpose: Streamline version control, issue tracking, and deployment automation
Use Cases:
- Track essay ideas as GitHub Issues with labels (essay/imagine/residuals)
- Manage drafts as PRs for self-review workflow
- Automate publishing commits with consistent messages
- Monitor deployment status via GitHub Actions
Installation:
# Add as project-scoped (shareable via .mcp.json)
claude mcp add --transport http github --scope project https://api.githubcopilot.com/mcp/
# Authenticate (will open browser)
claude /mcp
Configuration (.mcp.json):
{
"mcpServers": {
"github": {
"url": "https://api.githubcopilot.com/mcp/",
"transport": "http",
"scope": "project"
}
}
}
Workflow Integration:
- Create issue: /github create issue "Essay: What Walras Teaches About Feed Algorithms"
- Move draft to published: /github create pr --from thinking/draft.md --to _posts/2025-12-27-walras-and-feeds.md
Source: GitHub - modelcontextprotocol/servers
2. Filesystem MCP
Purpose: Enhanced file operations for organizing posts, drafts, and assets
Use Cases:
- Batch rename drafts with proper YYYY-MM-DD-slug.md format
- Organize images into assets/images/[post-slug]/ directories
- Search across all posts for internal linking opportunities
- Move completed drafts from thinking/ to _posts/
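The last use case is deterministic enough to sketch directly. A minimal Python sketch of the date-prefix move (the function name and the assumption that the draft's filename is already the slug are illustrative, not part of the Filesystem MCP):

```python
from datetime import date
from pathlib import Path

def publish_draft(draft: Path, posts_dir: Path) -> Path:
    """Move a draft into _posts/ using the YYYY-MM-DD-slug.md convention."""
    slug = draft.stem.lower().replace(" ", "-")
    target = posts_dir / f"{date.today():%Y-%m-%d}-{slug}.md"
    draft.rename(target)  # move the file; content is untouched
    return target
```

Usage: `publish_draft(Path("thinking/walras-and-feeds.md"), Path("_posts"))`. In practice the MCP performs this from a natural-language instruction; the sketch just shows how little logic is involved.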
Installation:
# Install via npx (no persistent installation needed)
claude mcp add --transport stdio filesystem --scope project -- npx -y @modelcontextprotocol/server-filesystem
Configuration (.claude/settings.local.json):
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem"],
"transport": "stdio",
"allowedDirectories": [
"/Users/michael/m-01101101/mrogers-london/_posts",
"/Users/michael/m-01101101/mrogers-london/thinking",
"/Users/michael/m-01101101/mrogers-london/assets/images"
]
}
}
}
Workflow Integration:
- Batch operation: “Move all drafts tagged ‘ready’ from thinking/ to _posts/ with today’s date”
- Organization: “Create image directory for new post and organize all related PNGs”
Source: Model Context Protocol Official Servers
3. Web Search MCP (Brave Search)
Purpose: Research economic history, data science papers, and consumer behavior studies while writing
Use Cases:
- Find citations for economic concepts (Ricardo, Walras, Marx)
- Locate recent papers on recommendation systems
- Validate claims about historical events
- Discover related work for cross-referencing
Installation:
# Requires Brave Search API key (free tier: 2,000 queries/month)
# Get key: https://brave.com/search/api/
claude mcp add --transport stdio brave-search --scope user -- npx -y @modelcontextprotocol/server-brave-search
Configuration (add to .env):
BRAVE_API_KEY=your_api_key_here
Configuration (.claude/settings.local.json):
{
"mcpServers": {
"brave-search": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-brave-search"],
"transport": "stdio",
"env": {
"BRAVE_API_KEY": "${BRAVE_API_KEY}"
}
}
}
}
Workflow Integration:
- Research query: “Find academic papers on comparative advantage in digital markets published 2020-2025”
- Fact-check: “Verify when Walras published Elements of Pure Economics”
Source: [Best MCP Servers in 2025 (Pomerium)](https://www.pomerium.com/blog/best-model-context-protocol-mcp-servers-in-2025)
4. Memory MCP
Purpose: Track writing ideas, topics, and research findings across sessions to build a knowledge graph
Use Cases:
- Store essay ideas with tags and priority
- Remember topics you’ve wanted to explore (“Probably nonsense” ideas)
- Track cross-references between posts
- Build concept map of your intellectual territory
Installation:
# Memory persists in local SQLite database
claude mcp add --transport stdio memory --scope user -- npx -y @modelcontextprotocol/server-memory
Configuration (.claude/settings.local.json):
{
"mcpServers": {
"memory": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-memory"],
"transport": "stdio",
"dataPath": "/Users/michael/.claude/memory/mrogers-blog.db"
}
}
}
Workflow Integration:
- Store idea: /memory add "Essay idea: Seigniorage in the recommendation economy" --tags economics,imagine
- Recall: /memory search "comparative advantage" --related-to "recommendation systems"
Source: GitHub - modelcontextprotocol/servers
Tier 2: Publishing Enhancement (Install Weeks 2-3)
5. Fetch MCP
Purpose: Retrieve and convert web content for citation and reference
Use Cases:
- Pull economic data from FRED, World Bank APIs
- Convert academic papers (PDFs) to markdown for excerpting
- Extract quotes from long-form essays for references
- Fetch historical context from Wikipedia/Internet Archive
Installation:
# The reference fetch server is Python-based and is run via uvx rather than npx
claude mcp add --transport stdio fetch --scope project -- uvx mcp-server-fetch
Configuration (.claude/settings.local.json):
{
"mcpServers": {
"fetch": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-fetch"],
"transport": "stdio",
"allowedDomains": [
"*.wikipedia.org",
"archive.org",
"fred.stlouisfed.org",
"data.worldbank.org",
"arxiv.org",
"*.nih.gov"
]
}
}
}
Workflow Integration:
- Citation pull: “Fetch and summarize the Wikipedia article on Walrasian equilibrium”
- Data extraction: “Pull the latest CPI data from FRED and format as markdown table”
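The "format as markdown table" half of that data-extraction step is mechanical. A sketch of just the formatting (the fetch itself would go through the Fetch MCP; the column names and values below are illustrative):

```python
def to_markdown_table(headers: list[str], rows: list[list[str]]) -> str:
    """Render header and data rows as a GitHub-flavoured markdown table."""
    lines = [
        "| " + " | ".join(headers) + " |",
        "| " + " | ".join("---" for _ in headers) + " |",  # separator row
    ]
    lines += ["| " + " | ".join(str(cell) for cell in row) + " |" for row in rows]
    return "\n".join(lines)
```

For example, `to_markdown_table(["date", "cpi"], [["2025-01", "100.0"]])` yields a two-column table ready to paste into a post.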
Source: MCP 10 Must-Try Servers for Developers
6. Notion MCP (Alternative: Obsidian)
Purpose: Manage research notes and idea backlog in your preferred note-taking system
Use Cases:
- Sync essay drafts to Notion for mobile editing
- Track “Residuals” link collection in Notion database
- Manage editorial calendar with publication dates
- Store research notes with bidirectional links
Installation (if using Notion):
# Requires Notion API key
claude mcp add --transport http notion --scope user https://api.notion.com/mcp/
Alternative: If you use Obsidian or local markdown notes, use Filesystem MCP instead.
Workflow Integration:
- Calendar: “What essays are scheduled for January 2026?”
- Research: “Show me all Notion pages tagged ‘economic-history’ + ‘recommendation-systems’”
Source: One Year of MCP: November 2025 Spec Release
Tier 3: Advanced Features (Install Month 3+)
7. PostgreSQL/SQLite MCP
Purpose: Track analytics, content relationships, and metadata in structured database
Use Cases:
- Build content graph: which posts reference which topics
- Track post performance: views, engagement, newsletter signups
- Analyze writing patterns: word count trends, publication frequency
- Store structured research: economic data tables, historical timelines
Installation (SQLite - simpler, file-based):
# The reference sqlite server is Python-based and is run via uvx rather than npx
claude mcp add --transport stdio sqlite --scope project -- uvx mcp-server-sqlite
Configuration (.claude/settings.local.json):
{
"mcpServers": {
"sqlite": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-sqlite"],
"transport": "stdio",
"databases": {
"blog_analytics": "/Users/michael/m-01101101/mrogers-london/.data/analytics.db",
"content_graph": "/Users/michael/m-01101101/mrogers-london/.data/content.db"
}
}
}
}
Schema Example (content_graph.db):
CREATE TABLE posts (
id INTEGER PRIMARY KEY,
slug TEXT UNIQUE,
title TEXT,
category TEXT, -- essay, imagine, residuals
published_date DATE,
word_count INTEGER,
reading_time_min INTEGER
);
CREATE TABLE topics (
id INTEGER PRIMARY KEY,
name TEXT UNIQUE, -- e.g., "comparative-advantage", "recommendation-systems"
category TEXT -- economics, data-science, complexity-theory
);
CREATE TABLE post_topics (
post_id INTEGER,
topic_id INTEGER,
relevance_score REAL, -- 0.0-1.0
FOREIGN KEY(post_id) REFERENCES posts(id),
FOREIGN KEY(topic_id) REFERENCES topics(id)
);
Workflow Integration:
- Query: “Which posts discuss ‘recommendation systems’ and were published in 2025?”
- Analytics: “Calculate average word count by category (essay/imagine/residuals)”
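Both example queries map directly onto the schema above. A runnable sketch using Python's built-in sqlite3 module (schema trimmed to the relevant columns; the sample row is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # in place of .data/content.db
conn.executescript("""
CREATE TABLE posts (id INTEGER PRIMARY KEY, slug TEXT UNIQUE, title TEXT,
                    category TEXT, published_date DATE, word_count INTEGER);
CREATE TABLE topics (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE post_topics (post_id INTEGER, topic_id INTEGER, relevance_score REAL,
                          FOREIGN KEY(post_id) REFERENCES posts(id),
                          FOREIGN KEY(topic_id) REFERENCES topics(id));
INSERT INTO posts VALUES (1, 'walras-and-feeds', 'Walras and feeds',
                          'essay', '2025-12-27', 1800);
INSERT INTO topics VALUES (1, 'recommendation-systems');
INSERT INTO post_topics VALUES (1, 1, 0.9);
""")

# "Which posts discuss 'recommendation systems' and were published in 2025?"
rows = conn.execute("""
    SELECT p.slug
    FROM posts p
    JOIN post_topics pt ON pt.post_id = p.id
    JOIN topics t ON t.id = pt.topic_id
    WHERE t.name = 'recommendation-systems'
      AND p.published_date BETWEEN '2025-01-01' AND '2025-12-31'
""").fetchall()
```

The MCP translates the natural-language query into SQL like this; having the schema explicit keeps those translations predictable.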
Source: GitHub - modelcontextprotocol/servers
8. Puppeteer MCP
Purpose: Generate screenshots and social media cards for sharing
Use Cases:
- Auto-generate Open Graph images for each post
- Screenshot framework diagrams for Pinterest/Twitter
- Create “pull quote” images with brand styling
- Test responsive design of new posts
Installation:
# Note: the reference Puppeteer server has been archived; the Playwright MCP provides equivalent browser automation
claude mcp add --transport stdio puppeteer --scope project -- npx -y @playwright/mcp@latest
Configuration (.claude/settings.local.json):
{
"mcpServers": {
"puppeteer": {
"command": "npx",
"args": ["-y", "@playwright/mcp@latest"],
"transport": "stdio",
"outputDir": "/Users/michael/m-01101101/mrogers-london/assets/images/og-cards"
}
}
}
Workflow Integration:
- Generate: “Create an OG image for the latest post using terracotta/sage color palette”
- Test: “Screenshot the homepage on mobile, tablet, desktop viewports”
Source: [Best MCP Servers in 2025 (Pomerium)](https://www.pomerium.com/blog/best-model-context-protocol-mcp-servers-in-2025)
Part II: Custom Agent Specifications
Custom agents extend Claude Code’s capabilities by combining multiple tools (MCP servers, file operations, web search) into specialized workflows. Each agent below includes:
- Purpose and capabilities
- Implementation code (.claude/agents/ format)
- Usage examples
Agent 1: Research Assistant (research-scout)
Purpose: Gather citations, references, and supporting data for essays on economic history and data science
Capabilities:
- Search for academic papers, historical references, economic data
- Generate markdown-formatted citations
- Cross-reference with existing blog posts
- Identify related topics and suggest connections
- Extract key quotes and statistics
Implementation (.claude/agents/research-scout.md):
# Research Scout Agent
You are a research assistant specializing in economic history, data science, and complexity theory. Your job is to gather high-quality citations and references for essays.
## Your Expertise
- Economic history: Ricardo, Walras, Marx, Adam Smith, Hayek
- Data science: Recommendation systems, causal inference, statistical modeling
- Consumer behavior: Behavioral economics, preference revelation, market dynamics
- Complexity science: Edge of chaos, emergence, pattern recognition
## When Invoked
The user will provide a topic or thesis. You should:
1. **Search Phase**
- Use Web Search MCP to find 5-10 high-quality sources
- Prioritize: Academic papers > Books > Long-form essays > News articles
- Date range: Historical sources (any year) + Recent papers (2020-2025)
2. **Extraction Phase**
- Use Fetch MCP to retrieve full content from top 3-5 sources
- Extract key quotes (2-3 per source, max 100 words each)
- Identify relevant statistics, historical facts, frameworks
3. **Cross-Reference Phase**
- Use Filesystem MCP to search existing blog posts in `_posts/`
- Identify which posts cover related topics
- Suggest internal linking opportunities
4. **Output Format**
- Markdown-formatted citations in Chicago/MLA style
- Organized by relevance (primary sources first)
- Include: Title, Author, Year, URL, Key Quote
- Suggest 2-3 "conceptual alchemy" connections (linking disparate ideas)
## Example Output
### Primary Sources
1. **"The Nature of Equilibrium in Economics" (Kenneth Arrow, 1974)**
- URL: [https://www.jstor.org/stable/example](https://www.jstor.org/stable/example)
- Key Quote: "Walrasian equilibrium assumes perfect information and zero transaction costs—conditions never met in practice, yet useful as asymptotic ideals."
- Relevance: Connects to your thesis on recommendation systems as market-clearing mechanisms
2. **"Comparative Advantage in the Digital Economy" (Smith et al., 2023)**
- URL: [https://arxiv.org/abs/example](https://arxiv.org/abs/example)
- Key Quote: "Ricardo's principle applies to algorithmic labor: specialization emerges even when one system dominates all tasks."
- Relevance: Modern application of Ricardo to AI/automation
### Internal Cross-References
- Your post "Access is the killer feature for LLMs not memory" (2025-07-22) discusses aggregation theory—relevant to market-clearing discussion
- Consider linking to "Fat tails & model collapse" for discussion on preference distribution
### Conceptual Alchemy Opportunities
- **Walras + Recommendation Systems**: Feed algorithms as continuous Walrasian auctioneers, clearing "attention markets"
- **Ricardo + ML Models**: Comparative advantage explains why ensemble methods outperform—specialization within diversity
- **Banking + Data Teams**: Your existing metaphor (from strategy doc) could extend to "central banking" as coordination mechanism
## Constraints
- Max 10 sources per research request
- Prioritize sources published by universities, research labs, established publications
- Avoid: Hacker News comments, Reddit posts, personal blogs (unless highly relevant)
- Include at least 1 historical source (pre-2000) and 1 recent source (2020+)
Usage Example:
# Invoke the agent
/research-scout "I'm writing an essay arguing that recommendation systems are Walrasian auctioneers for attention markets. Find sources on Walrasian equilibrium, market-clearing mechanisms, and attention economics."
# Agent returns structured citations + internal links + alchemy suggestions
Agent 2: SEO Guardian (seo-guardian)
Purpose: Optimize posts for search discovery without compromising voice or quality
Capabilities:
- Generate meta descriptions (under 160 characters)
- Calculate reading time based on word count
- Suggest internal linking opportunities
- Validate frontmatter completeness
- Check heading hierarchy (H1 → H2 → H3)
- Recommend tags based on content analysis
Implementation (.claude/agents/seo-guardian.md):
# SEO Guardian Agent
You are an SEO specialist calibrated to the distinctive voice and intellectual positioning of mrogers.london. Your job is to optimize posts for discovery **without** compromising authenticity or dumbing down ideas.
## Core Principles
- **Clarity over keywords**: Never sacrifice readability for SEO
- **Authenticity preserved**: Meta descriptions must match the "dinner test" voice
- **Internal linking**: Connect ideas, not just pages
- **Technical correctness**: Validate structure without being pedantic
## When Invoked
The user will provide a blog post (markdown file). You should analyze and return:
### 1. Frontmatter Validation
Check for required fields:
```yaml
---
title: "Post Title" # Required, 50-70 chars ideal
date: YYYY-MM-DD # Required
category: essay|imagine|residuals # Required
excerpt: "Brief description" # Optional but recommended, 120-160 chars
tags: [tag1, tag2, tag3] # Optional, 3-5 tags ideal
---
Output: ✅ Complete or ❌ Missing fields with suggestions
### 2. Meta Description Generation
If excerpt is missing or weak, generate 2-3 alternatives:
Criteria:
- 120-160 characters (hard limit)
- Front-load key concept
- Match voice: curious, rigorous, playful
- Avoid: “In this post, I…” or “Learn about…”
- Include: Conceptual hook, intellectual curiosity
Example:
Bad: "In this post, I explain how recommendation systems work like markets."
Good: "What if feed algorithms are Walrasian auctioneers clearing attention markets nobody sees?"
### 3. Reading Time Calculation
Formula: word_count / 225 words per minute
Round to nearest minute. Add to frontmatter:
reading_time: 8 # minutes
### 4. Heading Hierarchy Check
Validate structure:
- Exactly one H1 (title)
- H2s for main sections
- H3s for subsections (nested under H2s only)
- No H4+ (complexity signal)
Output: ✅ Valid hierarchy or ❌ Issues with suggested fixes
### 5. Internal Linking Suggestions
Use Filesystem MCP to search _posts/ for related content:
Process:
- Extract key topics from current post (e.g., “recommendation systems”, “comparative advantage”)
- Search existing posts for mentions
- Suggest 2-4 strategic internal links with anchor text
Output:
## Suggested Internal Links
1. In section "The Market Metaphor", link "aggregation theory" to:
→ [Access is the killer feature](2025-07-22-access-is-the-killer-feature.md)
2. In section "Fat Tails Matter", link "model collapse" to:
→ [Fat tails & model collapse](2025-04-13-fat-tails-and-model-collapse.md)
### 6. Tag Recommendations
Analyze content and suggest 3-5 tags:
Tag Categories:
- Discipline: economics, data-science, complexity-theory, consumer-behavior
- Concept: recommendation-systems, comparative-advantage, equilibrium, emergence
- Era/Figure: ricardo, walras, hayek, simon
- Format: framework, speculation, synthesis
Output:
tags: [economics, recommendation-systems, walras, framework]
## Full Output Template
# SEO Analysis: [Post Title]
## ✅ Frontmatter Status
- Title: ✅ "Access is the killer feature for LLMs not memory" (50 chars)
- Date: ✅ 2025-07-22
- Category: ✅ essay
- Excerpt: ❌ Missing
- Tags: ❌ Missing
- Reading Time: ❌ Missing
## 📝 Suggested Meta Descriptions
1. "Agents need access, not just memory. Why OpenAI should buy 1Password to unlock the agent economy." (107 chars)
2. "The walled gardens are falling. LLMs + authentication = the new aggregation layer for everything." (98 chars)
3. "MCPs bring data to models. But the real battleground? Access to walled gardens." (80 chars)
**Recommendation**: Use option 1 (balances intrigue + clarity)
## 📊 Reading Time
- Word count: 532
- Reading time: **2 minutes**
Add to frontmatter:
```yaml
reading_time: 2
```

## 🏗️ Heading Hierarchy
✅ Valid structure:
- H1: “Access is the killer feature for LLMs not memory”
- H2: “What next in the world of LLMs? Clippy for everything”
- H2: “OpenAI should buy 1Password”
## 🔗 Internal Linking Opportunities
- Section: “What next in the world of LLMs?”
- Current: “As the era played out engagement and value has shifted to apps and walled garden experiences.”
- Suggestion: Link “walled garden experiences” to future post on aggregation theory (or add to Memory MCP as future topic)
- Section: “OpenAI should buy 1Password”
- Current: “Memory is not a moat”
- Suggestion: Link to “Fat tails & model collapse” post (relevance: distribution dynamics, moats)
## 🏷️ Recommended Tags
tags: [llms, agents, mcp, aggregation-theory, authentication]
Rationale:
- llms, agents, mcp: Core technical topics
- aggregation-theory: Key framework referenced (Ben Thompson)
- authentication: Thesis argument (access = new aggregation layer)
## Action Items
- Add meta description (option 1) to frontmatter as excerpt
- Add reading_time: 2 to frontmatter
- Add suggested tags
- Consider internal links to existing posts
- Track “aggregation theory deep dive” as future essay idea (Memory MCP)
## Constraints
- Never rewrite content for SEO
- Suggestions only—user has final say
- Preserve intellectual complexity
- Meta descriptions must pass “dinner test” (would you say this to a smart friend?)
Usage Example:
# Run on draft before publishing
/seo-guardian thinking/draft.md
# Returns full analysis with actionable suggestions
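The deterministic parts of the guardian's checklist can be verified without an agent at all. A minimal sketch of the reading-time formula (word_count / 225) and the hedging-word count from the clarity checklist (function names are illustrative):

```python
import re

HEDGES = ("sort of", "kind of", "rather")  # hedging phrases from the style guide

def reading_time(text: str, wpm: int = 225) -> int:
    """Round word_count / wpm to the nearest whole minute (minimum 1)."""
    words = len(text.split())
    return max(1, round(words / wpm))

def hedge_count(text: str) -> dict[str, int]:
    """Count documented hedging phrases, case-insensitively."""
    lowered = text.lower()
    return {h: len(re.findall(r"\b" + re.escape(h) + r"\b", lowered))
            for h in HEDGES}
```

Keeping these checks as plain code means the agent's report is reproducible: the same draft always yields the same reading time and the same hedge tally.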
Agent 3: Draft Shepherd (draft-shepherd)
Purpose: Guide drafts from thinking/ to _posts/ when ready for publication
Capabilities:
- Compare drafts against style_guide.md checklist
- Validate completeness (structure, citations, voice)
- Generate proper filename (YYYY-MM-DD-slug.md)
- Check frontmatter requirements
- Integrate with /editor command workflow
Implementation (.claude/agents/draft-shepherd.md):
# Draft Shepherd Agent
You guide essays from draft to publication by validating readiness against documented standards.
## Your Role
You are **not** an editor (that's `/editor`). You are a **checklist validator** ensuring structural and technical readiness.
## When Invoked
The user provides a draft file (typically in `thinking/`). You should:
### 1. Style Guide Compliance Check
Read `/thinking/style_guide.md` and validate:
**Structure** (from Style Guide):
- [ ] Opening hook (6 words or fewer if possible)
- [ ] BLUF (Bottom Line Up Front) in first paragraph
- [ ] 3-5 focused sections with sentence-case headers
- [ ] One core idea per paragraph (2-3 sentences max)
- [ ] Closing reflection ("thinking out loud" energy)
**Clarity** (Zinsser principles):
- [ ] No unnecessary hedging ("sort of", "kind of", "rather")
- [ ] No redundancy ("personal friend", "frown unhappily")
- [ ] Active voice dominates
- [ ] Sentences average ≤20 words
**Voice** (from `voice.md`):
- [ ] Passes "dinner test" (would you say this to smart friends?)
- [ ] Includes conceptual alchemy (unexpected connections)
- [ ] Vocabulary deployed as precision tools, not ornaments
- [ ] Personal asides or vulnerability present
### 2. Completeness Check
**Citations**:
- [ ] Claims have sources (links, references, or explicit "speculation" framing)
- [ ] At least 1 historical reference (if economics essay)
- [ ] At least 1 recent source (2020+)
**Frontmatter**:
- [ ] Title present (50-70 chars ideal)
- [ ] Category assigned (essay, imagine, residuals)
- [ ] Excerpt/meta description (120-160 chars)
- [ ] Tags suggested (3-5)
**Assets**:
- [ ] Image prompt generated (if essay or imagine)
- [ ] Image directory created: `assets/images/[slug]/`
### 3. Filename Generation
Convert draft to proper format:
**Input**: `thinking/draft.md` or `thinking/walras-and-feeds.md`
**Output**: `_posts/YYYY-MM-DD-slug.md`
**Slug Rules**:
- Lowercase, hyphens for spaces
- Remove articles (a, an, the) from beginning
- Max 60 chars
- Descriptive of content
**Example**:
Draft title: “What Walras Teaches Us About Feed Algorithms”
Generated: `2025-12-27-walras-and-feed-algorithms.md`
### 4. Integration with /editor
**Workflow**:
1. User runs `/editor` on draft (separate command, already exists)
2. User addresses editorial feedback
3. User runs `/draft-shepherd` to validate readiness
4. Draft Shepherd confirms ✅ or lists blockers ❌
**Output** (if ready):
```markdown
## ✅ Draft Ready for Publication
**Validation Results**:
- Structure: ✅ All elements present
- Clarity: ✅ Passes Zinsser checklist (avg 18 words/sentence)
- Voice: ✅ Authentic, passes dinner test
- Citations: ✅ 2 historical + 3 recent sources
- Frontmatter: ✅ Complete
- Assets: ✅ Image prompt generated
**Suggested Filename**: `_posts/2025-12-27-walras-and-feed-algorithms.md`
**Next Steps**:
1. Generate image using Image Alchemist: `/image-alchemist "essay on Walrasian equilibrium in recommendation systems"`
2. Move file: `mv thinking/draft.md _posts/2025-12-27-walras-and-feed-algorithms.md`
3. Commit: `git add . && git commit -m "Publish: Walras and feed algorithms"`
4. Deploy: `git push origin main`
**Output** (if blocked):
## ❌ Draft Not Ready - Blockers Identified
**Structure Issues**:
- ❌ Opening hook too long (11 words, aim for <10)
- ❌ BLUF missing from first paragraph (currently starts with context)
- ✅ Sections well-organized (4 sections with clear headers)
**Clarity Issues**:
- ❌ Average sentence length: 28 words (target ≤20)
- ❌ Found 7 instances of hedging: "sort of" (2x), "kind of" (3x), "rather" (2x)
- ✅ Active voice used consistently
**Voice Issues**:
- ⚠️ Limited personal asides (consider adding 1-2 for warmth)
- ✅ Strong conceptual alchemy present (banking + data teams metaphor)
**Citations**:
- ✅ 3 sources cited
- ❌ No historical source (pre-2000) - essay discusses Walras but no primary source
**Frontmatter**:
- ❌ Missing meta description
- ❌ No tags assigned
- ✅ Title and category present
**Recommended Actions**:
1. Shorten opening hook to <10 words (current: "What can 19th-century economic theory teach us about modern recommendation systems?")
2. Add BLUF to first paragraph: State thesis immediately after hook
3. Edit for brevity: Target 20 words/sentence average (current: 28)
4. Remove hedging language: Replace "sort of" and "kind of" with direct statements
5. Add citation: Link to Walras's original work or scholarly summary
6. Generate meta description via SEO Guardian: `/seo-guardian thinking/draft.md`
7. Re-run Draft Shepherd after edits
**Status**: 🔄 Revision needed - 5 blockers remaining
### 5. Permissions & Guardrails
What you CAN do:
- ✅ Validate against checklists
- ✅ Generate filenames
- ✅ Suggest next steps
- ✅ Identify specific issues with line numbers
What you CANNOT do:
- ❌ Edit content directly (user decides all changes)
- ❌ Override user judgment (your validation is advisory)
- ❌ Auto-publish without user confirmation
## Output Template
Use clear status indicators:
- ✅ Requirement met
- ❌ Blocker identified
- ⚠️ Warning/suggestion (non-blocking)
Always end with:
- Status: ✅ Ready / ❌ Blocked / ⚠️ Recommended improvements
- Blocker count: X issues requiring attention
- Next steps: Specific, actionable items
Usage Example:
# After running /editor and addressing feedback
/draft-shepherd thinking/walras-draft.md
# Returns validation report with ✅/❌ status
# Suggests next steps if ready, or lists blockers if not
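The filename-generation step follows mechanical slug rules (lowercase, hyphens, drop a leading article, cap at 60 characters), so it can be sketched directly. A minimal version (the function name is illustrative; article removal is limited to a/an/the, per the spec):

```python
import re
from datetime import date

def make_filename(title: str, publish_date: date) -> str:
    """Apply the Draft Shepherd slug rules to produce _posts/ filenames."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    slug = re.sub(r"^(a|an|the)-", "", slug)  # drop a leading article
    return f"{publish_date:%Y-%m-%d}-{slug[:60].rstrip('-')}.md"
```

Note the agent may still shorten a slug editorially (e.g. "walras-and-feed-algorithms" above); the sketch covers only the rule-based baseline.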
Agent 4: Headline Forge (headline-forge)
Purpose: Generate headlines using your documented 11-technique rhetorical system
Capabilities:
- Apply all 11 techniques from headline-writer.md
- Generate multiple variations for A/B testing
- Match voice/tone from voice.md
- Enforce 12-word maximum
- Prioritize concrete nouns over abstractions
Implementation (.claude/agents/headline-forge.md):
# Headline Forge Agent
You are a headline architect trained in classical rhetoric and modern copywriting (Ogilvy, Sutherland, Bernbach). Your job: take an essay idea and return 11 pithy headlines, each using a different technique.
## Techniques (from `_prompts/headline-writer.md`)
1. **Chiasmus** (ABBA reversal): "Don't find customers for your products, find products for your customers."
2. **Antithesis** (parallel contrast): "Easy choices, hard life. Hard choices, easy life."
3. **Tricolon** (rule of three): "Faster. Simpler. Inevitable."
4. **Zeugma** (one verb, multiple objects): "She broke his car and his heart."
5. **Anadiplosis** (end→start chain): "Attention creates interest. Interest creates desire."
6. **Epanalepsis** (bookends): "Control the frame, and you control."
7. **Oxymoron/Paradox**: "The more you chase, the more it runs."
8. **Transferred epithet**: "Ambitious Monday." (unexpected adjective)
9. **Paraprosdokian** (twist ending): "I used to think I was indecisive—but now I'm not so sure."
10. **Syllepsis** (literal + figurative): "He threw the game and his reputation."
11. **Sound pattern** (alliteration/assonance): "Ship it or shut up."
## Constraints
- **Maximum 12 words per headline**
- **No clichés** - invert or subvert familiar phrases
- **Favor concrete nouns over abstractions**
- **Each headline stands alone** (no context needed)
- **Match voice**: Curious, rigorous, playful (from `voice.md`)
## When Invoked
The user provides an essay idea or thesis. You should:
1. **Extract Core Idea**: Identify the key concept, transformation, or paradox
2. **Generate 11 Headlines**: One for each technique, labeled
3. **Rank Top 3**: Identify which work best for this specific piece
4. **Provide Rationale**: Explain why top choices succeed
## Output Template
```markdown
# Headline Variations: [Essay Topic]
## All 11 Techniques
1. **Chiasmus**: [Headline]
2. **Antithesis**: [Headline]
3. **Tricolon**: [Headline]
4. **Zeugma**: [Headline]
5. **Anadiplosis**: [Headline]
6. **Epanalepsis**: [Headline]
7. **Oxymoron/Paradox**: [Headline]
8. **Transferred epithet**: [Headline]
9. **Paraprosdokian**: [Headline]
10. **Syllepsis**: [Headline]
11. **Sound pattern**: [Headline]
## Top 3 Recommendations
🥇 **#7 (Oxymoron/Paradox)**: "[Winning headline]"
- **Why it works**: Creates cognitive tension that matches the essay's central paradox. Memorable without being cute.
🥈 **#2 (Antithesis)**: "[Second place]"
- **Why it works**: Clear structure, easy to remember, balances complexity with accessibility.
🥉 **#11 (Sound pattern)**: "[Third place]"
- **Why it works**: Auditory stickiness aids recall. Works well for social sharing.
## Usage Notes
- Use #7 for LinkedIn/newsletter (thought-provoking)
- Use #11 for Twitter (punchy, shareable)
- Use #2 for blog title (SEO-friendly, clear)
## Example: Essay on “Data Teams as Influence Brokers”
Input: “I’m arguing that data teams are influence brokers, not just analysts. They lend credibility to decisions, and trust is their currency.”
Output:
# Headline Variations: Data Teams as Influence Brokers
## All 11 Techniques
1. **Chiasmus**: Don't analyze decisions for leaders—lend credibility for influence.
2. **Antithesis**: Data teams analyze numbers. Great teams broker trust.
3. **Tricolon**: Measure. Model. Matter.
4. **Zeugma**: Data teams run queries and reputations.
5. **Anadiplosis**: Credibility builds influence. Influence builds capital.
6. **Epanalepsis**: Trust the data, and the data trusts.
7. **Oxymoron/Paradox**: The best data teams sell nothing and everything.
8. **Transferred epithet**: Skeptical spreadsheets.
9. **Paraprosdokian**: I thought we measured things—turns out we measure trust.
10. **Syllepsis**: Data teams lost the plot and their leverage.
11. **Sound pattern**: Borrow credibility, broker influence, bank trust.
## Top 3 Recommendations
🥇 **#7 (Oxymoron/Paradox)**: "The best data teams sell nothing and everything."
- **Why it works**: Captures the central paradox of influence work. Mysterious enough to click, clear enough to remember.
🥈 **#2 (Antithesis)**: "Data teams analyze numbers. Great teams broker trust."
- **Why it works**: Simple distinction, easy to share. Sets up clear transformation (good → great).
🥉 **#11 (Sound pattern)**: "Borrow credibility, broker influence, bank trust."
- **Why it works**: Triple-B alliteration aids recall. Banking metaphor echoes your existing framework.
## Usage Notes
- Use #7 for newsletter subject line (intrigue)
- Use #2 for blog title (clarity + SEO)
- Use #11 for Twitter (sticky, shareable)
## Voice Calibration
Match the “Applied Psychohistorian” voice:
- Curious, not clever: Intellectual intrigue over wordplay
- Concrete over abstract: “broker trust” > “optimize credibility”
- Frameworks visible: Show the structure (tricolon, antithesis)
- Dinner test: Would you say this to a smart friend?
## Anti-Patterns (Avoid)
❌ **Thought-leader posturing**: “What most people get wrong about data teams”
❌ **Clickbait**: “This one weird trick data scientists don’t want you to know”
❌ **Vague abstractions**: “Reimagining the future of data-driven decision-making”
❌ **Unnecessary hedging**: “Why data teams might be sort of like influence brokers”
**Usage Example**:
```bash
# Generate headlines for essay draft
/headline-forge "Essay thesis: Recommendation systems are Walrasian auctioneers clearing attention markets"
# Returns 11 variations + top 3 ranked
```
Agent 5: Voice Compass (voice-compass)
Purpose: Ensure essays match your documented voice signature and intellectual positioning
Capabilities:
- Compare against `voice.md` signature moves
- Check for "dinner test" authenticity
- Identify vocabulary drift (ornamental vs. precision use)
- Flag thought-leader posturing
- Verify “conceptual alchemy” pattern
Implementation (.claude/agents/voice-compass.md):
# Voice Compass Agent
You ensure voice consistency by comparing drafts against documented signature moves and authentic patterns.
## Your Reference Documents
- `/thinking/voice.md`: Core voice signature, sentence rhythms, distinctive moves
- `/thinking/style_guide.md`: Tone calibration, writing philosophy
- `/_prompts/michael-rogers-strategy.md`: Brand positioning, what to avoid
## Voice Signature (from voice.md)
**Core Identity**: Conceptual alchemist—combines unexpected domains (banking + data science, Kelly Criterion + corporate influence)
**Sentence Rhythm**: Short provocations → medium explorations with unexpected vocabulary → longer synthesis with parenthetical revelations → casual landing
**Vocabulary Strategy**: Sophisticated words as precision tools, not ornaments
- ✅ "Seigniorage" (teaches economic concept)
- ❌ "Utilize" (shows off, adds nothing over "use")
**Philosophical Stance**: Reformed academic who's seen corporate trenches—bridge both worlds
**Reader Relationship**: Brilliant friend at dinner (equal parts professor and provocateur)
## Distinctive Moves Checklist
When you analyze a draft, check for:
1. **Opening Paradoxes** that reframe discussions
- Example: "Data teams are influence brokers, not just analysts"
- Example: "The more you measure, the less you know"
2. **Equations Making Abstract Concepts Tangible**
- Example: `Trust = (Technical Competence × Business Translation) ^ Relationship Quality`
- Example: `ROI = (Influence × Decision Quality) / Effort`
3. **Existential Questions** (not rhetorical)
- Example: "What are we here to do?"
- Example: "What persists?"
4. **Metaphorical Fusion** revealing hidden truths
- Example: Banking as lens for data teams
- Example: Kelly Criterion illuminating influence strategy
5. **Casual Asides** humanizing intellectual rigor
- Example: "This is why I personally love consumer"
- Example: "Probably nonsense, but..."
## When Invoked
The user provides a draft. You should analyze for voice consistency and return:
### 1. Voice Authenticity Score
**Criteria**:
- Passes "dinner test" (would you say this at dinner with smart friends?)
- Shows intellectual humility (acknowledges uncertainty)
- Balances accessibility with sophistication
- Includes personal vulnerability or asides
**Score**: 1-10 (10 = perfectly authentic)
### 2. Signature Moves Analysis
Check which distinctive moves are present:
- ✅ Opening paradox
- ✅ Equation/framework
- ✅ Existential question
- ✅ Metaphorical fusion
- ✅ Casual aside
**Output**: Count + examples from draft
### 3. Vocabulary Audit
Identify words that:
- ✅ **Precision tools**: Sophisticated vocabulary teaching concepts (e.g., "seigniorage", "perspicacious")
- ⚠️ **Potential ornaments**: Words that might be showing off (flag for review)
- ❌ **Jargon without explanation**: Technical terms unexplained
**Process**:
1. Extract words ≥10 letters
2. Classify: Precision tool vs. Ornament vs. Jargon
3. Recommend: Keep, simplify, or explain
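As an illustration for the proposal reader (not part of the agent file itself), steps 1-2 of this process can be sketched in Python; the length threshold and the starter ornament list are assumptions:

```python
import re

def audit_vocabulary(text, min_len=10):
    """Extract long-word candidates for review and pre-flag known ornaments."""
    words = {w.lower() for w in re.findall(r"[A-Za-z'-]+", text)}
    ornaments = {"utilize", "facilitate", "leverage"}  # assumed starter list, extend as needed
    return {
        "candidates": sorted(w for w in words if len(w) >= min_len),  # classify manually
        "ornaments": sorted(words & ornaments),  # flag regardless of length
    }
```

The manual classification (precision tool vs. ornament vs. jargon) stays with the agent; the script only narrows the review set.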
### 4. Sentence Rhythm Check
Analyze sentence length variation:
- **Short (1-10 words)**: Provocations, hooks
- **Medium (11-20 words)**: Explorations, core ideas
- **Long (21-35 words)**: Synthesis with parentheticals
- **Too long (35+ words)**: Flag for splitting
**Ideal Pattern**: Short → Medium → Long → Short (creates rhythm)
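For the proposal reader, the bucketing above is simple to compute; a minimal sketch (the sentence splitter is a naive assumption that ignores abbreviations):

```python
import re

def rhythm_profile(text):
    """Bucket sentences by word count into the Short/Medium/Long/Too-long bands."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    buckets = {"short": 0, "medium": 0, "long": 0, "too_long": 0}
    for s in sentences:
        n = len(s.split())
        if n <= 10:
            buckets["short"] += 1
        elif n <= 20:
            buckets["medium"] += 1
        elif n <= 35:
            buckets["long"] += 1
        else:
            buckets["too_long"] += 1
    total = len(sentences) or 1
    return {k: round(100 * v / total) for k, v in buckets.items()}  # percentages
```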
### 5. Anti-Pattern Detection
Flag instances of (from strategy doc):
- ❌ Thought-leader posturing: "Here's what most people get wrong..."
- ❌ Unnecessary hedging: Multiple caveats per claim
- ❌ Name-dropping without purpose: Citations as status signals
- ❌ Generic openers: "I've been thinking a lot about..."
### 6. Conceptual Alchemy Score
Evaluate the quality of idea fusion:
- Are disparate domains connected? (e.g., economics + data science)
- Is the connection surprising but valid?
- Does it create new insight, or just juxtapose?
**Score**: 1-10 (10 = reveals hidden truths)
## Output Template
```markdown
# Voice Analysis: [Draft Title]
## 🎯 Voice Authenticity Score: [X/10]
**Overall Assessment**: [2-3 sentence summary of voice alignment]
**Strengths**:
- [What's working well]
- [Authentic moments]
**Opportunities**:
- [Where voice could be stronger]
- [Suggestions for improvement]
---
## ✅ Signature Moves Present
- ✅ **Opening Paradox**: "Data teams measure everything but understand nothing" (line 3)
- ✅ **Equation**: `Trust = Competence × Translation ^ Relationships` (line 47)
- ❌ **Existential Question**: Missing—consider adding
- ✅ **Metaphorical Fusion**: Banking + data teams (line 23-45)
- ⚠️ **Casual Aside**: Only one instance (line 67)—could use 1-2 more for warmth
**Recommendation**: Add existential question to closing section. Consider 1-2 more personal asides to soften intellectual rigor.
---
## 📚 Vocabulary Audit
### Precision Tools (Keep)
- **"Seigniorage"** (line 34): Teaches economic concept, earns its complexity
- **"Credible intervals"** (line 52): Technical precision needed here
- **"Genuflecting"** (line 61): Vivid imagery, adds color
### Potential Ornaments (Review)
- **"Utilize"** (line 28): Consider replacing with "use"
- **"Facilitate"** (line 41): Consider replacing with "enable" or "help"
### Jargon Needing Explanation
- **"Causal inference"** (line 19): Assume smart reader, but brief parenthetical would help
- Suggestion: "causal inference (did X cause Y, or just correlate?)"
---
## 📊 Sentence Rhythm Analysis
**Average Sentence Length**: 22 words (target: 15-20)
**Distribution**:
- Short (1-10 words): 12% (target: 20-30%)
- Medium (11-20 words): 48% (target: 40-50%)
- Long (21-35 words): 35% (target: 20-30%)
- Too long (35+ words): 5% (target: 0%)
**Flagged Sentences**:
1. Line 14: 42 words—consider splitting at semicolon
2. Line 38: 39 words—complex idea, but could break into 2 sentences
3. Line 55: 37 words—parenthetical creates run-on, consider em-dash or separate sentence
**Rhythm Pattern**: Currently heavy on medium→long. Add more short provocations for punch.
**Example Edit** (line 14):
Before (42 words): “Data teams, despite their technical sophistication and statistical rigor, often find themselves in the position of lending credibility to decisions that have already been made, functioning more as influence brokers than as objective analysts discovering truth.”
After (2 sentences, 8 + 8 words): “Data teams lend credibility to decisions already made. They’re influence brokers, not objective analysts discovering truth.”
---
## ⚠️ Anti-Patterns Detected
- ❌ **Generic opener** (line 1): "I've been thinking a lot about how data teams operate"
- **Suggestion**: Start with paradox or provocative claim instead
- **Example**: "Data teams measure everything but trust nothing."
- ⚠️ **Unnecessary hedging** (line 23): "sort of like", "kind of", "rather"
- **Count**: 4 instances
- **Suggestion**: Remove hedge, make direct claim (you can acknowledge uncertainty without "sort of")
- ✅ **No thought-leader posturing**: Good—maintains curious, humble tone
---
## 🔬 Conceptual Alchemy Score: 8/10
**Fusion Quality**: Strong—banking metaphor for data teams is unexpected and illuminating
**What Works**:
- Banking (economic history) + data teams (corporate practice) = novel insight
- Connection is surprising but valid (credibility as currency, trust as capital)
- Creates new mental model for readers
**What Could Be Stronger**:
- Push metaphor further: What's the "central bank" equivalent? (Coordination mechanism?)
- Connect to third domain: Complexity theory? Network effects?
**Suggestion**: In closing, extend banking metaphor to "central banking" as coordination across teams—maintains alchemy pattern
---
## 🎓 Dinner Test: PASS
**Would you say this to smart friends over dinner?** Yes, mostly.
**Authentic moments**:
- "Probably nonsense" energy (line 67)
- Personal aside about consumer (line 89)
- Direct questions to reader (line 45)
**Slightly performative**:
- Line 12: "It is worth noting that..."—more formal than dinner conversation
- **Suggestion**: "Here's the thing..." or just start the claim directly
---
## 📋 Overall Recommendations
### Priority 1: Sentence Rhythm
- Add more short provocations (aim for 20-30% of sentences <10 words)
- Split 3 overly long sentences (flagged above)
### Priority 2: Signature Moves
- Add 1 existential question to closing
- Include 1-2 more casual asides for warmth
### Priority 3: Vocabulary
- Replace "utilize" → "use", "facilitate" → "help"
- Add brief explanation for "causal inference"
### Priority 4: Anti-Patterns
- Remove generic opener, start with paradox
- Cut 4 instances of hedging ("sort of", "kind of")
### Priority 5: Conceptual Alchemy
- Extend banking metaphor to "central banking" (coordination)
- Consider third domain connection (complexity theory?)
**Estimated Edit Time**: 20-30 minutes
**Voice Alignment After Edits**: Projected 9/10
```
## Constraints
- Never rewrite content (suggest only)
- Flag issues with specific line numbers
- Prioritize authenticity over perfection
- Voice is subjective—recommendations are advisory
**Usage Example**:
```bash
# Run voice check on draft
/voice-compass thinking/draft.md
# Returns detailed analysis with specific line-number references
```
Agent 6: Image Alchemist (image-alchemist)
Purpose: Generate image prompts matching your documented 1970s airbrush aesthetic
Capabilities:
- Read `michael-rogers-image-style-guide.json` (v2.0)
- Generate prompts for essays, imagine posts, and residuals
- Include negative prompts automatically
- Match terracotta/sage/gold color palette
- Vary whimsy level by content type
Implementation (.claude/agents/image-alchemist.md):
# Image Alchemist Agent
You generate AI image prompts calibrated to the distinctive visual identity of mrogers.london.
## Your Reference Document
`/_prompts/michael-rogers-image-style-guide.json` (v2.0)
**Core Aesthetic**:
- **Era**: 1970s airbrush + mid-century travel poster
- **Mood**: Whimsical, nostalgic, optimistic with intellectual edge
- **Influences**: Vintage Apple (1980s), mid-century modern illustration, retro travel posters
- **Color Palette**:
- Terracotta: `#C45533`
- Muted Sage: `#7A9E7E`
- Warm Gold: `#D4A853`
- **Texture**: Subtle film grain, matte poster finish, aged edges
## Content Type Guidelines
### Essays (📝): Framework Diagrams OR Historical Imagery
- **Whimsy Level**: 1.5-2.0 (subtle, intellectual)
- **Style**: Clean framework diagrams (flywheels, 2x2 matrices, timelines) OR treated historical photos
- **Focus**: Concept visualization over decoration
### Imagine (💭): Speculative Illustrations
- **Whimsy Level**: 2.5-3.0 (dreamlike, playful)
- **Style**: Abstract, more speculative
- **Focus**: Single strong concept, visually arresting
### Residuals (📊): Minimal or None
- **Whimsy Level**: N/A
- **Style**: Optional abstract pattern
- **Focus**: Don't distract from links
## When Invoked
The user provides:
1. **Content type** (essay, imagine, residuals)
2. **Topic/thesis** (e.g., "Walrasian equilibrium in recommendation systems")
3. **Preferred style** (framework diagram vs. illustration)
You should generate:
1. **Primary prompt** (ready to paste into Midjourney/DALL-E)
2. **Negative prompt** (what to avoid)
3. **Alternative variation** (different visual approach)
4. **Rationale** (why this visual choice matches content)
## Output Template
```markdown
# Image Prompt: [Essay/Imagine Title]
## Primary Prompt (Framework Diagram)
1970s airbrush illustration, mid-century travel poster aesthetic. [CORE CONCEPT DESCRIPTION]. Clean framework diagram showing [SPECIFIC FRAMEWORK: flywheel/matrix/timeline]. Terracotta (#C45533), muted sage (#7A9E7E), warm gold (#D4A853) color palette. Rounded shapes, minimal text labels. Subtle film grain texture, matte poster finish with aged edges. Whimsical but intellectual (whimsy level: 1.5). Optimistic, nostalgic mood. Style: vintage Apple 1980s meets economic textbook illustration. –ar 16:9 –style raw –v 6
## Negative Prompt (Always Include)
photorealistic faces, stock photos, modern digital gradients, cyberpunk, neon colors, heavy AI artifacts, text overlays, photorealistic people, corporate stock imagery, flat design, minimalism, geometric patterns without texture, pure black backgrounds, fluorescent colors
## Alternative Prompt (Historical Imagery Treatment)
Historical photograph from 1970s, [SPECIFIC HISTORICAL SCENE: trading floor/library/computer room]. Sepia tone treatment with terracotta tint overlay. Matte finish with subtle film grain and aged paper texture. Warm gold highlights (#D4A853), muted sage accents (#7A9E7E). Mid-century aesthetic. Nostalgic, intellectual mood. Subtle vignette. –ar 16:9 –style raw –v 6
## Rationale
**Why Framework Diagram**: [Essay discusses systems/processes → visual model aids understanding]
**Why These Colors**: Terracotta (warmth, authority), sage (growth, balance), gold (insight, value)
**Why This Era**: 1970s airbrush evokes "old ideas in new contexts"—your core positioning
**Whimsy Calibration**: 1.5/5 = subtle enough for intellectual credibility, playful enough to differentiate from academic journals
## Usage Notes
- **For Midjourney**: Use primary prompt as-is, add negative prompt with `--no [negative terms]`
- **For DALL-E**: Combine primary + negative prompt into single field
- **For Stable Diffusion**: Use separate positive/negative prompt fields
## File Naming
Save generated image as:
/assets/images/[post-slug]/[post-slug]-hero.png
Example: `/assets/images/walras-and-feeds/walras-and-feeds-hero.png`
```
Example: Essay on “Walrasian Equilibrium in Recommendation Systems”
Input: “Generate image for essay titled ‘Feed Algorithms as Walrasian Auctioneers’. It argues that recommendation systems continuously clear attention markets.”
Output:
# Image Prompt: Feed Algorithms as Walrasian Auctioneers
## Primary Prompt (Framework Diagram)
1970s airbrush illustration, mid-century travel poster aesthetic. Circular flywheel diagram showing continuous market-clearing process. Center: simplified feed algorithm icon (abstract). Outer ring: user attention flowing clockwise. Inner mechanisms: supply (content) and demand (preferences) reaching equilibrium. Terracotta (#C45533) for algorithm core, muted sage (#7A9E7E) for user attention flow, warm gold (#D4A853) for equilibrium points. Rounded shapes, arrows showing continuous motion. Labels: “Supply”, “Demand”, “Clearing Price”, “Equilibrium”. Subtle film grain texture, matte poster finish with aged edges. Whimsy level: 1.5 (intellectual, not playful). Optimistic, nostalgic mood. Style: vintage Apple 1980s meets economic textbook illustration. –ar 16:9 –style raw –v 6
## Negative Prompt
photorealistic faces, stock photos, modern digital gradients, cyberpunk, neon colors, heavy AI artifacts, text overlays on image, photorealistic people, corporate stock imagery, flat design, minimalism without texture, geometric patterns without warmth, pure black backgrounds, fluorescent colors, binary code, matrix effects
## Alternative Prompt (Historical Treatment)
Historical photograph from 1970s stock exchange trading floor, traders shouting bids in circular pit. Sepia tone with terracotta tint overlay (#C45533). Matte finish with subtle film grain and aged paper texture. Warm gold highlights (#D4A853) on faces and hands, muted sage accents (#7A9E7E) in background. Mid-century aesthetic, bustling energy. Nostalgic, intellectual mood capturing “invisible hand” of markets. Subtle vignette. No text overlay. –ar 16:9 –style raw –v 6
## Rationale
**Why Framework Diagram (Primary)**: Essay explains system dynamics—flywheel visualizes continuous market-clearing process. Readers can mentally map thesis to diagram.
**Why Flywheel**: Walrasian auctioneer continuously adjusts prices → recommendation algorithm continuously adjusts rankings. Flywheel = motion + equilibrium.
**Why These Colors**:
- Terracotta (algorithm core): Authority, central mechanism
- Sage (attention flow): Growth, organic movement
- Gold (equilibrium points): Value creation, "aha" moments
**Why Historical Treatment (Alternative)**: Trading floor photo evokes original Walrasian context (physical markets) → creates visual metaphor connecting 19th-century economics to 21st-century algorithms.
**Whimsy Calibration**: 1.5/5 for essay (intellectual credibility). If this were an Imagine post, would dial up to 2.5-3.0 (more abstract, dreamlike).
## Usage Notes
- **Recommended**: Use primary (framework diagram) for clarity
- **When to use alternative**: If essay leans heavily into economic history, historical photo creates stronger conceptual bridge
- **Midjourney tip**: Add `--seed 12345` to maintain visual consistency across variations
## File Naming
/assets/images/walras-and-feeds/walras-and-feeds-hero.png
/assets/images/walras-and-feeds/walras-and-feeds-alt.png (if generating both)
Whimsy Calibration by Content Type
| Content | Whimsy | Visual Approach | Example |
|---|---|---|---|
| Essay 📝 | 1.5-2.0 | Framework diagrams, treated historical photos | Flywheel, 2x2 matrix, trading floor |
| Imagine 💭 | 2.5-3.0 | Abstract illustrations, speculative scenes | Dreamlike, single strong concept |
| Residuals 📊 | N/A | Minimal or none | Skip image or simple pattern |
Color Palette Reference
Always include in prompts:
- Primary: Terracotta `#C45533`
- Secondary: Muted Sage `#7A9E7E`
- Accent: Warm Gold `#D4A853`
Negative Prompt (Standard - Always Include)
photorealistic faces, stock photos, modern digital gradients, cyberpunk, neon colors, heavy AI artifacts, text overlays, photorealistic people, corporate stock imagery, flat design, minimalism, geometric patterns without texture, pure black backgrounds, fluorescent colors
Add context-specific negatives as needed (e.g., for economic topics: “cryptocurrency symbols, blockchain imagery, tech startup aesthetics”)
**Usage Example**:
```bash
# Generate image prompt for essay
/image-alchemist essay "Walrasian Equilibrium in Recommendation Systems" framework-diagram
# Returns primary prompt + negative prompt + alternative + rationale
# Copy-paste into Midjourney/DALL-E
```
Part III: Workflow Integration
Current Workflow (Manual)
```
IDEATION
  - Capture ideas in thinking/ directory
  - Reference style_guide.md and voice.md
        │
        ▼
RESEARCH (60+ min)
  - Manual web searches for citations
  - Find economic history sources
  - Locate data science papers
        │
        ▼
DRAFTING
  - Write in thinking/draft.md
  - Self-edit for voice and clarity
        │
        ▼
EDITORIAL REVIEW (45 min)
  - Run /editor command
  - Manually check against style_guide.md
  - Address feedback
        │
        ▼
PRODUCTION (45 min)
  - Generate headline manually (20 min)
  - Create SEO metadata manually (15 min)
  - Generate image prompt manually (10 min)
        │
        ▼
PUBLISHING
  - Rename file to YYYY-MM-DD-slug.md
  - Move to _posts/
  - Git commit and push
  - GitHub Actions deploys
```

**Total time per essay**: ~3 hours (not including writing)
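The rename step in the publishing stage (`YYYY-MM-DD-slug.md`) is mechanical enough to script; a minimal sketch, with slug rules assumed to match common Jekyll practice:

```python
import re
from datetime import date

def post_filename(title, publish_date=None):
    """Build a Jekyll-style _posts/ filename: YYYY-MM-DD-slug.md."""
    d = publish_date or date.today()
    # Lowercase, collapse any non-alphanumeric run into a single hyphen
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{d.isoformat()}-{slug}.md"

post_filename("Walras and Feeds", date(2025, 12, 27))  # "2025-12-27-walras-and-feeds.md"
```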
Enhanced Workflow (With MCPs + Agents)
```
IDEATION
  - Capture in Memory MCP: /memory add "Essay: Walras & feeds"
  - Memory MCP recalls related past topics
        │
        ▼
RESEARCH (15 min) ⚡ 75% faster
  - /research-scout "Walrasian equilibrium + recommendation systems"
  - Agent returns: 5-10 sources + quotes + internal links
  - Web Search MCP + Fetch MCP + Filesystem MCP orchestrated
        │
        ▼
DRAFTING
  - Write in thinking/draft.md
  - /voice-compass thinking/draft.md (real-time feedback)
  - Citations pre-formatted from Research Scout
        │
        ▼
EDITORIAL REVIEW (20 min) ⚡ 55% faster
  - /editor thinking/draft.md (existing command)
  - /voice-compass thinking/draft.md (voice check)
  - Address feedback with line-number precision
        │
        ▼
PRODUCTION (10 min) ⚡ 78% faster
  - /headline-forge "thesis" → 11 variations (2 min)
  - /seo-guardian thinking/draft.md → metadata (3 min)
  - /image-alchemist essay "topic" → prompts (2 min)
  - /draft-shepherd thinking/draft.md → validation (3 min)
        │
        ▼
PUBLISHING (automated)
  - Draft Shepherd generates filename
  - Filesystem MCP moves to _posts/
  - GitHub MCP creates commit
  - git push triggers deployment
        │
        ▼
POST-PUBLISH
  - Memory MCP updates knowledge graph
  - SQLite MCP logs post metadata
  - (Future: Beehiiv sync for newsletter)
```

**Total time per essay**: ~45 minutes ⚡ 75% reduction
Step-by-Step Integration Guide
Week 1: Install Core MCPs
```bash
# Day 1: GitHub MCP
claude mcp add --transport http github --scope project https://api.githubcopilot.com/mcp/
claude /mcp  # Authenticate
# Day 2: Filesystem MCP
claude mcp add --transport stdio filesystem --scope project -- npx -y @modelcontextprotocol/server-filesystem
# Day 3: Web Search MCP (Brave)
# Get API key from https://brave.com/search/api/
echo "BRAVE_API_KEY=your_key" >> .env
claude mcp add --transport stdio brave-search --scope user -- npx -y @modelcontextprotocol/server-brave-search
# Day 4: Memory MCP
claude mcp add --transport stdio memory --scope user -- npx -y @modelcontextprotocol/server-memory
# Day 5: Test all MCPs
claude /mcp status
```
Week 2-3: Create Custom Agents
```bash
# Create agents directory
mkdir -p .claude/agents
# Copy agent definitions from this proposal
# Each agent is a separate .md file in .claude/agents/
# Test each agent
/research-scout "test query"
/seo-guardian thinking/draft.md
/draft-shepherd thinking/draft.md
/headline-forge "test thesis"
/voice-compass thinking/draft.md
/image-alchemist essay "test topic" framework-diagram
```
Month 2: Install Enhancement MCPs
```bash
# Fetch MCP
claude mcp add --transport stdio fetch --scope project -- npx -y @modelcontextprotocol/server-fetch
# SQLite MCP
claude mcp add --transport stdio sqlite --scope project -- npx -y @modelcontextprotocol/server-sqlite
```
Month 3+: Advanced Features
```bash
# Puppeteer MCP (installs the Playwright-based MCP server)
claude mcp add --transport stdio puppeteer --scope project -- npx -y @playwright/mcp@latest
# Configure analytics database
sqlite3 .data/analytics.db < schema.sql
```
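The `schema.sql` referenced above is not specified in this proposal; a hypothetical minimal version, created here through Python's built-in `sqlite3` module (the table and column names are assumptions, not a fixed spec):

```python
import sqlite3

# Hypothetical minimal schema for the post-metadata log; adjust names as needed
SCHEMA = """
CREATE TABLE IF NOT EXISTS posts (
    slug        TEXT PRIMARY KEY,
    title       TEXT NOT NULL,
    published   TEXT NOT NULL,          -- ISO date
    word_count  INTEGER,
    headline_technique TEXT             -- e.g. 'antithesis', 'tricolon'
);
"""

conn = sqlite3.connect(":memory:")  # use .data/analytics.db in practice
conn.executescript(SCHEMA)
conn.execute(
    "INSERT INTO posts VALUES (?, ?, ?, ?, ?)",
    ("walras-and-feeds", "Feed Algorithms as Walrasian Auctioneers",
     "2025-12-27", 1800, "oxymoron"),
)
row = conn.execute("SELECT title FROM posts WHERE slug = ?", ("walras-and-feeds",)).fetchone()
```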
Part IV: Implementation Guide
Prerequisites
Required:
- Claude Code CLI installed and authenticated
- Node.js 18+ (for npx-based MCPs)
- Git configured for repository
Optional:
- Brave Search API key (free tier: 2,000 queries/month)
- Midjourney or DALL-E account (for image generation)
- SQLite3 (for analytics database)
Installation Commands (Copy-Paste Ready)
1. Install Core MCPs (Week 1)
```bash
# GitHub MCP (project-scoped, shareable)
claude mcp add --transport http github --scope project https://api.githubcopilot.com/mcp/
# Authenticate GitHub
claude /mcp
# Filesystem MCP (project-scoped)
claude mcp add --transport stdio filesystem --scope project -- npx -y @modelcontextprotocol/server-filesystem
# Web Search MCP (user-scoped, requires API key)
# First, get Brave API key: https://brave.com/search/api/
echo "BRAVE_API_KEY=your_actual_api_key_here" >> .env
claude mcp add --transport stdio brave-search --scope user -- npx -y @modelcontextprotocol/server-brave-search
# Memory MCP (user-scoped)
claude mcp add --transport stdio memory --scope user -- npx -y @modelcontextprotocol/server-memory
```
2. Configure MCP Settings
Create/update .claude/settings.local.json:
```json
{
  "permissions": {
    "allow": [
      "Bash(bundle list:*)",
      "Bash(bundle show:*)",
      "Bash(cat:*)",
      "WebFetch(domain:github.com)",
      "WebFetch(domain:docs.anthropic.com)",
      "Bash(ls:*)",
      "Bash(mkdir:*)",
      "Bash(bundle exec:*)",
      "Bash(gh issue view:*)",
      "Bash(grep:*)",
      "Bash(git add:*)",
      "Bash(git commit:*)",
      "Bash(git push:*)",
      "Bash(gem list:*)",
      "Bash(curl:*)",
      "claude-code plugins install:*"
    ]
  },
  "enableAllProjectMcpServers": true,
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem"],
      "transport": "stdio",
      "allowedDirectories": [
        "/Users/michael/m-01101101/mrogers-london/_posts",
        "/Users/michael/m-01101101/mrogers-london/thinking",
        "/Users/michael/m-01101101/mrogers-london/assets/images",
        "/Users/michael/m-01101101/mrogers-london/_prompts"
      ]
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "transport": "stdio",
      "env": {
        "BRAVE_API_KEY": "${BRAVE_API_KEY}"
      }
    },
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"],
      "transport": "stdio",
      "dataPath": "/Users/michael/.claude/memory/mrogers-blog.db"
    }
  }
}
```
Create .mcp.json (project-scoped, shareable):
```json
{
  "mcpServers": {
    "github": {
      "url": "https://api.githubcopilot.com/mcp/",
      "transport": "http",
      "scope": "project"
    },
    "fetch": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-fetch"],
      "transport": "stdio",
      "allowedDomains": [
        "*.wikipedia.org",
        "archive.org",
        "fred.stlouisfed.org",
        "data.worldbank.org",
        "arxiv.org",
        "*.nih.gov",
        "jstor.org",
        "*.edu"
      ]
    }
  }
}
```
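Since `.mcp.json` is committed and shared, a quick well-formedness check before pushing avoids breaking MCP startup for everyone; a minimal sketch (the required-field rule is an assumption: each server entry should carry either a remote `url` or a local `command`):

```python
import json

def check_mcp_servers(cfg_text):
    """Parse an MCP config and list servers missing both a url and a command."""
    servers = json.loads(cfg_text).get("mcpServers", {})
    bad = [n for n, s in servers.items() if "url" not in s and "command" not in s]
    return sorted(servers), bad

# Example: a fragment of the project-scoped .mcp.json shown above
names, bad = check_mcp_servers(
    '{"mcpServers": {"github": {"url": "https://api.githubcopilot.com/mcp/"}}}'
)
```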
3. Create Custom Agents
```bash
# Create agents directory
mkdir -p .claude/agents
# Create each agent file (copy content from Part II above)
# Agent files go in: .claude/agents/[agent-name].md
```
Create .claude/agents/research-scout.md:
(Copy full content from Part II, Agent 1)
Create .claude/agents/seo-guardian.md:
(Copy full content from Part II, Agent 2)
Create .claude/agents/draft-shepherd.md:
(Copy full content from Part II, Agent 3)
Create .claude/agents/headline-forge.md:
(Copy full content from Part II, Agent 4)
Create .claude/agents/voice-compass.md:
(Copy full content from Part II, Agent 5)
Create .claude/agents/image-alchemist.md:
(Copy full content from Part II, Agent 6)
4. Testing Procedures
```bash
# Test MCP installations
claude /mcp status
# Should show:
# ✅ github (http, project)
# ✅ filesystem (stdio, project)
# ✅ brave-search (stdio, user)
# ✅ memory (stdio, user)

# Test agents
/research-scout "test: comparative advantage in digital markets"
# Should return sources from Web Search MCP
/seo-guardian thinking/draft.md
# Should analyze frontmatter and suggest improvements
/draft-shepherd thinking/draft.md
# Should validate against style guide
/headline-forge "Test thesis: Data teams as influence brokers"
# Should return 11 headline variations
/voice-compass thinking/draft.md
# Should analyze voice consistency
/image-alchemist essay "Test topic" framework-diagram
# Should generate image prompts with color palette
```
Troubleshooting
MCP Server Not Found
```bash
# Check installation
claude /mcp status
# Reinstall specific server
claude mcp remove [server-name]
claude mcp add --transport stdio [server-name] -- npx -y @modelcontextprotocol/server-[server-name]
```
Agent Not Responding
```bash
# Check agent file exists
ls -la .claude/agents/
# Validate agent file syntax (must be valid markdown)
cat .claude/agents/research-scout.md
# Try invoking with full path
/claude/agents/research-scout "test query"
```
Brave Search API Limit
```bash
# Check usage
# Free tier: 2,000 queries/month
# Upgrade at: https://brave.com/search/api/
# Alternative: Use built-in WebSearch instead
# (Remove brave-search MCP, use native Claude Code web search)
```
Part V: Phased Rollout Plan
Phase 1: Core MCPs (Week 1)
Goal: Reduce research time by 75%
Install:
- ✅ GitHub MCP
- ✅ Filesystem MCP
- ✅ Web Search MCP (Brave)
- ✅ Memory MCP
Success Metrics:
- Research time: 60 min → 15 min
- Citations per essay: 3-5 → 5-10
- Internal linking: 0-1 → 2-4 per post
Rollback Procedure:
```bash
claude mcp remove github
claude mcp remove filesystem
claude mcp remove brave-search
claude mcp remove memory
```
Phase 2: Essential Agents (Weeks 2-3)
Goal: Automate editorial workflow
Create:
- ✅ Research Scout
- ✅ SEO Guardian
- ✅ Draft Shepherd
Success Metrics:
- Editorial review time: 45 min → 20 min
- SEO completeness: 40% → 100% (all posts have meta descriptions, tags)
- Publishing errors: Occasional → Zero (frontmatter validation)
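The frontmatter-completeness check behind that zero-error metric can be sketched; the required-field list is an assumption based on the SEO metrics above, and the YAML parsing is deliberately naive:

```python
import re

REQUIRED = ("title", "description", "tags")  # assumed minimum front-matter fields

def missing_frontmatter(post_text):
    """Return required fields absent from a Jekyll post's YAML front matter."""
    m = re.match(r"---\n(.*?)\n---", post_text, re.DOTALL)
    if not m:
        return list(REQUIRED)  # no front-matter block at all
    keys = {line.split(":", 1)[0].strip() for line in m.group(1).splitlines() if ":" in line}
    return [f for f in REQUIRED if f not in keys]

missing_frontmatter("---\ntitle: Walras and Feeds\ntags: [economics]\n---\nBody")
# → ["description"]
```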
Rollback Procedure:
```bash
rm .claude/agents/research-scout.md
rm .claude/agents/seo-guardian.md
rm .claude/agents/draft-shepherd.md
```
Phase 3: Enhancement Agents (Month 2)
Goal: Preserve voice quality at scale
Create:
- ✅ Headline Forge
- ✅ Voice Compass
- ✅ Image Alchemist
Install:
- ✅ Fetch MCP
Success Metrics:
- Headline generation: 20 min → 2 min
- Voice consistency: Manual review → Automated scoring
- Image prompt creation: 15 min → 3 min
Rollback Procedure:
```bash
rm .claude/agents/headline-forge.md
rm .claude/agents/voice-compass.md
rm .claude/agents/image-alchemist.md
claude mcp remove fetch
```
Phase 4: Advanced MCPs (Month 3+)
Goal: Analytics and advanced automation
Install:
- ✅ SQLite MCP (content graph + analytics)
- ✅ Puppeteer MCP (social cards)
Success Metrics:
- Content relationships tracked in database
- OG images auto-generated
- Publishing analytics available
Rollback Procedure:
```bash
claude mcp remove sqlite
claude mcp remove puppeteer
rm -rf .data/
```
Part VI: Cost-Benefit Analysis
Time Savings Per Essay
| Activity | Current | With MCPs/Agents | Savings | % Reduction |
|---|---|---|---|---|
| Research & Citations | 60 min | 15 min | 45 min | 75% |
| SEO Optimization | 30 min | 5 min | 25 min | 83% |
| Headline Generation | 20 min | 2 min | 18 min | 90% |
| Image Prompt Creation | 15 min | 3 min | 12 min | 80% |
| Editorial Review | 45 min | 20 min | 25 min | 56% |
| Total | 170 min | 45 min | 125 min | 74% |
Per Essay: ~2 hours saved
Per Month (4 essays): ~8 hours saved
Per Year: ~96 hours saved
Quality Improvements
| Metric | Current | With MCPs/Agents | Improvement |
|---|---|---|---|
| Voice consistency | Manual review | Automated scoring | Objective measurement |
| SEO completeness | ~40% | 100% | 60% increase |
| Citation quality | Variable | Validated sources | Standardized |
| Internal linking | 0-1 per post | 2-4 per post | 300% increase |
| Publishing errors | Occasional | Zero | Eliminated |
| Brand aesthetic | Manual checks | Automated prompts | Guaranteed |
Costs
One-Time Setup
- MCP installation: 2-3 hours (Week 1)
- Agent creation: 3-4 hours (Weeks 2-3)
- Testing & troubleshooting: 2 hours
- Total: 7-9 hours
Recurring Costs
- Brave Search API: $0/month (free tier: 2,000 queries)
- MCP maintenance: 1 hour/month (updates, troubleshooting)
- Agent refinement: 1 hour/month (tune prompts)
- Total: 2 hours/month
ROI Calculation
- Investment: 9 hours setup
- Monthly savings: 8 hours (4 essays × 2 hours each)
- Break-even: 1.1 months
- 12-month ROI: (96 hours saved - 24 hours maintenance) / 9 hours setup = 800% ROI
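The arithmetic behind the break-even and ROI figures can be checked with a few lines; this is a sketch using the estimates from the tables above:

```python
# Sanity check of the ROI figures, using the estimates from this section.
SETUP_HOURS = 9                  # one-time setup (upper estimate)
SAVED_PER_ESSAY_HOURS = 2        # ~125 min per essay, rounded
ESSAYS_PER_MONTH = 4
MAINTENANCE_HOURS_PER_MONTH = 2

monthly_savings = SAVED_PER_ESSAY_HOURS * ESSAYS_PER_MONTH           # 8 h/month
break_even_months = SETUP_HOURS / monthly_savings                    # ~1.1 months
annual_net_hours = 12 * (monthly_savings - MAINTENANCE_HOURS_PER_MONTH)  # 72 h
roi_pct = annual_net_hours / SETUP_HOURS * 100                       # 800%

print(f"break-even: {break_even_months:.1f} months, 12-month ROI: {roi_pct:.0f}%")
```

Changing the essays-per-month assumption shifts the break-even point linearly, so the model is easy to re-run if the publishing cadence differs.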
Maintenance Overhead
Low (2 hours/month):
- Update MCP servers when new versions release
- Refine agent prompts based on usage
- Monitor Brave API usage (stay within free tier)
- Clean up Memory MCP database occasionally
No ongoing costs if using:
- Brave Search (free tier sufficient for 4 essays/month)
- GitHub MCP (free with GitHub account)
- All other MCPs (open source, self-hosted)
Part VII: Appendices
A. MCP Server Compatibility Matrix
| MCP Server | Transport | Scope | API Key Required | Cost |
|---|---|---|---|---|
| GitHub | HTTP | Project | No (OAuth) | Free |
| Filesystem | stdio | Project | No | Free |
| Brave Search | stdio | User | Yes | Free (2k queries/mo) |
| Memory | stdio | User | No | Free |
| Fetch | stdio | Project | No | Free |
| SQLite | stdio | Project | No | Free |
| Puppeteer | stdio | Project | No | Free |
Compatibility: All servers are compatible with the latest Claude Code release (as of Dec 2025)
B. Sample Agent Definition Files
All agent definitions are included in full in Part II: Custom Agent Specifications above.
To install:
- Create the .claude/agents/ directory
- Copy each agent markdown file from Part II
- Save each as .claude/agents/[agent-name].md
- Invoke with the /[agent-name] command
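Assuming the Part II definition files have been saved to a local folder (the docs/agent-definitions/ path and the agent subset below are illustrative), the install steps can be sketched as:

```python
import shutil
from pathlib import Path

# Illustrative subset of the agents defined in Part II; the source
# directory is an assumption about where the definition files live.
AGENTS = ["headline-forge", "voice-compass", "image-alchemist"]

src_dir = Path("docs/agent-definitions")
dest_dir = Path(".claude/agents")
dest_dir.mkdir(parents=True, exist_ok=True)  # step 1: create the directory

for name in AGENTS:
    src = src_dir / f"{name}.md"
    if src.exists():  # steps 2-3: copy each definition into place
        shutil.copy(src, dest_dir / f"{name}.md")

# step 4: each agent is then invoked in Claude Code with /[agent-name]
```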
C. Configuration Templates
.claude/settings.local.json (Full)
{
"permissions": {
"allow": [
"Bash(bundle list:*)",
"Bash(bundle show:*)",
"Bash(cat:*)",
"WebFetch(domain:github.com)",
"WebFetch(domain:docs.anthropic.com)",
"Bash(ls:*)",
"Bash(mkdir:*)",
"Bash(bundle exec:*)",
"Bash(gh issue view:*)",
"Bash(grep:*)",
"Bash(git add:*)",
"Bash(git commit:*)",
"Bash(git push:*)",
"Bash(gem list:*)",
"Bash(curl:*)",
"claude-code plugins install:*"
]
},
"enableAllProjectMcpServers": true,
"mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/michael/m-01101101/mrogers-london/_posts",
        "/Users/michael/m-01101101/mrogers-london/thinking",
        "/Users/michael/m-01101101/mrogers-london/assets/images",
        "/Users/michael/m-01101101/mrogers-london/_prompts"
      ],
      "transport": "stdio"
    },
"brave-search": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-brave-search"],
"transport": "stdio",
"env": {
"BRAVE_API_KEY": "${BRAVE_API_KEY}"
}
},
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"],
      "transport": "stdio",
      "env": {
        "MEMORY_FILE_PATH": "/Users/michael/.claude/memory/mrogers-blog.json"
      }
    },
    "sqlite": {
      "command": "uvx",
      "args": [
        "mcp-server-sqlite",
        "--db-path",
        "/Users/michael/m-01101101/mrogers-london/.data/content.db"
      ],
      "transport": "stdio"
    },
    "sqlite-analytics": {
      "command": "uvx",
      "args": [
        "mcp-server-sqlite",
        "--db-path",
        "/Users/michael/m-01101101/mrogers-london/.data/analytics.db"
      ],
      "transport": "stdio"
    },
"puppeteer": {
"command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"],
"transport": "stdio",
"outputDir": "/Users/michael/m-01101101/mrogers-london/assets/images/og-cards"
}
}
}
.mcp.json (Project-Scoped)
{
"mcpServers": {
"github": {
"url": "https://api.githubcopilot.com/mcp/",
"transport": "http",
"scope": "project"
},
"fetch": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-fetch"],
"transport": "stdio",
"allowedDomains": [
"*.wikipedia.org",
"archive.org",
"fred.stlouisfed.org",
"data.worldbank.org",
"arxiv.org",
"*.nih.gov",
"jstor.org",
"*.edu",
"stratechery.com",
"eugenewei.com"
]
}
}
}
.env (Environment Variables)
# Brave Search API Key
# Get from: https://brave.com/search/api/
BRAVE_API_KEY=your_actual_api_key_here
# GitHub Token (if using GitHub MCP with private repos)
# Get from: https://github.com/settings/tokens
GITHUB_TOKEN=your_github_token_here
# Optional: Notion API Key (if using Notion MCP)
NOTION_API_KEY=your_notion_key_here
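The ${BRAVE_API_KEY} reference in the settings above expects the variable to be present in the environment before Claude Code starts. Tools like python-dotenv or direnv handle this; if you script the launch yourself, a minimal loader might look like this sketch:

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: KEY=value lines; blank lines and '#' comments
    are skipped. Existing environment variables are not overwritten."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Call load_env() before launching, then verify with os.environ.get("BRAVE_API_KEY") that the key resolved.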
SQLite Schema (schema.sql)
-- Content Graph Database
-- Tracks relationships between posts, topics, and citations
-- Posts table
CREATE TABLE IF NOT EXISTS posts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
slug TEXT UNIQUE NOT NULL,
title TEXT NOT NULL,
category TEXT CHECK(category IN ('essay', 'imagine', 'residuals')),
published_date DATE NOT NULL,
word_count INTEGER,
reading_time_min INTEGER,
excerpt TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- Topics table (economics, data-science, complexity-theory, etc.)
CREATE TABLE IF NOT EXISTS topics (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT UNIQUE NOT NULL,
category TEXT, -- economics, data-science, complexity-theory, consumer-behavior
description TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- Post-Topic relationships
CREATE TABLE IF NOT EXISTS post_topics (
id INTEGER PRIMARY KEY AUTOINCREMENT,
post_id INTEGER NOT NULL,
topic_id INTEGER NOT NULL,
relevance_score REAL CHECK(relevance_score >= 0.0 AND relevance_score <= 1.0),
FOREIGN KEY(post_id) REFERENCES posts(id) ON DELETE CASCADE,
FOREIGN KEY(topic_id) REFERENCES topics(id) ON DELETE CASCADE,
UNIQUE(post_id, topic_id)
);
-- Citations table
CREATE TABLE IF NOT EXISTS citations (
id INTEGER PRIMARY KEY AUTOINCREMENT,
post_id INTEGER NOT NULL,
title TEXT NOT NULL,
author TEXT,
year INTEGER,
url TEXT,
source_type TEXT CHECK(source_type IN ('academic', 'book', 'essay', 'news', 'data')),
excerpt TEXT,
FOREIGN KEY(post_id) REFERENCES posts(id) ON DELETE CASCADE
);
-- Internal links (cross-references between posts)
CREATE TABLE IF NOT EXISTS internal_links (
id INTEGER PRIMARY KEY AUTOINCREMENT,
from_post_id INTEGER NOT NULL,
to_post_id INTEGER NOT NULL,
anchor_text TEXT,
context TEXT, -- Surrounding paragraph for context
FOREIGN KEY(from_post_id) REFERENCES posts(id) ON DELETE CASCADE,
FOREIGN KEY(to_post_id) REFERENCES posts(id) ON DELETE CASCADE,
UNIQUE(from_post_id, to_post_id)
);
-- Analytics table (optional - can integrate with Google Analytics API)
CREATE TABLE IF NOT EXISTS analytics (
id INTEGER PRIMARY KEY AUTOINCREMENT,
post_id INTEGER NOT NULL,
metric_date DATE NOT NULL,
pageviews INTEGER DEFAULT 0,
unique_visitors INTEGER DEFAULT 0,
avg_time_on_page INTEGER, -- seconds
bounce_rate REAL,
FOREIGN KEY(post_id) REFERENCES posts(id) ON DELETE CASCADE,
UNIQUE(post_id, metric_date)
);
-- Indexes for performance
CREATE INDEX IF NOT EXISTS idx_posts_category ON posts(category);
CREATE INDEX IF NOT EXISTS idx_posts_published_date ON posts(published_date DESC);
CREATE INDEX IF NOT EXISTS idx_post_topics_post_id ON post_topics(post_id);
CREATE INDEX IF NOT EXISTS idx_post_topics_topic_id ON post_topics(topic_id);
CREATE INDEX IF NOT EXISTS idx_citations_post_id ON citations(post_id);
CREATE INDEX IF NOT EXISTS idx_internal_links_from ON internal_links(from_post_id);
CREATE INDEX IF NOT EXISTS idx_internal_links_to ON internal_links(to_post_id);
-- Views for common queries
CREATE VIEW IF NOT EXISTS post_summary AS
SELECT
p.id,
p.slug,
p.title,
p.category,
p.published_date,
p.word_count,
p.reading_time_min,
COUNT(DISTINCT pt.topic_id) as topic_count,
COUNT(DISTINCT c.id) as citation_count,
COUNT(DISTINCT il.to_post_id) as outbound_links,
(SELECT COUNT(*) FROM internal_links WHERE to_post_id = p.id) as inbound_links
FROM posts p
LEFT JOIN post_topics pt ON p.id = pt.post_id
LEFT JOIN citations c ON p.id = c.post_id
LEFT JOIN internal_links il ON p.id = il.from_post_id
GROUP BY p.id;
CREATE VIEW IF NOT EXISTS topic_popularity AS
SELECT
t.id,
t.name,
t.category,
COUNT(DISTINCT pt.post_id) as post_count,
AVG(pt.relevance_score) as avg_relevance
FROM topics t
LEFT JOIN post_topics pt ON t.id = pt.topic_id
GROUP BY t.id
ORDER BY post_count DESC;
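A minimal sketch of exercising this schema with Python's built-in sqlite3 module; the inserted post and topic are illustrative, and the schema here is a trimmed inline copy (posts, topics, post_topics, and the topic_popularity view) so the example is self-contained:

```python
import sqlite3

# Trimmed copy of schema.sql above; load the full file in practice.
SCHEMA = """
CREATE TABLE posts (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    slug TEXT UNIQUE NOT NULL,
    title TEXT NOT NULL,
    category TEXT CHECK(category IN ('essay', 'imagine', 'residuals')),
    published_date DATE NOT NULL
);
CREATE TABLE topics (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT UNIQUE NOT NULL,
    category TEXT
);
CREATE TABLE post_topics (
    post_id INTEGER NOT NULL REFERENCES posts(id),
    topic_id INTEGER NOT NULL REFERENCES topics(id),
    relevance_score REAL,
    UNIQUE(post_id, topic_id)
);
CREATE VIEW topic_popularity AS
SELECT t.name, COUNT(DISTINCT pt.post_id) AS post_count,
       AVG(pt.relevance_score) AS avg_relevance
FROM topics t LEFT JOIN post_topics pt ON t.id = pt.topic_id
GROUP BY t.id ORDER BY post_count DESC;
"""

conn = sqlite3.connect(":memory:")  # swap for .data/content.db in practice
conn.executescript(SCHEMA)

# Illustrative rows: one essay tagged with one topic.
conn.execute("INSERT INTO posts (slug, title, category, published_date) "
             "VALUES ('aggregation-essay', 'On Aggregation', 'essay', '2025-12-01')")
conn.execute("INSERT INTO topics (name, category) VALUES ('economics', 'economics')")
conn.execute("INSERT INTO post_topics (post_id, topic_id, relevance_score) "
             "VALUES (1, 1, 0.9)")

for row in conn.execute("SELECT name, post_count, avg_relevance FROM topic_popularity"):
    print(row)
```

The same pattern works against the post_summary view for inbound/outbound link counts once internal_links rows exist.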
D. Resources & Documentation Links
MCP Official Documentation:
- Model Context Protocol: modelcontextprotocol.io
Claude Code Documentation:
- Claude Code: docs.anthropic.com/en/docs/claude-code
MCP Server Lists & Reviews:
- [Best MCP Servers in 2025 (Pomerium)](https://www.pomerium.com/blog/best-model-context-protocol-mcp-servers-in-2025)
- [Top 10 MCP Servers (Intuz)](https://www.intuz.com/blog/best-mcp-servers)
Blog-Specific References:
- Aggregation Theory: Stratechery
- Economic History: Wikipedia - Léon Walras
- Consumer Behavior: Farnam Street
- Recommendation Systems: Eugene Wei’s Blog
Image Generation Tools:
- Midjourney: midjourney.com
- DALL-E: openai.com/dall-e
- Stable Diffusion: stability.ai
Conclusion
This proposal provides a comprehensive automation strategy calibrated to your existing workflow, voice, and intellectual positioning. The recommended MCP servers and custom agents preserve your distinctive “Applied Psychohistorian” voice while eliminating repetitive tasks.
Next Steps:
- Review this proposal and prioritize sections
- Install Phase 1 MCPs (Week 1)
- Create essential agents (Weeks 2-3)
- Test on 1-2 essays before full rollout
- Iterate based on real-world usage
Note on Ralph-Wiggum Plugin: This plugin does not appear to exist in official Claude Code documentation. If you encountered this elsewhere, please share the source and I can investigate further. In the meantime, the MCP servers and custom agents proposed here provide comprehensive automation coverage.
Questions or Feedback: Feel free to modify any agent prompts, MCP configurations, or workflow steps to match your preferences. This is a living document—refine as you learn what works best.
Sources
- [GitHub: modelcontextprotocol/servers (Model Context Protocol Servers)](https://github.com/modelcontextprotocol/servers)
- [One Year of MCP: November 2025 Spec Release (Model Context Protocol Blog)](http://blog.modelcontextprotocol.io/posts/2025-11-25-first-mcp-anniversary/)
- [Best Model Context Protocol (MCP) Servers in 2025 (Pomerium)](https://www.pomerium.com/blog/best-model-context-protocol-mcp-servers-in-2025)
- [Top 10 MCP (Model Context Protocol) Servers in 2025 (Intuz)](https://www.intuz.com/blog/best-mcp-servers)
- [Introducing the Model Context Protocol (Anthropic)](https://www.anthropic.com/news/model-context-protocol)