CONFIDENTIAL — Prepared exclusively for Evan Waters. Not for distribution.
LUNAR AI LAB

Automate the Work That Doesn't Scale.

Four AI agents built for Evan Waters, replacing the manual loops across Chimo and consulting operations.

Scope
4 Agents + R&D Exploration
Timeline
8–12 Weeks
Date
March 2026

7 years in tech marketing.
300+ projects. Now building AI.

Lunar AI Lab is the build team inside Lunar Strategy, a European growth marketing agency with 7+ years in tech and crypto. 300+ projects delivered. $60M+ in managed marketing budgets.

We build custom AI agent systems for marketing and revenue teams. We started by automating our own workflows, then brought the same approach to clients. We build for operators, not engineers.

7+ Years in tech & Web3 · 300+ Projects delivered · $60M+ Managed budgets
Trusted by: Polkadot · OKX · Polygon · Cardano · DFINITY · BitMEX · Neo

The manual work is the bottleneck.
Every hour spent here is an hour not spent on growth.

Current State

  • Monetization data scattered across Stripe, admin panel, and mail service. Each test cycle means hours of manual aggregation.
  • Critical insights like payment failure patterns only surface by accident.
  • Every new class action triggers the same copy-prompt-paste-into-CMS loop, repeated across 5+ lifecycle checkpoints.
  • Claim form logic is hand-coded from PDFs. Bugs hit production before they're caught.
  • A proprietary Google Ads bidding method runs across ~70 companies, 8+ times a month. Undocumented and entirely manual.

What Changes

  • One agent connects all data sources. Ask a question in plain English, get an answer in seconds.
  • Anomalies and trends flagged automatically. No more stumbling onto patterns weeks late.
  • New filings trigger automatic content generation and lifecycle updates. No manual input.
  • Claim forms get parsed, logic gets built, and QA runs across every branch before anything goes live.
  • Bidding methodology gets extracted, documented, and replicated. Human approval on every cycle.

Four agents. Each one replaces a specific manual workflow.

Monetization Analytics

  • Connects Stripe API + admin panel + mail service
  • On-demand cohort analysis, revenue projections
  • Anomaly flagging and trend detection
  • Natural language queries
2–3 weeks
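To make the anomaly-flagging idea concrete, here is a minimal sketch in Python. The data and field names are hypothetical; the production agent would aggregate this from the Stripe API rather than a hard-coded dict. It uses a robust median-based check (modified z-score), which holds up better on short windows than a plain average:

```python
from statistics import median

def flag_anomalies(daily_revenue, threshold=3.5):
    """Flag days whose revenue deviates strongly from the median,
    using the modified z-score (median absolute deviation)."""
    amounts = list(daily_revenue.values())
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:  # flat series: nothing to flag
        return []
    return [day for day, amount in daily_revenue.items()
            if 0.6745 * abs(amount - med) / mad > threshold]

# Hypothetical aggregated revenue (the real agent pulls this via Stripe)
revenue = {"2026-03-01": 1200, "2026-03-02": 1180, "2026-03-03": 1210,
           "2026-03-04": 1195, "2026-03-05": 310}  # failure spike
print(flag_anomalies(revenue))  # → ['2026-03-05']
```

This is the kind of check that surfaces payment-failure patterns automatically instead of by accident.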

Claims Content Engine

  • Monitors for new class action filings
  • Auto-generates SEO pages, social posts, triggers
  • Tracks lawsuit lifecycle checkpoints
  • Pushes to CMS or staging fallback
3–4 weeks

Form Logic + QA

  • Ingests claim form PDFs
  • Extracts conditional parameters
  • Builds user-facing claim flow logic
  • Automated QA across all branches
3–5 weeks
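To illustrate the "QA across all branches" step: a sketch of claim-flow logic modeled as a decision tree, with every answer combination walked and verified. The flow, question names, and outcomes below are invented for illustration; the real agent extracts them from the claim form PDF:

```python
# Hypothetical claim-flow logic extracted from a form PDF: each node is a
# question mapping answer → next node; leaves are terminal outcomes.
FLOW = {
    "purchased_in_window": {"yes": "has_receipt", "no": "INELIGIBLE"},
    "has_receipt": {"yes": "ELIGIBLE_FULL", "no": "state_resident"},
    "state_resident": {"yes": "ELIGIBLE_PARTIAL", "no": "INELIGIBLE"},
}
OUTCOMES = {"ELIGIBLE_FULL", "ELIGIBLE_PARTIAL", "INELIGIBLE"}

def enumerate_branches(node="purchased_in_window", path=()):
    """Walk every answer combination and yield (path, outcome) pairs,
    so QA can assert each branch reaches a defined outcome."""
    if node in OUTCOMES:
        yield path, node
        return
    for answer, nxt in FLOW[node].items():
        yield from enumerate_branches(nxt, path + ((node, answer),))

branches = list(enumerate_branches())
assert all(outcome in OUTCOMES for _, outcome in branches)
print(f"{len(branches)} branches verified")  # → 4 branches verified
```

A dangling branch (a typo'd node name, an answer with no destination) fails this check before it ever reaches production.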

Ads Bidding Automation

  • Replicates proprietary methodology
  • Pulls data, applies logic, generates bids
  • Human approval gate, graduated autonomy
  • Full audit trail per cycle
2–3 weeks
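The approval gate and audit trail can be sketched as follows. This is a shape, not the implementation: account names, bid values, and method names are placeholders, and the real agent would push to the Google Ads API only after sign-off:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BidCycle:
    """One bidding cycle: proposed bids are held until a human approves,
    and every decision is appended to an audit trail."""
    account: str
    proposed_bids: dict
    audit_trail: list = field(default_factory=list)
    approved: bool = False

    def log(self, event):
        self.audit_trail.append((datetime.now(timezone.utc).isoformat(), event))

    def propose(self):
        self.log(f"proposed bids for {len(self.proposed_bids)} keywords")

    def approve(self, reviewer):
        self.approved = True
        self.log(f"approved by {reviewer}")

    def push(self):
        if not self.approved:
            self.log("push blocked: awaiting approval")
            return False
        self.log("bids pushed to ads platform")
        return True

cycle = BidCycle("acme-co", {"kw_1": 1.40, "kw_2": 0.85})
cycle.propose()
assert cycle.push() is False  # gate holds until a human signs off
cycle.approve("evan")
assert cycle.push() is True
```

Graduated autonomy then means loosening the gate per account as approved cycles accumulate in the trail.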
R&D Exploration

Synthetic Split Testing

Chimo has too many pricing variables and not enough traffic to A/B test them all. We'll assess whether AI can simulate user behavior to provide directional signals on which offers to test in production. This is research, not a guaranteed deliverable.

  • Research existing approaches to low-traffic offer optimization.
  • Assess whether Chimo's user behavior data is sufficient to train a model.
  • Deliver a written go/no-go report with alternative approaches if no-go.

If feasible, the agent build gets scoped and priced separately in the full proposal.

Four phases. 8–12 weeks.

Weeks 1–3

Phase 1: Monetization Analytics

  • Zero dependency on dev team
  • Stripe API + CSV exports from admin panel
  • Fastest win, immediate ROI visibility
  • Builds trust and working rhythm
Weeks 3–7

Phase 2: Claims Content

  • CMS write API spec delivered to dev team
  • Content generation works immediately
  • CMS push added when API is ready
  • Fallback: outputs to Google Doc / Notion
Weeks 6–11

Phase 3: Form Logic + QA

  • Highest complexity, builds on Phase 2 CMS
  • Requires staging environment access
  • Human approval on all deployments
  • Automated regression testing
Weeks 4–8

Phase 4: Ads Bidding

  • Independent from Chimo infrastructure, runs on its own timeline
  • Starts after knowledge extraction session
  • Human approval for 3+ cycles minimum
  • Methodology documented before build begins

What it costs. What it saves.

Agent                  | Est. Hours Saved / Month | What Changes                                    | Scales With
Monetization Analytics | 3–5                      | Manual aggregation → on-demand queries          | Test frequency
Claims Content Engine  | 5–10                     | Copy-paste workflow → automated generation      | Active claims onboarded
Claims Form Logic + QA | 3–6                      | Manual coding + bugs → auto-built logic with QA | Claims volume
Google Ads Bidding     | 4–6                      | 30 min/account × 8+ sessions → automated        | Client count
Total                  | 15–27                    |                                                 |

Based on the workflows you described on our call. Actual savings scale with volume.

$15,000 – $20,000

Estimated total for all four agents. Final pricing confirmed in the full proposal after the workshop.

Add-ons
$1,000

Synthetic Testing: Feasibility Assessment

Written go/no-go report with alternative recommendations

  • Research existing approaches to synthetic A/B testing and low-traffic optimization.
  • Evaluate Chimo's existing user behavior data for model training viability.
  • Consult with technical lead on architecture feasibility.
  • Deliver written feasibility report with clear recommendation.
  • If no-go: provide alternative approaches (e.g., Bayesian multi-armed bandit testing).
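For context on the bandit alternative mentioned above, a minimal Thompson-sampling sketch. Offer names and conversion counts are invented; in practice each arm's counts would come from Chimo's real conversion data:

```python
import random

def thompson_pick(arms, rng):
    """Pick the offer variant with the highest Beta-sampled conversion
    rate; each arm tracks [successes, failures] from real conversions."""
    return max(arms, key=lambda a: rng.betavariate(arms[a][0] + 1,
                                                   arms[a][1] + 1))

# Hypothetical conversion counts per pricing variant
arms = {"offer_a": [28, 172], "offer_b": [40, 160], "offer_c": [22, 178]}
rng = random.Random(7)  # seeded for reproducibility
picks = [thompson_pick(arms, rng) for _ in range(1000)]
print(max(set(picks), key=picks.count))  # the best-converting offer dominates
```

Unlike a classic A/B split, the bandit shifts traffic toward winners as evidence accumulates, which is why it suits low-traffic offer optimization.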
PRICED AFTER FEASIBILITY ASSESSMENT

Synthetic Testing: Agent Build

Scoped and quoted only if the feasibility assessment confirms viability

  • Train agent on representative user behavior data.
  • Build simulation pipeline for offer variant ranking.
  • Integrate with Chimo's existing test infrastructure.
  • Deliver directional signal reports per test cycle.
  • Ongoing calibration as real user data validates predictions.

Terms

Ownership. All agents, code, configuration, and documentation belong to Evan upon deployment. We retain no access post-handoff unless ongoing support is agreed.

Infrastructure during development. All infrastructure costs (AI API, hosting, tooling) are covered by Lunar AI Lab during the build. Upon deployment, we hand over everything to Evan's accounts with full setup assistance and usage estimates.

Running costs post-deployment. Once agents are live, API token costs and hosting sit on Evan's accounts. We provide detailed estimates per agent so there are no surprises.

How to get started.

1

Sign NDA

Both parties. Target: Monday, March 30.

2

Workshop: walk through all four use cases

Wednesday, April 1, 11:00 Lisbon. We'll send prep materials and requirements after the NDA is signed.

3

Full proposal and confirmed scope from Lunar AI Lab

Detailed architecture, final pricing, and delivery timeline based on workshop findings.

4

Review, confirm, and kick off the build

Align on scope and pricing, then we start.