AI Tooling · Marketing Automation · 2024–2025 · Solo Build

Four tasks that ran
on a human every week.
Now they don't.

AI Tooling · Google Ads · Competitive Intel · Proposals · Outbound

The Problem
The most expensive work isn't the hard work; it's the predictable work that runs every week.

Four tasks kept recurring: daily reporting, competitor ad monitoring, proposal builds, and personalized outbound. Each one followed the exact same steps every time: same inputs, same output format, same person doing it. Nothing was hard. Everything was slow.

The question was which ones had enough structure to encode once and run without intervention. Four did.

Fixed inputs. Fixed output format. Fixed cadence. That's the definition of a system waiting to be built.
The pattern
Collect. Format. Deliver. Repeat next week.

Competitive intel: open the Ad Library, scroll, screenshot, copy into a doc, summarize. Proposals: pull research, apply a template, adjust tone per client. Reporting: check the dashboard, note the drops, send an update. Each task had structure. The structure never changed. The only variable was which human did it that week, and how long it took them.

Daily reporting · Ad Library scrapes · Proposals rebuilt each time · Generic outbound · 20-hour intel cycles · Senior time on formatting
Four builds
Each agent has one job. One input. One output. No one watching.

These aren't general-purpose assistants. Each one was scoped to a single recurring task: fixed inputs, fixed output format, triggered on a schedule or an event. Here's what was built and what it replaced.

02. Competitive Intel Pipeline

LinkedIn and Google Ad Libraries turned into a scheduled, hands-free competitive intelligence feed

Problem: Tracking competitor ads meant manually opening the Ad Library, scrolling, screenshotting, copying into a doc, and writing a summary, every week, for every competitor. The full cycle took 20 hours. By the time the report landed, the ads it referenced had already been live for days.

Solution: Built a scheduled agent that scrapes both the LinkedIn and Google Ad Libraries on a fixed cadence. It detects messaging shifts, flags volume changes per competitor, and outputs a structured PDF. No screenshots, no copy-paste, no human input.

Result: 20 hours of manual work runs in under 5 minutes. The report exists before anyone asks for it. Intel lag went from days behind to same day.
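
For shape, here's what that loop looks like in code: a minimal Python sketch of scrape, diff against last snapshot, report. Everything named here (fetch_ads, the snapshot files, the JSON drop) is an illustrative stand-in; the real build runs through Claude Code + MCP against the LinkedIn and Google Ad Libraries and outputs a PDF.

```python
# Hypothetical sketch of the intel pipeline's shape, not the production build.
import json
from datetime import date
from pathlib import Path

COMPETITORS = ["competitor-a", "competitor-b"]  # placeholder names
SNAPSHOT_DIR = Path("snapshots")

def fetch_ads(competitor: str) -> list[dict]:
    """Stand-in for the Ad Library scrape: one dict per live ad."""
    return []  # stub so the sketch runs

def load_previous(competitor: str) -> list[dict]:
    path = SNAPSHOT_DIR / f"{competitor}.json"
    return json.loads(path.read_text()) if path.exists() else []

def diff(old: list[dict], new: list[dict]) -> dict:
    """Flag volume changes and new creatives since the last run."""
    old_ids = {ad.get("id") for ad in old}
    return {
        "volume_change": len(new) - len(old),
        "new_creatives": [ad for ad in new if ad.get("id") not in old_ids],
    }

def run() -> None:
    SNAPSHOT_DIR.mkdir(exist_ok=True)
    report = {}
    for competitor in COMPETITORS:
        ads = fetch_ads(competitor)
        report[competitor] = diff(load_previous(competitor), ads)
        (SNAPSHOT_DIR / f"{competitor}.json").write_text(json.dumps(ads))
    # The real build renders a structured PDF; a JSON drop stands in here.
    Path(f"intel-{date.today()}.json").write_text(json.dumps(report, indent=2))

if __name__ == "__main__":
    run()  # triggered by cron on a fixed cadence, no human in the loop
```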

Terminal output — Claude Code running competitive intel scrape
competitive-intel
Claude Code terminal showing 4 competitors, 373 ad creatives, 298 screenshots captured
Time to full report: 20 hrs → <5 min
Human input needed: manual weekly → none
Intel lag time: days behind → same day
03. Proposal Agent

A senior strategist's proposal process encoded into an agent: drop in a domain and a transcript, get back a client-ready deck

Problem: PPC and SEO proposals took 30 hours each. Not because the thinking was complex, but because the research, structure, and client framing got rebuilt from scratch every time. The best strategists were spending most of that time on formatting, not judgment.

Solution: Before writing a single prompt, interviewed the strategists who write the strongest proposals: what data they pull, how they frame recommendations, what tone shifts per client type. That knowledge became a Claude agent. Input: domain + meeting transcript. Output: client-ready proposal draft.

Result: Proposal time: 30 hours → under 20 minutes. Senior strategist involvement: every proposal → one-time knowledge capture.
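
The core of that agent is a single call shaped like the sketch below, using the Anthropic Python SDK. The system prompt here is a condensed stand-in for the captured strategist knowledge, and the model id is a placeholder; the production agent runs inside Claude Code with richer research tooling.

```python
# Illustrative sketch, not the production agent.
import anthropic

STRATEGIST_SYSTEM_PROMPT = """You write PPC/SEO proposals.
Lead with the client's revenue problem, frame recommendations as
trade-offs, and shift tone by client type."""  # condensed stand-in
# for the interviewed strategists' captured judgment

def build_proposal(domain: str, transcript: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from env
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=4096,
        system=STRATEGIST_SYSTEM_PROMPT,
        messages=[{
            "role": "user",
            "content": f"Domain: {domain}\n\nMeeting transcript:\n{transcript}",
        }],
    )
    return message.content[0].text  # client-ready draft, one fixed template

if __name__ == "__main__":
    print(build_proposal("example.com", "…transcript text…"))
```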

Terminal output — Claude Code running proposal agent
proposal-agent
Claude Code terminal showing proposal built in 18 min from domain + transcript
Time per proposal: 30 hrs → <20 min
Senior time required: every proposal → once
Inputs needed: domain + call transcript
04. Outbound Landing Pages

Pipeline that finds lookalike accounts, audits their SEO gaps, and generates a personalized landing page per company

Problem: Generic outbound gets ignored. A landing page that names a company, surfaces its actual SEO gaps, and maps those gaps to the service offering converts. But doing that research 30 times manually takes three days, and still ends up feeling templated because it is.

Solution: Clay identifies lookalike accounts from closed-won data, scores them against fit signals, and selects the top 30–40. Audit tools pull real SEO data per domain. A Claude agent writes a landing page around each company's specific gaps: its keywords, its traffic problems, its context. The finished page goes straight to the sales team.

Result: 30 per-company landing pages built in 45 minutes. Research and build time: 3 days → 30 minutes. Each page references real data specific to that company.
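
The per-company loop looks roughly like this Python sketch. clay_lookalikes, audit_seo, and generate_page are hypothetical stand-ins for the Clay export, the audit tooling, and the Claude call in the real pipeline.

```python
# Sketch of the per-company loop; all three helpers are stand-ins.
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Account:
    company: str
    domain: str
    fit_score: float  # scored against closed-won fit signals

def clay_lookalikes() -> list[Account]:
    """Stand-in for the Clay export of scored lookalike accounts."""
    return []

def audit_seo(domain: str) -> dict:
    """Stand-in for the audit tools: real keyword and traffic gaps."""
    return {"keyword_gaps": [], "traffic_issues": []}

def generate_page(account: Account, audit: dict) -> str:
    """Stand-in for the Claude agent that writes the page copy."""
    return f"<html><!-- page for {account.company}, built from its audit --></html>"

def run(top_n: int = 35) -> None:
    accounts = sorted(clay_lookalikes(), key=lambda a: a.fit_score, reverse=True)
    out = Path("pages")
    out.mkdir(exist_ok=True)
    for account in accounts[:top_n]:  # top 30–40 by fit
        audit = audit_seo(account.domain)
        html = generate_page(account, audit)  # names the company, cites its gaps
        (out / f"{account.domain}.html").write_text(html)
        # The real pipeline pushes each page to the sales team from here.

if __name__ == "__main__":
    run()
```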

Terminal output — Claude Code running outbound pipeline
outbound-landing-pages
Claude Code terminal showing 34 personalized landing pages built in 45 minutes
Research + build time: 3 days → 30 min
Pages generated: 30 in 45 min
Personalization depth: generic → per-company
Total output
Four agents. All four loops gone.
Intel report time: <5 min, down from 20 hours
Proposal time: <20 min, down from 30 hours
Campaigns pushed: 12, in 20 minutes instead of half a day
Outbound pages built: 30 per-company pages in 45 minutes
Manual reporting loops: 0, completely eliminated
Why it worked
Not every recurring task is automatable. These four had three things in common.
01. Defined inputs
Each task started with the same data every time: a domain, a competitor list, a campaign ID, a meeting transcript. No ambiguity at the entry point. The agent always knew what to pull and where to start; it didn't need to decide.
02. Fixed output format
The deliverable never changed shape. A proposal doc. An intel PDF. A campaign config. A per-company landing page. When the output template is fixed, the agent fills it correctly every time. Judgment calls don't enter the picture.
03. Predictable trigger
All four ran on a schedule or a drop event. No human had to decide when to start them. That's the filter. Any task that requires someone to decide "now is the right moment to run this" isn't ready to automate.
04. The knowledge was capturable
For the proposal agent, this was the hardest part. Before writing a prompt, the approach was to interview the strategists who wrote the best proposals, not to copy their templates, but to encode their judgment. The agent inherited the thinking. The templates came after.
The real cost isn't the hard work.
It's the predictable work that runs
on a human every single week.

Most marketing bottlenecks aren't complexity problems. They're repeatability problems: the same steps, the same inputs, the same output, done manually because no one stopped to encode them. These four builds freed up more than 50 hours a week of recurring execution time.

If a task runs the same way every time and you can describe its inputs and output format, it's a system. Build it once, run it indefinitely.
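
One way to make that filter concrete: write the task down as a three-field spec. The shape below is a hypothetical convention, not an artifact of these builds; if all three fields can be filled without ambiguity, the task qualifies.

```python
# Hypothetical convention for the filter, not part of the builds.
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskSpec:
    inputs: tuple[str, ...]  # same data every run, no ambiguity
    output_format: str       # the deliverable never changes shape
    trigger: str             # a schedule or drop event, never "when someone decides"

PROPOSALS = TaskSpec(
    inputs=("domain", "meeting transcript"),
    output_format="client-ready proposal draft",
    trigger="transcript dropped in folder",
)

INTEL = TaskSpec(
    inputs=("competitor list",),
    output_format="structured PDF report",
    trigger="weekly cron",
)
```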

There are more loops worth killing.

If you're running a recurring task that looks the same every week, it's probably buildable.

Vinay Kumar · Built using Claude Code + MCP · 2025