AI & Automation

How We Automated Google Ads Campaign Setup for Law Firms

How we built an AI system using the Claude API and Google Ads API to automate campaign setup for law firms. Architecture, prompts, failure modes, and real costs.


16 min read · 3,200 words · 8 FAQs answered · Last updated Mar 31, 2026

Setting up a Google Ads campaign for a law firm takes 4-6 hours of skilled work. Keyword research, match type decisions, negative keyword lists, ad group structure, ad copy variations, location targeting, bid strategy selection, conversion tracking configuration. Multiply that by 20 new clients a month and you have a full-time employee doing nothing but campaign setup.

We built a system that does the first 80% of that work automatically, using the Claude API for the thinking and the Google Ads API for the execution. A human still reviews everything before it goes live. Nothing runs without approval.

Here is how it works, what broke along the way, and where we landed.

Why this exists

Legal marketing agencies hit a scaling wall around 100 clients. The work is repetitive but not simple. Every personal injury firm in Miami needs similar keywords, but the specific mix depends on their practice areas, case types they want, geographic coverage, and budget. A family law firm in Denver has a completely different keyword universe.

The setup process follows a pattern, but the pattern has enough variation that you cannot just copy-paste a template. You need someone who understands both Google Ads and the legal vertical to make judgment calls about which keywords to include, how to structure ad groups, and what the ad copy should say.

That is exactly the kind of work that AI handles well. It is pattern-based with structured variation. The inputs are predictable (practice areas, location, budget, case types). The output is structured data (keyword lists, ad group hierarchies, ad copy). And the quality bar is “good enough for a human to review and approve”, not “perfect on the first try.”

The architecture

The system has three stages. Each stage produces structured output that feeds into the next. A human checkpoint sits between stage 2 and stage 3.

Stage 1: Client intake processing. A structured form captures the inputs: firm name, practice areas, target locations (cities, counties, radius), monthly budget, case types they want (and do not want), existing campaigns (if migrating), and any specific messaging requirements.

This data goes into a JSON document. No AI is involved in this stage. It is just data collection and normalization.
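As a sketch, the normalized intake document might look like the following (field names here are illustrative, not the exact production schema):

```json
{
  "firm_name": "Example Injury Law",
  "practice_areas": ["personal_injury", "family_law"],
  "target_locations": {
    "cities": ["Miami, FL", "Coral Gables, FL"],
    "zip_codes": ["33131", "33134"]
  },
  "monthly_budget_usd": 7500,
  "case_types_wanted": ["car_accident", "slip_and_fall"],
  "case_types_excluded": ["medical_malpractice"],
  "existing_campaigns": null,
  "messaging_requirements": "No settlement amounts in ad copy"
}
```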

Stage 2: Campaign generation. This is where Claude does the work. The system sends the client intake JSON to the Claude API with a detailed system prompt that specifies exactly what to produce.

The prompt is not “create a Google Ads campaign.” That would produce garbage. The prompt is closer to 2,000 words of specific instructions covering:

  • How to structure ad groups by practice area and intent level
  • Which keyword match types to use and why (exact match for high-intent, phrase match for discovery, no broad match without explicit approval)
  • Negative keyword patterns specific to legal (free, pro bono, salary, jobs, school, how to become, DIY, forms)
  • Ad copy rules (character limits, dynamic keyword insertion placement, call-to-action patterns for legal)
  • Location targeting hierarchy (city-level campaigns with county-level ad groups)
  • Budget allocation logic (weighted by estimated search volume per practice area)

Claude returns a structured JSON response. Not freeform text. We use tool_use to force the output into a defined schema:

{
  "campaigns": [
    {
      "name": "PI | Miami-Dade | High Intent",
      "budget_daily_micros": 85000000,
      "location_targets": ["Miami, FL", "Miami Beach, FL", "Coral Gables, FL"],
      "ad_groups": [
        {
          "name": "Car Accident Attorney",
          "keywords": [
            { "text": "car accident attorney miami", "match_type": "EXACT" },
            { "text": "auto accident lawyer near me", "match_type": "PHRASE" },
            { "text": "car crash lawyer miami dade", "match_type": "EXACT" }
          ],
          "ads": [
            {
              "headlines": [
                "Miami Car Accident Attorney",
                "No Fee Unless You Win",
                "Free Case Review Today"
              ],
              "descriptions": [
                "Injured in a car accident? Our Miami attorneys have recovered $50M+ for clients. Call now.",
                "Trusted Miami car accident lawyers. Free consultation. No upfront fees."
              ]
            }
          ]
        }
      ],
      "negative_keywords": [
        "free legal advice", "pro bono", "lawyer salary",
        "how to become a lawyer", "law school", "legal forms",
        "cheap lawyer", "lawyer jokes"
      ]
    }
  ]
}

The schema is strict. Every field has a type. Keywords have match types. Budgets are in micros (one million micros equals one currency unit, which is how the Google Ads API represents money). Headlines respect the 30-character limit. Descriptions respect the 90-character limit.
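To show the shape of that enforcement, here is a trimmed sketch of what the tool definition might look like with the Anthropic Messages API. The tool name and the exact field set are ours for illustration; the production schema covers more fields than shown.

```python
# Trimmed sketch of a tool definition for structured campaign output.
# Field names mirror the JSON example above; the real schema is larger.
campaign_tool = {
    "name": "create_campaign_plan",
    "description": "Produce a complete Google Ads campaign structure.",
    "input_schema": {
        "type": "object",
        "properties": {
            "campaigns": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "name": {"type": "string"},
                        "budget_daily_micros": {"type": "integer"},
                        "location_targets": {
                            "type": "array", "items": {"type": "string"}
                        },
                        "negative_keywords": {
                            "type": "array", "items": {"type": "string"}
                        },
                    },
                    "required": ["name", "budget_daily_micros"],
                },
            }
        },
        "required": ["campaigns"],
    },
}

# Passed as tools=[campaign_tool] with
# tool_choice={"type": "tool", "name": "create_campaign_plan"}
# to client.messages.create(...), which forces the model to emit
# JSON conforming to the schema instead of freeform text.
```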

Stage 3: Human review and deployment. The generated campaign structure goes into a review interface. A human sees every keyword, every ad, every budget allocation. They can approve, modify, or reject any piece.

Only after approval does the system touch the Google Ads API. It creates the campaigns, ad groups, keywords, and ads exactly as approved. Conversion tracking and bid strategies are configured based on the client’s goals (maximize conversions for lead gen, target CPA if there is enough historical data).

The prompt engineering problem

The first version of the system prompt was 400 words. It produced campaigns that looked reasonable at a glance but fell apart on review. Common failures:

  • Keywords that were too broad (“lawyer” without a location modifier)
  • Ad groups that mixed intent levels (informational keywords in the same group as hire-intent keywords)
  • Ad copy that used generic phrases instead of practice-area-specific language
  • Negative keyword lists that were too short and missed obvious terms
  • Budget splits that allocated evenly instead of weighting by search volume

Each failure mode got a specific instruction added to the prompt. The prompt grew from 400 words to 2,000 words over about three weeks of iteration.

The critical insight was that the prompt needs to encode the same decision-making heuristics that an experienced Google Ads manager carries in their head. “Use exact match for high-intent keywords” is not enough. You need to define what “high-intent” means in a legal context: keywords that contain “attorney”, “lawyer”, “law firm”, “hire”, “consultation”, or “near me” combined with a practice area. You need to say that “car accident” alone is informational, but “car accident attorney” is high-intent.

We version the prompts. Each version is a file with a timestamp and a changelog comment at the top. When we update a prompt, the old version stays. We can A/B test by routing some clients to the new version and comparing the review-rejection rate. If the new prompt produces campaigns that humans approve with fewer modifications, it is better.

What broke

Problem 1: Claude sometimes invented keywords. Early versions would generate keywords that sounded plausible but had zero search volume. “Automobile collision litigation specialist” is not something anyone types into Google. The fix was adding explicit instructions: generate keywords based on how people actually search, not how lawyers describe their services. Use plain language. Short phrases. The way someone would talk to a friend: “car accident lawyer miami”, not “vehicular collision attorney Miami-Dade County.”

Problem 2: Budget math did not add up. Claude would allocate 60% of the budget to personal injury, 30% to family law, and 25% to criminal defense. That is 115%. The fix was switching from percentage-based allocation to absolute amounts and adding a validation step that checks the sum before the output is accepted. If the sum does not match the total budget, the system retries with an explicit correction.
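A minimal version of that validation step might look like this (the function name and return convention are ours, not lifted from the production code):

```python
def validate_budget_allocation(campaigns, total_budget_micros):
    """Check that per-campaign budgets sum to the client's total budget.

    Returns None on success, or a correction message suitable for
    feeding back to the model on retry.
    """
    allocated = sum(c["budget_daily_micros"] for c in campaigns)
    if allocated != total_budget_micros:
        return (
            f"Budget mismatch: campaigns allocate {allocated} micros "
            f"but the total daily budget is {total_budget_micros} micros. "
            "Re-allocate using absolute amounts that sum exactly."
        )
    return None
```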

Problem 3: Headline character limits. Google Ads headlines have a strict 30-character limit. Claude would generate headlines like “Experienced Personal Injury Attorney” (36 characters) regularly. We added character counting to the tool schema validation. If a headline exceeds 30 characters, the tool call fails with an error message that includes the character count and asks for a shorter version. Claude corrects it on the retry.
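The character check itself is trivial; the useful part is returning the count in the error so the model can self-correct. A sketch (helper name is ours):

```python
# Google Ads asset limits: 30 chars per headline, 90 per description.
HEADLINE_MAX = 30
DESCRIPTION_MAX = 90

def check_ad_lengths(ad):
    """Return a list of error strings for over-length ad assets.

    Each error includes the actual character count so the model can
    shorten the asset on retry; an empty list means the ad passes.
    """
    errors = []
    for h in ad["headlines"]:
        if len(h) > HEADLINE_MAX:
            errors.append(
                f"Headline '{h}' is {len(h)} chars; max is {HEADLINE_MAX}. "
                "Provide a shorter version."
            )
    for d in ad["descriptions"]:
        if len(d) > DESCRIPTION_MAX:
            errors.append(
                f"Description is {len(d)} chars; max is {DESCRIPTION_MAX}."
            )
    return errors
```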

Problem 4: Location targeting ambiguity. “Miami” could mean the city of Miami, Miami-Dade County, or the Miami metro area. These produce very different targeting in Google Ads. We now require the intake form to specify exact city names and zip codes, and the prompt explicitly instructs Claude to use only the provided locations, never to infer or expand the targeting area.

The Google Ads API integration

The Google Ads API is not simple. Authentication alone requires OAuth2 with a developer token, a client ID, a client secret, a refresh token, and a login customer ID. Rate limits are real. Error messages are often unhelpful.

A few things we learned:

  • Use the Google Ads API client library, not raw HTTP. The protobuf-based API is not designed for manual HTTP calls. The Python client library handles serialization, auth token refresh, and retry logic.
  • Create campaigns in the right order. Campaign first, then ad groups, then keywords and ads. If you try to create an ad group before its parent campaign exists, the API returns an error. This seems obvious, but when you are building from a flat JSON structure, you need to explicitly handle the dependency order.
  • Budgets are shared resources. A CampaignBudget is a separate entity from a Campaign. You create the budget first, then reference it from the campaign. Multiple campaigns can share a budget. Our system creates one budget per campaign for simplicity, but the data model allows sharing.
  • Use batch operations. Creating 200 keywords one at a time hits rate limits. The mutate endpoint accepts batches of up to 5,000 operations. We batch all keywords for an ad group into a single request.
  • Validate before sending. The API returns cryptic errors when field values are invalid. Checking character limits, match type enums, and required fields before the API call saves debugging time.
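The batching logic is simple once the operations are built. The chunking below is a sketch of ours; the actual protobuf operations and the service call go through the official client library, roughly `client.get_service("AdGroupCriterionService").mutate_ad_group_criteria(customer_id=..., operations=chunk)` (illustrative; see the google-ads library documentation for exact usage).

```python
# The mutate endpoint caps a single request at 5,000 operations,
# so a flat operation list gets split into API-sized batches first.
MAX_OPS_PER_REQUEST = 5000

def chunk_operations(operations, size=MAX_OPS_PER_REQUEST):
    """Split a flat list of mutate operations into batches the
    Google Ads API will accept in one request."""
    return [operations[i:i + size] for i in range(0, len(operations), size)]
```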

What the system does not do

  • It does not optimize running campaigns. This system handles initial setup only. Optimization (bid adjustments, keyword pruning, ad rotation) is a different problem with different inputs (performance data over time, not client intake info).
  • It does not write landing pages. The ads point to existing pages on the client’s website. If those pages are bad, the ads will not convert regardless of how well the campaign is structured.
  • It does not handle billing. Google Ads billing setup, payment methods, and account access are handled manually. The API can manage campaigns but not payment infrastructure.
  • It does not replace the human. Every campaign goes through human review. The system gets the work 80% done in 10 minutes instead of 4 hours. The human spends 30-45 minutes reviewing and refining instead of building from scratch.

The economics

Before the system: 4-6 hours of skilled work per campaign setup. At $75/hour loaded cost for a competent Google Ads specialist, that is $300-450 per client.

After the system: 10 minutes of compute time (Claude API calls are a few cents) plus 30-45 minutes of human review. Total cost is roughly $45-60 per client.

At 20 new clients per month, that is a savings of $5,000-8,000 monthly. The system paid for its development time within the first month.

The real benefit is not cost savings. It is consistency. Human-built campaigns vary in quality depending on who builds them, what time of day it is, and how many campaigns they have already set up that week. The AI system produces the same quality every time, and the human review catches the same categories of issues every time because the reviewers use a standardized checklist.

The stack

  • Python 3.12 for the orchestration layer
  • Anthropic Python SDK for Claude API calls with tool_use for structured output
  • Google Ads API Python client for campaign creation
  • Supabase for client intake data and campaign state tracking
  • Simple approval UI built with Flask — nothing fancy, just a table showing the generated campaign with approve/reject buttons per element

There are no orchestration frameworks. No LangChain, no LangGraph, no CrewAI. The workflow is three functions called in sequence with a database write between each step. Adding a framework for a linear three-step process would be overhead with no benefit.

If the workflow had branching logic, parallel agents, or dynamic tool selection, a framework might make sense. This one does not. It is a pipeline. Pipelines do not need orchestration frameworks.
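The shape of that pipeline, schematically. Every name below is a placeholder and the bodies are trivial stand-ins, not the production code:

```python
def process_intake(form):
    # Stage 1: normalization only, no AI involved.
    return {"intake": form}

def generate_campaigns(intake):
    # Stage 2: the Claude API call happens here in production.
    return {"plan_for": intake["intake"]["firm_name"]}

class Store:
    """Stand-in for the database writes between stages."""
    def __init__(self):
        self.tables = {}
    def save(self, table, row):
        self.tables.setdefault(table, []).append(row)

def run_pipeline(form, db):
    intake = process_intake(form)
    db.save("intakes", intake)
    plan = generate_campaigns(intake)
    db.save("campaign_plans", plan)
    # Stage 3 (deployment) runs later, only after human approval
    # in the review UI.
    return plan
```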

How to think about building something like this

If you are a legal marketing agency looking at automating campaign setup, here is what matters:

  1. Start with the structured output, not the prompt. Define exactly what JSON schema the system should produce. That schema becomes the contract between the AI and the Google Ads API. Get it right first, then figure out the prompt that produces it.
  2. Encode your expertise in the prompt. The AI does not know that “car accident lawyer” is a better keyword than “motor vehicle accident attorney” for Google Ads. You do. Write that down. Every heuristic, every rule of thumb, every judgment call — it goes in the prompt.
  3. Human review is not optional. Not because the AI is unreliable (it is surprisingly good with detailed prompts), but because a client’s money is at stake. The review step is fast and catches the 5% of outputs that need adjustment.
  4. Validate everything programmatically. Character limits, budget math, match type values, location format. If you can check it with code, do not rely on the AI to get it right every time.
  5. Version your prompts. You will improve them. You need to know which version produced which campaign so you can track quality over time.

The goal is not full automation. It is removing the repetitive work so your team spends their time on judgment calls instead of data entry.



Frequently asked questions

01

Can AI automate Google Ads campaign setup for law firms?

Yes, but not fully. We built a system that automates roughly 80% of the setup work: keyword research, match type selection, ad group structure, ad copy generation, negative keyword lists, location targeting, and budget allocation. A human reviews everything before it goes live. The AI handles the labor-intensive parts. The human provides the strategic judgment. Nothing touches a client's Google Ads account without approval.

02

How does the Claude API work for Google Ads automation?

The system sends client intake data (practice areas, locations, budget, case types) to the Claude API with a detailed 2,000-word system prompt that encodes Google Ads best practices specific to legal marketing. Claude returns structured JSON output using tool_use (function calling), not freeform text. The JSON conforms to a strict schema that maps directly to the Google Ads API data model, with fields for campaigns, ad groups, keywords with match types, ad copy with character limits, and budget allocations in micros.

03

What does the Google Ads campaign automation system cost to run?

Each campaign generation costs roughly $0.25 in Claude API usage. A human spends 30-45 minutes reviewing and refining the output, compared to 4-6 hours building from scratch. At 20 new clients per month, the system saves $5,000-8,000 monthly in specialist time. The orchestration layer runs on one VPS at $40/month, with database hosting and API usage as separate line items.

04

How do you prevent the AI from generating bad Google Ads keywords?

Three safeguards. First, the system prompt explicitly instructs Claude to generate keywords based on how people actually search, not how lawyers describe their services. Second, we use tool_use with strict schema validation that rejects outputs with wrong types or missing fields. Third, every generated campaign goes through human review before anything is created in Google Ads. About 5% of outputs need meaningful adjustments.

05

What are the most common AI failure modes in Google Ads automation?

The four most common failures we encountered: Claude inventing plausible-sounding keywords with zero search volume, budget allocations that add up to more than 100%, headlines exceeding the 30-character limit, and location targeting ambiguity where a city name could refer to a city, county, or metro area. Each failure mode got a specific programmatic validation check and prompt instruction to prevent recurrence.

06

Should I use LangChain or CrewAI for Google Ads automation?

For a linear pipeline like campaign setup, no. Our system is three functions called in sequence with a database write between each step. Adding an orchestration framework for a sequential process adds complexity without solving any real problem. If your workflow had branching logic, parallel agents, or dynamic tool selection, a framework might make sense. A pipeline does not need an orchestration framework.

07

How do you handle Google Ads API rate limits?

Batch operations. The Google Ads API mutate endpoint accepts batches of up to 5,000 operations in a single request. We batch all keywords for an ad group into one request instead of creating them one at a time. We also validate all field values locally before sending API calls, which eliminates most error-retry cycles that would otherwise consume rate limit quota.

08

What prompt engineering techniques work for structured Google Ads output?

The prompt needs to encode the same decision-making heuristics an experienced Google Ads manager carries. Saying 'use exact match for high-intent keywords' is not enough. You need to define what high-intent means in a legal context: keywords containing attorney, lawyer, law firm, hire, consultation, or near me combined with a practice area. You need to specify that 'car accident' alone is informational but 'car accident attorney' is high-intent. Every rule of thumb goes in the prompt.

Next step

Want to Automate Your Law Firm's Google Ads?

Book a free strategy session. We'll show you how AI-powered campaign setup works, review your current campaigns, and identify where automation can save your team time.
