
You Probably Don't Need an AI Agent

Everyone's throwing LLMs at problems that don't need them and calling everything an agent. Here's a simple way to think about what goes where.

Ram Bakthavachalam

It has become a trend to call everything an AI Agent.

"I am building an AI Agent to parse a CSV file and write the contents into a database."

"I am building an AI Agent that calls Yahoo Finance API, fetches quotes, and checks if my stock position is green or red."

There is real confusion about when you need an AI Agent, when you need AI-powered automation, and when a regular script will do. So let me try to clear this up.

Most of your work is probably just regular automation

Think about what most software does. It takes an input, follows some rules, and produces an output. A deploy pipeline, a backup script, a billing job that runs every night. None of this needs AI.

from pathlib import Path
import shutil

for file in Path("inbox").glob("*.csv"):
    validate(file)       # your own validation logic
    transform(file)      # your own transform logic
    shutil.move(file, Path("processed") / file.name)

This is boring. It's also fast, free, predictable, and easy to fix when something breaks. You can write a test for it. You can read the logs and know exactly what happened.

If your task has clear inputs and clear rules, just write code. You don't need an LLM. You don't need an agent. You need a function.

When do you actually need an LLM?

Here's a real example. You're building a support system and tickets come in as free-text emails. Some are billing issues, some are bugs, some are feature requests. A customer might write "I got charged twice last month" or "your checkout page is broken on Safari" or "would be cool if you had dark mode."

Try writing if-else logic for that. You can't. The inputs are messy and there are infinite ways people can write the same thing. This is where an LLM is useful.

# `llm` is a placeholder for whatever chat-completion client you use
def categorize_ticket(text: str) -> str:
    response = llm.chat(messages=[{
        "role": "user",
        "content": f"Categorize this ticket as billing, "
                   f"technical, or feature-request: {text}"
    }])
    # The model returns the category as plain text
    return response.content.strip()

Look at what's happening here. You still have a pipeline. Tickets come in, get categorized, get sent to the right team. That flow is regular code. The LLM handles one step that needs language understanding.

This is what most "AI automation" actually is. Or should be. A normal workflow with an LLM plugged in where you need it to understand something that's not structured. Classification, summarization, extraction, translation. Things that used to need a whole NLP team.

The rest of the pipeline stays deterministic. You're still in control.
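
To make that concrete, here's a rough sketch of the surrounding pipeline. Everything except the single categorize_ticket call is plain deterministic code. Note that fetch_new_tickets, send_to_queue, and the queue names are made-up placeholders, not any real API.

# Hypothetical glue code around the one LLM step.
ROUTES = {
    "billing": "billing-queue",
    "technical": "engineering-queue",
    "feature-request": "product-queue",
}

def route_ticket(text: str) -> str:
    category = categorize_ticket(text)           # the only LLM call
    return ROUTES.get(category, "triage-queue")  # off-script answers go to a human

for ticket in fetch_new_tickets():               # deterministic, testable
    send_to_queue(route_ticket(ticket.text), ticket)

The .get fallback matters: if the model answers something outside your three categories, the ticket lands in a human triage queue instead of crashing the pipeline.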

So what's an actual AI agent?

Imagine asking someone: "Figure out who our main competitors are, compare their pricing to ours, and write up a summary."

You're not giving them steps. You're giving them a goal. They have to figure out how to get there. Maybe they search for some companies, visit their pricing pages, check review sites, compare with your pricing, and write something up. If one website is down, they try another. If they find something unexpected, they look deeper.

That's what an agent does. You give it a goal and some tools, and it plans its own steps.

# `Agent` stands in for a generic agent framework, not a specific library
agent = Agent(
    model="claude-sonnet-4-5-20250929",
    tools=[web_search, read_url, write_file],
    instructions="You are a market research analyst."
)

result = agent.run("Analyze the competitive landscape for Vercel")

You didn't tell it to search the web, or which pages to read, or what to compare. It figured that out on its own. That's the difference. An agent has autonomy. It decides what to do next based on what it finds.
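
Under the hood, most agent frameworks run a loop shaped something like this. This is a minimal sketch, not any specific library's internals; llm.chat and the response fields are the same kind of placeholders as before.

# Sketch of the plan-act-observe loop an agent runs internally.
def run_agent(goal: str, tools: dict, max_steps: int = 15) -> str:
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        response = llm.chat(messages=messages, tools=list(tools.values()))
        if response.tool_call is None:
            return response.content               # model says it's done
        tool = tools[response.tool_call.name]     # model picked a tool
        result = tool(**response.tool_call.arguments)
        messages.append({"role": "tool", "content": str(result)})
    return "Stopped: step limit reached"

The model, not your code, decides which tool to call next and when to stop.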

This is useful for open-ended work. Research, investigation, complex debugging. Tasks where you can't plan the steps ahead of time because you don't know what you'll find.

But it's also slow (30+ seconds for a multi-step task), expensive (dollars, not cents), and hard to debug. When an agent goes wrong, tracing what happened through a 15-step reasoning chain is painful.

The simple test

When you're building something, ask yourself:

Do I know the steps? If yes, write regular code. No LLM needed.

Do I know the steps, but one of them needs to understand language? Use an LLM for that one step. Keep everything else as normal code.

Is the task genuinely open-ended, so that I can't predict what needs to happen? Now you might need an agent.

That's it. Most things are in the first two categories. Agents are for the rare cases where you really can't plan what comes next.

[Diagram: Automation vs LLM vs AI Agents]

How they work together

The best systems use all three. Here's a real pattern:

A cron job runs every hour and pulls new support tickets (regular automation). Each ticket gets categorized by an LLM (LLM automation). Most tickets get routed automatically. But when the LLM flags something as "potential security issue," an agent starts up to investigate: it checks the user's recent activity, looks at related tickets, searches the codebase for the mentioned endpoint, and writes a brief for the security team (AI agent).
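
Sketched in code, reusing the placeholders from earlier. The tool names are invented for illustration, and this assumes the categorizer's prompt has been extended with a security category.

# Hourly entry point; the scheduling itself lives in cron.
def hourly_ticket_job():
    for ticket in fetch_new_tickets():              # regular automation
        category = categorize_ticket(ticket.text)   # one cheap LLM call
        if category == "potential-security-issue":
            agent = Agent(                          # rare, expensive path
                model="claude-sonnet-4-5-20250929",
                tools=[check_user_activity, search_tickets, search_codebase],
                instructions="Investigate and write a brief for the security team.",
            )
            agent.run(f"Investigate this ticket: {ticket.text}")
        else:
            send_to_queue(ROUTES.get(category, "triage-queue"), ticket)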

The cron job is free and instant. The LLM call costs a fraction of a cent and takes a second. The agent costs a dollar and takes a minute. Each one does what it's good at.

Don't use an agent to move CSV files. Don't use a script to understand customer emails. Match the tool to the problem.