Diosh Lequiron
AI & Technology · 11 min read

Automate This Before Reaching for AI

Most organizations reach for AI before automating what should already be automated. A decision framework for choosing the right tool — and auditing what you already have.

The Seduction of the More Sophisticated Tool

There is a pattern I have seen repeat itself across organizations of different sizes, sectors, and technical sophistication: a team encounters a repetitive problem, decides the solution is AI, and invests weeks or months building something that a scheduled script or a webhook integration would have solved in a day.

This is not a technology failure. It is a framing failure. When AI became accessible to non-technical teams — through APIs, no-code platforms, and consumer tools — it also became the default answer to any automation question. The question stopped being "what is the right tool for this problem?" and became "how do we use AI for this?"

The result is a category of AI deployments that are more expensive, less reliable, and harder to maintain than the automation solutions they replaced — or more precisely, the automation solutions that were never built because the team reached for AI instead.

This article is a framework for making that choice deliberately. It is not an argument against AI; AI is genuinely useful for a specific class of problems. It is an argument for automating the things that should be automated before reaching for AI, so that when you do use AI, you are using it where it actually provides value.

The RUDE Framework: Four Questions Before You Reach for AI

Before evaluating whether AI is the right tool for a problem, run it through four questions. I call this the RUDE framework — not as a judgment of the impulse to reach for AI, but as a useful mnemonic for the filter.

R — Rule-definable? Can you write down, in plain language, all the conditions that determine what action to take? If someone with no domain expertise could follow your rules and produce the correct output every time, the problem is rule-definable. Automation handles rule-definable problems better than AI does.

U — Unstructured input required? Does solving the problem require interpreting natural language, images, audio, or other unstructured data where meaning is ambiguous or context-dependent? If the input is always structured — a form submission, a database record, a file in a known format — unstructured input interpretation is not adding value, and you do not need AI for it.

D — Decisions involve judgment? Is there genuine ambiguity in the decision, where a reasonable expert might make a different call depending on context that cannot be fully specified in advance? Judgment under ambiguity is where AI has a legitimate advantage over rule-based systems. If the decision space is finite and enumerable, it is not judgment — it is a lookup table.

E — Error tolerance is low? What happens when the system is wrong? If the cost of an error is high — a payment processed incorrectly, a compliance flag missed, a patient record misrouted — you want deterministic behavior, which automation provides and AI does not. AI errors are probabilistic and not always predictable in advance. Rule-based systems fail in predictable, auditable ways.

If the problem is rule-definable, operates on structured input, involves finite decision space, and has low error tolerance: automate it. Do not build an AI system for it.

If the problem requires interpreting unstructured input, involves genuine judgment under ambiguity, and has error tolerance that allows for probabilistic behavior with human oversight: then consider AI.
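The two decision rules above can be sketched as a small function. This is a minimal illustration, not a prescribed implementation; the function name and the boolean inputs are assumptions for the example.

```python
def rude_filter(rule_definable, structured_input, finite_decisions, low_error_tolerance):
    """Apply the four RUDE questions to a problem description.

    Each argument is a boolean answer to one question. Returns
    "automate" when the problem is rule-definable, operates on
    structured input, has a finite decision space, and has low
    error tolerance; "consider AI" otherwise.
    """
    if rule_definable and structured_input and finite_decisions and low_error_tolerance:
        return "automate"
    return "consider AI"

# A form-validation workflow: rules are known, input is structured,
# decisions are enumerable, and errors are costly.
print(rude_filter(True, True, True, True))  # automate
```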

The Four Workflow Categories That Belong in Automation

There are four categories of organizational work that are consistently misrouted to AI when they should be automated. These are not edge cases. They represent the majority of the "we should use AI for this" requests I encounter.

Data transformation. Moving data from one system to another, converting it from one format to another, validating it against known rules, and loading it into a destination. This is the domain of ETL pipelines, API integrations, and scheduled scripts. The rules are knowable in advance, the inputs are structured, and the errors are detectable. AI adds latency, cost, and unpredictability to a problem that is already well-solved by automation. If your team is using an LLM to transform CSV data or reformat API responses, replace it with a script.
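As a hedged sketch of the script-over-LLM point: a CSV-to-JSON transformation that validates against known rules. The field names and rules are hypothetical; the point is that failures are detected deterministically rather than guessed at.

```python
import csv
import io
import json

def transform_orders(csv_text):
    """Validate and convert order rows from CSV to JSON records.

    The rules are known in advance: required fields, an integer
    quantity, an uppercase SKU. Rows that fail validation are
    reported explicitly -- the failure mode is deterministic.
    """
    records, errors = [], []
    for line_no, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=2):
        if not row.get("sku") or not row.get("quantity"):
            errors.append(f"line {line_no}: missing sku or quantity")
            continue
        try:
            qty = int(row["quantity"])
        except ValueError:
            errors.append(f"line {line_no}: quantity is not an integer")
            continue
        records.append({"sku": row["sku"].upper(), "quantity": qty})
    return json.dumps(records), errors

out, errs = transform_orders("sku,quantity\nab-1,3\nxy-2,oops\n")
```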

Notification and alerting. Sending the right message to the right person when a defined condition is met. This is a conditional trigger, not a judgment problem. "When inventory falls below X units, notify the purchasing team" does not require AI. "When a customer submits a form, send a confirmation email" does not require AI. Any workflow that can be expressed as a set of conditions and corresponding actions belongs in an automation tool — a workflow engine, a webhook handler, or a simple event-driven script. AI in notification systems typically adds hallucination risk (generating message content that deviates from what was intended) without adding value.
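The inventory example above is a plain conditional trigger. A minimal sketch, with the threshold, message, and notify callback as illustrative assumptions:

```python
INVENTORY_THRESHOLD = 50  # illustrative value for "X units"

def check_inventory(sku, units, notify):
    """Fire a fixed, pre-written notification when a defined
    condition is met. No content is generated at runtime, so
    the message cannot drift from what was intended."""
    if units < INVENTORY_THRESHOLD:
        notify(f"Inventory for {sku} is at {units} units; reorder.")
        return True
    return False

sent = []
check_inventory("AB-1", 12, sent.append)
# sent now holds the exact pre-written message, nothing more
```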

Repetitive document generation. Producing documents from templates with variable data — contracts, reports, invoices, status summaries — is a template problem, not a generation problem. The structure is fixed. The content variables are known. The output format is defined. Template engines handle this reliably and cheaply. AI-generated documents introduce variance in structure, language, and formatting that creates downstream review burden and compliance risk. If you know what you want the document to say before you generate it, use a template.
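A minimal template sketch using Python's standard library, with hypothetical field names. Every generated document has identical structure, language, and formatting:

```python
from string import Template

# Structure is fixed; only the variables change per document.
INVOICE = Template(
    "Invoice $number\n"
    "Billed to: $customer\n"
    "Amount due: $amount\n"
)

def render_invoice(number, customer, amount):
    """Fill a fixed template with known variables."""
    return INVOICE.substitute(number=number, customer=customer, amount=amount)

doc = render_invoice("2024-001", "Acme Ltd", "$1,200.00")
```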

Status tracking and reporting. Aggregating data from multiple sources and producing a summary of current state. This is a query and formatting problem. The logic for determining what counts as "on track," "at risk," or "blocked" is definable in advance. The data sources are structured. The output format is consistent. Automation handles status tracking more reliably than AI, and it does so without the risk of the system misrepresenting the state of a project because it generated a plausible-sounding summary rather than an accurate one.
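As a sketch, the "on track," "at risk," "blocked" logic is a definable rule over structured fields. The thresholds and field names here are assumptions for illustration:

```python
def project_status(open_blockers, pct_complete, pct_elapsed):
    """Classify project state from structured inputs using rules
    defined in advance. The same inputs always give the same
    answer, so the summary cannot misrepresent the data."""
    if open_blockers > 0:
        return "blocked"
    if pct_complete + 10 < pct_elapsed:  # assumed 10-point grace margin
        return "at risk"
    return "on track"

rows = [("alpha", 0, 80, 75), ("beta", 2, 40, 60), ("gamma", 0, 30, 70)]
report = {name: project_status(b, c, e) for name, b, c, e in rows}
```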

Where AI Actually Earns Its Place

None of this means AI is not useful. It means AI is useful for a specific kind of problem that the four categories above do not represent.

AI genuinely adds value when:

The input is unstructured and variable. Customer emails, support tickets, free-text feedback, documents with inconsistent formatting, voice recordings — these require interpretation, not just parsing. A rule-based system cannot reliably classify a customer email because the same intent can be expressed in thousands of different ways. AI handles this class of problem better than automation does.

The decision space is not fully enumerable. When you cannot list all possible inputs and their correct outputs in advance — because the problem involves genuine judgment, context-sensitivity, or the interpretation of novel situations — AI's probabilistic approach is appropriate. Routing a complex support ticket to the right team when the ticket could describe several different issues is a judgment problem. A rule-based system will get it wrong on the edge cases; AI may get it wrong less often.

The cost of imperfect output is manageable with human review. AI output under human oversight is a different risk profile than AI output operating autonomously. If a human reviews every AI-generated draft before it is sent, the cost of AI errors is the cost of human review time, not the cost of incorrect outputs reaching customers. The appropriate use of AI often involves keeping a human in the loop precisely because the output is probabilistic.

The task requires synthesis across large bodies of unstructured information. Summarizing a long document, identifying patterns across hundreds of customer interactions, generating a first draft of a report that requires interpretation of complex data — these are synthesis problems where AI reduces labor significantly and the cost of imperfection is manageable.

The distinction is not between "hard problems" and "easy problems." Some of the problems that belong in automation are genuinely complex — multi-step data transformation with many conditional branches can be intricate to build. The distinction is between problems where the correct answer is deterministic and auditable, and problems where the answer requires interpretation and judgment.

Auditing Your Current AI Spend for Automation Candidates

If you are already using AI in your workflows, it is worth auditing your current deployments against the RUDE framework. In my experience, a significant fraction of AI spend in most organizations is on problems that could be solved more reliably and cheaply with automation.

The audit process:

List every AI-powered workflow in your stack. Include API calls to LLMs, AI features in SaaS tools you use, and any internal tools built on AI models. Be specific about what each workflow does — not "AI handles customer support" but "AI classifies incoming emails into five categories and routes them to the appropriate queue."

Apply the RUDE filter to each workflow. For each one: Is the problem rule-definable? Does it operate on structured input? Is the decision space finite? Is the error tolerance low? If the answer to most of these questions is yes, the workflow is an automation candidate.

Estimate the cost difference. AI API costs, the latency of AI processing, the engineering overhead of prompt management, and the monitoring burden of probabilistic systems all add up. Compare this against the cost of the automation equivalent — typically a script, a workflow tool, or a simple conditional logic layer.
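A back-of-the-envelope version of this comparison can be scripted. Every number below is a placeholder assumption, not a benchmark; the structure of the calculation is the point:

```python
def monthly_cost(calls_per_month, cost_per_call, eng_hours, hourly_rate):
    """Rough monthly cost of a workflow: per-call spend plus
    ongoing engineering and monitoring overhead."""
    return calls_per_month * cost_per_call + eng_hours * hourly_rate

# Placeholder figures for one email-routing workflow.
ai_cost = monthly_cost(100_000, 0.002, eng_hours=20, hourly_rate=100)    # LLM calls + prompt upkeep
script_cost = monthly_cost(100_000, 0.0, eng_hours=4, hourly_rate=100)   # deterministic rules, less monitoring
```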

Prioritize by reliability gap. Some automation candidates will be failing quietly — the AI output is wrong often enough that humans are routinely correcting it, but no one has formally tracked the error rate. These are the highest-priority replacements, because you are paying for AI and paying for human review of AI errors simultaneously.

Plan the migration carefully. Replacing an AI workflow with automation is not always simple. If the AI workflow was doing something that is genuinely hard to express as rules — even if it should have been expressed as rules — you will need to do the work of writing those rules before you can automate. This is investment, but it is investment in a more reliable system.

The Organizational Tendency to Prefer AI

There is a social dimension to this problem that the RUDE framework does not fully address. In many organizations, recommending automation over AI requires defending a less exciting choice.

"We should build a webhook integration that fires when a form is submitted" does not generate the same enthusiasm as "we should use AI to process form submissions." The technology is older, the description is less compelling, and the outcome — a reliable, boring system that does exactly what it should — does not produce good material for presentations or external communications.

This preference for the more sophisticated tool is not irrational. AI is genuinely more capable than automation for a class of problems. The problem is that the enthusiasm for AI tends to detach from the specifics of whether AI is the right tool for the problem at hand. Teams that have recently built AI systems have evidence that AI works in some context; they generalize from that evidence to the assumption that AI is appropriate for all contexts.

The cost of this tendency is borne by the systems that result. AI systems built for problems that should have been automated tend to be harder to debug (because the behavior is probabilistic), harder to maintain (because prompt changes can shift behavior in ways that are difficult to predict), more expensive to operate (because LLM API costs at scale are significant), and less reliable (because the error rate of probabilistic systems is higher than the error rate of deterministic ones).

The correction is not to be hostile to AI. It is to be precise about when AI is the right tool, and to have the organizational vocabulary to defend automation as a legitimate, valuable choice when it is the right one.

The RUDE Decision Protocol

To make this operational, here is the full protocol in five steps:

Step 1 — Problem definition. Write one sentence describing exactly what the system needs to do: what input it receives, what decision or transformation it performs, and what output it produces.

Step 2 — RUDE filter. Apply all four questions. Score each: Yes / No / Partial.

Step 3 — Automation candidate assessment. If three or four RUDE questions score Yes, document this as an automation candidate. List the specific automation approach (webhook, scheduled script, workflow engine, template engine, ETL pipeline) that would address it.

Step 4 — AI justification requirement. If the team still wants to use AI despite the automation candidate assessment, require a written justification that addresses: what specific AI capability is needed that automation cannot provide, what the acceptable error rate is, what the human oversight mechanism is, and what the monitoring plan is.

Step 5 — Cost comparison. Before any AI deployment, produce a cost estimate for both the AI approach and the automation approach. The automation approach cost is often dramatically lower. The comparison should be visible to the decision-maker, not just the engineering team.
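The protocol's scoring and justification gate could be recorded as a simple structure. This is one possible shape, with illustrative field names; the Step 3 threshold ("three or four questions score Yes") is taken directly from the protocol:

```python
from dataclasses import dataclass

@dataclass
class RudeAssessment:
    """One workflow's pass through Steps 1-4 of the protocol."""
    problem: str                      # Step 1: one-sentence definition
    rule_definable: str = "no"        # Step 2 scores: yes / no / partial
    structured_input: str = "no"
    finite_decisions: str = "no"
    low_error_tolerance: str = "no"
    ai_justification: str = ""        # Step 4: required if AI is still chosen

    def automation_candidate(self):
        """Step 3: candidate when three or four questions score yes."""
        answers = [self.rule_definable, self.structured_input,
                   self.finite_decisions, self.low_error_tolerance]
        return sum(a == "yes" for a in answers) >= 3
```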

This protocol does not prevent AI adoption. It ensures that when AI is adopted, the decision was made deliberately, with awareness of the alternatives, and with a documented rationale that can be evaluated after the system has been operating for a few months.

Building the Automation Foundation First

There is a sequencing argument embedded in all of this that is worth making explicit. Organizations that have strong automation foundations — well-integrated systems, reliable data pipelines, structured workflows with clear triggers and actions — are better positioned to use AI effectively than organizations that have not built that foundation.

This is because AI is most useful as a layer on top of structured systems, not as a replacement for them. An AI system that classifies customer emails is more effective when those emails are already being captured reliably, routed to a queue, and processed in a consistent format. An AI system that generates reports is more effective when the underlying data it is summarizing is clean, structured, and current.

Organizations that skip the automation foundation and go straight to AI often end up with AI systems that spend significant computation on problems that could have been solved with data governance: inconsistent formats, missing fields, unclear routing logic, unreliable triggers. The AI becomes a patch for a structural problem rather than a genuine capability layer.

Build the automation foundation first. It makes everything that comes after it — including AI — more reliable, more maintainable, and more valuable.
