Every business I work with has a backlog of “we should automate that” ideas. Most of them are good instincts. Some of them are genuinely worth doing. A few of them would cause more problems than they solve.

The challenge isn’t having ideas — it’s deciding which ones to act on first. Here’s the framework I use to make that call quickly.

Score each idea on three dimensions

For each automation idea, give it a score from 1–5 on these three questions:

1. How often does this happen?

A task that happens twice a year isn’t a strong automation candidate, even if it takes a full day each time. A task that happens fifty times a day is a different matter entirely. Frequency multiplies impact.

  • 1 = a few times a year
  • 3 = weekly
  • 5 = daily or near-constant

2. How rule-based is it?

Automation works best when the logic is clear. “If the form says X, do Y” is automatable. “Use your best judgement based on context” is not — at least not without a more sophisticated approach. Be honest here. If a human reads the task and thinks “that’s complicated to explain”, it’ll be complicated to automate.

  • 1 = requires significant human judgement
  • 3 = mostly rule-based with some exceptions
  • 5 = fully rule-based, no discretion needed

3. What’s the cost of getting it wrong?

This is the one most frameworks miss. A high-frequency, rule-based task is still a bad automation candidate if errors are catastrophic. Automating invoice processing incorrectly costs real money. Automating an internal data summary slightly wrong mostly costs a few minutes of someone’s time.

  • 1 = errors are very costly (financial, legal, reputational)
  • 3 = errors are annoying but correctable
  • 5 = errors are trivial and easily caught

How to use the scores

Multiply the three scores together. You get a number from 1 to 125. Multiplying rather than adding matters: a weak score on any one dimension drags the whole total down, so a task has to clear all three bars to rank highly.

Anything above 60 is worth serious attention — these are your quick wins.

Anything between 30 and 60 goes on a second-priority list. Worth doing, but get the quick wins working first.

Anything below 30 — park it. At least one of the frequency, the rule clarity, or the error tolerance isn’t there yet.
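If it helps to see the scoring and thresholds in one place, here’s a minimal sketch in Python. The function name, argument names, and bucket labels are my own; the arithmetic and cut-offs are exactly as described above.

```python
def triage(frequency: int, rule_based: int, error_tolerance: int) -> tuple[int, str]:
    """Multiply the three 1-5 scores and bucket the result."""
    for score in (frequency, rule_based, error_tolerance):
        if not 1 <= score <= 5:
            raise ValueError("each score must be between 1 and 5")

    total = frequency * rule_based * error_tolerance  # ranges from 1 to 125

    if total > 60:
        bucket = "quick win"          # worth serious attention now
    elif total >= 30:
        bucket = "second priority"    # worth doing after the quick wins
    else:
        bucket = "park it"            # revisit if the scores change
    return total, bucket


# e.g. a weekly, mostly rule-based task with correctable errors:
print(triage(3, 4, 4))  # (48, 'second priority')
```

Nothing about this needs code in practice — a spreadsheet column with `=A2*B2*C2` does the same job — but writing it out makes the thresholds unambiguous.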

An example in practice

A client recently wanted to automate their customer onboarding emails. Let’s score it:

  • Frequency: They onboard around ten new customers a week. Not constant, but regular. Score: 3.
  • Rule-based: The emails follow a clear sequence triggered by signup events. Score: 4.
  • Error risk: Wrong email goes to a new customer — annoying, but recoverable. Score: 4.

Total: 48. Second priority. Worth doing, but we started with their invoice categorisation first (scored 75) and saved fifteen hours a week before even touching onboarding.

What the framework doesn’t do

This is a triage tool, not a feasibility assessment. A task can score 100 and still be technically difficult to automate because of how the data is structured, or what system it lives in. The score tells you where to look first — you still need someone to check whether the thing is actually buildable before you commit time to it.

That’s usually where a proper discovery conversation pays for itself.


Got a list of automation ideas and not sure where to start? An AI clarity audit maps your operations and comes back with a prioritised plan.