A well-kept risk register can still leave trustees feeling uncertain. It might be updated regularly, discussed at committee, and full of sensible wording, yet nobody can confidently say which risks have been tested, which controls are genuinely working, and where the trust’s exposure is quietly growing.
That is the gap a risk-led internal scrutiny programme is meant to close. The Academy Trust Handbook requires trusts to identify, on a risk basis and with reference to the risk register, the areas they will review each year. It also expects the audit and risk committee to review the ratings and responses on the risk register to inform the programme of work. (GOV.UK)
In other words, the risk register is not just something you review for completeness. It is meant to drive assurance activity. The internal scrutiny good practice guide goes even further and describes the relationship as iterative: internal scrutiny is informed by the risk register, and findings should then inform updates to the risk register and its scoring. (GOV.UK)
This blog explains how to build that line of sight in a practical way. It is written for academy trusts that want a programme trustees can use, executives can deliver, and committees can oversee without getting pulled into unhelpful detail.
What “risk-led” looks like in a trust setting
In trust assurance work, “risk-led” is easy to say and surprisingly hard to do. The most common drift I see is towards an activity-led plan that feels comprehensive but does not change risk outcomes. You get a neat schedule of reviews, you tick them off, and the same underlying weaknesses reappear because the plan is not anchored to your real exposure.
A risk-led programme starts with two questions that are uncomfortable, but useful.
First, where would a control failure hurt us most, whether financially, operationally, or reputationally?
Second, where is our confidence in control operation weakest, based on evidence rather than gut feeling?
When you start there, you naturally prioritise the areas where assurance has the highest value. The plan becomes easier to defend in committee because each piece of work has a clear purpose.
The Academy Trust Handbook frames internal scrutiny as evaluating the suitability of, and compliance with, financial and non-financial controls, offering advice and insight on weaknesses, and ensuring all categories of risk are adequately identified, reported and managed. (GOV.UK) If your programme is heavily weighted towards one comfortable corner of the organisation, or if it only tests whether documents exist, it is unlikely to meet that expectation.
Use the risk register as your first dataset, not your only dataset
The risk register should be where you begin, but it is rarely enough on its own. Risk registers often capture what the board worries about, while operational teams hold a different set of “quiet risks” that show up in near misses, recurring exceptions, and awkward workarounds.
The internal scrutiny good practice guide makes clear that planning must be a risk-based exercise between the trust board, audit and risk committee, and the internal scrutineer, with input as required from the CEO and CFO. (GOV.UK) That wording is helpful because it points to a shared design process, not an outsourced technical exercise.
In practice, I suggest you combine four sources when building the programme:
Your risk register, including the current ratings and the narrative behind them.
Your internal scrutiny history, especially repeat themes and any areas where you have never achieved confident closure.
Your action tracker, particularly overdue medium and high-risk actions. If actions are repeatedly overdue, that itself is a risk signal, often pointing to capacity, unclear ownership, or unrealistic control design.
Your change pipeline. Growth, restructures, new finance systems, MIS changes, new estates arrangements, or a shift in delegated authority will change your risk profile quickly, even if your register has not caught up yet.
Bringing these together keeps the programme grounded. It also reduces the “plan by template” habit that many trusts fall into when they are busy.
Mapping risk statements to testable control questions
A risk register line is rarely written in a way that directly tells you what to test. That is normal. Risk statements often describe outcomes, such as “risk of non-compliance with procurement requirements” or “risk of inaccurate funding returns”. Your programme needs to translate those outcomes into controls you can examine in evidence.
A practical mapping method is to turn each high-priority risk into three things:
- A control objective: what needs to be true for this risk to be properly managed? For example, “procurement decisions follow delegated authority, conflict controls, and value for money requirements”.
- A small number of key controls: the controls that actually stop the risk becoming a problem. Try to avoid listing every control you can think of. Choose the few that matter most.
- A set of evidence you would accept: what would make trustees comfortable? Minutes and policies are rarely enough. You will usually want a mix of process evidence and real transaction or case sampling.
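If it helps to keep the mapping consistent across risks, it can be held as a simple structured record per register line. The sketch below is one possible shape, not a prescribed format; the field names and the procurement example details are illustrative assumptions based on the example in the text.

```python
from dataclasses import dataclass, field


@dataclass
class RiskMapping:
    """Links one risk register entry to testable assurance work.

    Field names are illustrative, not a prescribed format.
    """
    risk_id: str                 # reference back to the risk register line
    risk_statement: str          # outcome-level wording from the register
    control_objective: str       # what must be true for the risk to be managed
    key_controls: list = field(default_factory=list)       # the few controls that matter
    accepted_evidence: list = field(default_factory=list)  # evidence you would accept


# Hypothetical example built from the procurement risk mentioned above
procurement = RiskMapping(
    risk_id="FIN-03",
    risk_statement="Risk of non-compliance with procurement requirements",
    control_objective=(
        "Procurement decisions follow delegated authority, "
        "conflict controls, and value for money requirements"
    ),
    key_controls=[
        "Orders above threshold approved per scheme of delegation",
        "Declarations of interest checked before contract award",
        "Quotation or tender evidence retained above the set value",
    ],
    accepted_evidence=[
        "Sample of transactions traced to approvals",
        "Signed declarations for sampled awards",
    ],
)
```

Holding the mapping this explicitly makes it easy to spot risks with no key controls or no accepted evidence listed, which is usually where scoping conversations need to start.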
This is also where the Academy Trust Handbook requirement helps. It expects internal scrutiny to evaluate both suitability of controls and the level of compliance with them. (GOV.UK) Suitability is about design. Compliance is about whether people follow the design consistently. Your mapping should cover both; otherwise you risk producing reassuring reports that do not reflect day-to-day reality.
Prioritisation that trustees can understand, and challenge constructively
Trusts often ask for a prioritisation matrix, then over-complicate it. You do not need a fancy model. You need a transparent one.
A simple scoring approach works well, provided you document your rationale. For each possible review area, score it against:
- potential impact if controls fail
- likelihood or exposure, based on your context
- control maturity, based on evidence and history
- recent change, such as system change or growth
- governance sensitivity, where reputational risk is high
You are not trying to generate a perfect number. You are creating a disciplined conversation. Trustees can then challenge in a helpful way. For example, they might accept that a stable area is lower priority this year, but ask for light-touch sampling to confirm it remains stable.
The Handbook supports this approach because it requires the audit and risk committee to review the ratings and responses on the risk register to inform the programme of work. (GOV.UK) That is not a passive obligation. It is an invitation to make prioritisation visible.
Sequencing your programme across 12 months without crushing capacity
A common reason risk-led programmes lose impact is sequencing. Trusts cluster too much work into one term, then wonder why management responses are thin and follow-up slips.
The internal scrutiny good practice guide notes that visits should be timed to feed into audit and risk committee meetings, and it suggests that visits should be evenly spread throughout the year. (GOV.UK) The Handbook also expects the programme to be timely, spread appropriately across the year, and to include reports to each committee meeting plus an annual summary report for the year ended 31 August. (GOV.UK)
A practical sequencing approach is to plan around decision points. Early in the year, focus on areas where trustees need confidence for budget setting, staffing structures, delegated authority, and major contracts. Mid-year is often the best time for deeper thematic work and follow-up on earlier findings. Late-year should be about closing the loop, not launching brand new high-risk reviews that you cannot follow through before the annual summary.
You should also plan for change. A risk-led programme needs rules for reprioritisation when new risks emerge. If you have a cyber incident, a safeguarding issue, a sudden change in senior staff, or a significant expansion, the plan should move. Treat that as maturity, not failure.
Fieldwork standards that protect the value of your programme
A risk-led plan is only as strong as the fieldwork that delivers it. In multi-academy trusts, inconsistency between reviewers, or between schools, is where assurance becomes fuzzy.
I encourage trusts to set a small set of minimum standards and stick to them. Not because trustees want bureaucracy, but because consistency is what allows the committee to compare findings across the year.
At minimum, you want agreed scope and control objectives before fieldwork begins, evidence-based testing with traceable sampling, and a clear distinction between design weaknesses and operating weaknesses. When a finding is significant, root cause matters. Is it training, capacity, unclear ownership, poor system design, or local variation? Without that, recommendations often default to “remind staff” and the issue returns.
This aligns with the Academy Trust Handbook’s expectations for internal scrutiny to offer advice and insight to the board on how to address weaknesses. (GOV.UK) Advice without root cause is rarely useful.
Turning findings into verified risk reduction
Trustees do not gain assurance from a report. They gain assurance from what changes afterwards, and whether the change is real.
The risk management good practice guide uses a “lines of defence” model and describes internal scrutiny as a line that provides independent assurance on the effectiveness of risk management and controls. (GOV.UK) That framing is helpful because it reminds everyone that internal scrutiny sits within a wider control system. It is meant to test whether the system is working.
Action governance is where many trusts struggle. The tracker might look healthy, but closure is often based on a status update rather than evidence. Over time, that creates a false sense of progress.
A practical approach is to tighten what “complete” means for medium and high-risk actions. If an action claims a control has changed, you should expect either a re-test or an independent verification step, depending on the risk. The internal scrutiny programme then becomes an engine for risk reduction, not a reporting function.
The internal scrutiny good practice guide describes the relationship between scrutiny and the risk register as iterative, where internal scrutiny findings influence risk scores and risks are updated accordingly. (GOV.UK) If you do not update the register based on what you have found, the board ends up managing risk from old assumptions.
Reporting that supports decisions, not passive updates
Committee reporting is often where good programmes lose momentum. Reports become long, technical, and descriptive. Trustees skim them, actions drift, and assurance weakens.
The Handbook expects the audit and risk committee to oversee and approve the programme, ensure risks are addressed appropriately, and report to the board on the adequacy of the internal control framework. (GOV.UK) To do that well, committees need reporting that highlights what matters.
A good committee pack usually makes it easy to see:
- delivery against the plan, and any changes with reasons
- the significant findings and what they mean for risk
- the overdue actions that require escalation
- repeat themes across schools or functions
- decisions required from the committee, such as agreeing reprioritisation or supporting capacity
You do not need a dashboard full of indicators. You need a clear narrative, backed by enough data to be credible.
One practical habit that improves governance quickly is to include a short “committee asks” section. It keeps the meeting focused on decisions. It also supports accurate minute-taking because the outcome is clearer.
The common mistakes that make a programme look risk-led but feel weak
When trusts tell me their programme is risk-led, and trustees still feel uncertain, it usually comes down to one of these issues.
- The plan references the risk register but the mapping logic is not clear, so nobody can explain why a review sits where it sits.
- The plan spreads effort evenly across topics, regardless of risk. That feels fair, but it is not assurance-led.
- Scopes are too broad. Reviewers cover a lot of ground and test very little. Reports then read like process descriptions rather than evidence-based assurance.
- Follow-up discipline is weak. High-risk issues stay open for months, or close without convincing evidence.
- Reporting to the board focuses on activity and not on whether risk is moving in the right direction.
Most of these are operating model problems. When you tighten the mapping, prioritisation, and follow-up rules, quality improves without needing heroic effort.
A 90-day reset when your current programme needs strengthening
If your current plan feels activity-led, you can reset without pausing assurance work.
In the first month, refresh the risk-register mapping and produce a prioritised shortlist for the next two terms. Use the existing register ratings, but challenge whether the ratings still reflect current reality, especially in areas that have changed.
In the second month, redesign the sequence and committee timetable so that reviews land before decisions, and follow-up can happen before year-end reporting.
In the third month, set fieldwork and reporting standards, then launch the first cycle of revised work with a clear action governance approach for closure evidence.
The Handbook expects trusts to keep their approach to internal scrutiny under review and to consider suitability as size, complexity, or risk profile changes. (GOV.UK) A structured reset provides a clear evidence trail of improvement, which is useful for trustees and reassuring for executives.
How internalscrutiny.co.uk can help
internalscrutiny.co.uk helps trusts build and run risk-led internal scrutiny programmes that connect risk registers to real testing and real improvement. We focus on the whole assurance chain: mapping logic, prioritisation, fieldwork quality, governance reporting, and verified action closure.
To align risk-led design with delivery, visit our internal scrutiny service page, review our process overview, or discuss your trust context via our Contact page.
Sources
Checked on 24 February 2026.
- GOV.UK, Academy trust handbook 2025: effective from 1 September 2025 (updated 22 October 2025). (GOV.UK)
- GOV.UK, Internal scrutiny in academy trusts (published 14 February 2024). (GOV.UK)
- GOV.UK, Academy trust risk management (guide within Managing risk in an academy trust, published 18 May 2021). (GOV.UK)