“How often should we schedule internal scrutiny?” is one of the most common questions trustees and executive teams ask, and it is also one of the easiest questions to answer badly.
The easy answer is to pick a number, put it in a plan, and repeat it every year. The better answer is to design a cadence that matches your trust’s risk profile, size, complexity, and capacity, then be able to explain that design to the audit and risk committee in plain language.
DfE guidance is helpful here because it makes two points that are easy to miss if you only read the headlines. First, the Academy Trust Handbook does not set a universal “X visits per year” rule. Instead, it expects an internal scrutiny programme that is timely, spread appropriately over the year, and reported regularly to the audit and risk committee. Second, the internal scrutiny good practice guide explicitly says the Handbook does not stipulate how often visits should occur, and it frames frequency as something the audit and risk committee should actively shape based on context. (GOV.UK)
So the right question is not “what number do other trusts use?” It is “what cadence gives our board confidence that controls are operating effectively, and that we will spot slippage early enough to do something about it?”
Start with what is actually required
Even though there is no fixed mandated visit count, there are clear expectations that influence frequency.
The Academy Trust Handbook requires each trust to have a programme of internal scrutiny providing independent assurance to the board that financial and non-financial controls and risk management procedures are operating effectively. It also requires the trust to identify, on a risk basis and with reference to its risk register, the areas it will review each year. (GOV.UK)
On timing and reporting, the Handbook is more direct than many people realise. It says the programme must be timely, spread appropriately over the year so higher-risk areas are reviewed in good time, and it must include regular updates to the audit and risk committee, including a report of work to each audit and risk committee meeting and an annual summary report for each year ended 31 August. It also says the trust must submit its internal scrutiny summary report to DfE by 31 December when it submits audited annual accounts. (GOV.UK)
That set of requirements immediately rules out an “all in one month” approach. If you schedule all scrutiny activity in late summer, you may technically complete reviews, but you have lost the governance value of timely challenge and early intervention. A mature board wants to see issues early enough to influence decisions, not as a retrospective list.
There is another practical anchor point. The Handbook says the audit and risk committee should meet at least three times a year. (GOV.UK) The good practice guide mirrors this, and adds a useful point: committee meetings should be timed to follow visits so interim findings can be discussed. (GOV.UK) That strongly implies a minimum rhythm of termly reporting, and a programme structure where fieldwork lands before committee meetings rather than after them.
So while “frequency” is not mandated, a credible programme almost always has a termly backbone.
Why “termly” alone is rarely the full answer
Many trusts hear “termly” and assume the job is done. Termly reporting is often necessary, but it is not always sufficient.
A termly rhythm works well for planned reviews and formal oversight. Where it can fall down is follow-up, action closure, and emerging risk. In real life, the things that derail control environments do not wait politely for the next termly meeting. A key person leaves, a new system goes live, an incident happens, a school joins the trust, or a pattern of overdue actions quietly grows.
This is why the best programmes separate two types of scrutiny activity:

- Planned assurance work that is scoped, delivered and reported as part of the annual programme.
- Light-touch monitoring and follow-up activity that keeps momentum between formal meetings, without turning internal scrutiny into a constant audit presence.
When trusts mix these together, they either over-promise (too many full reviews, not enough capacity) or under-deliver (a few reviews, little follow-up, and recurring findings).
What the good practice guide says about frequency, in plain English
The DfE internal scrutiny good practice guide has a dedicated section on frequency of visits, and it is worth reading because it sets the tone for how boards should think about this.
It says the Handbook does not stipulate how often visits should occur. It then says the audit and risk committee will want to ensure frequency and length of visits allow appropriate coverage for the trust’s size and complexity, with consideration of previous matters arising, external audit findings, the trust’s context and other sources of assurance. (GOV.UK)
It goes on to suggest trusts consider specific context factors, such as being newly established, changes in senior management, or taking on new academies. It also reinforces that visits should be timed to feed into audit and risk committee meetings, and that it would be appropriate for visits to be evenly spread throughout the year. (GOV.UK)
Two implications sit behind that wording.
First, frequency is a governance decision. It should be shaped by the audit and risk committee, not left to habit.
Second, frequency should flex. A trust that is stable and confident in control operation does not need the same intensity as a trust that is growing, changing systems, or dealing with repeated high-risk findings.
A practical way to decide cadence without guessing
If you want a defensible answer, you need a method that turns “it depends” into a board-ready rationale. The simplest way I have seen trusts do this well is to start with a termly baseline, then adjust intensity by looking at four questions.
- How high is the risk exposure in the area? This is your risk register starting point, and the Handbook explicitly expects your programme to reference it. (GOV.UK)
- How mature are the controls, based on evidence? Not based on confidence, but based on what internal scrutiny, external audit, and management monitoring have shown over time.
- How quickly does the risk move? Payroll risk can change monthly. Procurement risk spikes around major tenders and contract renewals. Safeguarding recruitment risk spikes around peak hiring windows. IT risk can move fast when threats or systems change.
- How good is the trust at closing actions properly? If closure is slow or optimistic, you need more follow-up touchpoints. If closure is disciplined and verified, you can usually reduce intensity in that domain.
If you document those four questions for your main risk domains, your programme design conversation becomes much easier, and the audit and risk committee can challenge intelligently without getting stuck on anecdotes.
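The four questions above can be turned into a simple, documentable scoring exercise. The sketch below is purely illustrative: the domain names, 1-to-3 scores, and cadence thresholds are hypothetical assumptions for demonstration, not DfE guidance, and any real trust would calibrate them against its own risk register and committee judgement.

```python
# Illustrative sketch only. Scores, domains, and thresholds are hypothetical;
# they are not drawn from the Academy Trust Handbook or DfE guidance.
# Each risk domain is scored 1 (low) to 3 (high) on the four questions,
# and the total maps to an indicative scrutiny cadence.

QUESTIONS = ("risk_exposure", "control_immaturity", "risk_velocity", "weak_action_closure")

# Hypothetical example scores for three risk domains.
domains = {
    "payroll":     {"risk_exposure": 3, "control_immaturity": 1, "risk_velocity": 3, "weak_action_closure": 1},
    "procurement": {"risk_exposure": 2, "control_immaturity": 2, "risk_velocity": 2, "weak_action_closure": 2},
    "it_security": {"risk_exposure": 3, "control_immaturity": 3, "risk_velocity": 3, "weak_action_closure": 2},
}

def suggested_cadence(scores: dict) -> str:
    """Map a total score (4 to 12) to an indicative review cadence."""
    total = sum(scores[q] for q in QUESTIONS)
    if total >= 10:
        return "termly review plus monthly monitoring"
    if total >= 7:
        return "termly review"
    return "annual review with termly follow-up of open actions"

for name, scores in domains.items():
    print(f"{name}: {suggested_cadence(scores)}")
```

The value is not in the arithmetic, which is deliberately crude, but in forcing each domain's score to be written down and evidenced, so the audit and risk committee can challenge the inputs rather than the anecdotes.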
Cadence models that work in practice
It is tempting to divide trusts into neat categories, but real trusts sit on a spectrum. Still, it helps to describe a few practical models, because boards often need a starting point.
Here is a set of models that many MATs find workable, with the understanding that your risk profile and capacity should drive the final design.
| MAT profile | A sensible backbone | What adds the most value |
|---|---|---|
| Smaller or stable MAT | Termly planned reviews and termly reporting | Targeted follow-up on high-risk actions between meetings |
| Mid-sized MAT with mixed maturity | Termly reviews plus rolling follow-up | Monthly monitoring of a small set of high-risk domains and overdue actions |
| Large or fast-changing MAT | A rolling programme spread across the year | A two to three-year coverage cycle, plus faster follow-up where risk is highest |
That last point about multi-year coverage is directly reflected in the good practice guide, which notes larger trusts may adopt a two or three-year internal scrutiny cycle to ensure appropriate coverage. (GOV.UK)
The important detail is that “monthly” does not have to mean “a full audit every month”. It often means keeping the pressure on the areas that repeatedly slip, and being honest about where the trust needs closer oversight for a period of time.
Termly reviews versus monthly monitoring, where each has a place
A termly cycle is your governance spine. It supports committee oversight, structured reporting, and clear decision points. It also aligns naturally to how trusts run budgets, policies, staffing changes, and school improvement cycles.
Monthly activity is usually about keeping control improvements alive. It works best when it is focused on a small number of things that genuinely move risk, such as the status and evidence of high-risk actions, a small set of key compliance controls (for example, website statutory information, declarations of interest updates, payroll leaver processing), or monitoring after an incident or major change.
The Academy Trust Handbook expects regular updates to the audit and risk committee and a report of work to each meeting. (GOV.UK) If you only look at action progress termly, you risk turning committee oversight into a cycle of late surprises. A small amount of monthly monitoring can prevent that, as long as it stays proportionate and does not swamp capacity.
A useful discipline is to decide, in advance, what you will monitor monthly and why. If you cannot explain why it needs monthly attention, it probably does not.
Trigger points that justify accelerating scrutiny
Even the best-designed annual plan will not anticipate everything. The good practice guide explicitly suggests trusts consider context factors such as being newly established, changes in senior management, or taking on new academies, when deciding frequency. (GOV.UK)
In practice, it helps to define a few trigger points that allow you to accelerate work without reopening debates about governance authority. These do not need to be many. A small number of clear triggers is more useful than a long list that nobody remembers.
Common examples include a significant control failure or incident, repeated high-risk findings in the same area, a major structural change such as a merger or rapid expansion, or a key leadership change in a high-risk function. When a trigger fires, the trust should be able to re-sequence the programme and explain why, linking back to the Handbook’s expectation that the programme is risk-based and timely. (GOV.UK)
The aim is not to react to every wobble. It is to avoid a situation where risk has clearly increased but your scrutiny programme remains fixed because “the plan is the plan”.
Capacity planning, the part that makes or breaks frequency
Frequency decisions that ignore capacity tend to create two problems. Reviews become rushed and shallow, and follow-up becomes weak because management response bandwidth is exhausted.
A credible cadence takes account of who is doing the work, how independent they are, and how quickly the trust can realistically respond to findings. The good practice guide lists factors for the audit and risk committee to consider, including trust size and complexity, the complexity of the area being reviewed, specialist knowledge required, and value for money. (GOV.UK)
In practical terms, I encourage trusts to plan capacity in three layers.
First, commit to the baseline programme that covers the highest-risk areas and meets reporting expectations.
Second, build in follow-up capacity. If you plan reviews but do not plan follow-up testing, you are only doing half the job.
Third, hold a small amount of contingency capacity for triggers. That is often the difference between a programme that adapts calmly and one that collapses into unplanned work.
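The three layers above can be made concrete by splitting an annual scrutiny budget into named pots before the year starts. This sketch is a hypothetical illustration: the day count and the percentage splits are assumptions chosen for the example, not a DfE benchmark, and a real trust would set them from its own contract or staffing capacity.

```python
# Illustrative sketch only. The 25% follow-up and 15% contingency shares,
# and the 20-day budget, are hypothetical assumptions, not DfE guidance.
# It simply splits an annual scrutiny budget across the three layers:
# baseline programme, follow-up testing, and contingency held for triggers.

def plan_capacity(total_days: float,
                  follow_up_share: float = 0.25,
                  contingency_share: float = 0.15) -> dict:
    """Split an annual scrutiny budget (in days) across the three layers."""
    follow_up = total_days * follow_up_share
    contingency = total_days * contingency_share
    baseline = total_days - follow_up - contingency
    return {"baseline": baseline, "follow_up": follow_up, "contingency": contingency}

plan = plan_capacity(20)  # e.g. 20 scrutiny days across the year
print(plan)
```

Writing the split down has a governance benefit: if a trigger fires and the contingency pot is spent, the programme can be re-sequenced with an explicit record of what was traded off, rather than silently squeezing follow-up.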
This is also where external provider lead times matter. If you outsource internal scrutiny, you need to schedule with enough notice that visits and reporting line up with committee meetings. The good practice guide notes meetings should be timed to follow visits so interim findings can be discussed. (GOV.UK) That is much harder to achieve if the diary is built too late.
What a “defensible” annual timetable looks like
Trustees often feel more confident when they can see a timetable that links fieldwork, committee meetings, follow-up, and the year-end summary. The Handbook requires an annual summary report to the audit and risk committee for each year ended 31 August, and submission of the internal scrutiny summary report to DfE by 31 December. (GOV.UK)
A practical pattern that many trusts use is:
- Autumn term: one or two high-risk reviews early, plus follow-up on the prior year's open actions, then committee reporting.
- Spring term: thematic reviews aligned to risk priorities and any emerging concerns, plus targeted follow-up, then committee reporting.
- Summer term: final planned reviews and follow-up testing so that the annual summary is evidence-based, then committee reporting and early drafting for the year-end summary.
- Early autumn (new academic year): consolidate findings for the annual summary for the year ended 31 August, complete any final follow-up needed, and finalise the summary in time to support the 31 December submission. (GOV.UK)
The key is not the labels of the terms. It is the discipline of spreading work across the year and leaving enough time for follow-up and reporting.
How you know if your cadence is working
If your programme frequency is right, you usually see three outcomes.
- Planned work is delivered on time, because the programme is realistic.
- High-risk actions are closed faster, and closure is backed by evidence rather than optimistic statements.
- Recurrence drops, because follow-up testing catches issues before they become the next year's findings.
The good practice guide describes the risk review process as iterative, where internal scrutiny findings inform the risk register and risk scores are updated accordingly. (GOV.UK) If your cadence is working, you should see that iteration in practice. Risks should move when controls improve, and scrutiny intensity should ease in areas where assurance is consistently strong.
If you are not seeing those outcomes, the answer is not always “do more”. Sometimes it is “do fewer things, better”, then reintroduce additional coverage once follow-up discipline is working.
The frequency mistakes that create busy work and weak assurance
When frequency models fail, it is usually because they are driven by habit, not risk.
One-size-fits-all termly reviews across every area can produce a lot of reports but limited depth where it matters. On the other hand, a very aggressive monthly plan can overwhelm both reviewers and management, leaving weak action implementation and poor committee reporting.
Another common trap is heavy planning and light follow-up. The Handbook expects regular reporting and an annual summary that draws conclusions for the year, which is hard to do credibly if follow-up is weak. (GOV.UK)
The most reliable programmes are the ones that stay honest about capacity and focus intensity where the evidence shows it is needed.
How internalscrutiny.co.uk can help
internalscrutiny.co.uk helps MATs design scrutiny frequency models that are realistic, risk-led, and governance-ready. We support trusts to balance planned coverage with responsive capacity, keep work spread across the year, and maintain clear oversight of high-risk actions, with reporting that supports audit and risk committee decision-making.
You can map your current approach against our process model, review practical options on pricing, or discuss your trust’s frequency model through Book Audit.
Sources
Checked on 24 February 2026.
- GOV.UK, Academy trust handbook 2025: effective from 1 September 2025 (updated 22 October 2025), including requirements on audit and risk committee meetings, programme timing and reporting, annual summary report for year ended 31 August, and submission by 31 December. (GOV.UK)
- GOV.UK, Internal scrutiny in academy trusts (published 14 February 2024), including the statement that the Handbook does not stipulate visit frequency and guidance on timing visits to feed into committee meetings. (GOV.UK)
- GOV.UK, Academy trust risk management (good practice guidance), referenced by the internal scrutiny guide as further guidance on risk management. (GOV.UK)