Smartgrade Features Built for MATs

From trust-wide insights to seamless setup, every feature is shaped around the way MATs operate.

Our guiding principles

Smartgrade has opinions about what good assessment looks like, and we are proud to work with thousands of schools and MATs that share our values.

Curriculum comes first

To assess effectively, we need to understand what we were trying to teach in the first place. If your assessment approach does not align with your curriculum, it'll struggle to give you actionable information.

Assessment quality matters

We work with our assessment partners and MATs to help them iteratively improve their assessments.

We reduce teacher workload wherever we can

When we design our products, we think about teacher workload at each step. Our online markbooks are super simple to use, and our online assessments allow students to enter their results directly into Smartgrade, reducing marking.

Analysis should drive decision-making

We want every report in Smartgrade to lead to action. It’s easy to get caught up in creating endless reports that look pretty but lack a clear purpose. So we make sure Smartgrade analysis reports are always clear, show national context wherever possible, and lead to simplified decision-making for school leaders.

We believe in the power of standardisation

If you’re using unbenchmarked assessments, it’s hard to know what good looks like. That’s why assessments on our platform contextualise performance using a MAT or national standardisation sample. This means that teachers and leaders get meaningful benchmarking data to help them understand performance and spot gaps in learning.

Highly engaged MATs trust us

Why not become one of them?

MAT-wide dashboards

Analyse performance across schools by question, topic, or demographic - aligned with the DfE’s MAT Assurance Framework and its call for “smart data systems.”

Simple Standardisation

Our powerful standardisation algorithm allows trusts to compare results across schools or benchmark with national results.

Custom Assessments

Create and standardise trust-wide assessments (paper or online) to align with your curriculum. Benchmark results nationally to ensure consistency and raise standards. Now with AI Automarking!

Built for MATs

Managing MAT-wide assessment is simple: assign assessments, manage markbooks, export data, analyse results, and manage user roles centrally - or delegate to schools.

Effortless Data Access

Export data from all schools in one file, or automate via API - ensuring seamless reporting and analytics at scale.

Targeted Intervention

Use gap analysis to pinpoint trust-wide learning gaps and direct support where it matters most.

Powermark AI-powered automarking is designed to be affordable for schools and MATs working with tight budgets.

Testimonials

“Smartgrade is a key element of our curriculum and assessment strategy across our 33 primaries.”
“Analytical tools enable deep and detailed understanding of what pupils know and smart marking features mean minimal workload for maximum information to inform next steps. We are data-rich without wasting a minute of learning or teacher time!”
Emily Hobson - Oasis Community Learning
“Smartgrade allows us to standardise our assessments across a national cohort, giving teachers, leaders and students in our group and beyond a highly valid and powerful understanding of how their performance compares to their peers.”
Dale Bassett – United Learning
“Smartgrade’s process allows us to celebrate successes and target support where it’s needed, at a curriculum level. It is a crucial tool as part of our KS3 common assessments.”
Nimish Lad - Creative Education Trust

Read about Powermark AI auto-marking

AI-Auto-Marking
Primary Schools
KS1
KS2
MAT

How Smartgrade works with MATs to automark their own assessments

This is the third blog in a series outlining how we approach AI for automarking, and also introducing our products in this area.
Read more
AI-Auto-Marking
Primary Schools
HeadStart Primary
KS2
KS1

Introducing: standardised Reading and GPS assessments, automarked for you!

This is the second blog in a series outlining how we approach AI for automarking, and also introducing our products in this area. You may also be interested in our other AI blogs, which explain the principles that underpin our automarking work and how we’ve conducted the research that led us to these launches.
Read more
Secondary Schools
Primary Schools
Custom Assessments
HeadStart Primary
AI-Auto-Marking

We’re about to launch automarking powered by AI - here’s how we got here

There’s a lot of talk about AI as a tool to mark students’ work right now. We’ve been working on this for almost two years, and following in the footsteps of the ever-admirable No More Marking, we think we have a responsibility to be transparent about what we’re doing and why. This blog is therefore our first instalment in a series which will shed some light on how we’ve come up with an approach to automarking that saves teachers time AND improves the insights they can derive from an assessment.
Read more

Frequently Asked Questions

How accurate is the AI automarking?

As with teacher marking, accuracy varies depending on the type of assessment and the level of subjectivity involved in marking. As a principle, we aim for 94%+ accuracy.
Where we have detailed marking guidance, we find that we can consistently achieve this target of 94%+ accuracy. For example, in our largest ever test, using HeadStart Reading and GPS assessments and evaluating a sample of 9,500 student responses across multiple primary year groups, our AI achieved 97% accuracy, compared with 94% for teachers.

Where some questions are ambiguous or the marking guidance is less rigorous, accuracy can dip into the 80-90% range. For this reason, we do not currently make automarking available by default for all assessments. Instead, we work with our assessment and MAT partners to ensure that assessments have high-quality marking guidance before we enable automarking. 
We continue to evaluate accuracy on an ongoing basis as the system is rolled out more widely, with random sampling and expert review to ensure quality is maintained.
How does Smartgrade mitigate the risks associated with AI?

Smartgrade mitigates risks commonly associated with AI systems, such as brittleness, hallucinations, embedded bias, uncertainty, and false positives, using the following measures:
Brittleness
We have conducted extensive testing and piloting across a wide range of assessment types, subjects and year groups. This ensures that our approach works well beyond a narrow set of tested parameters, reducing brittleness.

Uncertainty
We are currently implementing an approach in which the AI assigns a confidence score (1–10) to each mark, where 10 is full confidence and 1 is no confidence. Questions with a confidence score of 7 or below will be flagged as “priority for teacher review” in the product going forward.
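As an illustration of how such a threshold could drive a review queue, here is a minimal sketch in Python. The field names, data, and threshold handling are hypothetical assumptions for illustration, not Smartgrade's actual implementation:

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 7  # marks scored at or below this are flagged for teacher review


@dataclass
class AutoMark:
    question_id: str
    awarded_marks: int
    confidence: int  # 1 (no confidence) to 10 (full confidence)


def flag_for_review(marks: list[AutoMark]) -> list[AutoMark]:
    """Return the automarked answers a teacher should check first."""
    return [m for m in marks if m.confidence <= REVIEW_THRESHOLD]


sample = [
    AutoMark("q1", 2, 10),  # high confidence: not flagged
    AutoMark("q2", 0, 6),   # low confidence: flagged as priority for teacher review
    AutoMark("q3", 1, 7),   # borderline: flagged
]
print([m.question_id for m in flag_for_review(sample)])  # ['q2', 'q3']
```

In this sketch, any mark at or below the threshold is surfaced first, so teacher attention goes to the answers the model is least sure about.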

Hallucinations
We are experiencing minimal levels of hallucination, partly because we automark using tightly defined marking guidance, which leaves less space for hallucination. That said, we still check for hallucinations in a number of ways. Teachers moderate automarking, allowing them to spot and correct hallucinations, and a sample of teacher adjustments is then checked by us to scrutinise for hallucinations, amongst other things. Hallucinations are more likely when the AI is less certain of its answer, so the “confidence score” approach described above further mitigates the risk of hallucinations appearing and makes it more likely that teachers will correct them when they do occur. We also carry out periodic random sampling of marks across our assessments, with an expert re-marking those samples and flagging any hallucinations. In the most recent 2,000 expert-marked questions we have discovered no hallucinations.

Embedded Bias
Our approach to automarking involves no passing of personally identifiable information to our marking engine, so no bias can be derived from knowledge of a student’s characteristics. Moreover, we use a prescriptive mark scheme that the AI applies directly, rather than allowing the model to generate open-ended interpretations, which could in theory be subject to bias of some form. 
False Positives
We evaluated false positives using a confusion matrix. While present, they were uncommon. Interestingly, we observed more false negatives than false positives.
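For readers curious what that evaluation looks like in practice, here is a minimal sketch of counting false positives and false negatives from paired AI and expert marking decisions. The data and counting scheme are illustrative assumptions, not Smartgrade's evaluation pipeline:

```python
from collections import Counter

# Each pair is (ai_awarded, expert_awarded) for a single mark point: 1 = mark given.
# Toy data for illustration only.
decisions = [(1, 1), (0, 0), (1, 0), (0, 1), (1, 1), (0, 0), (0, 1)]

counts = Counter()
for ai, expert in decisions:
    if ai and expert:
        counts["true_positive"] += 1
    elif not ai and not expert:
        counts["true_negative"] += 1
    elif ai and not expert:
        counts["false_positive"] += 1   # AI awarded a mark the expert withheld
    else:
        counts["false_negative"] += 1   # AI withheld a mark the expert awarded

agreement = (counts["true_positive"] + counts["true_negative"]) / len(decisions)
print(dict(counts), f"agreement: {agreement:.0%}")
```

The four counts form the confusion matrix; in this toy data, as in the pattern described above, false negatives happen to outnumber false positives.
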
Can Powermark mark scanned paper assessments?

Automarking of scanned paper assessments is the next phase of Powermark, Smartgrade’s auto-marking project. We are currently trialling the use of AI to mark scanned-in paper assessments.
If your school or MAT would like to be part of this pilot project, please get in touch with us at sales@smartgrade.co.uk.
Do we need to tell students or parents that AI is being used?

We do not believe there is any legal obligation under UK law to inform students or parents of AI use. This is because no personally identifiable information is passed to the AI system, and no automated decision-making occurs.
However, if participating schools would like to notify parents, we provide wording that can be used in parent-school communications.

Everything you need for smarter assessments

Book a Demo Today