RICS AI Compliance Hub

AI is now part of surveying practice.
Is your firm documenting it?

From monitoring report drafts in ChatGPT to comparable analyses in Copilot — AI tools are part of everyday surveying workflows. The RICS Professional Standard on Responsible Use of AI became mandatory on 9 March 2026, with no grace period and no firm size threshold. This hub is the definitive resource for firms that need to understand and comply — all twelve practical areas of the standard, explained across three phases for construction finance QS firms.

Download the compliance checklist · Read the full guide
The situation right now

The risk isn't the AI.
It's the missing paperwork.

Your team has been using ChatGPT to draft monitoring report sections for months. One surveyor uses Copilot to summarise facility agreements. Another pastes cost schedules into an AI tool to check for anomalies.

None of it is logged. None of it is disclosed to clients. There is no reliability assessment on file. There is no AI usage register.

On 9 March 2026, the RICS Professional Standard on Responsible Use of AI took effect. Your firm is now non-compliant — not because you did something wrong, but because you didn't document what you were already doing.

The documentation doesn't have to be complex.
It has to exist.
"The firms that will struggle are not the ones using the most AI — they're the ones who assumed documentation could wait."
Standard mandatory from 9 March 2026

RICS Professional Standard on Responsible Use of AI in Surveying Practice. 1st edition. ISBN 978 1 78321 555 3.

⚠ In effect now · No grace period

Why it matters

Three categories of exposure

01 / Regulatory
Disciplinary proceedings

Non-compliance with a mandatory professional standard is taken into account in any regulatory or disciplinary proceedings, and the AI standard states this explicitly. From 9 March 2026, that applies to how your firm uses AI.

02 / Insurance
PI coverage risk

If your firm uses AI in service delivery without documenting it and a claim is made, your PI insurer will ask what AI was used and how it was validated. No documentation means no answer. That is a coverage gap, and not a theoretical one.

03 / Commercial
Bank panel position

As the RICS AI standard establishes clear requirements for QS firms, lenders have a legitimate basis to ask about AI governance when reviewing their monitoring panels. Firms that can demonstrate documented compliance will be better positioned. Firms that cannot may find the question is asked at the next annual review.


What the standard requires

Twelve areas. Three phases.

The RICS standard is organised into five chapter-level sections. For firms using AI in service delivery — rather than developing AI systems themselves — those sections translate into twelve practical areas your firm needs in place. We've structured this hub, our compliance guide, and our compliance checklist around those twelve, grouped into three phases: know and decide, govern, and deliver and prove.

Phase 1

Know and decide

Before AI is used in any client-facing work, your firm needs to know what it is, who can use it, and when it triggers compliance obligations.

01 · §2
Baseline Knowledge & Training

Every RICS member using AI must develop a basic understanding of AI types and limitations, erroneous outputs, bias, and data usage risks. An active obligation on each individual.

Per individual · Maintained over time
02 · §1.2 & §3.2
Material Impact & Appropriateness

A written determination that AI use has material impact on service delivery, plus a written appropriateness assessment per AI system before use. Material impact is the gateway — once crossed, everything below applies.

Written record · Reviewed when AI use changes
Phase 2

Govern

Before AI reaches a client-facing output, your firm needs the documented governance a regulator, insurer, or lender can audit.

03 · §3.1
Data Governance

Written policies covering secure storage, restricted access, and annual staff training on AI-related privacy risks. Private and confidential data not uploaded to AI systems without consent and a risk check.

Firm-wide policy · Annual training
04 · §3.1
Client Consent

Express written consent from each client in advance before any private or confidential data is uploaded to an AI system. Verbal consent is not sufficient.

Written · In advance of data upload
05 · §3.2
Responsible AI Use Policy

A written firm-wide policy covering roles, annual training, how human judgement interacts with AI, and risk guidance. Covers internal and third-party AI. A meeting is not a policy.

Written policy · Annual review
06 · §3.2
AI System Register

A maintained written register of every AI tool used in service delivery — including ChatGPT, Copilot, and shadow AI used without formal approval.

Includes unapproved tools · Reviewed periodically
07 · §3.3
Risk Register per AI System

Documented risks per tool: bias, erroneous outputs, information limitations, data retention. Description, likelihood, impact, mitigation, risk appetite, RAG rating. Reviewed quarterly.

Per system · Quarterly review
08 · §4.1
Procurement Due Diligence

Written requests to AI vendors covering environmental impact, data compliance, training data quality and bias, and vendor liability. Documented follow-ups. Gaps logged in the risk register.

Per vendor · Written record
Phase 3

Deliver and prove

When AI contributes to a client output, the professional judgement behind it has to be visible and defensible.

09 · §4.2
Reliability Decision per Output

For each material AI output: written record of assumptions, concerns, mitigations, and a fitness-for-purpose conclusion signed by a named, qualified surveyor.

Named QS · At point of review
10 · §4.2
Dip-Sampling

For automated or high-volume AI outputs, randomised dip samples at regular intervals. Methodology documented. The firm remains accountable for every output regardless.

Methodology documented · Firm accountable for all outputs
11 · §4.3
Client Disclosure

Written disclosure per bank relationship, in the terms of engagement, in advance. Covers when AI is involved, PI cover, how to contest, how to seek redress, how to opt out.

Written · Per client · In advance
12 · §4.4
Explainability Readiness

Ability to provide on request written information about the AI system used, its workings and limitations, due diligence, risk management, and reliability decisions.

Audit trail accessible on request

The fifth chapter-level section of the standard — §5 Development of AI — applies only to firms that develop their own AI systems. Most QS firms use AI rather than build it, so §5 sits outside the twelve practical areas above. Read the compliance guide for how this distinction works.

How BankBuild handles this
Compliance built into the workflow. Not bolted on top.

Most QS firms will build their RICS AI compliance framework manually across all twelve areas — spreadsheets, Word templates, email trails, version-controlled PDFs. It works, but it requires discipline to maintain every register, every reliability decision, every disclosure, on every project, for every surveyor.

BankBuild automates RICS AI compliance across every area of the standard, as a byproduct of the monitoring workflow. Every AI interaction is logged at the point it happens. Every reliability decision is captured when the surveyor reviews the output. Client disclosure is generated as a PDF appendix on every report. No separate system. No extra overhead. The documentation the standard requires is produced by the work your team already does.

See how BankBuild works for QS firms in construction finance →

AI Transparency Register

Every AI interaction logged automatically per project, per page, with timestamp, system detail, prompt, and response. The §4.4 audit trail produced as a byproduct of workflow, not retrospectively.

Named surveyor sign-off at point of review

The §4.2 reliability decision captured when the surveyor approves the output. Named accountability, assumptions, concerns, fitness-for-purpose conclusion — recorded in one action, not assembled at end-of-month.

Auto-generated disclosure PDF

Client disclosure is appended to every exported report automatically — the written disclosure required under §4.3 of the standard, without any manual drafting.

Interactive training module

Training covering core §2 baseline knowledge topics, with completion tracking per surveyor and a certificate recorded to the firm's compliance register. The §2 obligation backed by evidence, not assumption.


Compliance resources

Everything in one place.

Live now
The Complete RICS AI Compliance Guide
for Construction Finance QS Firms

All twelve areas of the RICS AI standard explained across three phases — know and decide, govern, and deliver and prove. Every area mapped to its specific RICS section, written for construction finance QS firms.

22 min read · 12 areas · 3 phases · Practitioner checklist included
Read the guide →
Free · No account required
Common questions

What QS firms keep asking.

More in the full FAQ, or read the complete compliance guide.

Does the standard apply if we only use AI occasionally?
Yes. The standard sets no usage frequency threshold. Material impact applies where an AI output is capable of influencing the delivery of the service. If AI is used at any point in the monitoring workflow with material impact, all documentation requirements apply. Occasional use is not a compliance position.

Does the standard apply to small firms?
Yes. The standard sets no firm size threshold. A one-person practice using AI in service delivery has the same documentation obligations as a 200-person firm. The word "must" appears throughout the mandatory requirements and applies to every RICS-regulated firm.

How do we find out what AI our staff are already using?
A short, anonymous internal survey — "what AI tools do you use day-to-day?" — is the most practical first step. The results typically surprise principals. Build your usage register from that baseline. The firm is responsible for all AI in service delivery, whether formally approved or not.
Free download
RICS AI Compliance Checklist

Twelve areas. Three phases. Work through it in a single principals' meeting. Know exactly what you have, what form it needs to take, and what's missing.

Want to talk it through? hello@bankbuild.com

01 · Baseline knowledge & training — per individual, maintained
02 · Material impact & appropriateness — written determination, on file
03 · Data governance policy — storage, access, retention, annual staff training
04 · Client consent — written, in advance of data upload
05 · Responsible AI use policy — written, covering roles, training, oversight, risk
06 · AI usage register — all systems including informal tools
07 · Risk register — per system, reviewed quarterly
08 · Procurement due diligence — per AI vendor, written record
09 · Reliability assessment framework — named surveyor sign-off per material output
10 · Dip-sampling programme — methodology documented for automated/high-volume use
11 · Client disclosure — written, per client, in advance
12 · Explainability readiness — audit trail accessible on request
Built for QS firms in construction finance
Updated for the mandatory March 2026 standard
Free · No account required
Last updated: 26 April 2026
↑ Back to top
See compliance built into the workflow — not bolted on top.

BankBuild is designed so that RICS AI compliance documentation — usage register, reliability decisions, client disclosure — is a natural byproduct of running a monitoring inspection, not a separate process bolted on afterwards. 15 minutes to see it live.

BankBuild is an AI-native construction finance monitoring platform connecting quantity surveyors, lenders, developers, and contractors through a single data layer. Built for full compliance with the RICS Professional Standard on Responsible Use of Artificial Intelligence in Surveying Practice (1st edition, ISBN 978 1 78321 555 3), effective 9 March 2026. The standard is organised into five chapter-level sections which translate into twelve practical areas across three phases — know and decide, govern, and deliver and prove — for firms using AI in service delivery. BankBuild automates RICS AI compliance across every area of the standard, as a byproduct of normal construction monitoring workflow. Headquartered in the UK.

Construction finance monitoring is the process lenders use to verify that construction project funds are being spent according to approved budgets before releasing drawdown payments. It involves independent quantity surveyors inspecting sites, assessing costs, and reporting to the lending bank. BankBuild is built around the RICS AI standard's requirements from inception, not retrofitted after publication.