From monitoring report drafts in ChatGPT to comparable analyses in Copilot — AI tools are part of everyday surveying workflows. The RICS Professional Standard on Responsible Use of AI became mandatory on 9 March 2026, with no grace period and no firm size threshold. This hub is the definitive resource for firms that need to understand and comply — all twelve practical areas of the standard, explained across three phases for construction finance QS firms.
Your team has been using ChatGPT to draft monitoring report sections for months. One surveyor uses Copilot to summarise facility agreements. Another pastes cost schedules into an AI tool to check for anomalies.
None of it is logged. None of it is disclosed to clients. There is no reliability assessment on file. There is no AI usage register.
On 9 March 2026, the RICS Professional Standard on Responsible Use of AI took effect. Your firm is now non-compliant — not because you did something wrong, but because you didn't document what you were already doing.
RICS Professional Standard on Responsible Use of AI in Surveying Practice. 1st edition. ISBN 978 1 78321 555 3.
The standard itself states that non-compliance with a mandatory professional standard will be taken into account in any regulatory or disciplinary proceedings — and RICS has confirmed this applies from 9 March 2026.
If your firm uses AI in service delivery without documenting it and a claim is made, your PI insurer will ask what AI was used and how it was validated. No documentation means no answer. That is a coverage gap, not a theoretical one.
As the RICS AI standard establishes clear requirements for QS firms, lenders have a legitimate basis to ask about AI governance when reviewing their monitoring panels. Firms that can demonstrate documented compliance will be better positioned. Firms that cannot may find the question is asked at the next annual review.
What happens if you don't comply — regulatory, PI, and commercial consequences explained →
The RICS standard is organised into five chapter-level sections. For firms using AI in service delivery — rather than developing AI systems themselves — those sections translate into twelve practical areas your firm needs in place. We've structured this hub, our compliance guide, and our compliance checklist around those twelve, grouped into three phases: know and decide, govern, and deliver and prove.
Before AI is used in any client-facing work, your firm needs to know what it is, who can use it, and when it triggers compliance obligations.
Every RICS member using AI must develop a basic understanding of AI types and limitations, erroneous outputs, bias, and data usage risks. An active obligation on each individual.
Per individual · Maintained over time
A written determination of whether AI use has a material impact on service delivery, plus a written appropriateness assessment per AI system before use. Material impact is the gateway — once crossed, everything below applies.
Written record · Reviewed when AI use changes
Before AI reaches a client-facing output, your firm needs the documented governance a regulator, insurer, or lender can audit.
Written policies covering secure storage, restricted access, and annual staff training on AI-related privacy risks. Private and confidential data must not be uploaded to AI systems without consent and a risk check.
Firm-wide policy · Annual training
Express written consent from each client, obtained in advance, before any private or confidential data is uploaded to an AI system. Verbal consent is not sufficient.
Written · In advance of data upload
A written firm-wide policy covering roles, annual training, how human judgement interacts with AI, and risk guidance. Covers internal and third-party AI. A meeting is not a policy.
Written policy · Annual review
A maintained written register of every AI tool used in service delivery — including ChatGPT, Copilot, and any shadow AI used without formal approval.
Includes unapproved tools · Reviewed periodically
Documented risks per tool: bias, erroneous outputs, information limitations, data retention. Each entry records a description, likelihood, impact, mitigation, risk appetite, and RAG rating. Reviewed quarterly.
Per system · Quarterly review
Written requests to AI vendors covering environmental impact, data compliance, training data quality and bias, and vendor liability. Documented follow-ups. Gaps logged in the risk register.
Per vendor · Written record
When AI contributes to a client output, the professional judgement behind it has to be visible and defensible.
For each material AI output: a written record of assumptions, concerns, mitigations, and a fitness-for-purpose conclusion signed by a named, qualified surveyor.
Named QS · At point of review
For automated or high-volume AI outputs, randomised dip samples at regular intervals, with the methodology documented. The firm remains accountable for every output regardless.
Methodology documented · Firm accountable for all outputs
Written disclosure per bank relationship, set out in the terms of engagement, in advance. Covers when AI is involved, PI cover, how to contest an output, how to seek redress, and how to opt out.
Written · Per client · In advance
The ability to provide, on request, written information about the AI system used, its workings and limitations, due diligence, risk management, and reliability decisions.
Audit trail accessible on request
A fifth chapter-level section of the standard — §5 Development of AI — applies only to firms that develop their own AI systems. Most QS firms use AI rather than build it, so §5 sits outside the twelve practical areas above. Read the compliance guide for how this distinction works.
What your AI audit trail needs to contain — a practical guide for QS firms →
Most QS firms will build their RICS AI compliance framework manually across all twelve areas — spreadsheets, Word templates, email trails, version-controlled PDFs. It works, but it requires discipline to maintain every register, every reliability decision, every disclosure, on every project, for every surveyor.
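For firms taking the manual route, the shape of the records matters more than the tooling. As a minimal sketch only — the field names below are our illustrative assumptions, not terms prescribed by the RICS standard, and a spreadsheet with the same columns works just as well — a usage register entry and a per-tool risk entry might look like:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class UsageRegisterEntry:
    tool: str           # e.g. "ChatGPT", "Copilot"
    purpose: str        # what it is used for in service delivery
    approved: bool      # False captures shadow AI in use without sign-off
    last_reviewed: date

@dataclass
class RiskEntry:
    tool: str
    description: str    # e.g. "erroneous output in cost commentary"
    likelihood: str     # low / medium / high
    impact: str
    mitigation: str
    risk_appetite: str
    rag: str            # "red" | "amber" | "green"
    next_review: date   # quarterly, per the standard

register = [
    UsageRegisterEntry("ChatGPT", "drafting monitoring report sections",
                       approved=False, last_reviewed=date(2026, 3, 9)),
]
risks = [
    RiskEntry("ChatGPT", "erroneous output in cost commentary",
              likelihood="medium", impact="high",
              mitigation="named-surveyor review before issue",
              risk_appetite="low", rag="amber",
              next_review=date(2026, 6, 9)),
]

# Surface anything overdue for its quarterly review
overdue = [r.tool for r in risks if r.next_review < date.today()]
```

The point of the structure is discipline: every tool has a row, unapproved tools included, and every risk entry carries a review date someone is accountable for.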
BankBuild automates RICS AI compliance across every area of the standard, as a byproduct of the monitoring workflow. Every AI interaction is logged at the point it happens. Every reliability decision is captured when the surveyor reviews the output. Client disclosure is generated as a PDF appendix on every report. No separate system. No extra overhead. The documentation the standard requires is produced by the work your team already does.
See how BankBuild works for QS firms in construction finance →
Every AI interaction logged automatically per project, per page, with timestamp, system detail, prompt, and response. The §4.4 audit trail produced as a byproduct of workflow, not retrospectively.
The §4.2 reliability decision captured when the surveyor approves the output. Named accountability, assumptions, concerns, fitness-for-purpose conclusion — recorded in one action, not assembled at end-of-month.
Client disclosure is appended to every exported report automatically — the written disclosure required under §4.3 of the standard, without any manual drafting.
Training covering core §2 baseline knowledge topics, with completion tracking per surveyor and a certificate recorded to the firm's compliance register. The §2 obligation backed by evidence, not assumption.
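Whatever system produces it, the §4.4 audit trail reduces to an append-only record of each AI interaction. As a generic illustration only — the field names and file format below are our assumptions, not BankBuild's actual schema and not prescribed by RICS — a per-interaction log entry might be written like this:

```python
import json
from datetime import datetime, timezone

def log_interaction(project: str, page: str, system: str,
                    prompt: str, response: str,
                    path: str = "ai_audit_trail.jsonl") -> dict:
    """Append one AI interaction record to an audit trail file.

    Illustrative sketch: field names are assumptions, not a
    prescribed schema. Append-only JSON Lines keeps the trail
    simple to audit and to export on request.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "project": project,
        "page": page,
        "system": system,    # e.g. model name and version
        "prompt": prompt,
        "response": response,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical example values throughout
rec = log_interaction("Riverside Phase 2", "cost-summary",
                      "example-model-v1", "Summarise variations",
                      "Three variations totalling £42k ...")
```

Logging at the point the interaction happens, rather than reconstructing it at month-end, is what makes the trail defensible when a regulator, insurer, or lender asks for it.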
All twelve areas of the RICS AI standard explained across three phases — know and decide, govern, and deliver and prove. Every area mapped to its specific RICS section, written for construction finance QS firms.
More in the full FAQ · or read the complete compliance guide.
Twelve areas. Three phases. Work through it in a single principals' meeting. Know exactly what you have, what form it needs to take, and what's missing.
Want to talk it through? hello@bankbuild.com
BankBuild is designed so that RICS AI compliance documentation — usage register, reliability decisions, client disclosure — is a natural byproduct of running a monitoring inspection, not a separate process bolted on afterwards. 15 minutes to see it live.
BankBuild is an AI-native construction finance monitoring platform connecting quantity surveyors, lenders, developers, and contractors through a single data layer. It is built for full compliance with the RICS Professional Standard on Responsible Use of Artificial Intelligence in Surveying Practice (1st edition, ISBN 978 1 78321 555 3), effective 9 March 2026, producing the documentation the standard requires as a byproduct of normal construction monitoring workflow. Headquartered in the UK.
Construction finance monitoring is the process lenders use to verify that construction project funds are being spent according to approved budgets before releasing drawdown payments. It involves independent quantity surveyors inspecting sites, assessing costs, and reporting to the lending bank. BankBuild is built around the RICS AI standard's requirements from inception, not retrofitted after publication.