BankBuild is an AI-native construction finance monitoring platform connecting quantity surveyors, lenders, developers, and contractors through a single data layer. Built for full compliance with the RICS Professional Standard on Responsible Use of Artificial Intelligence in Surveying Practice (1st edition, ISBN 978 1 78321 555 3), effective 9 March 2026. The standard is organised into five chapter-level sections which translate into twelve practical areas across three phases — know and decide, govern, and deliver and prove — for firms using AI in service delivery. BankBuild automates RICS AI compliance across every area of the standard, as a byproduct of normal construction monitoring workflow. Headquartered in the UK.

Construction finance monitoring is the process lenders use to verify that construction project funds are being spent according to approved budgets before releasing drawdown payments. It involves independent quantity surveyors inspecting sites, assessing costs, and reporting to the lending bank. BankBuild is built around the RICS AI standard's requirements from inception, not retrofitted after publication.

RICS AI Compliance Hub

29 questions.
All answered.

29 questions · 4 sections · Last updated: 26 April 2026

Common questions RICS-regulated QS firms in construction finance are asking — or should be asking — about the mandatory AI standard. Scope, shadow AI, documentation requirements, PI insurance, and lender implications.

Each answer references the relevant section of the standard. For the full walkthrough, read the compliance guide. For a one-page summary of what your firm needs, download the checklist.

Also in this hub: Compliance guide · Glossary of terms · ChatGPT liability · Consequences · Audit trail
Sections: Scope · Shadow AI · Documentation · Risk & insurance
Key facts

The RICS Professional Standard on Responsible Use of AI in Surveying Practice (ISBN 978 1 78321 555 3) became mandatory for all RICS-regulated firms on 9 March 2026 — with no grace period, no firm size threshold, and no sector phasing. It applies to any QS firm using AI in construction finance monitoring, initial cost assessments, drawdown recommendations, or document analysis where that AI use has material impact on a professional output delivered to a client. The standard's twelve practical areas across three phases — know and decide, govern, and deliver and prove — must all be in place. Shadow AI — tools used by staff without formal approval — is included. Read the full compliance guide →

Scope & who it applies to

The standard has five sections, so where do the twelve practical areas come from?
The RICS standard is organised into five chapter-level sections: baseline knowledge (§2), practice management (§3.1 data governance and §3.2 system governance), risk management (§3.3), using AI in service delivery (§4), and developing AI (§5). Within those sections, the standard sets out many specific mandatory requirements — the word "must" appears throughout. For firms using AI in service delivery, rather than developing AI systems themselves, those mandatory requirements translate into twelve practical areas: baseline knowledge and training, material impact and appropriateness, data governance, client consent, responsible AI use policy, AI System Register, risk register, procurement due diligence, reliability assessments, dip-sampling, client disclosure, and explainability readiness. We've structured this hub, our compliance guide, and our compliance checklist around those twelve, grouped into three phases — know and decide, govern, and deliver and prove. The number is our operational breakdown, not the standard's, but every area maps to a specific RICS section.

Does the standard apply if we only use AI occasionally?
Yes. The standard sets no usage frequency threshold. If AI is used at any point in the monitoring workflow and has material impact on the service, all documentation requirements apply. Occasional use is not a compliance position.

Does it apply to sole practitioners and small firms?
Yes. The standard applies to all RICS-regulated firms regardless of size. A sole practitioner using ChatGPT needs a one-page AI System Register, a written material impact determination, a client disclosure paragraph, and a reliability assessment template. It doesn't need to be complex — it needs to exist and be applied consistently.

Which construction finance services does the standard cover?
It applies wherever AI is used in any RICS-regulated service delivery. For construction finance QS firms, that includes monitoring reports, initial cost assessments, drawdown recommendations, and any document analysis supported by AI tools. If the output of the AI informs a professional judgement delivered to a client, the standard applies.

What counts as "material impact"?
Material impact is the threshold that determines whether a use of AI requires full documentation. The standard's test (§1.2) is whether the AI output is capable of influencing the delivery of the service, and, if so, the nature of the influence it exerts. Typical examples the standard gives: outputs summarising documents relied on in a report, outputs composing significant parts of an opinion, or outputs recommending what to investigate. If AI is used only for tasks that don't affect the professional output — such as internal formatting or note-taking — the full documentation requirements may not apply. When in doubt, treat it as material.

Is there a grace period or phased rollout?
No. The standard took effect on 9 March 2026 with no staged implementation, no firm size threshold, and no sector phasing. RICS has stated it will be taken into account in regulatory and disciplinary proceedings from that date.

Does AI built into everyday software, such as Microsoft Copilot, count?
If Copilot is used to assist in drafting, summarising, or analysing content that feeds into a client deliverable, yes. The standard doesn't distinguish between standalone AI tools and AI embedded in productivity software. What matters is whether the AI had material impact on a professional output delivered to a client. If the answer is yes, the documentation requirements apply to that use.

Does the standard affect firms that aren't RICS-regulated?
The standard applies to RICS members and RICS-regulated firms directly. However, because many construction lenders, institutional developers, and insurers are adopting the standard as a procurement benchmark, non-RICS firms may still face commercial pressure to align. In construction finance specifically, banks are starting to ask for evidence of AI compliance regardless of the surveyor's professional body.

Shadow AI

What is shadow AI?
Shadow AI is AI used by staff without formal firm approval — typically ChatGPT, Copilot, or similar tools for drafting report sections, summarising documents, or checking figures. The standard makes firms responsible for all AI in service delivery whether formally approved or not. A surveyor using an unapproved tool to assist a monitoring report creates a compliance obligation for the firm, even if the principal didn't know it was happening.

How do we find out what AI tools our staff are already using?
A practical first step is a survey across the team asking what AI tools people are using in their work — day-to-day tools, not just formally approved ones. Once you have a baseline picture, build your usage register from it. Going forward, make it easy for staff to flag new tools as part of normal supervision conversations — the goal is visibility, not policing.

Can we just ban AI use altogether?
It's understandable to want to keep things simple, but avoiding AI entirely is becoming harder as the tools are increasingly built into everyday software — document editors, email, spreadsheet tools. The more practical question is how to use AI in a way that's structured and documented rather than whether to use it at all. AI adoption across construction finance is growing, and firms that build compliant processes now will be better placed than those trying to catch up later. A clear approved tools list with straightforward processes makes it easy for staff to do the right thing — which is more effective than a blanket restriction that's difficult to enforce.

What if a surveyor uses AI on a personal device or account?
If a surveyor uses a personal device and personal account to draft or refine a section of a monitoring report, the output still enters the firm's service delivery chain. The standard focuses on the impact of AI on the professional output, not on which device or whose account was used. Firms should address this explicitly in their AI usage policy.

Documentation requirements

What documentation does the standard require in full?
Twelve areas across three phases — know and decide, govern, and deliver and prove. In the know-and-decide phase: baseline knowledge and training per individual, and a written material impact determination with appropriateness assessment. In the govern phase: written data governance policies, written client consent before any AI data upload, a responsible AI use policy covering roles and training, an AI System Register covering all tools including informal ones, a risk register per system reviewed quarterly, and procurement due diligence records per AI vendor. In the deliver-and-prove phase: documented reliability assessments with named surveyor sign-off per material AI output, a dip-sampling methodology for automated or high-volume outputs, written client disclosure per bank relationship, and audit-trail explainability accessible on request.

What is an AI System Register?
A written register, required under §3.2, that lists every AI system used in service delivery with material impact. For each system: the name, the purpose it's used for, the date first used, and the date its use and appropriateness will next be reviewed. It must include informal tools — ChatGPT, Copilot, anything used by staff without formal firm procurement — not just software the firm has contracted.

How detailed does the register need to be?
The register needs to identify each AI system used in service delivery, its purpose, who uses it, and what controls are in place. A single spreadsheet or document covering all approved tools is sufficient. It should be reviewed and updated quarterly. The key requirement is that it's maintained — not that it's exhaustive on day one.
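For firms starting the register as a spreadsheet, a minimal sketch of what that file could look like, generated with Python's standard csv module. The column names, example systems, and dates below are our illustrative assumptions, not wording from the standard: §3.2 requires the substance (system, purpose, date first used, next review), not any particular schema or file format.

```python
import csv

# Illustrative columns only -- the standard requires the substance
# (system, purpose, first use, next review), not this exact schema.
FIELDS = ["system", "purpose", "first_used", "next_review", "approved"]

# Example entries (hypothetical): note the register must capture informal
# staff-adopted tools as well as formally procured software.
register = [
    {"system": "ChatGPT (staff use)", "purpose": "Drafting report narrative",
     "first_used": "2025-11-03", "next_review": "2026-08-01", "approved": "yes"},
    {"system": "Microsoft Copilot", "purpose": "Summarising contract documents",
     "first_used": "2026-01-15", "next_review": "2026-08-01", "approved": "yes"},
]

with open("ai_system_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(register)
```

A plain spreadsheet maintained by hand meets the same requirement; the point is that the same fields are recorded for every system and the review date actually triggers a review.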

What is a reliability assessment?
A written decision, required under §4.2, that records the responsible surveyor's judgement about whether an AI output can be reasonably used for its intended purpose. Each decision details assumptions made, key concerns about reliability, the reason for each concern, whether anything could lessen each concern, and a fitness-for-purpose conclusion. It must be prepared by or under the supervision of a named, qualified surveyor who accepts responsibility for its use.

What does a reliability assessment record, and who signs it off?
A reliability assessment is a written record — per material AI output — documenting the assumptions made, limitations identified, mitigations applied, and a fitness-for-purpose conclusion. It must be signed off by a named, qualified surveyor (MRICS or FRICS) at the point the output is used, not retrospectively. It doesn't need to be a long document — a structured template applied consistently is sufficient.

What is the difference between appropriateness and reliability?
Appropriateness (§3.2) is decided once per AI system, before adoption — is AI the right tool for this task at all? Reliability (§4.2) is decided every time the AI is used on a material output — can this specific output be reasonably relied on? Appropriateness is strategic; reliability is operational. Both are required.

What does client disclosure to a lender involve?
Written disclosure per bank relationship identifying which AI systems were used, what they were used for, and what reliability conclusion was reached. It must be in written form — verbal disclosure is not sufficient under the standard. It should be delivered before or at the point of service delivery, not retrospectively. A standard disclosure paragraph appended to each monitoring report is a practical approach.

Is disclosing AI use by email enough?
The standard (§4.3) requires disclosure in the terms of engagement, contractual documents, service agreements, or other relevant documentation governing the client relationship. An email alone doesn't meet the standard — the disclosure must be in the written instrument that governs the relationship, whether that's the letter of instruction, engagement contract, or equivalent.

What is dip-sampling, and is it mandatory?
Dip-sampling is the practice of manually reviewing a sample of AI outputs to verify accuracy and identify systematic errors. The standard doesn't mandate a specific frequency, but it does require that firms have documented processes for quality-checking AI outputs. If your firm uses dip-sampling as its quality control method, the methodology — how many outputs are checked, by whom, and how errors are handled — needs to be documented.
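To illustrate what a documented, repeatable selection step could look like, here is a small Python sketch that draws a reproducible sample of output references for manual review. The 10% rate, the minimum of three, and the "DRW" reference format are our illustrative assumptions; the standard requires a documented methodology, not these specific figures.

```python
import random

def select_dip_sample(output_ids, rate=0.10, seed=None, minimum=3):
    """Pick a reproducible sample of AI output references for manual review.

    The 10% rate and minimum of 3 are illustrative policy choices, not
    figures from the RICS standard -- the standard requires a documented
    methodology, not a specific frequency.
    """
    k = max(minimum, round(len(output_ids) * rate))
    k = min(k, len(output_ids))          # never ask for more than exist
    rng = random.Random(seed)            # fixed seed => selection can be re-derived
    return sorted(rng.sample(output_ids, k))

# e.g. 40 AI-assisted summaries produced this quarter (hypothetical references)
outputs = [f"DRW-{n:03d}" for n in range(1, 41)]
sample = select_dip_sample(outputs, rate=0.10, seed=20260426)
```

Recording the seed alongside the sample is what makes the selection auditable: anyone can re-run the same step later and confirm exactly which outputs were put in front of a reviewer, and that they were not cherry-picked.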

Does the responsible AI use policy have to be a standalone document?
The standard requires a written responsible AI use policy (§3.2) but doesn't mandate a standalone document — incorporating it into existing quality management procedures, professional standards documents, or staff handbooks is acceptable, provided the four mandatory minimum elements are clearly covered: roles and responsibilities, annual training expectations, how human judgement interacts with AI, and risk identification and mitigation guidance. For most small to mid-sized QS firms, a single AI governance document covering policy, usage register, risk register, consent process, and reliability assessment template is simpler to maintain and audit.

Risk & insurance

How long do we need to keep the audit trail?
The standard does not set a specific retention period. Common practice aligns with PI insurance record-keeping (typically 6 years from the date work was delivered) or RICS disciplinary limitation periods (12 years in certain circumstances). The audit trail should exist for as long as a claim or disciplinary process could reference the work.

Will our PI insurance cover claims involving undocumented AI use?
This is the question your PI broker needs to answer specifically for your policy. In general terms: if AI is used in service delivery without documentation and a claim is made, your insurer will ask what AI was used and how it was validated. No documentation means no answer. Whether that creates a coverage gap depends on your specific policy wording — but the risk is real, not theoretical. Raise it with your broker before your next renewal.

What are the consequences of non-compliance?
RICS has stated the standard will be taken into account in regulatory and disciplinary proceedings from 9 March 2026. Non-compliance with a mandatory professional standard is a matter of record — regardless of whether a specific project goes wrong. The standard also creates a professional duty of care consideration: if a firm uses AI without the required documentation and that output contributes to an error, the failure to comply will be relevant to any professional negligence claim.

Will lenders ask us to evidence AI compliance?
Yes. As the RICS standard is now mandatory, lenders have a legitimate interest in understanding how their monitoring QS firms are using AI in the reports and assessments that inform drawdown decisions. A firm that can demonstrate documented compliance — usage register, client disclosure process, reliability assessment framework — is in a stronger position than one that cannot. As banks develop their own AI governance frameworks, questions about third-party AI use are a natural part of panel and supplier reviews.

What happens if undocumented AI contributes to an error in a report?
An undocumented AI contribution to a flawed professional output creates multiple compounding problems: a regulatory breach (no compliance with the mandatory RICS standard), a potential PI claim with a documentation gap your insurer will probe, and the practical difficulty of reconstructing what the AI actually produced versus what your surveyor reviewed. Documentation doesn't prevent errors — but it demonstrates that appropriate professional judgement was applied. Its absence suggests it wasn't.

Can we reconstruct the documentation after the fact?
Without documentation, the firm has no defence against a PI claim that references AI use, no evidence for RICS if the standard is raised in disciplinary proceedings, and no ability to demonstrate compliance to a lender who asks. The standard's requirement is that documentation exists at the point AI is used — not assembled after the fact. Retrospective reconstruction of an audit trail does not meet §4.4.

We've been using AI without documentation. What should we do now?
Three steps. First, stop any undocumented AI use immediately. Second, conduct an AI audit — every tool in use, every surveyor using it, every workflow it touches. Third, either adopt a platform that automates the documentation as a byproduct of workflow (BankBuild or equivalent) or build the documentation manually. The standard does not have a grace period, but RICS enforcement is less likely to target firms that are actively rectifying compliance gaps than firms that ignore the requirement entirely.

Still have questions?

If your question isn't answered here, speak to us directly.

Talk to us →
Want to see compliance built into the workflow?

BankBuild generates your RICS AI documentation automatically — usage register, reliability decisions, client disclosure — as a byproduct of normal monitoring inspections.

See how it works →
← Back to hub
