The RICS Professional Standard on Responsible Use of Artificial Intelligence in Surveying Practice became mandatory on 9 March 2026. It applies to every RICS member and regulated firm worldwide, and for quantity surveying firms working in construction finance, the compliance requirements are more operationally demanding than most firms realise.

The RICS AI in Construction 2025 report, drawing on over 2,200 professionals globally, found that 45% of organisations report no AI use and fewer than 1% have scaled it across projects — yet 70% of project managers and quantity surveyors expect AI to deliver greater value. The profession knows AI is coming. The gap between that expectation and having the documentation, processes, and infrastructure to use it compliantly is wider than most firms expect.

This is not a general overview of the standard. This is a practical guide for QS firms that monitor construction projects, prepare drawdown reports, and verify costs on behalf of UK banks. If anyone in your firm uses ChatGPT, Copilot, automated report-drafting software, or any AI tool in any part of the monitoring workflow — your firm is using AI under this standard, and you are subject to specific documentation, disclosure, and quality assurance obligations that did not exist before it was published.

Here is what the standard actually requires, what it means for construction finance monitoring, and what your firm needs to have in place now.

Why the standard exists

RICS published the professional standard in September 2025, with an effective date of 9 March 2026. Its purpose is to protect the profession from the risk of AI being used without documentation, supervision, or accountability. The standard's own language is clear: the skill and experience of the professional surveyor must sit at the core of AI use, and members must guard against complacency about the technology's involvement in providing surveying services.

For QS firms in construction finance, the risks are concrete. AI drafting a site visit report that a lender relies on for a drawdown decision. AI extracting figures from a facility agreement that feed into a cost triangulation. AI summarising programme evidence that informs a completion forecast. In each case, AI is influencing the delivery of the surveying service — and the standard requires that this influence be recognised, managed, and documented.

The standard is organised into five chapter-level sections: baseline knowledge (§2), practice management covering data and system governance (§3.1 and §3.2), risk management (§3.3), using AI in service delivery (§4), and developing AI (§5). For firms using AI — rather than building it — those sections translate into twelve practical areas your firm needs in place. The rest of this guide walks through all twelve, phase by phase.


Phase 1 — Know and decide

Before AI is used in any client-facing work, your firm needs to know what it is, who can use it, and when it triggers compliance obligations. Two areas belong in this phase: baseline knowledge and training, and material impact and appropriateness.

Baseline Knowledge & Training (§2)

Every RICS member who uses AI to deliver surveying services must develop and maintain sufficient knowledge to support responsible use. At minimum, the standard requires a basic understanding of four topic areas: the different types and subsets of AI and their basic ways of working, limitations and failure modes; the risk of AI systems producing erroneous outputs; the inherent risk of bias in AI systems; and data usage and data risks relevant to AI use.

The standard acknowledges that knowledge across the profession is uneven. That makes this an active obligation on each individual surveyor, not a firm-wide assumption. A firm cannot meet §2 by declaring its staff "competent" — it needs an auditable record per surveyor showing what training has been completed and when the next refresher is due.

Material Impact & Appropriateness (§1.2 Scope & §3.2)

Material impact is the gateway. The standard's test (§1.2) is whether an AI output is capable of influencing the delivery of the service, and if so the nature of the influence it exerts. Examples the standard itself gives: outputs summarising documents relied on in a report, outputs composing significant parts of an opinion, outputs recommending which part of a building to investigate. In construction finance monitoring, almost every AI output crosses this threshold because the reports drive lending decisions.

Two written records are required. First, a material impact determination — a written record that the firm has assessed its AI use and concluded it has material impact, with the reasoning documented. Second, an appropriateness assessment (§3.2) carried out before any AI system with material impact is used, covering the surveying services provided, the nature of the task, available alternative tools, environmental and stakeholder impact, data risks, and the risk of erroneous output or bias.

Once the gateway is crossed, every area in Phase 2 and Phase 3 applies.


Phase 2 — Govern

Before AI reaches a client-facing output, your firm needs the documented governance a regulator, insurer, or lender can audit. Six areas sit in this phase: data governance, client consent, responsible AI use policy, AI usage register, risk register, and procurement due diligence.

Data Governance (§3.1)

Firms using AI must safeguard private and confidential data. The standard requires five specific measures: storing such data securely (encrypted and backed up); restricting access to staff who strictly need it; training those staff at least annually on AI-related privacy and confidentiality risks; preparing data for use in ways that protect privacy (including anonymisation where appropriate); and refraining from uploading private or confidential data to AI systems unless there is express written consent in advance and reasonable steps have been taken to confirm the upload does not pose unacceptable risk.

For construction finance monitoring specifically, this covers every piece of borrower information, every facility agreement, every contractor detail, every valuation. The expectation is that firms treat these as categories of data requiring active protection, not passive custody.

Client Consent (§3.1)

The standard is explicit: private and confidential data must not be uploaded to an AI system unless there is express written consent in advance from affected stakeholders. Verbal consent is not sufficient. This consent requirement sits before — and is separate from — the broader client disclosure requirement that appears in Phase 3. Consent is a stakeholder's permission to upload their data to an AI system; disclosure is telling the client how AI is being used in their service.

Firms must also take reasonable steps to confirm that uploading data to a given AI system does not pose unacceptable risk. This means due diligence on the AI system's own data handling (where does data go, how long is it retained, is it used for training) before any client data is uploaded. Consent alone, without the risk check, does not meet §3.1.

Responsible AI Use Policy (§3.2)

The standard requires firms that use or intend to use AI in service delivery to develop and implement a written policy on the responsible use of AI systems, informed by the risk register. The policy can be a standalone document or added to existing firm policies on IT usage, data protection, or client engagement — but it must exist as a written artefact.

At minimum, the policy must cover four things: the roles, responsibilities and liabilities of everyone involved in AI procurement and use; the regular (at least annual) training expectations for those individuals; how human control and judgement interact with AI use, such as through regular monitoring or dip-sampling of outputs; and guidance for staff on identifying and mitigating AI risks. The policy must cover both internally developed AI systems and those provided by third parties.

A meeting is not a policy. An email is not a policy. The standard requires a written, documented policy that exists and is maintained. For firms unsure how to draft one, the four minimum elements above are the scaffold — everything else is firm-specific adaptation.

AI Usage Register (§3.2)

Every AI system used in service delivery with material impact must be logged in a written register, covering the system name, the purpose it is used for, the date it was first used, and the date its use and appropriateness will next be reviewed.

The register must include everything, not only software the firm has formally procured. ChatGPT accessed through a personal account by a junior surveyor. Copilot integrated into Microsoft Word. An AI transcription tool running on a laptop. A photo-extraction tool on someone's phone. If it is used in service delivery and has material impact, it is in scope of the register. Shadow AI — tools in use without formal firm approval — is one of the most common compliance gaps in this area.
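The four required fields map naturally onto a small structured record, which also makes the next-review date checkable rather than something that sits forgotten in a spreadsheet. A minimal sketch in Python; the field names, example entries, and the overdue-review check are our own illustration, not something the standard prescribes:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RegisterEntry:
    """One row in the AI usage register (illustrative field names)."""
    system_name: str   # e.g. "ChatGPT (personal account)"
    purpose: str       # what the tool is used for in service delivery
    first_used: date   # date the system was first used
    next_review: date  # date its use and appropriateness will next be reviewed

    def review_overdue(self, today: date) -> bool:
        """Flag entries whose scheduled review date has passed."""
        return today >= self.next_review

# Hypothetical entries, including an informal tool a surveyor adopted themselves
register = [
    RegisterEntry("ChatGPT", "Drafting report narrative",
                  date(2025, 11, 3), date(2026, 5, 3)),
    RegisterEntry("Copilot in Word", "Summarising drawdown packs",
                  date(2026, 1, 12), date(2026, 7, 12)),
]

overdue = [e.system_name for e in register if e.review_overdue(date(2026, 6, 1))]
print(overdue)  # ChatGPT's review date has passed by June 2026
```

The same structure works equally well as a spreadsheet with one column per field; the point is that the review date is a first-class piece of data, not a footnote.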

Risk Register (§3.3)

The standard requires firms using material-impact AI to create and operate a risk register that documents four overarching risks: inherent bias in the AI system and its outputs; erroneous outputs from the AI system; limitations in the quantity and quality of information available about the system and its training data; and retention or use of data inputted by the firm into the system.

For each risk, the register must record a description, the likelihood and impact, a mitigation plan, the firm's risk appetite (one of averse, minimalist, cautious, open or ambitious per the standard's own glossary), status updates, and a RAG rating or similar categorisation. The register must be reviewed and updated at least quarterly by staff responsible for decisions about the firm's AI use.
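Held as structured data, each entry's appetite and RAG values can be validated against the standard's own vocabulary, and the quarterly review cadence can be checked mechanically. A sketch under the same caveat: the field names and the 91-day window are our own illustrative policy choices.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# The five appetite levels come from the standard's glossary; RAG is one
# common categorisation the standard permits.
APPETITES = {"averse", "minimalist", "cautious", "open", "ambitious"}
RAG = {"red", "amber", "green"}

@dataclass
class RiskEntry:
    description: str   # e.g. "inherent bias in the AI system and its outputs"
    likelihood: str
    impact: str
    mitigation: str
    appetite: str      # one of the five appetite levels
    rag: str           # RAG rating or similar categorisation
    last_reviewed: date

    def __post_init__(self) -> None:
        if self.appetite not in APPETITES:
            raise ValueError(f"unknown risk appetite: {self.appetite!r}")
        if self.rag not in RAG:
            raise ValueError(f"unknown RAG rating: {self.rag!r}")

    def review_due(self, today: date) -> bool:
        # "At least quarterly" taken here as 91 days - a firm's own policy choice
        return today - self.last_reviewed > timedelta(days=91)

entry = RiskEntry("erroneous outputs from the AI system", "medium", "high",
                  "dip-sampling plus named-surveyor sign-off", "cautious", "amber",
                  date(2026, 3, 9))
print(entry.review_due(date(2026, 7, 1)))  # more than a quarter has passed
```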

Procurement Due Diligence (§4.1)

Before procuring any third-party AI system with material impact, firms must carry out detailed due diligence. This involves requesting information in writing from the vendor, following up in writing as necessary, recording the information provided, and keeping a record of practical testing for fitness for purpose.

The standard lists six specific written requests: environmental impact of the AI system; stakeholders involved in its development; compliance with applicable data and confidentiality laws; permissions obtained where data and content relating to individuals have been used; the accuracy, relevance and diversity of training data including known gaps and any particular known risks of bias; and the type and extent of vendor liability. Where a vendor provides limited or no information, firms must identify the resulting risks and record them in the risk register.

This area remains genuinely firm-led: the written requests, the follow-ups, and the record of practical testing for any AI system the firm procures must be produced and kept by the firm itself.


Phase 3 — Deliver and prove

When AI contributes to a client output, the professional judgement behind it has to be visible and defensible. Four areas sit in this phase: reliability assessments, dip-sampling, client disclosure, and explainability readiness. This is where the paperwork the standard requires meets the work that actually happens.

Reliability Assessments (§4.2)

The standard's central requirement. For every AI output with material impact on service delivery, the responsible surveyor must apply professional judgement — defined as knowledge, skills, experience, and professional scepticism — to decide whether the output is reliable, and the decision must be documented in writing.

Each written reliability decision must detail five things: any relevant assumptions made; key areas of concern including the reliability of underlying datasets; the reason for each concern; whether anything could be done to lessen each concern; and the impact of the concerns on the overall reliability of the output, with a conclusion on whether the output can reasonably be used for its intended purpose.

The decision must be prepared by, or under the supervision of, an appropriately qualified and named surveyor who accepts responsibility for use. Where the firm determines that an output cannot reasonably be used for its intended purpose, this conclusion must be communicated in writing to the client together with the reasoning.
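Because the five required elements are fixed, each written decision can follow a fixed record shape. A hypothetical sketch; the structure, names, and example content are ours, not the standard's:

```python
from dataclasses import dataclass

@dataclass
class Concern:
    area: str        # key area of concern, e.g. reliability of an underlying dataset
    reason: str      # the reason for the concern
    mitigation: str  # whether anything could be done to lessen it

@dataclass
class ReliabilityDecision:
    output_ref: str            # which AI output the decision covers
    assumptions: list[str]     # any relevant assumptions made
    concerns: list[Concern]    # concerns, reasons, and possible mitigations
    fit_for_purpose: bool      # conclusion: can the output reasonably be used?
    responsible_surveyor: str  # named, appropriately qualified surveyor

    def summary(self) -> str:
        verdict = ("usable" if self.fit_for_purpose
                   else "NOT usable - notify client in writing")
        return f"{self.output_ref}: {verdict} (responsible: {self.responsible_surveyor})"

decision = ReliabilityDecision(
    output_ref="Drawdown 14 cost extraction",
    assumptions=["Contractor valuation reflects work completed to 28 Feb"],
    concerns=[Concern("OCR accuracy on scanned invoices",
                      "source PDFs are low resolution",
                      "spot-check extracted figures against originals")],
    fit_for_purpose=True,
    responsible_surveyor="J. Smith MRICS",
)
print(decision.summary())
```

A record like this, captured at the point of review, is also exactly what an explainability request or an insurer will ask to see later.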

Dip-Sampling (§4.2)

Where AI is used to produce a high volume of outputs or to automate a series of outputs — for example extracting data across an entire drawdown pack, or processing hundreds of photos from a site visit — the standard recognises it is generally neither necessary nor proportionate to scrutinise every output individually. Instead, firms must undertake randomised dip samples at regular intervals to scrutinise and assure quality.

The methodology must be documented — how samples are selected, how often, what threshold triggers escalation — and the firm remains fully accountable for every output regardless of whether each was individually reviewed. Dip-sampling is a proportionality allowance, not a transfer of responsibility.

Client Disclosure (§4.3)

Transparency in AI use is one of the standard's core principles. Firms using AI systems with material impact on service delivery must make clear to clients, in writing and in advance of use, when and for what purpose AI is to be used.

The terms of engagement, contractual documents, service agreements, or other relevant documentation must detail six things in writing: when AI will be involved in the delivery of a surveying service; the parts of the process in which AI will be involved; the extent of professional indemnity cover for AI use by the firm if available; the internal processes to contest the use of an AI system; the processes to seek redress if a client feels they have been negatively affected by AI use; and how a client can opt out of AI use in the delivery of a service, if at all.

For QS firms in construction finance this means the bank client — the lender instructing the monitoring work — receives written disclosure before any AI-assisted work begins.
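Because the six elements are fixed, a draft engagement letter can be checked for completeness before it goes to the bank. A trivial sketch; the element keys are our own shorthand for the standard's six items:

```python
REQUIRED_ELEMENTS = (
    "when AI will be involved",
    "parts of the process involving AI",
    "extent of PI cover for AI use",
    "process to contest AI use",
    "process to seek redress",
    "how to opt out, if at all",
)

def missing_disclosures(terms: dict) -> list:
    """Return the required disclosure elements absent from draft engagement terms."""
    return [k for k in REQUIRED_ELEMENTS if not terms.get(k)]

# Hypothetical partially-drafted terms of engagement
draft = {
    "when AI will be involved": "Report drafting and cost extraction",
    "parts of the process involving AI": "Sections 3-5 of the monitoring report",
}
print(missing_disclosures(draft))  # four elements still to draft
```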

Explainability Readiness (§4.4)

Clients may ask to understand how AI was used in the delivery of their service, particularly if they want to challenge an outcome. The standard requires firms to be able to provide, on request, written information covering five things: the type of AI system used; the basic ways of working and limitations of the AI system; the due diligence processes carried out before using it; the way in which relevant risks are identified and managed; and the decisions made about the reliability of its output.

This is not a standing obligation to publish everything proactively. It is an obligation to have the audit trail ready when asked. Firms that document as they work can respond quickly to an explainability request. Firms that try to reconstruct the audit trail after the fact are already out of compliance — the documentation must exist at the point AI was used, not be assembled retrospectively.

Note on scope

This guide is written for QS firms in construction finance monitoring — our area of deepest expertise. The RICS AI standard applies identically across all disciplines. If your firm practises in valuation, building surveying, project management, or any other RICS-regulated area, the twelve areas across three phases are the same. The requirements don't change by discipline; only the worked examples do.

What this means specifically for construction finance monitoring

Based on observation across our client portfolio, the average UK construction drawdown takes roughly 23 days from site inspection to bank fund release. Of those 23 days, around 3 represent actual professional QS work — the inspection, cost assessment, and report preparation. The remaining 20 days are process: formatting reports, emailing documents between parties, chasing clarifications, waiting in review queues, and reconciling data across spreadsheets. That is 20 days of manual process on every drawdown, across a UK construction finance market worth roughly £350 billion — almost all of which still runs on spreadsheets, email, and PDFs. As AI adoption accelerates across this market, the volume of verification work flowing through AI-assisted processes will grow rapidly. The RICS standard is getting ahead of that curve.

This is precisely the workflow where AI adoption is accelerating fastest. QS firms are already using AI tools to speed up the administrative burden: drafting report templates, extracting cost data from contractor submissions, cross-referencing figures against previous valuations, and formatting outputs to match individual bank requirements. Based on what we are seeing across firms working in this space, AI-assisted report preparation can reduce preparation time from 6–8 hours to under an hour on standardised projects.

But under the new RICS standard, every one of those AI-assisted steps now requires documentation. If your firm prepares 100 monitoring reports per week across your project portfolio, and AI touches any part of that process, the compliance requirements are real. You need a register entry for each AI tool — with purpose, first-use date, and next review date. You need written client disclosure for each bank relationship — covering not just what AI does but how they can contest or opt out. You need documented reliability assessments with named surveyors taking responsibility. You need a dip-sampling programme. And you need a risk register reviewed quarterly.

Most QS firms currently have none of this in place. The gap between "we're using AI" and "we're using AI compliantly" is significant — and from 9 March 2026, it is a gap that carries regulatory, disciplinary, and professional indemnity consequences.

But compliance is only one dimension of what this standard signals. The deeper message is about where the profession is heading.

The RICS AI in Construction 2025 report confirms the scale of this gap: 45% of organisations report no AI use at all, fewer than 1% have scaled it across projects, and 70% of project managers and quantity surveyors expect AI to deliver greater value in their work. The profession sees where it's going. It just hasn't built the compliance infrastructure to get there.

QS firms broadly fall into three positions right now. Some are actively using AI across their monitoring workflows — ChatGPT for drafting, automated tools for cost extraction and benchmarking — but haven't documented any of it. They have a compliance problem they may not yet recognise. Others believe they don't use AI at all, while their staff are quietly using ChatGPT, Copilot, or other tools without formal approval — what the industry is now calling "shadow AI." The RICS standard applies to them too, and the first step is simply getting visibility of what's happening inside their own practice. And a third group has genuinely not adopted AI yet and is watching from the sidelines, uncertain whether the technology is mature enough to trust with professional work.

For that third group, the risk isn't just falling behind on compliance. It's falling behind on capability. AI in construction finance verification is not a passing experiment. Automated cost extraction, AI-assisted benchmarking against BCIS data, intelligent document processing, real-time portfolio analytics — these are becoming the operational baseline, not a competitive edge. The RICS standard exists precisely because AI adoption across the profession has reached the point where governance is mandatory. Firms that haven't started are not avoiding risk by waiting. They're accumulating a different kind of risk: the risk of being unable to match the speed, consistency, and transparency that banks and developers will increasingly expect from their monitoring QS.

The firms that will be strongest through this transition are not the ones that adopted AI first. They are the ones that adopted it with structure — with documented processes, clear governance, and compliance built into the workflow from day one. That is the real opportunity the RICS standard creates.


Three areas carry the most operational weight day-to-day

Twelve areas apply, but in practice three carry the highest operational burden for a working construction monitoring firm.

First, the AI Usage Register (§3.2). Operationally heaviest because it must be maintained continuously, includes shadow AI — which means it is never "done" — and grows as the firm's tool inventory evolves. For a typical QS firm handling 8–10 active monitoring projects, this register could involve documenting 5–15 separate AI tools or workflows.

Second, Client Disclosure (§4.3). Touches every bank relationship and every engagement letter. Must be in writing and in advance. Has six specific elements — when AI is involved, which parts of the process it touches, the extent of PI cover, how to contest, how to seek redress, how to opt out. High-frequency, high-stakes, contractual.

Third, Reliability Assessments (§4.2). Happens per material AI output. For a firm running 100 monitoring reports a week with AI in the workflow, this is the highest-volume documentation requirement. A named, qualified surveyor must record the assumptions, concerns, mitigations, and fitness-for-purpose conclusion for each one.

The other nine areas matter — but these three are where principals will spend their time.


How BankBuild handles this

BankBuild is an AI-native construction finance monitoring platform designed with the RICS AI standard's requirements in mind from the outset — not retrofitted after publication.

The principle behind BankBuild's approach is straightforward: compliance documentation should be a natural byproduct of how QS firms already do their monitoring work, not a separate administrative burden layered on top. When AI is used in the verification workflow, the interactions are logged, reliability decisions are captured at the point of QS review, and audit trails are maintained as part of the process — not assembled retrospectively.

This means QS firms using BankBuild for construction finance monitoring do not maintain separate compliance processes alongside their inspection workflow. The compliance infrastructure is embedded in the workflow itself. Training completion recorded per surveyor. AI Usage Register and Risk Register maintained automatically. Client consent capture supported at engagement. Reliability decisions recorded at point of review via named surveyor sign-off. Dip-sampling methodology supported. Client disclosure appended to every exported report. Audit trail ready on request.

One connected workflow. Zero manual compliance. QS firms using BankBuild are compliant by default — not because the platform replaces professional judgement, but because it surfaces the decisions the standard requires and captures them structurally as they happen.


This covers AI used within BankBuild's monitoring workflow. For AI tools your firm uses outside the platform — such as ChatGPT for other tasks, or standalone cost estimation software — the standard's documentation requirements still apply, and your firm will need to maintain those records separately.


What happens if you don't comply

The standard is a conduct standard, which means non-compliance is addressed through RICS regulatory and disciplinary processes. The standard explicitly states that in regulatory or disciplinary proceedings, RICS will take the standard into account when deciding whether a member or regulated firm acted appropriately and with reasonable competence. It also confirms that in legal proceedings, judges, adjudicators, and equivalent decision-makers are likely to take RICS standards into account.

The practical consequences fall into three categories: professional indemnity exposure (claims involving undocumented AI use become harder to defend, and PI insurers are increasingly asking about AI governance at renewal); regulatory risk (RICS disciplinary action where the standard has been materially breached); and commercial risk (lenders, developers, and institutional clients increasingly asking for evidence of RICS AI compliance as part of procurement).

The full consequences are covered on a separate page: What happens if you don't comply.

The RICS AI standard isn't a burden — it's a competitive opportunity. The firms that move first to embed compliance into their workflows will be the ones that banks trust with their construction lending portfolios. The ones that wait will be scrambling to retrofit documentation onto processes that were never designed to produce it.


A note on §5 Development of AI

A separate section of the standard — §5 — applies only to firms that develop their own AI systems. Building models. Training them on firm data. Shipping AI tools to clients or internally. For most QS firms in construction finance this section does not apply. Firms that do build their own systems face additional obligations, including documenting applications and risks, carrying out sustainability impact assessments, involving diverse stakeholders, and obtaining written permissions for personal data use.

Most QS firms use AI rather than build it. This guide focuses on the twelve practical areas that apply to firms using AI in service delivery. If your firm is developing AI systems, speak to us directly — the §5 requirements sit alongside, not instead of, the twelve areas above.


Practical compliance checklist

If your QS firm uses any form of AI in construction finance monitoring — and most now do, even if it is just ChatGPT for drafting or an automated spreadsheet tool — here is what you should have in place today, covering all twelve areas across the three phases.

01 Baseline Knowledge & Training (§2). Per individual RICS member using AI, documented training covering AI types, limitations, erroneous output risk, bias, and data usage risks. Refresher schedule maintained.

02 Material Impact & Appropriateness (§1.2 & §3.2). Written determination that your firm's AI use has material impact. Written appropriateness assessment per AI system before use.

03 Data Governance (§3.1). Written policies on private and confidential data. Secure storage, restricted access, annual training for staff with access.

04 Client Consent (§3.1). Express written consent from each client in advance before their data is uploaded to any AI system. Risk check on the AI system itself before upload.

05 Responsible AI Use Policy (§3.2). Written firm-wide policy covering roles, annual training, human oversight, and risk guidance. Covers internally developed and third-party AI.

06 AI Usage Register (§3.2). Written register of every AI system used with material impact. Includes informal tools (ChatGPT, Copilot). Reviewed on a defined schedule.

07 Risk Register (§3.3). Written risk register covering bias, erroneous outputs, information limitations, and data retention. Per system, reviewed quarterly.

08 Procurement Due Diligence (§4.1). Written due diligence per AI vendor covering six specific information requests. Gaps logged in risk register. Practical testing recorded.

09 Reliability Assessment Framework (§4.2). Written reliability decision per material AI output. Named surveyor sign-off. Assumptions, concerns, mitigations, fitness-for-purpose conclusion recorded.

10 Dip-Sampling Programme (§4.2). Methodology documented for automated or high-volume AI outputs. Sampling frequency and escalation thresholds defined. Firm accountable for every output.

11 Client Disclosure (§4.3). Written disclosure in terms of engagement per client relationship. Six specific elements required, provided in advance of AI use.

12 Explainability Readiness (§4.4). Audit trail accessible on request. Documentation maintained at the point AI is used, not retrospectively assembled.

Free Resource
RICS AI Compliance Checklist for QS Firms
A one-page summary of what your firm needs to have in place across the three phases of the standard.
Download the checklist

Where to go next

Ready to see it? If you're exploring how to build compliance into your monitoring process rather than bolting it on, we're happy to walk you through the BankBuild workflow. Reach out at hello@bankbuild.com or connect with Laura, our CEO, on LinkedIn.


Frequently Asked Questions

A selection of the most common questions. For the full list visit the standalone FAQ page or the RICS AI compliance glossary for definitions of every term used in this guide.

What counts as an AI output with material impact?
The standard says an output has material impact if it is capable of influencing the delivery of the service — for example, outputs summarising documents relied on in a report, outputs composing significant parts of an opinion, or outputs recommending what to investigate. If your firm is using AI to draft monitoring reports, extract cost data, or benchmark against historical projects, that is almost certainly material.

Does the standard apply if our firm doesn't use AI at all?
The baseline knowledge requirement applies to all members. Even if your firm isn't using AI today, the standard requires awareness and readiness. And it's worth auditing whether your team is using tools like ChatGPT informally — "shadow AI" use is more common than most firms realise.

Do we need a written reliability assessment for every AI output?
For individual outputs with material impact, yes — including assumptions, concerns, and a named surveyor accepting responsibility. For automated or high-volume outputs, dip-sampling at regular intervals is acceptable, but firms remain accountable for each output regardless.

What must we disclose to clients?
In writing, in advance: when AI will be involved, which parts of the process it touches, the extent of PI cover if available, how to contest AI use, how to seek redress, and how to opt out. This must be in your terms of engagement or contractual documents.

What does procurement due diligence involve?
Written requests to the vendor covering environmental impact, development stakeholders, data law compliance, permissions for personal data, training data accuracy and diversity including known gaps and bias risks, and the vendor's liability. Follow-ups must be in writing and recorded. If the vendor provides limited information, you must log the resulting risks in your risk register.

Can we still use ChatGPT?
Yes, but it must be documented. ChatGPT or any generative AI tool used in preparing client deliverables is considered AI with material impact on service delivery. Your firm must log it in your AI Usage Register, disclose its use to the bank client in writing before it touches their work, and have a named surveyor conduct a written reliability assessment on every output that informs a client report. Using ChatGPT is not prohibited — using it without documentation is the compliance risk.

What is "shadow AI"?
Shadow AI refers to staff using AI tools — typically ChatGPT, Copilot, or similar — without formal firm approval or documentation. Under the RICS standard, the firm is responsible for all AI use in service delivery, whether approved or not. If a junior surveyor uses ChatGPT to draft a section of a monitoring report and that report informs a bank's drawdown decision, the firm has used AI with material impact — and all documentation requirements apply. The first step for most firms is auditing what tools their staff are actually using.

Do spreadsheet tools and macros count as AI?
The standard applies to AI systems, which it defines broadly. Simple rule-based macros — such as a spreadsheet formula that sums a column — are not AI. However, tools that use machine learning, natural language processing, or pattern recognition to generate outputs — including AI-powered spreadsheet add-ins that auto-categorise costs, predict values, or generate narrative text — would likely be considered AI with material impact if those outputs inform client deliverables. If in doubt, document it. Over-documenting carries no regulatory risk; under-documenting does.

How often must the risk register be reviewed?
The standard requires the risk register to be reviewed and updated at least quarterly by staff responsible for decisions about the firm's use of AI. Each review should assess whether risks have changed, whether new AI tools have been adopted, and whether mitigation measures remain adequate. The review itself should be documented with the date, reviewer name, and any changes made.

What if a client wants to opt out of AI use?
The standard requires firms to include in their terms of engagement how a client can opt out of AI use, if at all. If a bank client requests that no AI is used in their monitoring work, the firm must either comply or clearly explain why opt-out is not feasible for specific aspects of the workflow. This should be agreed in writing before the instruction proceeds.

Do we need separate disclosure for each bank client?
Yes. The standard requires written disclosure in your terms of engagement or contractual documents for each client relationship where AI is used. Different banks may have different risk appetites, different requirements for AI transparency, and different contractual terms. A single generic disclosure is unlikely to satisfy the standard's requirement that clients are informed about AI use specific to their instruction.

Who can sign off a reliability decision?
The standard requires that each reliability decision is prepared by, or under the supervision of, an appropriately qualified surveyor who accepts responsibility. For construction finance monitoring, this would typically be a chartered surveyor (MRICS or FRICS) with experience in the relevant discipline. The named surveyor's credentials should be recorded alongside each reliability decision.

Is there a grace period after 9 March 2026?
The standard took effect on 9 March 2026. There is no formal grace period. RICS has stated that the standard will be taken into account in regulatory, disciplinary, and legal proceedings from its effective date. Firms that are already using AI should have compliance documentation in place now. Firms that are adopting AI should implement documentation from the point of first use.

What does explainability mean in practice?
Explainability means your firm must be able to provide, on request, written information about each AI system used — including its type, basic workings and limitations, the due diligence conducted on it, how risks are managed, and the reliability decisions made. In practice, if a bank asks "how did AI contribute to this monitoring report?", you need to be able to answer specifically: which tool, what it did, what the surveyor checked, and why the output was deemed reliable. You do not need to explain the AI's internal algorithms — you need to explain your firm's process for using and validating it.

Can we comply without specialist software?
Yes, but the administrative burden is significant. A firm can maintain the AI usage register in a spreadsheet, write reliability decisions in Word documents, draft client disclosures manually, and keep a risk register in whatever format they choose. The standard does not require any specific software. However, firms handling multiple active monitoring projects will find that manual compliance documentation adds hours per week to an already admin-heavy workflow — which is the gap that platforms like BankBuild are designed to close.