RICS AI Compliance Hub

Every term.
Plain English.

Plain-English definitions of every term used in the RICS Professional Standard on Responsible Use of AI in Surveying Practice, plus the construction finance monitoring vocabulary QS firms encounter alongside it.

Last updated: 26 April 2026
36 terms defined

A

AI System

Standard definition · Glossary of the RICS standard

Any software or tool that uses machine learning, large language models, or automated reasoning to generate, summarise, classify, or analyse information. Under the RICS standard, this includes general-purpose tools such as ChatGPT, Microsoft Copilot, and Google Gemini, as well as AI embedded in specialist surveying or document management software.

The RICS standard adopts the OECD's formal definition: "A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."

See also: Material Impact, AI System Register, Shadow AI

AI System Register

§3.2 · Mandatory · Written · Maintained

A written register of every AI system used in service delivery with material impact. Must include system name, purpose, date first used, and date of next review. Must include informal tools (ChatGPT, Copilot, tools used without formal procurement) alongside formally contracted software. The register is the firm's central record of what AI is in use and is the foundation for risk assessment, due diligence, and client disclosure. A single spreadsheet covering all approved tools is sufficient if maintained manually. BankBuild automates the register as part of platform configuration — every AI system inside the platform is registered automatically.
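A single spreadsheet works; so does a few lines of structured data. The sketch below models one register entry with the fields §3.2 names (system name, purpose, date first used, date of next review) plus a simple overdue-review check. The dataclass, field names, and helper function are illustrative assumptions, not anything the standard prescribes:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemEntry:
    """One row in a §3.2 AI system register (field names illustrative)."""
    system_name: str   # e.g. "ChatGPT", including informal tools
    purpose: str       # what the tool does in service delivery
    first_used: date   # date the tool entered service delivery
    next_review: date  # date the entry is due for review

# The register itself is just a list of entries -- a spreadsheet equivalent.
register = [
    AISystemEntry("ChatGPT", "Drafting report narrative",
                  date(2025, 6, 1), date(2026, 6, 1)),
    AISystemEntry("Microsoft Copilot", "Summarising client documents",
                  date(2025, 9, 15), date(2026, 9, 15)),
]

def entries_due_for_review(register, as_of):
    """Return entries whose next review date has passed."""
    return [e for e in register if e.next_review <= as_of]
```

Running `entries_due_for_review` on a schedule is one way to keep the "date of next review" field from going stale.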

See also: Material Impact, Shadow AI, Risk Register

Appropriateness Assessment

§3.2 · Mandatory · Written

A written assessment, required before using any AI system with material impact on service delivery, confirming AI is the appropriate tool for the task. Must consider the surveying services provided, the nature of the task, available alternative tools, environmental and stakeholder impact, data risks, and the risk of erroneous output and bias. Can take the form of a periodically reviewed policy or standing written statement. Distinct from the reliability decision (§4.2), which happens per output.

See also: Material Impact Determination, Reliability Decision

Audit Trail

§4.4 · Explainability requirement

A chronological record of each material AI interaction in service delivery. For each output: the input provided to the AI, the output generated, any corrections or adjustments made by the surveyor, the quality checks applied, and the name and credentials of the surveyor who reviewed and approved it. Must be accessible on request — not necessarily published proactively.
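Each of those per-output elements maps naturally onto one structured record. The sketch below is an illustrative shape for a §4.4 audit trail entry, not a format the standard mandates; the field names and the in-memory list are assumptions for the example:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AuditTrailEntry:
    """One §4.4 audit trail record (structure illustrative, contents from the standard)."""
    timestamp: datetime   # keeps the trail chronological
    ai_input: str         # the input provided to the AI
    ai_output: str        # the output generated
    corrections: str      # corrections or adjustments made by the surveyor
    quality_checks: list  # the quality checks applied
    reviewed_by: str      # name and credentials of the approving surveyor

trail = []  # in practice: a database table or append-only log

def log_interaction(entry):
    """Append a record at the point the AI is used, not retrospectively."""
    trail.append(entry)
```

The key property is that records are written at the moment of use, so the trail already exists when a client or regulator asks.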

See also: Explainability Readiness, Reliability Decision

B

BankBuild Risk Score (BRS)

BankBuild proprietary framework

A proprietary 12-dimension risk assessment for construction projects, evaluating financial summary, programme, statutory consents, insurances, professional team, site investigation, contract, outstanding information, site conditions, planning compliance, CDM compliance, and developer track record. Grades from A (85–100, Excellent) to E (0–29, Critical). Generated automatically from BankBuild platform data and approved by a named QS — meeting the RICS reliability decision requirement at point of review.
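The entry above states only the A band (85–100) and the E band (0–29); the sketch below shows a score-to-grade mapping in that shape, with the B/C/D boundaries invented purely for illustration. Treat this as a minimal sketch of the banding idea, not BankBuild's actual scoring logic:

```python
def brs_grade(score):
    """Map a 0-100 risk score to a letter grade.

    Only the A (85-100, Excellent) and E (0-29, Critical) bands are
    stated in the definition above; the B/C/D boundaries here are
    illustrative assumptions.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 85:
        return "A"  # Excellent (stated band)
    if score >= 70:
        return "B"  # assumed boundary
    if score >= 50:
        return "C"  # assumed boundary
    if score >= 30:
        return "D"  # assumed boundary
    return "E"      # Critical (stated band: 0-29)
```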

Baseline Knowledge & Training

§2 · Per individual RICS member

The knowledge every RICS member using AI in service delivery must develop and maintain. The standard specifies a basic understanding of four topic areas: AI types and subsets and their ways of working, limitations and failure modes; the risk of erroneous outputs; the inherent risk of bias; and data usage and data risks. Treated as an active obligation on each individual, not a firm-level policy that can be ticked off once. Firms must maintain an auditable record per surveyor showing training completed and next refresher due. BankBuild provides an interactive training module covering the core §2 baseline knowledge topics, with completion certificates recorded per surveyor.

See also: Material Impact, Shadow AI

Bias (in AI systems)

§3.3 · Risk category

The inherent risk that an AI system's outputs reflect systematic errors derived from its training data, programming, or the context of its use. Can result from variable training data quality, biased programming, lack of data diversity, or limitations of the AI system itself, as well as from AI systems responding to biases of the firm using them. Must be documented as a risk in the §3.3 risk register. The firm's response to known bias is part of the §4.2 reliability decision for affected outputs.

See also: Risk Register, Reliability Decision

C

Client Consent

§3.1 · Mandatory · Written · In advance

Express written consent from the client before any private or confidential data is uploaded to an AI system. Verbal consent is not sufficient. The firm must also take reasonable steps to confirm the upload does not pose an unacceptable risk — meaning due diligence on the AI system's own data handling before upload. Consent is distinct from client disclosure (§4.3): consent is permission to upload data; disclosure is telling the client how AI has been used in their service.

See also: Client Disclosure, Data Governance

Client Disclosure

§4.3 · Mandatory · Written · In advance

Written disclosure, required in the terms of engagement, contractual documents, service agreements, or other relevant documentation governing the client relationship. Must cover six specific elements: when AI will be involved; the parts of the process it will touch; the extent of PI cover for AI use if available; the internal processes to contest AI use; the processes to seek redress if a client feels negatively affected; and how a client can opt out of AI use, if at all. BankBuild generates the required disclosure automatically and appends it to every exported report.

See also: Client Consent, Professional Indemnity Cover

Construction Finance Monitoring

Industry definition

The process lenders use to verify that construction project funds are being spent according to approved budgets before releasing drawdown payments. Involves independent quantity surveyors inspecting sites, assessing costs against benchmark data, and reporting to the lending bank. BankBuild is an AI-native construction finance monitoring platform connecting quantity surveyors, lenders, developers, and contractors through a single data layer.

See also: Drawdown, Quantity Surveyor

Cost Triangulation

BankBuild proprietary methodology

A three-source benchmarking methodology comparing a borrower's stated construction costs against BankBuild's platform comparable database and BCIS lower quartile data. Applied per work zone to identify cost surplus or shortfall and flag where a borrower's budget benchmarks materially below market rates. Where AI is used to assist triangulation, a reliability decision is required.
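The per-zone comparison can be sketched as a simple variance check against the two benchmark sources. Everything below is an assumption for illustration — the averaging of the two benchmarks, the 10% tolerance, and the function shape are not BankBuild's published methodology:

```python
def triangulate_zone(stated_cost, platform_comparable, bcis_lower_quartile,
                     tolerance=0.10):
    """Compare one work zone's stated cost against two benchmark sources.

    Returns (variance vs the benchmark midpoint, flagged). The midpoint
    benchmark and the 10% tolerance are illustrative assumptions.
    """
    benchmark = (platform_comparable + bcis_lower_quartile) / 2
    variance = (stated_cost - benchmark) / benchmark
    flagged = variance < -tolerance  # budget materially below market rates
    return variance, flagged
```

A flagged zone is exactly the case where, if AI assisted the triangulation, the §4.2 reliability decision records why the output was or was not relied on.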

D

Data Governance

§3.1 · Mandatory · Firm-wide

Written policies and procedures covering the firm's handling of private and confidential data when AI is in use. Must include secure storage (including encryption and backups), access restriction to staff who strictly need it, annual training for staff with access on AI-related privacy risks, data preparation practices (including anonymisation where appropriate), and the absolute rule against uploading private or confidential data to AI systems without express written consent and a completed risk check.

See also: Client Consent, Private and Confidential Information

Dip-Sampling

§4.2 · Mandatory for automated/high-volume AI

The practice of reviewing a randomised sample of AI outputs at regular intervals, rather than individually reviewing every output. Applies where AI is used to automate or produce a high volume of outputs. The methodology must be documented — how samples are selected, sampling frequency, escalation thresholds — and the firm remains fully accountable for every output regardless of whether each was individually reviewed. A proportionality allowance under the standard, not a transfer of responsibility.
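A documented methodology means the sample selection, frequency, and escalation thresholds are written down and reproducible. The sketch below shows one way to do that; the 10% sampling fraction, the seeding approach, and the 5% escalation threshold are illustrative assumptions, not figures from the standard:

```python
import random

def dip_sample(outputs, fraction=0.10, seed=None):
    """Select a random sample of AI outputs for review.

    Seeding makes the selection reproducible, which supports the
    requirement that the methodology be documented. The 10% fraction
    is an illustrative assumption.
    """
    rng = random.Random(seed)
    k = max(1, round(len(outputs) * fraction))
    return rng.sample(outputs, k)

def needs_escalation(review_results, error_threshold=0.05):
    """Escalate to wider review if the sampled error rate exceeds a
    documented threshold (5% here, as an assumption).

    review_results is a list of booleans: True = output passed review.
    """
    errors = sum(1 for passed in review_results if not passed)
    return errors / len(review_results) > error_threshold
```

Note that a passing sample does not discharge accountability for the unsampled outputs — the firm remains responsible for all of them.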

See also: Reliability Decision

Drawdown

Industry definition

A staged release of construction loan funds by a lender to a developer, triggered by a QS inspection confirming that works have progressed to the required stage and costs are within budget. The average UK construction drawdown takes 23 days from QS inspection to bank fund release under manual processes. BankBuild's AI-assisted drawdown verification reduces this through automated cost validation and anomaly detection.

See also: Construction Finance Monitoring

E

Erroneous Output

§3.3 · Risk category

An AI output that is factually wrong, logically inconsistent, or otherwise unreliable despite appearing plausible. Sometimes called "hallucination" in the context of large language models. Must be documented as a risk in the §3.3 risk register. The firm's mitigation of erroneous output risk is part of both the §3.2 appropriateness assessment and the per-output §4.2 reliability decision.

See also: Reliability Decision, Risk Register

Explainability Readiness

§4.4 · Mandatory · On request

The firm's ability to provide, on request, written information about: the type of AI system used; its basic ways of working and limitations; the due diligence carried out before using it; how relevant risks are identified and managed; and the decisions made about the reliability of its output. Not a standing obligation to publish everything proactively — it is an obligation to have the audit trail ready when asked. Documentation assembled after the fact does not meet the standard; the audit trail must exist at the point AI was used.

See also: Audit Trail, Transparency Register, Reliability Decision

F

Failure Mode

Standard definition · Glossary of the RICS standard

A particular way in which an AI system might fail to perform its function. Knowledge of failure modes is one of the four §2 baseline knowledge topics every RICS member using AI must understand.

See also: Baseline Knowledge & Training

I

Initial Quote Builder (IQB)

BankBuild product · Six-step wizard

BankBuild's initial assessment workflow, covering the six-step generation of a construction finance monitoring quote. The GovernanceStrip captures the §4.2 reliability decision at point of review during the assessment phase of the IQB. Produces the structured data that feeds the rest of the monitoring workflow. Not part of the standard — BankBuild-specific.

M

Material Impact

§1.2 · Key threshold

The threshold under the RICS AI standard that determines whether a use of AI triggers the full documentation requirements. The standard's test (§1.2) is whether the AI output is capable of influencing the delivery of the service, and if so the nature of the influence it exerts. Typical examples the standard gives: outputs summarising documents relied on in a report, outputs composing significant parts of an opinion, or outputs recommending what to investigate. AI used only for internal tasks that don't affect client-facing outputs — such as formatting or internal scheduling — may not meet this threshold. When in doubt, treat it as material.

See also: Material Impact Determination, Appropriateness Assessment

Material Impact Determination

§1.2 · Written record required

A written record confirming that a firm has assessed its use of AI and determined it has material impact on service delivery. Required before any other compliance documentation is meaningful. Does not need to be a long document — a single written statement by a principal surveyor, with the reasoning documented, is sufficient. Should be reviewed when AI usage changes significantly.

See also: Material Impact

N

Named Surveyor Sign-off

§4.2 · Per material AI output

The requirement that every written reliability decision on an AI output with material impact is prepared by, or under the supervision of, an appropriately qualified and named surveyor who accepts responsibility for its use. The named surveyor cannot be anonymous or a generic "the firm" — the individual must be identifiable and accountable. BankBuild captures named sign-off at the point of review via the GovernanceStrip on each AI output.

See also: Reliability Decision, Professional Judgement

P

Private and Confidential Information

Standard definition · Glossary of the RICS standard

Information that must not be unlawfully obtained, used or disclosed. Includes personal data (any information relating to an identified or identifiable natural person, under the UK GDPR) and may include commercially sensitive data. For construction finance monitoring, this covers borrower information, facility agreements, contractor details, valuations, and similar material — the standard's §3.1 data governance rules apply to all of it.

See also: Data Governance, Client Consent

Procurement Due Diligence

§4.1 · Per AI system · Required before procurement

Written due diligence required before procuring any third-party AI system with material impact. Involves requesting information in writing from the vendor, recording the information provided, and assessing it to inform procurement, use, and risk decisions. Must cover six specific information requests: environmental impact, development stakeholders, data law compliance, permissions for personal data, training data accuracy and diversity including known gaps and bias, and vendor liability. Where the vendor provides limited or no information, the resulting risks must be recorded in the risk register. BankBuild provides a Due Diligence Pack covering itself as a vendor.

See also: Risk Register

Professional Indemnity Cover (for AI use)

§4.3 · Disclose to clients

The extent to which a firm's PI insurance covers AI use, which must be disclosed to clients in the terms of engagement (§4.3) if available. Most PI policies do not yet carve out AI use specifically, but insurers are increasingly asking about AI governance at renewal. Firms that can demonstrate RICS AI compliance are in a stronger negotiating position.

See also: Client Disclosure

Professional Judgement

§4.2 · Core of reliability decisions

The judgement the responsible surveyor applies when deciding whether an AI output can reasonably be used for its intended purpose. Defined by the standard as comprising four things: knowledge, skills, experience, and professional scepticism. The §4.2 reliability decision is the documented outcome of professional judgement applied to a specific AI output.

See also: Reliability Decision, Professional Scepticism

Professional Scepticism

Standard definition · Glossary of the RICS standard

An attitude that includes a questioning mind, critically assessing evidence relied on, and being alert to conditions that may cause information provided to be misleading. Derived from the RICS Valuation Global Standards (PS 2). One of the four components of the professional judgement required under §4.2 for reliability decisions.

See also: Professional Judgement

Q

Quantity Surveyor (QS)

Professional context

A construction professional who assesses, monitors, and reports on construction project costs and progress. In construction finance, QS firms act as independent monitors for lenders — inspecting sites, validating cost claims, and recommending drawdown amounts. All RICS-regulated QS firms using AI in this service delivery role are subject to the mandatory RICS AI standard from 9 March 2026.

Quality Assurance (AI outputs)

Industry term · Related to §4.2

The process by which a firm verifies that AI outputs are accurate, appropriate, and fit for the professional purpose before they are used. May include named QS review at point of use, dip-sampling of high-volume outputs, cross-referencing against alternative data sources, or structured checklists. The quality assurance approach must be documented — the standard requires evidence that human professional judgement was applied, not merely that AI was used.

See also: Dip-Sampling, Reliability Decision

R

RAG Rating

§3.3 · Risk categorisation method

Red, Amber, Green — the categorisation method the RICS standard specifies (or an equivalent) for classifying each risk in the §3.3 risk register. Allows firms to prioritise attention and review frequency based on severity.

See also: Risk Register

Reliability Decision

§4.2 · Mandatory · Written · Per material AI output

A written decision, required every time an AI output with material impact is produced, recording the responsible surveyor's judgement about whether the output can reasonably be used for its intended purpose. Must detail five things: assumptions made; key areas of concern including the reliability of underlying datasets; the reason for each concern; whether anything could lessen each concern; and a conclusion on fitness-for-purpose. Must be prepared by or under the supervision of a named, qualified surveyor who accepts responsibility. Where the firm determines an output cannot reasonably be used, the conclusion must be communicated in writing to the client with reasoning. BankBuild captures the reliability decision at the point of review via the GovernanceStrip — not retrospectively.

See also: Professional Judgement, Named Surveyor Sign-off, Dip-Sampling

Responsible AI Use Policy

§3.2 · Written · Firm-wide · Reviewed regularly

A written policy, required of every firm using AI in service delivery, covering the responsible use of AI systems across the firm and informed by the risk register. Must cover at minimum four elements: the roles, responsibilities and liabilities of everyone involved in AI procurement and use; annual training expectations; how human control and judgement interact with AI; and guidance on identifying and mitigating AI risks. Covers both internally developed and third-party AI systems. A meeting is not a policy. An email is not a policy. The standard requires a written, documented one.

See also: AI System Register, Risk Register, Baseline Knowledge & Training

Risk Appetite

Standard definition · Glossary of the RICS standard

The level of risk that an organisation or individual is willing to accept in pursuit of its objectives. The RICS standard specifies five categories: averse, minimalist, cautious, open, and ambitious. Must be recorded for each risk in the §3.3 risk register.

See also: Risk Register

Risk Register

§3.3 · Mandatory · Reviewed quarterly

A documented list of risks associated with AI use in service delivery. The standard requires coverage of four categories of overarching risk: inherent bias, erroneous outputs, limitations in information available about the AI system and its training data, and retention or use of data inputted into the system. Each risk requires a description, likelihood rating, impact rating, mitigation, risk appetite (one of averse, minimalist, cautious, open or ambitious, per the standard's glossary), status updates, and RAG rating or similar. Reviewed and updated at least quarterly by a responsible staff member.
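The per-risk fields the standard requires can be captured in one record type with light validation on the enumerated values. The dataclass and field names below are an illustrative sketch, not a prescribed format; only the field contents and the five appetite categories come from the standard:

```python
from dataclasses import dataclass

APPETITES = {"averse", "minimalist", "cautious", "open", "ambitious"}
RAG_RATINGS = {"Red", "Amber", "Green"}

@dataclass
class RiskEntry:
    """One §3.3 risk register entry (field names illustrative)."""
    description: str    # what the risk is
    likelihood: str     # e.g. "low" / "medium" / "high"
    impact: str         # severity if the risk materialises
    mitigation: str     # what the firm does about it
    risk_appetite: str  # one of the standard's five categories
    status: str         # latest status update
    rag: str            # Red / Amber / Green (or similar)

    def __post_init__(self):
        # Reject values outside the enumerated categories.
        if self.risk_appetite not in APPETITES:
            raise ValueError(f"unknown risk appetite: {self.risk_appetite}")
        if self.rag not in RAG_RATINGS:
            raise ValueError(f"unknown RAG rating: {self.rag}")
```

Validating the appetite and RAG fields at entry time is cheap insurance that the quarterly review is always working with the standard's own vocabulary.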

See also: Bias, Erroneous Output, Procurement Due Diligence, Risk Appetite

S

Shadow AI

Compliance gap · Related to §3.2

AI tools used by staff in service delivery without formal firm approval, procurement, or registration. Common examples: ChatGPT on personal accounts, browser extensions, mobile apps used on personal devices, Copilot features enabled by default in Microsoft 365. Shadow AI creates the biggest §3.2 AI System Register gap in most firms, because the firm doesn't know what's in use. Also creates §3.1 data governance exposure if client data is being uploaded without the firm's awareness.

See also: AI System Register, Data Governance

Site Visit Report

BankBuild product

BankBuild's multi-step inspection wizard for ongoing construction monitoring visits. Builds on previous reports via copy-forward. Where AI features are active in the platform, each interaction is logged automatically, with QS sign-off creating the required reliability decision. PDF export includes the auto-generated §4.3 client disclosure appendix.

T

Transparency Register (BankBuild AI Transparency Register)

BankBuild product · Not standard terminology

BankBuild's mechanism for logging every AI interaction automatically as part of the monitoring workflow — prompt, response, timestamp, system detail, named surveyor decision. The platform surface that makes §4.4 explainability readiness a byproduct of workflow rather than a retrospective reconstruction task. Not the standard's terminology — the RICS standard itself does not use this phrase — but the mechanism that satisfies the §4.4 audit trail obligation.

See also: Audit Trail, Explainability Readiness

Want to see RICS AI compliance built into the workflow?

BankBuild generates your AI system register, reliability decisions, and client disclosure automatically — as a byproduct of normal monitoring inspections.


BankBuild is an AI-native construction finance monitoring platform connecting quantity surveyors, lenders, developers, and contractors through a single data layer. Built for full compliance with the RICS Professional Standard on Responsible Use of Artificial Intelligence in Surveying Practice (1st edition, ISBN 978 1 78321 555 3), effective 9 March 2026. Headquartered in the UK.