Temporary Internal AI Usage Policy
Effective immediately
Status: Interim policy pending formal approval and rollout of a managed company AI service
1. Purpose
The purpose of this
temporary policy is to provide clear guidance on the use of AI tools within the
business, particularly where company data, financial information, customer
information, commercial data, or operational information may be involved.
The business supports the
responsible use of AI where it improves productivity, technical capability,
reporting, analysis, documentation, or decision support. AI tools must,
however, be used in a controlled way to prevent accidental disclosure,
retention, misuse, or external processing of sensitive company information.
This policy is not
intended to discourage AI use. It is intended to ensure that AI is used safely,
appropriately, and under the correct business controls.
2. Scope
This policy applies to all
employees, contractors, temporary staff, and third parties who use AI tools in
connection with company work. This includes, but is not limited to:
• ChatGPT
• Claude
• Microsoft Copilot
• Gemini
• Perplexity
• AI coding assistants
• AI spreadsheet or data-analysis tools
• Any other public, personal, or web-based AI service
3. Immediate Position
Until a company-approved,
business-managed AI service is formally in place, staff must not upload, copy,
paste, or process sensitive company data in personal or consumer AI accounts.
This includes personally
owned accounts, free accounts, individual paid subscriptions, or AI services
not reviewed and approved by the business.
The main concern is not
the use of AI itself, but the lack of company control over:
• Whether data is used to train or improve AI models.
• Whether prompts, uploaded files, and outputs are logged or retained.
• Where the data is processed geographically.
• Whether the business has visibility, audit, administration, or access control.
• Whether sensitive information remains in chat history, exports, screenshots, or unmanaged locations.
4. Prohibited Use
The following must not be
entered into personal, public, or unmanaged AI tools:
• Financial spreadsheets or financial reports.
• Management accounts, budgets, forecasts, pricing, margins, or costings.
• Payroll, HR, disciplinary, or employee personal information.
• Customer, supplier, or contract data.
• Commercially sensitive information.
• Confidential board, management, or strategic information.
• Production, operational, or process data that could expose business capability or risk.
• Passwords, API keys, tokens, connection strings, certificates, or credentials.
• Personal data relating to employees, customers, suppliers, or third parties.
• Internal documents marked confidential or not intended for external sharing.
• Any data that would cause concern if it were retained, disclosed, or processed outside company control.
5. Permitted Use
AI tools may be used for
low-risk tasks where no sensitive or confidential company data is included.
Examples of acceptable use include:
• Drafting general wording, emails, or summaries where no sensitive data is included.
• Creating generic templates, checklists, or document structures.
• Rewording non-confidential text.
• Asking general technical, coding, or troubleshooting questions.
• Generating sample code using dummy data.
• Creating example formulas, scripts, or documentation using fictional information.
• Learning, research, or general productivity support.
Where possible, staff
should use anonymised, fictional, or heavily redacted examples rather than real
company data.
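As an illustration of the dummy-data approach above, a question about spreadsheet logic or code can be prepared using entirely fictional figures before anything is pasted into an AI tool. The following sketch is purely illustrative; every name and value in it is invented and mirrors only the shape of a real report, not its contents:

```python
# Fictional data only -- no real customer, supplier, or financial figures.
# Build a small dummy dataset that matches the *structure* of a real report,
# then paste this (never the real file) when asking an AI tool about formulas.
import csv
import io
import random

random.seed(1)  # reproducible fake values
fieldnames = ["customer", "region", "order_value"]
dummy_rows = [
    {
        "customer": f"Customer {i}",            # invented names
        "region": random.choice(["North", "South"]),
        "order_value": round(random.uniform(100, 999), 2),  # invented figures
    }
    for i in range(1, 6)
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(dummy_rows)
print(buffer.getvalue())  # safe to include in a prompt; contains no company data
```

The point of the sketch is that the AI tool only ever sees the column layout and fictional rows, which is enough for it to suggest a formula or script that can then be applied to the real data inside company systems.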
6. Use of Company Data
Before using any company
data in an AI tool, staff must consider:
• Is the data confidential, financial, personal, commercial, or operationally sensitive?
• Is the AI account company-approved and business-managed?
• Has model training on company content been confirmed as disabled?
• Does the business have administration, audit, and access control over the account?
• Is there a legitimate business reason for using the data in the AI tool?
If the answer to any of
these is unclear, the data must not be used.
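The checks above amount to a simple gate: sensitive data may only pass when every control question has a clear "yes". One way to picture that logic, using invented names purely for illustration:

```python
# Hypothetical pre-use checklist mirroring Section 6; all names are illustrative.
def may_use_in_ai_tool(is_sensitive: bool,
                       account_company_managed: bool,
                       training_disabled_confirmed: bool,
                       business_has_admin_control: bool,
                       has_business_reason: bool) -> bool:
    """Return True only when every Section 6 check clearly passes.

    Any unclear answer should be passed as False: if in doubt,
    the data must not be used.
    """
    if is_sensitive and not (account_company_managed
                             and training_disabled_confirmed
                             and business_has_admin_control):
        return False
    return has_business_reason

# Example: sensitive data in an unmanaged personal account -> not permitted.
print(may_use_in_ai_tool(True, False, False, False, True))  # False
```

The design point is that uncertainty defaults to refusal: an unclear answer is treated the same as a "no".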
7. Personal AI Accounts
Personal AI accounts must
not be used for sensitive company work. This includes accounts registered using
personal email addresses, individual subscriptions, or unmanaged business use
where the company has no administrative control.
The use of a paid personal
account does not automatically make the tool suitable for company data.
8. Business-Managed AI Services
The preferred approach is
to use a company-approved AI service with appropriate protections, including:
• Central billing and administration.
• Business account ownership.
• Access control and user management.
• SSO or domain-based controls where available.
• Confirmation that company content is not used for model training by default.
• Appropriate retention, privacy, and compliance controls.
• The ability to remove users when they leave or change role.
The business is reviewing
the use of a controlled Claude Team plan for an initial group of users as a
practical way to support AI adoption safely.
9. User Responsibilities
All users are responsible
for ensuring that they:
• Do not enter sensitive company data into unmanaged AI tools.
• Use dummy, anonymised, or redacted data wherever possible.
• Check AI-generated outputs for accuracy before relying on them.
• Do not assume AI output is correct, complete, or approved.
• Do not use AI to make final decisions without appropriate human review.
• Do not store sensitive AI outputs in unmanaged locations.
• Ask IT or management for guidance if unsure.
10. Accuracy and Human Review
AI-generated content must
be treated as a draft or support tool only. Users must review AI outputs
carefully before using them, especially where the output relates to:
• Finance.
• Legal or compliance matters.
• HR.
• Customer communication.
• Production or operational processes.
• Technical configuration.
• Security.
• Business decisions.
AI must not be treated as
an authoritative source without validation.
11. Exceptions
Any request to use
sensitive company data in an AI tool must be approved in advance by the
appropriate manager and IT.
Approval will depend on
whether the tool is business-managed, whether no-training controls are
confirmed, and whether the data use is necessary and proportionate.
12. Incidents or Concerns
If company data has
already been entered into a personal or unmanaged AI tool, the user must report
this to IT or their manager as soon as possible.
This is to allow the
business to assess the risk, understand what data was involved, and take any
necessary action.
The purpose of reporting
is to manage risk, not to discourage responsible disclosure.
13. Temporary Status and Review
This is a temporary
internal policy and will remain in place until a formal AI usage policy is
approved.
The policy should be
reviewed once a company-approved AI service has been selected and implemented.
At that point, the business should issue clear guidance on:
• Approved AI tools.
• Approved users.
• What types of data can be used.
• What data remains prohibited.
• Retention and audit controls.
• Training requirements.
• Ongoing governance.
Summary Rule
Until further notice: do not upload, paste, or process sensitive company data in personal or unmanaged AI tools. AI use is supported, but company data may only be used where the tool is company-approved, business-managed, and confirmed as having appropriate no-training, privacy, retention, and access controls.