AI in corporate governance: how UK FTSE boards and company secretarial teams are adopting it

Zaryna Pirverdi


Key insights

  • Human judgement is essential. AI is unlikely to replace governance professionals. Its adoption reinforces the importance of judgement, discretion and an understanding of board dynamics.
  • Governance teams shape responsible adoption. As AI use expands, governance teams play a central role in ensuring adoption is responsible, well controlled and aligned with regulatory and board expectations.
  • Company secretaries drive structure and capability. For FTSE organisations, AI is moving from informal use into a structured part of the governance toolkit, requiring clearer policies, defined oversight and the right capability within governance teams.

Artificial intelligence (AI) is gradually becoming part of the corporate governance landscape in the UK’s largest listed companies. While adoption remains measured, governance and company secretarial teams across the FTSE 100 and FTSE 250 are starting to explore how AI can support their work, particularly as board agendas become more complex and reporting expectations continue to increase. In turn, it’s shaping demand for company secretarial hiring and recruitment as teams build out governance capability.

Meanwhile, boards are also placing greater focus on how AI is governed, not just how it is used, and are increasing expectations on governance teams to support oversight, risk management and informed decision-making.

In recent years, governance leaders within FTSE companies have begun experimenting with AI in controlled and practical ways. Adoption to date has largely focused on supporting administrative and information-heavy activities, rather than core decision-making.

The Chartered Governance Institute UK & Ireland has observed that AI is already being used informally in some organisations without fully defined policies in place, highlighting the need for clearer approaches to governing AI use in organisations. This suggests that the technology is entering day-to-day governance activity ahead of the formal governance frameworks required to support it.

Many company secretarial teams are beginning to use AI to streamline routine tasks such as formatting board papers, proofreading, summarising content and producing visual or executive summaries. These activities are often time‑consuming but relatively low risk, making them a natural starting point for adoption.

This often involves using generative AI tools to standardise formatting, improve clarity and consistency across board packs and reduce manual reworking. Some teams are also using AI to generate first-draft minutes or transcripts, which are then reviewed and refined by governance professionals where judgement and nuance are required.

Beyond board meetings, governance teams are also beginning to explore AI for broader administrative automation. Use cases include organising statutory filings, managing document libraries, formatting formal correspondence and supporting routine compliance workflows.

Some teams are experimenting with AI to assist in reviewing accounts or long‑form reports for consistency, completeness and alignment with prior disclosures, helping governance professionals focus on areas requiring judgement rather than manual checking.

Across FTSE companies, organisations are increasingly referencing AI in corporate reporting, principal risk disclosures and strategy discussions. Governance teams are using AI to support horizon scanning by summarising regulatory updates, monitoring developments such as the UK Corporate Governance Code and the European Union Artificial Intelligence Act (EU AI Act), and identifying emerging risks more efficiently.

This helps teams to support proactive oversight, rather than relying solely on periodic or manual reviews of regulatory change.

AI’s ability to generate transcripts or first‑draft minutes has attracted growing interest across FTSE governance teams, especially for lengthy or complex meetings.

In most cases, AI is used to generate an initial draft, which is then reviewed, refined and finalised by governance professionals. Concerns around nuance, confidentiality and judgement mean that AI is not seen as a replacement for experienced company secretarial professionals, particularly when capturing tone, challenge and decision-making in sensitive board discussions.

Despite growing interest, most FTSE governance teams are still taking a measured and incremental approach.

Boards routinely discuss highly sensitive information, including privileged legal advice and strategic decision-making. Governance professionals remain cautious about introducing tools that could compromise confidentiality or misinterpret board discussions. Many organisations prioritise the use of approved enterprise AI tools, such as internally governed platforms, while restricting access to public or unvetted tools.

Many organisations have yet to establish clear policies governing the use of AI within governance functions.

Professional bodies have noted that informal experimentation is often outpacing the development of formal oversight structures. This creates uncertainty for governance teams and highlights the need for clearer governance frameworks as adoption increases, with leading organisations introducing internal AI usage policies, defined approval processes and clear guidance on appropriate use cases within governance teams.

AI is unlikely to replace governance professionals. Instead, its adoption reinforces the importance of human judgement, discretion and a deep understanding of board dynamics.

As AI use continues to expand, governance teams will play a central role in ensuring that adoption is responsible, well controlled and aligned with regulatory and board expectations.

For company secretaries in FTSE organisations, the direction of travel is becoming clearer: AI is moving from informal use into a structured part of the governance toolkit, requiring clearer policies, defined oversight and the right capability within governance teams.

Frequently asked questions

This section provides clear, concise answers to the most common queries about hiring for AI security or AI governance.

Do organisations need to hire separately for AI security and AI governance?

Not always. Many organisations can extend existing security and privacy roles initially, but gaps emerge as AI systems scale and become business‑critical. The decision depends on maturity, risk exposure and how AI is deployed.

Can existing privacy teams realistically own AI governance?

In many cases, yes, particularly in early stages. However, AI governance introduces technical and operational challenges that often require additional expertise as programmes mature.

When does AI security become a dedicated requirement?

AI security becomes more critical when organisations deploy customer‑facing or high‑risk systems, rely on third‑party models, or use generative AI at scale. At that point, traditional cyber controls may no longer be sufficient.

What does an AI governance officer add that existing teams cannot?

The role brings clear ownership, accountability and oversight across the AI lifecycle. It helps organisations move from fragmented responsibility to structured governance aligned with regulatory expectations.
