Falcon Windsor, a corporate reporting advisory firm, and Insig AI, a technology company providing advanced data infrastructure and AI-driven ESG research tools, have published a report setting out a practical model for the responsible use of generative AI in corporate reporting.
Based on engagement with 40 FTSE companies and analysis of all FTSE 350 annual reports published between 2020 and 2024, the report reveals growing use of generative AI among UK companies, often without training, policy or oversight.
Investors regard AI adoption as inevitable and welcome the efficiencies it may bring, but are increasingly concerned about the truthfulness of corporate reporting and what AI means for its authorship.

Practitioners and investors agree that a report must remain a direct expression of management’s thinking; without guidance, the use of AI in reporting risks undermining the accuracy, reliability and accountability that underpin market confidence.
With momentum behind AI adoption growing, only a short window of opportunity remains to prepare for and mitigate the risks it poses to the financial system.
Rather than leaving this to chance, the report offers clear recommendations on how companies can build AI into their reporting processes by design. At its heart is a simple guideline: treat generative AI like a precocious intern: convenient, fast and capable, but inexperienced, prone to overconfidence, and never to be left unsupervised.
AI use is growing, but training is lacking
The research shows that most of the companies interviewed were exploring generative AI, ranging from ad hoc use of chatbots and trials of Microsoft Copilot to more carefully planned rollouts.
In general, only larger companies were implementing formal projects, particularly within the finance team. There was a clear lack of formal training on how to use generative AI effectively.
The number of FTSE 350 companies mentioning AI in their annual reports more than doubled between 2021 and 2024, while the average number of mentions increased fivefold over the same period. In 2024, 68% of FTSE 350 companies mentioned AI in their annual report (including 76% of the FTSE 100).
Yet not a single report mentions generative AI in connection with the reporting process itself. This suggests there is a window of opportunity to develop practical models for its use.
Investors want accuracy and human authorship
All focus groups in the study recognized AI’s appeal as a way to reduce growing workloads driven by changing disclosure requirements. The average length of an FTSE annual report has increased substantially over the past decade.
Incoming changes to UK and EU sustainability reporting requirements suggest this trend will continue, and that generative AI could make the reporting process more productive and efficient.
But investors also fear the risks. Even assuming an “enterprise” version of AI is used and company data remains ring-fenced, the interviews revealed the following concerns:
- All output will sound the same, whichever company uses it.
- It may seem as if leadership “didn’t bother” with the report.
- Generative AI makes it easier to “game the system” by including tick-box buzzwords and phrases, especially as the initial analysis of reports increasingly comes from machines.
- Reports may contain too little genuine insight, given that generative AI is a “black box.”
Investors are open to using generative AI to process large amounts of information, but are clear that the voice of a report must remain human. They expect opinions, judgments and forward-looking narratives to come from management and the board, not from a machine.
The concern is that outsourcing these elements risks weakening the very trust that reports are intended to build. Investors also want companies to state clearly how AI is used in their reporting processes.
A narrow window to get it right
The research suggests that the use of generative AI in producing company reports is still low, indicating that most companies are at an early stage of adoption.
This gives businesses an important opportunity: to embed AI thoughtfully into the reporting process, with proper checks and balances in place.
Acting now allows organizations to capture the efficiency benefits while ensuring that reporting continues to meet its fundamental objective: building trust through accurate, clear and authentic communication.
There is scope for guidance from regulators
Although study participants do not expect or want more regulation, they believe clear signals from the Financial Reporting Council or the FCA would be welcome.
Such guidance need not change the existing obligations of companies or directors, but would serve as a gentle reminder that generative AI can significantly affect how those obligations are discharged.
As more generative AI is built into reporting tools, that kind of reassurance would help businesses balance innovation and accountability.
Recommendations
The study likens generative AI to a “precocious intern”: bright, capable and enthusiastic, but inexperienced and prone to overconfidence.
Treating AI like such an intern means checking its work, giving it clear boundaries, and never allowing it to work unsupervised.
According to the report, everything about how companies use generative AI in reporting should flow from this idea.
To make the most of this new army of precocious interns, the report recommends:
Introducing formal training programmes, with modules tailored to reporting and the handling of sensitive information: everyone involved in corporate reporting should take part, participation should be tracked, and people should learn how to write effective prompts, while never forgetting that generative AI cannot be relied upon for true answers, so users must become better, more critical readers of its output.
The report also recommends managing disclosure of AI use:
In the short term, this could be either a statement that generative AI is not being used, or a general statement in the governance report describing how the company uses it in its reporting. In the longer term, companies should explain specific uses, including negative-use statements in sections containing forward-looking information and opinions.
Generative AI can usefully support administrative and process-intensive tasks: summarizing meetings, cleaning up copy, and drafting routine disclosures.
It is also useful for research, creating visuals, and early-stage editing. However, it should not be used to write opinion-driven sections, strategic messaging, CEO letters, or commentary on future prospects.
The report is clear that AI should never be relied upon for final sign-off, and should not be allowed to handle sensitive data outside secure systems.
Companies need to ensure training is in place, establish simple governance guidelines, and clearly disclose whether and how AI has been used, in order to maintain both the integrity of the report and the trust it is intended to build.