Recently, the pros and cons of Artificial Intelligence (AI) have been the topic of much discussion in the media. Many have claimed it represents a quantum leap in information technology, whilst others highlight the risks of autonomous machine learning and decision making.

With regulatory requirements for company-level ESG (Environmental, Social and Governance) assessment and reporting already substantial, and more on the way, the collection of large volumes of data, its analysis and interpretation, followed by compliance reporting, may seem an ideal task for AI.

AI can improve the efficiency and effectiveness of a company’s governance by helping to implement and track ESG initiatives and to measure progress accurately. The technology can monitor compliance with regulation or with stated company ESG objectives, and report and potentially mitigate risks from issues as they arise. This can improve ESG transparency and accountability both internally and externally.

However, integrating AI into existing governance frameworks is challenging on both a technical and a practical level. Most companies lack staff with a deep understanding of both AI and governance issues, which can lead to AI decision-making without expert human oversight and sign-off. This challenges the concept of ‘due diligence’, an indispensable prerequisite for demonstrating compliance with ESG regulations. Clearly, a company’s management cannot demonstrate ‘due diligence’ if key decisions are taken by a machine.

Usable AI has finally become mainstream, with internet-based tools such as ChatGPT[1] freely available. ChatGPT is a generative AI able to produce original text which may be ‘fiction’. A US lawyer has admitted to using ChatGPT to write a case research brief which was found to include six ‘fictitious’ citations, which the software itself assured the lawyer were real.

Governments around the world are coming to grips with the risks of AI and developing guidance material for their own agencies as well as for others. In Australia, the Government’s Artificial Intelligence Ethics Framework sets out a series of AI Ethics Principles, whilst in the USA a comprehensive Artificial Intelligence Risk Management Framework (AI RMF) has been published by the US Department of Commerce’s National Institute of Standards and Technology (NIST). Common to both is the principle that AI systems should benefit individuals, society and the environment. The goal of the AI RMF is to “offer a resource to the organisations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI”.

The lesson for all is that AI is here to stay, but until Asimov’s[2] Three Laws of Robotics are built into all forms of AI, use it with caution.


[1] ChatGPT is built on several state-of-the-art technologies, including Natural Language Processing (NLP), Machine Learning, and Deep Learning. These technologies are used to create the model’s deep neural networks and enable it to learn from and generate text data.

[2] Three Laws of Robotics – Wikipedia.

The National Retail Association continues to expand its activities in the ESG space to assist members in understanding and navigating the ESG landscape. For more information, please contact Dr Geoffrey Annison.
