
Beyond the Buzz: Unmasking the Risks of Corporate AI Implementation

I have been running global research on the usage of AI in the corporate world. The first poll discussed 'the release of formal policy around the use of ML/AI in the organization'. The resulting analysis is available at .

In part 2, I am focusing on the risks associated with the increasing use (official/non-official) of AI tools (like ChatGPT and others).

Here are some relevant findings from elsewhere before we go deep into my analysis:

1. Forbes puts the following five risks at the top ("The Top Five Real Risks Of AI to Your Business"):

   a. Accuracy and Accountability
   b. Skills Gap
   c. Intellectual Property and Legal Risks
   d. Costs
   e. The End of Humanity

2. A LinkedIn article ("The Risks and Considerations of Using AI in the Corporate World") lists the following:

   a. Legal and Compliance Risks
   b. User Monitoring and Content Moderation
   c. The Challenge of Watermarking
   d. Financial Services Compliance
   e. Internal Use and Recordkeeping
   f. Adoption Trends: Public vs. Private AI Models
   g. Supervisory Review and Oversight
   h. Litigation and e-Discovery Concerns
   i. The Regulatory Horizon

3. Professor Richard Bolden (University of the West of England) asked ChatGPT-4 to identify the top implications of AI for leadership, and it returned the following:

   a. Job losses
   b. Potential bias
   c. Loss of control
   d. An AI arms race

4. The World Economic Forum, in its Global Risks Report 2024, lists 'AI-generated misinformation and disinformation' as the number 2 risk.


Some other risks anticipated with the use of AI/ML are:

Inherent error in GPTs: The output of GPTs will be as good (or as bad) as the amount and type of data and training provided to them, as well as the quality of the input queries.

Data Privacy and Security Risks: AI systems often rely on vast amounts of data to operate effectively. However, this reliance raises concerns about data privacy and security. Mishandling of sensitive data or data breaches could lead to legal and reputational consequences for corporations.

Bias and Fairness Concerns: AI algorithms can inherit biases present in the data used for training, leading to discriminatory outcomes. Corporate AI systems may inadvertently perpetuate or exacerbate existing biases, resulting in unfair treatment of individuals or groups, or in incorrect decisions.

Algorithmic Transparency and Accountability: The opaque nature of some AI algorithms makes it difficult to understand their decision-making processes, leading to concerns about accountability and transparency. In scenarios where AI systems make critical decisions, such as in hiring or lending, the lack of transparency can undermine trust and raise ethical concerns.

Dependency on External Providers: Many corporations rely on external vendors or third-party providers for AI solutions. While outsourcing AI capabilities can offer efficiency and expertise, it also introduces dependencies and risks related to service disruptions, vendor lock-in, and intellectual property concerns.

Robustness and Reliability: AI systems are susceptible to errors, biases, and adversarial attacks, which can undermine their reliability and performance. Corporations must invest in robust testing, validation, and monitoring processes to ensure the accuracy and resilience of AI systems in real-world scenarios.

Overreliance on AI Decisions: As AI systems become more integrated into corporate decision-making processes, there is a risk of overreliance on automated decisions without human oversight or intervention. This can lead to suboptimal outcomes or missed opportunities, particularly in contexts where human judgment and expertise are essential.

Ethical and Social Implications: The widespread deployment of AI technologies in corporate settings raises broader ethical and social implications, including concerns about job quality, economic inequality, and the concentration of power in the hands of AI developers and corporations. Could someone challenge a corporate decision (hiring, firing, or any other) taken by, or based on, AI? Could this be the next big case in court?


Hence the risks of AI usage cannot be ruled out. Here are the results and analysis from my research:



In recent years, the rapid advancement and proliferation of artificial intelligence (AI) tools have introduced both opportunities and challenges for businesses. Among these tools, chatbots like ChatGPT have gained significant traction, impacting various aspects of operations including risk management. This analysis report delves into the findings of my research conducted through a LinkedIn poll aimed at gauging professionals' awareness and actions regarding risk reassessment in light of the increasing use of AI tools.

Poll Overview: The poll posed a straightforward question: "Have you re-assessed your risks recently in light of the increasing use (official/non-official) of AI tools (like ChatGPT and others)?" Respondents were provided with four options:

No, we do not allow the use of AI.

No, this poll acts as a reminder.

No, but will do in the next review.

Yes, we have already done this.

Key Findings: Based on the responses gathered from the poll, the following insights emerge:

1. Limited Prohibition of AI Usage: Only 11% of respondents indicated that their organizations do not allow the use of AI. This suggests that a vast majority of businesses are open to leveraging AI technologies in some capacity, whether officially sanctioned or not. This aligns with one of the findings from part 1 of this research, i.e. 48% of organisations had not yet issued a formal policy on the usage of ML/AI in their businesses.

2. Awareness Triggered by Poll: A significant proportion (56%) of respondents admitted that they have not recently reassessed their risks but view the poll as a reminder to consider the implications of AI tools. This suggests that while awareness of the need for risk reassessment exists, proactive action may be lacking among a considerable portion of professionals. This in itself is a positive outcome: the research has succeeded in raising awareness.

3. Future Intentions for Reassessment: Another 11% of respondents expressed their intention to reassess risks related to AI tools in the next review cycle. This indicates a forward-looking approach, with organizations planning to incorporate AI risk assessment into their future strategies and decision-making processes.

4. Current Reassessment Efforts: Approximately 22% of respondents claimed that their organizations have already undertaken risk reassessment in light of AI tool usage. This indicates a proactive stance adopted by some businesses, recognizing the importance of aligning risk management practices with evolving technological landscapes. That only around a fifth have done so is, however, alarming.
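For readers who want to check the arithmetic, the percentage breakdown above can be recomputed from raw response tallies. The counts below are illustrative assumptions (the actual number of respondents was not published); they are chosen only so that the rounded percentages match the poll results:

```python
from collections import Counter

# Hypothetical raw tallies (illustrative only; actual counts were not published).
responses = (
    ["No, we do not allow the use of AI."] * 2
    + ["No, this poll acts as a reminder."] * 10
    + ["No, but will do in the next review."] * 2
    + ["Yes, we have already done this."] * 4
)

counts = Counter(responses)
total = len(responses)

# Rounded percentage share for each poll option.
breakdown = {option: round(100 * n / total) for option, n in counts.items()}

for option, pct in breakdown.items():
    print(f"{pct:>3}%  {option}")
```

With these assumed tallies the shares come out to 11%, 56%, 11%, and 22%, matching the poll figures reported above.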

Implications and Recommendations:

1.       Enhanced Awareness Campaigns: Given the significant proportion of respondents viewing the poll as a reminder, organizations should consider launching targeted awareness campaigns to educate stakeholders about the implications of AI tool usage on risk management.

2.       Integration into Regular Review Processes: Organizations planning to reassess risks in the next review cycle should ensure that AI-related risks are integrated into existing risk assessment frameworks. This ensures that potential risks associated with AI adoption are systematically identified, evaluated, and mitigated.

3.       Benchmarking Against Industry Good Practices: Businesses that have already undertaken risk reassessment should benchmark their approaches against industry best practices. This enables them to refine their strategies and ensure that they are adequately prepared to address emerging risks associated with AI technologies.

Conclusion: The findings highlight varying levels of awareness and preparedness among professionals regarding the need to reassess risks in light of AI tool usage. While some organizations have already taken proactive steps in this regard, others view the poll as a timely reminder to incorporate AI-related risks into their risk management practices.

Moving forward, concerted efforts are needed to raise awareness, integrate AI risk assessment into regular review processes, and benchmark against industry standards to effectively navigate the evolving technological landscape.



As always I look forward to your views.
