One allegedly biased screening algorithm dragged global software business Workday into a high-profile discrimination lawsuit.
Lawyers stood up in court on several different occasions and confidently cited cases that quite simply didn’t exist.
Despite a glowing reference from a previous landlord, a prospective tenant’s application was automatically denied by SafeRent’s screening software, which deemed her “score” too low to proceed to the next stage.
These are all examples of what happens when AI is misused. They span very different industries and circumstances, but they all end in devastating consequences. And that’s just the tip of the iceberg.
AI is an incredible tool, and even though it feels like we’ve all been discussing it for a lifetime, it’s still relatively new. It’s tempting to rely on it; to let it do our thinking for us; to outsource our creativity and our imagination to it. After all, it can seemingly do anything… right?!
But AI is only effective with human oversight. Any short-term pay-off from using it irresponsibly simply isn’t worth what it will cost you in the long run. L&D and HR aren’t exempt from the negative implications of AI misuse; far from it.
You can’t afford to ignore these risks, so let’s explore exactly what they look like in practice.
Irresponsible AI use can quickly widen from a small misstep into a financial black hole.
Financial risks might not be the first thing that comes to mind when you think of AI misuse, but there are a few ways they can rear their head.
The first is wasted investment. Companies are keen to jump on the AI bandwagon without doing the work to actually make it… well, work. Deploying AI without a clear framework for governance leads to systems that simply don’t perform as they’re supposed to.
In practice, this looks like companies spending millions building models only to find that the data they were trained on was questionable. Then they pull the plug, watching the money they spent developing the AI software go straight down the drain.
But that’s not the only financial risk.
As we detailed at the start of this blog, AI misuse can lead straight to legal complications – and one thing about legal proceedings is they can be quite pricey.
Just look at the high-profile case of Anthropic, the AI company that agreed to pay $1.5 billion to settle a copyright lawsuit after it emerged that its LLM had been trained on around 465,000 pirated books, without any permission from the authors.
If Anthropic had done their due diligence, they could have avoided not just the financial costs, but the reputational ones as well. Speaking of which…
Trust is one of the most fragile – and most valuable – assets that a business holds, and AI misuse can erode that trust fast.
Customers are justifiably becoming more mindful of how their data is used. They’re more sceptical of systems they can’t understand; more preoccupied with camouflaging themselves online so that data brokers can’t sneak into every aspect of their lives. And who can blame them?
AI is another big piece of this. A single scandal can undo years of brand-building… or kill your organisation before it’s even had a chance to get off the ground.
Just look at the example of Clearview AI. The company built a massive database comprising billions of images they’d scraped from social media – without the consent of the people actually in those photos. Along with the reputational fallout, they also faced regulatory fines under the GDPR in several EU countries (e.g. €30.5 million by the Dutch regulator) for collecting and processing biometric data unlawfully.
The reputational cost doesn’t just sit outside the business. Employees want to feel proud of the organisations they work for. If your staff feel that your organisation is deploying AI irresponsibly, both morale and engagement take a hit.
We’re well acquainted with the importance of compliance here at Thrive – it’s the starting point for most organisations when it comes to workplace learning, and with good reason.
AI presents a whole new host of compliance challenges. Governments around the world are moving to regulate it as quickly as possible, scrambling to keep pace before everything gets out of hand. From the EU’s AI Act to frameworks emerging in the UK and US, the message is clear: organisations must prove their systems are ethical.
In the AI age, compliance is a baseline expectation for businesses, and the cost of failing to comply is too significant to ignore.
Regulators are also starting to demand more than just technical compliance; they expect demonstrable accountability. That means making sure decisions made around AI are transparent, and issues are tackled before they spiral out of control.
Companies that don’t prepare for this new regulatory landscape will find themselves left behind.
This might seem a little ironic, given that AI and innovation seem to go hand-in-hand. But when used irresponsibly, AI slows the very innovation it promises to accelerate.
When people don’t trust an AI system, the fallout tends to eat up more time and money than any gains ever could.
Take recent findings from Ipsos UK, reported in Diginomica. They show that lack of trust among employees is a major reason AI tools stall in organisations. In other words, if users aren’t confident an AI will work fairly and correctly, they just won’t use it.
In the UK, companies pushing ahead with AI without proper skills or oversight are starting to feel the pinch. A TechUK survey revealed that many face uncertainty around return on investment (ROI) and high costs, with lack of expertise cited among the biggest barriers.
All of this adds up: when AI experimentation lacks an ethical foundation, progress naturally stalls.
But when people believe in the technology and feel safe using it, AI starts shifting from a risky experiment to a genuine asset.
How can L&D support the responsible use of AI, so that it’s baked into every part of the organisation?
Most AI failures come down to one thing: misunderstanding. When teams don’t grasp what AI can and can’t do, they miss critical warning signs.
L&D can change that with tailored training that demystifies AI and shows what responsible use looks like in practice. Case studies like Anthropic’s $1.5 billion copyright settlement make those risks tangible, and as we explored in our blog The role of joy in learning design, storytelling is one of the most effective ways to bring compliance content to life.
Meanwhile, critical thinking programmes teach employees to interrogate data rather than blindly accepting it. In a world where AI can make even the most confident-sounding mistake, this skill is non-negotiable.
AI laws are tightening fast. As we detailed in our list of risks, non-compliance carries consequences that range from financial fallout to outright bans.
L&D can help teams stay ahead by embedding compliance into everyday learning. From role-specific modules to audit simulations, they can ensure employees understand their responsibilities and keep accurate records regulators will expect to see.
This proactive approach saves organisations from panic… not to mention the expense of scrambling to fix problems after the fact.
Biased algorithms harm real people, from job candidates to misdiagnosed patients.
In the workplace, L&D can tackle this head-on with bias recognition training and ethics workshops that encourage teams to think about the human impact of their work.
The result is fairer AI systems and a workforce that understands why doing the right thing matters.
L&D can prevent AI chaos and collapse by teaching these skills consistently: AI literacy, critical thinking, compliance know-how and bias awareness.
This shifts AI from a risky experiment to a powerful driver of progress.
When it comes to AI, silence is far from golden. Keeping AI plans under wraps only fuels uncertainty, which in turn fuels confusion. People want to know how these systems work and, more importantly, why they’re being used.
L&D can turn that anxiety into understanding by helping teams explain AI in plain, human language. Why not take a page out of Thrive’s book? Last year, we held a company-wide training day designed to help every single person in the organisation get to grips with AI. This covered everything from use cases, to using AI responsibly, to understanding some of the more complex aspects of AI. We came away from it feeling more confident – and less overwhelmed!
But this can’t be a one-off workshop. Training around AI should mirror the fact that the technology is constantly evolving. Keep it consistent… which brings us to our next point.
AI doesn’t sit still, and your training shouldn’t either. A one-off workshop might build some excitement, but it won’t prepare your teams for the constantly shifting AI landscape.
The key is continuous, bite-sized learning that moves at the same pace as the technology. Microlearning updates keep everyone sharp without overwhelming them, while peer communities give employees a space to swap experiences and solve problems together.
AI is already reshaping industries, but the cost of irresponsibility is steep. Businesses that treat responsibility as an afterthought will face risks far greater than any reward.
Those that take it seriously will not only avoid these pitfalls, but also unlock AI’s full potential.
The question isn’t whether you can afford to use AI responsibly. It’s whether you can afford not to.
Looking for more insights? Check out our LinkedIn newsletter: AI helped us do it – or if you’d like to see how Thrive’s responsible AI features can benefit your organisation, book a demo today.
Explore what impact Thrive could make for your team and your learners today.