Is AI an ethical minefield for businesses?

The UK AI market is now worth over £16.9 billion, and the number of companies specialising in this technology has increased by almost 700% in the last ten years. 

One in six UK organisations - predominantly from the IT, telecoms, insurance and legal sectors - have embraced AI, with the majority of solutions focused on data management and analysis, natural language processing and generation, machine learning, and image processing and generation. 

Some are hailing AI as the next (and biggest) digital revolution. And why not? The existing and potential advantages to business seem innumerable. AI saves time. It can complete complex tasks with no interruptions, breaks or downtime. It can operate round the clock, augmenting the capabilities of people across numerous industries, and even facilitates smart, fast decision-making. 

The burning question… Will AI replace people? 

The reality is, developments in tech have been replacing people for years, with recent and emerging developments in AI forming part of this story. 

There has been a rapid rise in job adverts that mention ‘artificial intelligence’ in their descriptions. At the time of going to press, there were 408 such ads on the UK job site Indeed, so one could argue that developments in this technology are creating more jobs in data analytics, business intelligence, natural language processing, training and regulatory roles, and AI programming. 

But the fight against AI replacing humans is already on. The recent high-profile labour strike amongst actors and writers in Hollywood, which brought the film industry to a standstill, ended in a landmark agreement between the Alliance of Motion Picture and Television Producers and the Writers Guild of America, promising that AI, while it can still be used as a tool, cannot be used to replace writers. 

Like it or not, there is no doubt AI is going to change the working landscape. At this stage, it’s a brilliant tool to add value and make us better at our jobs - and it will continue to impact the way we think, work, act and experience the world. So much so that the United Nations has just announced the creation of a 38-strong expert advisory group to regulate and guide on the global risks and opportunities AI presents. 

But what about the ethics of AI in business? Is it a safe, secure, sustainable and fair playing field for all? Or do we need to ask ourselves some tough questions as we continue to invite this technology into our working lives? 

Does AI compromise security for ourselves and others? 

If someone uses ChatGPT to generate reports using confidential sales data, every piece of information they input may be retained by OpenAI and used to train future models. Nobody knows exactly what OpenAI may do with that data in the future, which is why some companies, including Samsung, are blocking ChatGPT within their internal networks.  

If you do want to use AI for business, Microsoft offers tools where algorithms can be trained on your own data, segregated into its own ‘pot’. The upside is that you retain control and ownership over this pot of data, which is more secure; the downside is that the service takes time and money to run - and since the data the AI draws on is likely to be limited, the results may be too. 

In essence, effective and, most importantly, secure use of generative AI that’s trained using owned data can be achieved with time, patience and strong data sets. It’s not an instant solution. 

Could malicious use of generative AI cause unimaginable damage? 

Yes, according to the UN secretary-general, who said AI-enabled cyber-attacks were already targeting UN peacekeeping and humanitarian operations, with AI capable of producing convincing text, image and voice from human prompts which can easily be abused. 

There are alarming stories of malicious use of AI appearing in the press every week - from AI-generated sexual abuse material to a sophisticated scam advert for a rogue investment app falsely featuring Money Saving Expert founder Martin Lewis - with potentially hugely damaging personal and commercial impacts for any person or company inadvertently targeted. 

It’s a minefield that has been described as an ‘arms race’ between AI abusers and protectors - something global leaders are taking very seriously, starting with November’s first-ever, much-publicised AI Safety Summit 2023 in the UK, which brought together international governments, AI firms and experts to discuss and coordinate action against the risks of AI. 

Is the use of data already in the public domain a privacy issue? 

Creating a large language model like ChatGPT, Microsoft Bing, Claude 2 or Bard requires gathering large quantities of text through web scraping. This process takes details from open online sources, such as social media, which - contrary to popular belief - can constitute a data protection offence in the eyes of the law. 

Which brings us to… 

Plagiarism - is it OK to use copy generated by AI for commercial gain? 

A number of authors are suing OpenAI and Microsoft because content from their books was used to train ChatGPT without permission. The complainants say that OpenAI “unfairly profits from stolen writing and ideas”, and the suit calls for monetary damages on behalf of all US-based authors whose works were allegedly used to train ChatGPT. 

To avoid plagiarism in business writing, it’s recommended that copywriters always verify all AI-generated content and its sources, then incorporate that information into their own original work to create an accurate - and legal - piece. 

Meanwhile, AI-generated versus human-written copy is becoming harder to detect. To tackle this, academic institutions have started to issue guidance to help staff detect whether a student’s paper has been written using generative AI, Amazon has limited the number of short stories self-publishing authors can upload per day, and numerous online AI plagiarism checkers have become available… whether or not these are reliable remains to be seen. 

Can machines be biased? 

All AI engines are black boxes that rely on the information they have been fed. In other words, human beings choose the data the algorithms harness and how that data will be used, making it easy for unconscious bias to enter machine learning models and be deployed at scale.

Human judgment is still needed to ensure generative AI-supported decision-making is fair. No machine can be left to determine the right answers on its own; it takes human judgment and processes, drawing on the social sciences, law and ethics, to develop standards so users can achieve unbiased and fair AI outcomes. 

Is it ethical for businesses to use AI? 

As with any advancement in technology - from the birth of the internet to efficiencies in software development, and now AI - businesses have always enjoyed the efficiency and time-saving benefits that digital innovations bring. What’s unethical about that? 

AI is an excellent tool when used correctly, ideally with a segregated area to place your data. It’s not a golden ticket for solving all of your business challenges overnight, but it will do a good job of supplementing an already good system. 

However, there should always be checks and balances to ensure secure, private, unbiased and ethical outcomes. Anything automated should always be overseen and checked by a person, and AI is no different. 

When it comes to transparency on the use of AI, GDPR already covers this from a consumer’s point of view. The rules clearly state that if you are making automated decisions about a person, they must be informed. 

How can my business keep up with constant changes in regulations? 

It's essential that all leaders take responsibility for educating themselves and their employees on how to use AI to its full potential whilst keeping themselves and their stakeholders secure. Right now, AI is a largely untapped and exciting technology that could create remarkable opportunities for everyone - but it’s worth approaching with caution. Here are our tips for remaining responsible as you navigate the rocky road of artificial intelligence: 

  1. Employ or nominate individuals across departments to keep track of the legalities concerning the use of AI.
  2. Introduce stringent processes that check and balance the output of AI-generated results.
  3. Know where your data has come from and where it is going. 
  4. Set clear business standards and provide regular training on AI. 
  5. Have a healthy, business-wide AI mindset. Learn about and use new technology in a way that’s experimental, enjoyable and responsible. 
  6. Lean on the legal team for advice on how the law is working and how it’s constantly evolving. 
  7. Do your research, talk to experts, and if you’re not sure about the legalities or ethics of an AI process or outcome, don’t use it. 

Talk to Propel Tech 

This article is just the tip of the iceberg when it comes to many ethical discussions around the subject of AI. Please drop us a note with any input or feedback on this hot topic. 

Let's make possibilities happen
