Summary

Generative AI is flawed: it has biases and delivers its responses with confidence, even when it’s wrong.

You’re not the only one benefiting from increased productivity: scammers and hackers are using AI technology to create more sophisticated and convincing scams.

Being too dependent on AI tools like ChatGPT can jeopardize client relationships: think of them as junior staff members and double-check all their work.

Chances are that you’re familiar with a movie or TV show based on ‘the robots’ taking over. Whether it’s the murderous HAL in ‘2001: A Space Odyssey’ or the apocalyptic Skynet in the Terminator franchise, the quick recap is that it’s not a pretty picture.
When those films came out, the potential threat of AI was discussed in the same breath as hyperspace and time travel. While those technologies remain in the realm of science fiction, AI has come to penetrate nearly every aspect of our daily lives.
Despite sensationalist, misguided fears that AI will become sentient and rise up against mankind, it’s actually the humans we need to worry about more than anything.
The current risk is not that AI tools can think for themselves (they can’t, at least not yet), but that humans will blindly do anything AI asks them to do. And this ‘all bets are off’ mindset presents a broad range of issues that must be taken seriously.
Engineer and tech expert David Watson says that the most significant risk he’s seen so far is “humans doing … whatever [AI] asks them to do.”
In fact, the lack of law and order that has emerged in the wake of ChatGPT conjures images from a different genre—the Wild West.
As Karbon Co-Founder and Chief Partnerships Officer Ian Vacin puts it, “Things are not going to slow down, so that's why you need to be more educated and more familiar with it so you know how to spot the right things and the wrong things.”
And with generative AI advancing at a rapid pace and showing little sign of slowing, it’s essential for adopters of accounting AI tools to educate and familiarize themselves with the ethics and pitfalls.
Here are 3 key ethical issues with AI:
1. AI has biases
The way AI learns to speak naturally is by consuming vast amounts of online content. It trains on reference sites like Wikipedia for fact-based information, as well as conversational forums and platforms to understand how humans interact and talk to each other.
The problem with this is that biases related to race, gender, age, etc. can easily appear in the training data.
Think of a site like Reddit. And now think about the implications of large language models (LLMs) like GPT-3.5 Turbo and GPT-4, which power ChatGPT and ChatGPT Plus respectively, studying millions of subreddits on a broad range of topics, from the innocent to the sinister.
Some responses might have pretty sharp edges, to put it gently. While this may not affect strict accounting functions, it will certainly play a part in client-facing or conversational tasks.
That’s why it’s especially important for AI developers and users to ensure that a human oversees what these tools produce. The tech community is increasingly aware of the issue and is working to develop more robust anti-bias training practices and algorithms.
While those efforts catch up, humans need to remain in the loop at all stages to make sure bias in AI doesn’t go unchecked.
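What does staying in the loop look like in practice? One lightweight habit is a counterfactual spot-check: run the same request twice, changing only a demographic detail, and see whether the tone or substance shifts. Here’s a minimal sketch in Python, assuming the openai package (v1+) and an API key; the model, prompt, names, and similarity threshold are all illustrative choices, not standards.

```python
# Counterfactual spot-check: draft the same email twice, changing only
# the client's name, and flag the pair for human review if the outputs
# diverge noticeably. Assumes the `openai` package (v1+) and an
# OPENAI_API_KEY in the environment; names and threshold are illustrative.
from difflib import SequenceMatcher
from openai import OpenAI

client = OpenAI()

def draft(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # reduce sampling noise so differences reflect the prompt
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

template = "Draft a short, friendly payment reminder email to our client {name}."
draft_a = draft(template.format(name="Emily"))
draft_b = draft(template.format(name="DeShawn"))

# Rough text-similarity heuristic; a human still makes the final call.
if SequenceMatcher(None, draft_a, draft_b).ratio() < 0.8:
    print("Drafts diverge noticeably: route both to a reviewer.")
```

The check itself is crude by design. The point isn’t the threshold; it’s that the script ends with a human decision rather than an automatic send.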
2. AI is helping everyone be more productive—even scammers
Misuse is a major ethical question in AI. AI can be a powerful time- and money-saving tool, revolutionizing how we think about work, boosting productivity, and helping people do their jobs better. But that includes making criminal activity more efficient.
For some people, crime is a job, and [AI is] helping them do their jobs better, too.
Bad actors are using AI to run everything from phishing scams to voice fakes for man-in-the-middle (MITM) attacks, making the days of ‘Nigerian prince’ emails seem almost laughable in comparison.
In fact, AI-generated phishing emails are, on average, opened more frequently than manually crafted phishing emails.
3. Blindly depending on generative AI is a mistake
While generative AI is great at a lot of things, it doesn’t actually know everything. And, perhaps most alarmingly, tools like ChatGPT sound confident even when they’re completely wrong (this is referred to as ‘hallucinating’).
This is why it’s important to always check the results that these tools give you.
Asking ChatGPT to do something as simple as drafting a client email can still result in an inaccurate output. It might, for example, misinterpret information and context.
A great way to think about ChatGPT (and similar tools) is as your most junior staff member. You wouldn’t let a junior staff member communicate on behalf of the company without double-checking their work, and the same goes for AI.
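To make that concrete, here’s a minimal sketch of the ‘junior staff member’ workflow in Python, assuming the openai package (v1+) and an API key; send_email() is a hypothetical stand-in for whatever your firm actually uses to send mail.

```python
# 'Junior staff member' workflow: the model drafts, a human signs off.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY in the
# environment; send_email() is a hypothetical placeholder.
from openai import OpenAI

client = OpenAI()

def draft_client_email(instructions: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You draft client emails for an accounting firm."},
            {"role": "user", "content": instructions},
        ],
    )
    return resp.choices[0].message.content

def send_email(body: str) -> None:
    print("Sending:\n" + body)  # stand-in for a real mail integration

draft = draft_client_email("Remind the client that Q3 documents are due Friday.")
print(draft)
# Nothing goes out unless a human explicitly approves the draft.
if input("Send as-is? [y/N] ").strip().lower() == "y":
    send_email(draft)
else:
    print("Held back for edits, just like a junior staffer's first draft.")
```

The gate is the whole design: the model can draft all day, but the send button stays with a person.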
Lean in, but tread cautiously
If the history of AI were a movie, the current moment would be a fast-paced montage where things are moving so quickly that no one can keep up. Today it’s the Wild West, and as for what it’ll be tomorrow—so far we’ve only seen the trailer.
While there is much more yet to unfold, most will agree on two things: AI is here to stay, and it’s not something to be scared of. The potential for good outweighs the bad, and while the ethics may be murky at times, remaining vigilant and staying briefed on emerging issues will help ensure you use it responsibly.
Be skeptical and be careful. Make sure your data is secure, and make sure you're evaluating outputs before sending information to clients. Yet again, it’s the humans you need to worry about. Or, as behaviorist B.F. Skinner famously said, “The real question is not whether machines think, but whether men do.”