Mal Fletcher
AI To Boost Cyber-Crime

AI's New Generation of Criminals

Isaac Asimov once noted that “a nuclear blaster is a good weapon, but it can point both ways.” Most technologies that are capable of doing great good are also capable of doing great harm. 

In modern history, that’s proven true with nuclear energy and genetic engineering. Both have offered great benefits while at the same time presenting significant threats to human welfare and the environment. 

The most influential technologies represent a double-edged sword. That is certainly true with the most recent breakthrough branch of technology - artificial intelligence and machine learning.

Britain’s National Cyber Security Centre (NCSC) has warned that artificial intelligence might equip novice criminals to carry out quite sophisticated attacks (1). James Babbage of the National Crime Agency (NCA) adds: “AI services lower barriers to entry [into crime], increasing the number of cyber criminals.” We might soon see people with no previous criminal inclination engaging in crime simply because AI makes it so easy.

Generative AI is still evolving but already we can see ways in which it might be turned to nefarious ends - even by the least experienced users.

Deepfakes
The AI revolution will lead to a massive increase in deepfake videos and audio files. This year will see major elections taking place in several corners of the world, not least in the USA and the UK. We will need to be vigilant, as some supporters on both sides of politics will doubtless produce deepfakes aimed at bringing down their perceived nemeses. The perpetrators won’t necessarily be on any candidate’s payroll. Everyday voters, some of whom have only just learned to use apps, could become cyber-fraudsters almost overnight, hiding behind the anonymity of the socials.

Generative AI apps can already build convincing, realistic videos from nothing more than a few lines of text or a still image. Mobile apps that turn still images into video abound. The tech will quickly evolve to allow an image to morph into a thoroughly convincing video clip, based on AI predictive modelling and data analysis - with almost zero human input. 

The pace of advance with deepfaking tech is staggering. When the remaining Beatles released their “last ever” band recording, they called upon AI tools that isolated the voice of John Lennon from a creaky old analogue cassette home-recording. Had they wished to, they could also have sampled and manipulated the sound patterns of their friend’s voice to have him sing completely new lyrics. This kind of thing no longer surprises us, but it should concern us. 

Imagine there’s someone in your workspace who really takes a dislike to you. It’s relatively easy for them, using AI apps, to damage your reputation by, say, creating a video or an audio file that shows you committing some indiscretion or even crime. Once that’s disseminated online via ubiquitous social media, you can be tried in the court of public opinion long before you’ve had a chance to prove that it’s fake. In fact, proving a fake is often harder than making one - at least until fake-detecting apps are as available as video production software.

Phishing
AI also makes it possible for people with no criminal experience - and little tech savvy - to engage in identity fraud. Phishing is the use of false, personalised messages to trick people into sending money or personal data, which criminals then use to access bank accounts and commit ID theft.

In 2022, identity fraud accounted for 63 per cent of all crimes involving false documents and passports. It cost the British taxpayer £4 billion in that year alone (2).

Identity thieves who gain access to bank accounts often steal small amounts from many people at one time, which makes their crimes difficult to trace. Indeed, some victims don’t realise they’ve been robbed until quite a while after the event.

Until now, this type of activity has almost always been the work of criminal gangs. Today, AI makes it relatively simple for everyday people to get involved.

Hacking
AI could become a dream tool for even the amateur hacker. Autonomous bots can use big data to crack online passwords and infiltrate even the strongest security systems. Imagine the social impact if/when automated armies of AI-powered bots siphon funds from online accounts, or cripple power grids or transport networks.

Again, people with no history of crime could be drawn into this relatively inexpensive activity. Anyone with a gripe against a state authority or a private company, plus a modicum of tech know-how, could wreak havoc. Of course, AI can help prevent crimes using data analysis, predictive modelling and sophisticated cyber-security tools. But even a small threat can create enormous problems, and detection might come too late to prevent an attack.

Stalking
Cyberstalking is no new phenomenon; it predates the arrival of commercially available AI tools and is itself the latest iteration of a long-standing problem. Stalking has made the lives of many people seem almost unliveable. Targets of physical stalking exist under the constant shadow of insidious surveillance and the threat of physical, mental and emotional injury.

The cyber variety of stalking might not make the headlines too often - yet. But it is already a serious problem. A UK study found that cyberstalking lasted longer than two years for more than 40 per cent of victims, and between one and two years for a further 22 per cent. Seventy-five per cent of victims were women (3).

This crime is set for a major upward swing now that we all have at our fingertips the most powerful data analytics tools in history. Facial recognition algorithms can use data harvested from social media and CCTV to create detailed profiles of individuals, so their movements can be traced and their behaviour predicted ahead of time. Stalkers can get real-time location updates for their targets. 

In the age of internet addiction, some people hand over more and more of their time and resources to living as an online version of themselves. Relationships then suffer because no amount of digital wizardry can replace the mental and social benefits of physical friendship networks. The breakdown of connection - and the pace of societal change - leaves many feeling frustrated, bitter and angry about their lot in life. 

These people seek a way of redressing the imbalance between their aspirations, often fed by social media and conspiracy theories, and their reality. Hooked on the sense of control they experience online, some will go there to overcome the impotence they feel when they look at more successful people. Watch for it: cyberstalking will explode in the age of AI.

What can we do to protect ourselves?

There are steps we can take to protect ourselves from these threats. In some cases, AI tools will help us combat other forms of AI. Here are just a few ways to mitigate the impact of cybercrime.

1. Patch Up. It sounds like a catchcry for advocates of recycled clothing, but “patching” is the techie term for updating software to close security holes. This has never been more vital, given how much we rely on digital tools for work, play, travel, relationships, banking, buying and selling.

You’ll need to include in your security checks any devices that are part of the now-ubiquitous Internet of Things, such as Alexa-type voice-command devices. A few years ago, a study found that these devices can record private conversations - and transcribe them using AI - without the conversants being aware. In one famous case, a couple learned that a recording of a conversation in their living room had been sent to one of their contacts - who happened to be the subject of the conversation!

Of course, checking online security can become an obsession - look hard enough and you’ll find an almost endless number of holes to plug. AI itself will provide the means for much more efficient, bulletproof security systems. The bottom line, though, is that we must develop a default mindset that treats digital tech with healthy suspicion. We tend to assume that because we rely so heavily on digital tools, the technology must be inherently trustworthy.

We sometimes forget that technology platforms are owned by human beings. Owners of AI and other big tech organisations speak a lot about their altruistic, humanitarian motives. However, for the most part they do not operate charities. Their goal is to make money for their shareholders and other stakeholders. For most of them, the greatest driver of profits is user data. It’s important that we keep this in mind and continue to ask questions like: who ultimately owns the data I generate online? What might they do with that data - who might be able to access it? What do they hope to gain from it?

2. “What’s the password?” Once upon a time this was a question only children at play asked. Today, we face it almost every time we use a digital device. Switch on your phone or other connected device and you’ll need a fingerprint or other biometric trigger to go any further. Once you’re in, most of the online services you use require their own password - and they almost always encourage you to create one that’s unique to that service. 

Passwords represent one of the most tiresome aspects of using modern technology. It’s not just that they add an extra layer to every online activity; they also demand an almost superhuman ability to store and recall unique combinations of letters, numbers and symbols. Perhaps AI will soon help us find better ways to ensure security, but for now, this is the price of entry into the cybersphere and access to AI.

Thankfully, there are AI bots that can help you generate hard-to-crack passwords, but you’ll still need to record and store them in safe places, offline and online. Yes, you can store passwords in the wallet system offered by your web browser or operating system. However, it’s worth also creating an “old-fashioned” password-accessed document in Word or Excel listing all your passwords in abbreviated form - using a system of abbreviations only you recognise. 
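By way of illustration, here is a minimal Python sketch of the principle behind such generators - genuinely random choices over a large character set. The 20-character length and the alphabet are assumptions, not a prescription.

```python
import secrets
import string

# Use the cryptographically secure 'secrets' module, not the
# predictable 'random' module, to pick characters.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Return a random password; the 20-character default is an assumption."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # different on every run
```

The point is the principle: a machine-generated string like this is far harder to crack than a memorable human choice such as a pet’s name followed by a birth year.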

Make a copy of that document on several online services such as password-accessed Dropbox or Google Cloud storage accounts. Then create at least one backup on a stand-alone hard disc drive. Almost everything you create online should be backed up offline, to an “air-gapped” device - that is, one that is not internet-connected. 
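For readers comfortable with a little scripting, the offline copy can be automated. The sketch below is illustrative only: the file name and the drive letter standing in for the air-gapped disc are hypothetical.

```python
import shutil
from pathlib import Path

# Hypothetical paths: 'passwords.docx' is the password-protected document
# described above; 'E:/backup' stands in for an external drive that is
# kept disconnected ("air-gapped") except while backing up.
source = Path.home() / "Documents" / "passwords.docx"
destination = Path("E:/backup") / source.name

destination.parent.mkdir(parents=True, exist_ok=True)  # create the folder if needed
shutil.copy2(source, destination)  # copy2 preserves timestamps as well as content
print(f"Backed up {source.name} to {destination}")
```

Run something like this each time the document changes, then disconnect the drive again.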

3. Scrutinise! An old proverb says, “Nobody ever built a statue of a cynic.” That’s true for much of life, but when it comes to AI and digital tech, a healthy mistrust is worth its weight in gold.

Always approach with a sceptic’s eye any communication that comes from a source you don’t know. Scrutinise the content carefully before allowing your eye to be enticed by any featured links. Never click a link you’re not sure about. Be especially wary of emails that appear to use the logo of a bank or another service you trust. Before you answer or click anything, look carefully at the email address and any street address provided. Fake email addresses are sometimes blatantly wrong. If you’re suspicious, delete the message.
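To make the “look carefully at the email address” step concrete, here is a minimal Python sketch of the kind of check a spam filter performs. The trusted domain and the example addresses are invented for illustration.

```python
import re

# Invented allow-list; a real bank publishes its genuine sending
# domains on its own website.
TRUSTED_DOMAINS = {"mybank.co.uk"}

def sender_domain(from_header: str) -> str | None:
    """Extract the domain from a From: header,
    e.g. 'My Bank <alerts@mybank.co.uk>' -> 'mybank.co.uk'."""
    match = re.search(r"@([\w.-]+)", from_header)
    return match.group(1).lower() if match else None

def looks_suspicious(from_header: str) -> bool:
    # Anything that is not exactly a trusted domain is flagged. Note that
    # 'mybank.co.uk.example.ru' merely resembles the trusted name.
    domain = sender_domain(from_header)
    return domain is None or domain not in TRUSTED_DOMAINS

print(looks_suspicious("My Bank <alerts@mybank.co.uk>"))             # False
print(looks_suspicious("My Bank <alerts@mybank.co.uk.example.ru>"))  # True
```

The human version of this check is the same: read the full address, not just the display name, and treat anything that merely resembles the real domain as fake.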

4. Block! Block! Block! Even in the age of text communication, scammers often use phone calls to prey on the unsuspecting. Downloading a call-blocking app will help protect you against phishers, ID thieves and hackers.

It will flag an incoming call as “potential spam” if the number has been used for spam calls elsewhere. You can then refuse to take the call. If you repeatedly receive calls from the same number, type the number into Google search along with the words “Who called me?” You’ll often find that the same number has already been flagged as a regular source of suspicious calls. You can manually set your phone to block that number.

5. Fact-Double-Check. Don’t give AI the benefit of the doubt. Be wary of news stories that don’t ring true, based on what you already know. Make sure you keep up with the news, particularly in areas that interest you, so that you can make informed judgements and spot fake news and videos at a glance. Fact-check what you read, especially if it sounds too good (or bad) to be true.

In time, AI will help us identify deep fake videos quickly. For now, detection is largely reliant on our ability to check the context of what is said and done.

Artificial intelligence and machine learning present huge opportunities across many aspects of life and fields of activity. AI’s data analysis will provide predictive models for everything from climate change to health emergencies. It can help us devise solutions before problems arise.

The same technologies also carry significant potential threats to mental health, human communication, relationships, and personal, corporate and national security.

Any technology is only as beneficial as humans commit to making it. Technology is not destiny. Human beings are the moral agents. We must choose how we will utilise the tools available to us. We cannot afford to abdicate our ethical responsibility for the machines we build or feed with our data. 

We must not sleepwalk our way into the age of AI, pretending that technology will devise the best way forward for its own development. We must not find ourselves on the wrong end of Asimov’s blaster!

 

JOIN THE FIGHT! Support our public campaign for Responsible AI. We urgently need your support.

Mal Fletcher (@MalFletcher) is the founder and chairman of 2030Plus. He is a respected keynote speaker, social commentator and social futurist, author and broadcaster based in London.
