Will humans become the hackers of machines?

Written by Alex Leslie on March 2, 2017 in Opinion

Image: PowerUp / Shutterstock.com

You know how hackers are always one step ahead of us? How our defences are built on fast reaction times, never on acting first? Is it possible that human hackers could become even more dangerous?

Just by being human.

The digital infrastructure that we are busy building, and on which we rely to an enormous degree, is actually pretty fragile. The recent Amazon S3 outage was so bad that Amazon itself could not get into its own dashboard to alert and update users. As The Register has it, ‘its red warning icons were stranded, hosted on the broken-down side of the cloud’.

If organisations such as Amazon – defenders of our digital world – can be taken out, and badly, then why, you have to wonder, are we ignoring all this and hurtling into an ever more automated, digitally controlled world?

Take Facebook.

Pressured by human self-interest to do something about [insert name of issue – suicide, abuse, blackmail], it turns more and more to artificial intelligence, rather than human intelligence, to address it. And the artificial intelligence makes mistakes, constantly.

Twitter is another example. Abuse is rife on Twitter, so Twitter relies more and more on algorithms to spot the abuse and stop it. Presumably it compiles a list of abusive words and phrases and blocks them.
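To see how brittle that approach is, here is a minimal sketch of the kind of blocklist filter the article presumes – the function, the word list and the sample messages are all invented for illustration; no real platform’s filter is published, or this simple:

```python
# A minimal sketch, assuming moderation works the way the article presumes:
# a blocklist of abusive words. The word list and sample messages are
# hypothetical; this is not any real platform's filter.

BLOCKLIST = {"bastard", "idiot", "loser"}  # hypothetical banned words

def is_abusive(message: str) -> bool:
    """Flag a message if any of its words appears on the blocklist."""
    words = {word.strip(".,!?\"'").lower() for word in message.split()}
    return not words.isdisjoint(BLOCKLIST)

print(is_abusive("You are a complete idiot"))         # True  - caught
print(is_abusive("No wonder nobody ever picks you"))  # False - sails through
```

The overt insult is caught; the subtler cruelty, which contains no banned word at all, sails straight through. That gap is exactly the point of what follows.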

This is not new; it has simply escalated under our noses, without us really taking notice. Facebook is also rife with other sorts of bullying and abuse, and has been for years.

Surely, if you wanted to abuse people on Twitter or Facebook, all you would need to do is disguise the abuse as something more subtle, more human, than AI and algorithms can notice or understand. The subtleties of human abuse and bullying are extraordinary. Bullies and abusers go to great lengths to disguise what they do; friends and even close family are horrified when they discover what has been going on.

And because we rely to such an extent on the ‘judgement’ of computers, the next step is that we will look at the evidence (if it ever reaches a human being’s consciousness) and say ‘ah, no, the algorithm says all is fine. So, all is fine’.

We will, of course, play catch-up. We have seen and read the stories of Libratus beating professionals at poker and AlphaGo beating champions at Go. But that is a long way from applying that level of learning about humans to monitoring billions of accounts well enough to stop abuse.

In the good old days, hackers would use computers to hack computers to hurt humans. In the near future, humans will simply have to be more human than AI-based computers can understand in order to continue to hack and hurt humans.

As Bob Moritz, chairman of PwC, said at MWC17: in the fourth Industrial Revolution, do not forget the humans. In the digital age, we should ask ourselves the age-old question: who will guard the guardians?


Alex Leslie

About the Author

Alex was Founder and CEO of the Global Billing Association (GBA), a trade body focused on the communications sector. He is a sought-after speaker and chairman at leading industry conferences, and is widely published in communications magazines around the world. Until it closed, he was Contributing Editor, OSS/BSS for Connected Planet. He is publisher of DisruptiveViews and previously BillingViews.


1 Reader Comment


  1. miranda says:

    Exactly! A computer cannot identify passive-aggressive behaviour. What hurts a child more: ‘you are a fucking bastard’ or ‘your mummy is always late because she doesn’t love you’? We know the answer; a computer does not.

