Can we trust software (let alone humans) to manage fake news?

Written by Alex on November 22, 2016 in Opinion
pathdoc / Shutterstock.com

Fake news is big news at the moment. Presumably if D. Trump esq had lost the US election, he would have made it bigger. Whether or not Facebook’s fake news issue swung or influenced the result, the whole fake news fiasco brings a dose of real life to another very current issue.

Artificial intelligence. Machine learning.

The idea that machines can make those ‘fake news’ judgements now needs to be discussed.

Mark Zuckerberg is probably sick to death of news, fake or real. The real news reports that he is dealing in fake news and influencing people’s opinions (as if every newspaper in history hasn’t tried to do that on a daily basis). And he says the problem is much more complex than it seems. In fact, he said, ‘Facebook has been working on the issue of misinformation for a long time’, calling the problem complex both technically and philosophically.

Technically, the challenge is enormous because there is no black or white, no binary decision-making involved. And, clearly, computers like black-and-white problems.

Philosophically, human beings need to judge whether something is fake or merely twisted in such a way as to make you believe something. (We also need to have the whole ‘freedom of speech/censorship’ debate, but let’s not).

Fake news is propaganda, then. So, not new. The only real difference is that, with social media, warped individuals can launch propaganda campaigns, not just warped regimes.

‘History’, said the brilliant French writer Voltaire, ‘is simply a lie, agreed upon’. Or was it Napoleon? History, it is fair to say, ‘is the dirty dishwater of propaganda, created by the winner’. Or is it?

Or, as Abraham Lincoln famously said, ‘you should never believe anything you read on the internet’.

You see the problem.

It is very possible, probable even, that one day machines will have as much emotional intelligence as humans, and therefore be as well equipped to make the kinds of decisions that Zuckerberg is in trouble for making (or not making).

But we are not there yet.

As Marcus Weldon, President of Bell Labs, said at the Great Telco Debate, ‘at the moment machine learning is at the point where, based on a conversation with a customer, say, it can only get to a point of concluding the answer might be X, not is actually X’. We have a way to go.
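To make Weldon’s point concrete, here is a minimal sketch of why a machine can only say the answer ‘might be X’. It uses Python with scikit-learn; the headlines and labels are invented toy data purely for illustration, not a real fake news detector.

```python
# A minimal sketch, not a production fake-news detector. The headlines
# and labels below are invented toy data for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: 1 = fake, 0 = genuine.
headlines = [
    "Scientists confirm moon is made of cheese",
    "Celebrity secretly replaced by body double",
    "Central bank raises interest rates by 0.25%",
    "Local council approves new bus route",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# The model never concludes "this IS fake"; it only estimates a probability.
test = "Aliens endorse presidential candidate, sources say"
p_fake = model.predict_proba([test])[0][1]  # probability of the 'fake' class
print(f"P(fake) = {p_fake:.2f}")  # 'might be fake', not 'is fake'
```

Even then, someone still has to decide what probability counts as ‘fake enough’ to act on – and where to set that threshold is exactly the kind of judgement the machine cannot yet make for us.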

And, some would say, it is simply too dangerous to take a software approach to making these judgements. We cannot risk the gap between now and then, leaving the world to fill with fake news in the meantime.

Or can we?

Should it not be the case that humans take the time to weigh up for themselves whether news is fake or genuine? And take responsibility for their own views on it? And for what they do with those views? Americans, according to a Pew survey, want news, and just the news – so maybe there is hope.

Whatever the rights and wrongs of the fake news issue, one thing is for sure.

Technology platforms like Facebook tend to take away our ability to make sensible judgements. Essentially, they make us lazy. And therefore technology should not be used to make philosophical judgements.

Yet.


About the Author

Alex was Founder and CEO of the Global Billing Association (GBA), a trade body focused on the communications sector. He is a sought-after speaker and chairman at leading industry conferences, and is widely published in communications magazines around the world. Until it closed, he was Contributing Editor, OSS/BSS for Connected Planet. He is publisher of DisruptiveViews and previously BillingViews.
