Questions to ask about technology – the hellven choice

Written by Gerd Leonhard on November 11, 2015 in Guest Blog

This is an excerpt of the chapter I wrote for the book “The Future of Business”, edited and compiled by Rohit Talwar.

Technology: it’s no longer about IF or HOW but about WHY

The urgent need for clear man-machine ethics is amplified by the view that we should probably no longer be concerned with whether technology can actually do something, but with whether it should do something, i.e. the how is being replaced by the why (followed by who, when, and where). For example, why would we want to be able to alter our DNA so that we can shape what our babies look like? And who should be able to afford or have access to such treatments? What would be the limits? In machine intelligence, should we go beyond mere deductive reasoning and allow smart software, robots, and artificial intelligence (AI) to advance to abductive reasoning (i.e. to make unique decisions based on new or incomplete facts and rules)? If autonomous machines are to be a part of our future (as is already a certainty in the military), will we need to provide them with some kind of moral agency, i.e. a human-like capacity to decide what is right or wrong even if the facts are incomplete?

“Hellven” challenges

Tremendous scientific progress in sectors such as energy, transportation, water, environment, and food can be expected in the next 10-20 years. I believe most of these achievements will have an overall positive effect on humanity, and hopefully on human happiness (which I would suggest should be the ultimate goal) as well. This would clearly be the heavenly side of the coin.

At the same time, on the hell-side, we are now approaching a series of complex intersections at very high speed. Soon, every single junction we navigate could either lead to more human-centric gains or result in serious aberrations and grave dangers. It has often been said that “technology is not good or evil – it just is”. It is now becoming clear that the good/bad part will probably be for us to decide, every day, globally and locally, collectively and individually. Clearly, if we assume that machines will be an inevitably large part of that future, we will need to decide both what we want them to be and, perhaps more importantly, what we want to be as humans – and we need to do it soon.

Artificial Intelligence (AI) is the most significant “hellven” challenge

Most technologies, software and hardware alike, are not only becoming much faster and cheaper but also increasingly intelligent. The spectrum of rapid recent advances runs the gamut from the kind of simple algorithmic intelligence it takes to win against a chess master, to the advent of thinking machines and IBM’s neuromorphic chips (i.e. chips that attempt to mirror our own neural networks) and their ambitious cognitive computing initiative. Buzzwords such as AI and Deep Learning are already making the headlines every single day, and this is just the tip of the iceberg. Looking at the investments by the leading venture capitalists and funds, AI has already become a top priority in Silicon Valley and in China, often a certain sign of what’s to come.

At the same time, almost every major information and communications technology (ICT) company already has several initiatives in this man-machine convergence arena. Google and Facebook are busy acquiring small and large companies across a wide range of AI- and robotics-related fields. They clearly realize that the future is not just about big data, mobile, and connected everything. They see the next horizon as embedding the capability to make every process, every object, and every machine truly functionally intelligent, albeit not (yet) truly humanly intelligent as far as social or emotional traits are concerned. But maybe this is just a question of when rather than if? Just imagine what AI could do to everyday activities such as searching the web (as we call it today), and you get a glimpse of what’s at stake here. In the very near future, who will bother typing a precise two-word search phrase into a box when the system already knows everything about you: your schedule, your location, your likes, your connections, your transactions, and much more? Based on the situational context, your external brain, i.e. ‘the AI in the cloud’, will already know what you need before you even think of it, and will propose the most desirable actions as easily as today’s Google Maps proposes walking directions. Hellven, once again, depends on your standpoint.

IBM, the creator of Watson Analytics, one of the leading commercially available AI products, appears to be betting the farm on this future. IBM is investing billions of dollars in neurosynaptic chips and cognitive computing – designed to emulate the human neural system with the intention of creating a holistic computing experience, i.e. computing that feels as natural as breathing. Computing is no longer outside of us – a thought both scary and exhilarating. Apart from IBM, Google is working on its own Global Brain project, and the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland is pushing the EU’s hotly contested Human Brain Project. China’s Baidu has also signaled its ambition to discover the holy grail of AI by hiring top-level researchers in the field, including Stanford’s Andrew Ng, and by opening a Silicon Valley AI center. The list goes on. Clearly, man-machine convergence is at the top of the global agenda, and investors smell enormous profits.

But: machines don’t have ethics

The AI gold-rush has only just started, and this is probably a very good time to be more concerned about whether Silicon Valley’s leading venture capital firms have enough foresight to consider more than their financial returns. After all, it is they who are funding commercial applications of man-machine technologies that might have potentially catastrophic side effects on humanity. In my view, the issue of how man and machine will inter-relate in the future should not be viewed from a profit-only perspective. Machines don’t have ethics and neither does money. The coming combination of these forces that operate beyond and above human values strikes me as even more dangerous.

Some futurist colleagues predict that we will soon reach a point where the capacity of thinking machines exceeds that of the human brain; a point that Ray Kurzweil, scientist and author of How to Create a Mind, calls the Singularity, with 2029 as the likely ETA. At this point, if not earlier, even larger and deeply wicked problems will emerge. For example, if we maintain that technology does not (and will not) have ethics, it would probably be downright stupid for anyone to expect that any current or future software program, machine, or robot would be able to act based on human morals, values, or ethics. Thus, the morals of machines will emerge as a major factor in the future of humanity, and the issues around what I call Digital Ethics will quickly become more essential as technology spirals into the future.

Every algorithm will need a “humarithm”

I coined the neologism humarithm in 2012 – a wordplay that riffs on algorithm – because I believe that the chains of logic, formulas, and if-this-then-that rules urgently need to be paralleled with corresponding systems of ethics, values, and assumptions: new if-we-believe-this-we-must-do-that rules. I believe that every time we offload a task to an algorithm (a machine), we will also need to think about what kind of humarithm we need to offset the side effects, i.e. how best to deal with the unintended consequences that are certain to arise.

For example, we may eventually come to the conclusion that commercial airliners can indeed be better piloted by software and robots than by human beings; most research already indicates that this is indeed the case. But if so, we must certainly think about how the passengers will feel about traveling inside a large metal tube that is steered entirely by a robot. This may well be a typical case of where efficiency should not trump humanity.
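To make the pairing concrete, here is a purely illustrative sketch in Python – all function names, rules, and values are invented for this example, not taken from any real autopilot system. It shows an if-this-then-that “algorithm” making an efficiency-driven decision, with an if-we-believe-this-we-must-do-that “humarithm” layered on top as a values-based guard:

```python
# Toy illustration of pairing an "algorithm" with a "humarithm".
# All rules and names here are invented for illustration only.

def algorithm_route(fuel_low: bool, nearest_airport_km: float) -> str:
    """Pure efficiency logic: divert to a nearby airport when fuel is low."""
    if fuel_low and nearest_airport_km < 200:
        return "divert"
    return "continue"

def humarithm_guard(decision: str, passengers_informed: bool) -> str:
    """Values-based rule layered on top: if we believe passengers deserve
    transparency, we must inform them before any automated diversion."""
    if decision == "divert" and not passengers_informed:
        return "inform_passengers_then_divert"
    return decision

if __name__ == "__main__":
    decision = algorithm_route(fuel_low=True, nearest_airport_km=150.0)
    final = humarithm_guard(decision, passengers_informed=False)
    print(final)  # inform_passengers_then_divert
```

The point of the sketch is structural: the efficiency rule and the values rule are separate, explicit, and auditable, so the human-centric constraint cannot be silently optimized away.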

The whole chapter – “The Future of Business: Relationship Man & Machine” by Gerd Leonhard – is available here as a PDF (600k).

You can follow me @gleonhard and @futuresagency


About the Author

Gerd Leonhard is a well-known Futurist and Author of 5 books, a highly influential Keynote Speaker, Think-Tank Leader & Advisor, the Founder of GreenFuturists, and the Creator / Host of The Future Show (http://www.futuristgerd.com/about/).
