
Is Black Box Human Better Than Black Box AI?


Photo credit: Andy Kelly on Unsplash, enhanced by CogWorld

With current advances in technology and Artificial Intelligence, most major companies are going to great lengths to attract the right talent and showcase their expertise in the field. Today, having a futurist among top management is not an eccentric fad but a competitive necessity. Google is well-known for its collaboration with Ray Kurzweil.

While we all know about the AI conquests of IBM, Amazon and Google, less is revealed about machine learning projects in the “traditional” industry of telecom operators. Coming from a telecommunications background, I thought the balance should be restored. So recently, I met with a person shaping the direction of AI research at Europe’s biggest operator.

Deutsche Telekom is one of the most technologically advanced telecommunications companies in Europe and around the world. Interviewed here is Kim Larsen, SVP Group Development at Deutsche Telekom, who heads strategic projects relating to AI and automation. Kim joined DTAG after successfully winning a license for, and launching, a new mobile operator in Myanmar for the Middle Eastern telco leader Ooredoo. After returning to Europe from the Middle East and Asia, he was responsible for DTAG's Group Network Architecture and Technology Innovation, where he initiated DTAG’s push into AI and automation with a focus on applying AI to network infrastructure innovation.

Kim Larsen

Apart from working as a regional CTIO, Kim is currently engaged with AI research and implementations throughout the company. This role grew organically from Kim’s personal hobby and deep interest in the field. Over the last 18 months he has been intensively engaged in supporting and developing an ethical framework covering corporate responsibility, bias, algorithmic fairness and transparency guidelines for the operator. In fact, Deutsche Telekom is one of the first European companies to publish comprehensive guidelines and guardrails for AI development. Within this work, Kim has conducted several surveys on how we humans perceive and feel about AI.

We decided to delve deeper into what AI acceptance would mean for the majority of the human population and how we should interpret the upcoming “race for AGI.” Here we go…

KateGoesTech: Let’s suppose AI is advanced enough to have comprehensive information about you and our world. It is thus fully capable of making an informed decision and giving solid advice. Would you trust a black box AI?

Kim Larsen: What we should be concerned about is not AI bias, but a bias inherent to the human race. While rationally I would be inclined to listen to AI, I would still trust a friend more. Very often, the best decisions are made in cold blood, but we are hard-coded to accept individuals of our own kind. My research shows that we hold decisions made by algorithms to much higher standards than those of our fellow humans. We accept that humans can be wrong but expect intelligent machines to be without fault.

After all, is black box AI different from black box human?

We take emotions into account, and we are suspicious of entities that do not. This is partially explained by the fact that we expect a human to be able to reason about and explain their judgment, while current AI systems are still very limited in their capacity to give reasons for their responses. But we should not be trapped by this logical fallacy: a human can often come up with a plausible explanation only after an answer is given. And since such a human explanation will be constructed to protect one’s own self-interest, there is really little reason to toss aside the black box answer of a machine.

We already know quite a lot about the curiosities of the uncanny valley. Whether our suspicion towards intelligent impostors will endure depends very much on its roots. A big question here is whether we are biologically encoded to rely on human judgment, or simply accustomed to doing so. The answer will be very important in understanding humanity’s acceptance of Artificial Intelligence in the decades and centuries to come.

In fact, the role of emotions and empathy is culturally determined, with Eastern societies leaning more towards personal connection, and what can often be described as nepotism, and Western societies being much more rule-oriented and systematic. Without advocating one way or the other, it is clear that the acceptance of logical AI reasoning will be geographically uneven.

According to Kim, bias is not a uniquely human feature; it is part of the AI creation process as well. At present, machine decisions reflect the injected data as well as the user acceptance tests (UATs) and model validations behind them. Due to historical and economic circumstances, data tends to reflect societal inequalities and gender and racial differences, which might take centuries to flatten out. Moreover, due to cultural dynamics and unequal access to education, it is mostly white or Asian male engineers who get to manage, develop and test AI applications. There is a pretty big likelihood that we will create AGIs (artificial general intelligences) in our own image. If we are to believe AGI will reflect its creators’ design and culturally rooted inequalities, the effects of such an ethically stunted intelligence on the world order and humanity's progress may be truly disastrous.
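To make that mechanism concrete, here is a minimal, hypothetical sketch of how historical bias in training labels propagates into a model's decisions. It is not DTAG code; all names and numbers are invented for illustration.

```python
# Hypothetical illustration: how historical bias in training labels
# propagates into a model's decisions. All names and numbers invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

# Synthetic "hiring" data: qualification is what *should* drive the outcome,
# but the historical labels were skewed against group == 1.
group = rng.integers(0, 2, n)              # protected attribute (0 or 1)
qualification = rng.normal(0.0, 1.0, n)    # legitimate signal
noise = rng.normal(0.0, 0.5, n)
hired = (qualification - 1.5 * group + noise) > 0   # biased past decisions

# Train on the biased labels, protected attribute included as a feature.
X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# Two equally qualified candidates, differing only in group membership:
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])
# The group-1 candidate scores far lower: the model has learned
# yesterday's discrimination and replays it as a "prediction".
```

The model is never instructed to discriminate; it simply reproduces the skew already present in the labels, which is exactly why the data, the tests and the validations all matter.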

The bias is already visible in current AI-assisted judicial risk-assessment programs, where the system recommends immediate incarceration of a Black suspect while allowing a white suspect to stay home until sentencing. Amazon's recruitment tool, originally created to streamline hiring and later found to penalize women's résumés, is another vivid example of AI discrimination. While such biases might reflect current reality, the goal is to keep AI up to date with a rapidly changing world, rather than lock the lopsided statistics into the most powerful intelligence ever created.

KateGoesTech: The quest to create AGI is already reminiscent of a Cold War arms race, with Silicon Valley and China currently the major contenders. Having one agent in full control of AGI falls only a little short of Orwell's 1984. How likely is such a nationalization of AGI?

Kim Larsen: The human species achieved a similar intellectual Singularity when we leapfrogged all other hominids and animals in development. Singularity for Artificial Intelligence will most likely mean an uplift above human differences and current tribalism. Thus, it might not really matter too much who develops Superintelligence. What will matter is how this Superintelligence chooses to behave towards humanity as a whole, and whether we should strive to put it under control, if that is at all possible, as Max Tegmark points out in his recent book, Life 3.0.

Many argue that it would be better to have several competing outcomes of a Singularity event, the idea being that this would ensure some degree of self-containment. This is a valid point if we believe AI creators retain ultimate influence over the development of AI after the point of Singularity. However, the number of AGIs and the potential competition among them should be completely irrelevant if we assume they will undergo a complete separation from their creators.

A problem that can arise is AI’s cessation of development upon reaching human intelligence. If we believe that uniquely human qualities such as emotion, ambition and jealousy are paramount to the creation of intellect, we might find ourselves surrounded by multiple nationalistic entities with undefined life spans and unlimited compute capacity. After all, the whole theory of creating a Superintelligence relies on the assumption that it is possible to create an intelligence whose power far exceeds the current potential of the human species. At the moment, however, such a belief is no more grounded than a belief in heavenly forces. We have never experienced such an intelligence, and we don’t know whether it is possible or whether its emergence lies in the domain of science fiction.

KateGoesTech: Ethics of AI is an emerging field. The number of publications and specialists in the field has skyrocketed as people try to make sense of an ever-increasing machine intellect. Normative Ethics is one of the most popular schools of thought. What is your take on this?

Kim Larsen: I have spent quite some time on how to build and code ethics into AI-based systems and what such ethics-based AI architectures may look like. You quickly realize that, while a cool problem, this is no small technical challenge. Normative ethics does not provide just one clear framework for the ethical or moral behavior of humans, or machines for that matter. It covers a broad spectrum of ethical thinking, from Kantian or Asimovian strict moral or rule-based laws for the individual (human or robot) to utilitarian maximization of the majority’s common good or happiness. Most of this thinking is deeply rooted in Judeo-Christian values. As an example, normative ethics may put the utility function on the throne of logic and decision-making. A seemingly benign goal of maximizing the “common good,” however, can have horrendous repercussions; think of minority-group bias, unfairness and prejudice. An AI led by utilitarian ethics might choose to improve the conditions of 51% of the population, leaving the rest with very little, if, say, it chooses to leave them alive at all. Such an AI may calculate that, overall, it is better for Germans to stay a homogeneous society and that all refugees must therefore be evicted from the country. Similar ethical “loopholes” are present in other frameworks within normative ethics, which illustrates some of the challenges an AI developer may face.
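To see the loophole Larsen describes in miniature, consider this toy sketch of a purely utilitarian objective choosing between policies. It is entirely hypothetical; the policies and numbers are invented for illustration.

```python
# Toy illustration (not DTAG code) of the utilitarian loophole:
# maximizing *total* welfare can favor a policy that helps 51% of the
# population while leaving the remaining 49% worse off.
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    majority_gain: float   # utility per person for 51% of the population
    minority_gain: float   # utility per person for the remaining 49%

policies = [
    Policy("benefit everyone modestly", majority_gain=1.0, minority_gain=1.0),
    Policy("benefit the majority heavily", majority_gain=2.5, minority_gain=-0.2),
]

def total_utility(p: Policy, population: int = 1000) -> float:
    majority = int(population * 0.51)
    minority = population - majority
    return majority * p.majority_gain + minority * p.minority_gain

best = max(policies, key=total_utility)
print(best.name)   # "benefit the majority heavily" (1177.0 vs 1000.0)
```

The objective happily picks the second policy even though nearly half the population ends up worse off, which is precisely why the choice of ethical framework and its guardrails must be designed into the system rather than left to the objective function alone.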

Equally, utilitarian ethics may make your autonomous car kill your sister, or yourself, rather than five strangers. While such an outcome is totally explainable from a utilitarian point of view, how likely would you be to agree to it if given the choice? Would you drive such a car? Would you use products whose ethical framework maximizes the common good over your individual rights and wants?

Humanity fought for centuries for freedom of speech and the right to choose. Having AI decide our fate would thus bear a remarkable resemblance to the values of authoritarianism. People should be very cautious about whether they are willing to give AI the final say. Offering a qualified opinion appears to be a much more acceptable role for AI systems.

Advances in Artificial Intelligence open whole new avenues for humanity’s development and give us a chance to eradicate the centuries-old problems of hunger, disease and economic inequality. These very advances also pose imminent ethical dilemmas, and a call to have a say while your voice is still relevant.

Data-empowered, unbiased Artificial Intelligence may eliminate the need for human decision-making once and for all. But would we want to live in such a formalized world? Can love, devotion and honor be quantified?

Max Tegmark’s Future of Life Institute is one of the main hubs of the discussion around the ethics of AI. Having people such as Kim Larsen in management positions at major corporations definitely helps to keep the discussion up to date. But we all have a say. It is the decisions we make around AI today that will shape the development of the human race centuries or millennia from now.
