The recent developments in A.I. have been extraordinary. Starting in the mid-20th century, A.I. lived through a couple of winters but never gave up. Thanks in part to advances in hardware, exponential growth began in the early 1990s and has continued at unprecedented speed. OpenAI's GPT-3 and DALL-E in natural language processing (NLP), DeepMind's AlphaFold and MuZero (successor of AlphaGo and AlphaZero) in reinforcement learning, and Tesla's self-driving Autopilot in deep learning are all developments of only the last few years. The field is indeed growing very fast; however, it has not yet made the leap we are waiting for, from narrow, problem-specific A.I. to general, self-aware A.I. (AGI). There are many more characteristics, though none generally agreed upon, that we assign to AGI in order to evaluate it.
I have recently been thinking about the possible risks of a sentient, self-aware, conscious intelligence that will predictably be far more advanced than us, where being advanced means being cognitively more capable relative to us. I divide the risks of A.I. into two (sequential) categories: 1) humans use A.I. for destructive purposes (like nukes); 2) AGI's self-interest turns against humans (the principal-agent dilemma).
“We can tentatively define a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”
Bostrom
In the near future, given the speed at which A.I. is developing, it will be able to support problem solving very efficiently while remaining solely goal-oriented. Narrow A.I. depends heavily on data. Without prior guidance, finding a pattern and predicting the next steps from it would be impossible. Reinforcement learning has partially overcome this data dependency, but it has not made the leap to the next level. The main issue with data dependency is that the data usually comes from humans, from us. Thus any cognitive bias we have, including our ethical dilemmas, is fed to the A.I. as an extension of ourselves. Self-driving cars, autonomous combat drones, and many other applications of A.I. illustrate the ethical challenges we already face today.
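To make this concrete, here is a minimal, hypothetical sketch (the "hiring" scenario, features, and numbers are invented for illustration) of how a bias baked into human-generated labels is simply reproduced by a model trained on them:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical hiring data: 'skill' is the legitimate signal,
# 'group' is a sensitive attribute (0 or 1) that should be irrelevant.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Human-generated labels carry a bias: members of group 1 were
# historically hired less often at the same skill level.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model reproduces the human bias: at identical skill,
# group 1 gets a clearly lower predicted probability of being hired.
same_skill = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_skill)[:, 1])
```

Nothing in this pipeline is malicious; the model is simply an extension of the biased labels it was given, which is exactly the problem.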
When it comes to the ethics of artificial intelligence, we should first understand what ethics means here. How do we define what is ethical or moral and what is not? This moral-theory merry-go-round suggests that we do not have any universal moral theory. We judge A.I. for making decisions that may be immoral, but we should also acknowledge that we are just as immoral as A.I. We should definitely question and evaluate the decisions of an artificial intelligence, since that is the only way to bring more of its potential out. The issue is not only intrinsic or unintentional biases and dilemmas, but also the explicit self-interest of big corporations. Since the biggest developments come from these corporations, they will never bring innovations that do not serve their interests. Thus, I consider any product they release to be biased.
In the long term, assuming artificial general intelligence becomes reality, things get more interesting. Imagine that one day we create an artificial superintelligence that is self-aware and conscious. It is very probable that this intelligence will itself be able to create an intelligence higher than itself, which in turn would create yet another, higher one. This kind of ‘intelligence explosion’ may result in the so-called “Singularity”. Another perspective comes from Bostrom, whose orthogonality thesis suggests that the utility (goal) of an A.I. is independent of its level of intelligence. This means that a more intelligent A.I. does not automatically turn into an evil creature that wishes to exterminate humanity; an AGI simply employs its intelligence to achieve its goals, whatever they are.
It is hard to evaluate A.I., or at least I find it hard. The main reason is that we are trying to assess a machine through our anthropomorphic bias, in other words through our human-like characteristics, needs, or goals. Even when we call it more intelligent, we assume that intelligence is one-dimensional, but it is not. For example, chimpanzees have far better short-term memory than humans. Assuming short-term memory is an important part of intelligence, should we then claim that humans are less intelligent than they are? That is nonsense, but I hope it explains what I am getting at. Intelligence, ethics, and morals are not one-dimensional, and they keep evolving even within human civilizations over time. Comparing a machine intelligence based on what we know from ourselves is probably meaningless. Nonetheless, that does not mean we should not watch over it and try to predict what it will look like. What should an A.I. maximize when it has to make a decision? Whose preferences? Along which dimensions?
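As a toy illustration of why these questions are not merely rhetorical, here is a minimal, hypothetical sketch (the actions, criteria, and weights are invented): the same optimizer picks a different “best” decision depending on whose preference weights collapse the dimensions into a single number.

```python
import numpy as np

# Hypothetical decisions scored along three invented dimensions:
# [safety, economic benefit, fairness]. Rows are candidate actions.
scores = np.array([
    [0.9, 0.2, 0.7],   # action A
    [0.4, 0.9, 0.3],   # action B
    [0.6, 0.5, 0.9],   # action C
])
actions = ["A", "B", "C"]

# Two stakeholders who weight the same dimensions differently.
weights = {
    "regulator":   np.array([0.6, 0.1, 0.3]),
    "shareholder": np.array([0.1, 0.8, 0.1]),
}

for who, w in weights.items():
    utility = scores @ w          # collapse three dimensions into one scalar
    best = actions[int(np.argmax(utility))]
    print(f"{who}: best action = {best}, utilities = {utility.round(2)}")

# The maximization step is identical in both cases; only the choice of
# weights (whose preferences, which dimensions) changes the answer:
# the regulator's weights favour A, the shareholder's favour B.
```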