The Cold Room
"Stop looking at the walls, look out the window"

Posted by Aston E. on 01-27-2015

Since this topic seems to come up quite often these days, and the knee-jerk reaction seems to be fear -- quite understandable, but perhaps not entirely justified --, I wanted to write down some of my thoughts. It is not so much a reaction to any other piece as an attempt to nuance the debate, but it was triggered by Sam Harris' article on a digital apocalypse.

On a possible digital apocalypse

First of all, let us consider how restricted our brain is -- especially on its own. Most simply put, it takes some form of input and produces some form of output. The only way to verify the reliability of these inputs and outputs is... well... through these inputs and outputs. We have to probe into the realm these inputs and outputs give us access to, but it could very well be obscured or warped in any number of ways without us ever knowing. Among the most extreme examples of this is the "brain in a vat" idea, where all our inputs and outputs are hooked up to an imaginary "reality". 

It is quite obvious that any Artificial General Intelligence (AGI) would face a similar dilemma: we provide it with its inputs and outputs, so we are entirely in control of what it can and cannot perceive. In fact, I can promise you that -- were I ever to design and build an AGI -- I would test it in an entirely virtual world; in other words, I would create an "AGI in a vat". Like the Matrix, except we would be keeping the robots in there, instead of the other way around. If properly implemented, this would pose no risk at all.
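
To make the "AGI in a vat" idea a bit more concrete, here is a minimal sketch in Python of how such a containment loop could work -- all names (SimulatedWorld, Agent, run_in_vat) are hypothetical and only meant as an illustration: the agent only ever receives observations from a simulated world, and its actions can only ever change simulated state.

    # Minimal sketch of an "AGI in a vat": the agent exchanges data with a
    # simulated world only; nothing it does can touch the real one.
    # All names here are hypothetical and purely illustrative.

    class SimulatedWorld:
        """A toy stand-in for a fully virtual environment."""
        def __init__(self):
            self.state = {"time": 0, "weather": "sunny"}

        def observe(self):
            # The agent only ever sees a copy of the simulated state.
            return dict(self.state)

        def apply(self, action):
            # Actions can only modify simulated state, nothing else.
            self.state["time"] += 1
            self.state["last_action"] = action

    class Agent:
        """Placeholder for the AGI; here it merely reports what it saw."""
        def decide(self, observation):
            return "observed simulated time %d" % observation["time"]

    def run_in_vat(steps=5):
        world, agent = SimulatedWorld(), Agent()
        for _ in range(steps):
            observation = world.observe()   # input: virtual only
            action = agent.decide(observation)
            world.apply(action)             # output: virtual only
        return world.state

    print(run_in_vat())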

In all honesty, I think any group of people intelligent enough to build such a being would understand that scientifically researching its behaviour is an integral part of the project; the first nuclear reactions weren't tested by detonating an atomic bomb inside a research facility either, but through carefully designed experiments. Until we knew exactly how those processes worked, it would have been nearly impossible to design a safe nuclear reactor. Similarly, we need to study the behaviour of any AGI "in vitro" before we can reliably design a safe real-world version. Or, of course, a destructive one -- like the atomic bomb.

If we were to extend our AGI's inputs -- that is, provide it with real information --, we would still have immense control over what information it has access to: if we feed it only statistics and measurements of the weather, then that is all it can ever know.

More importantly, however, we control what outputs our AGI has: as long as it only takes in and returns data, the risk it poses to us is very limited. It would not be able to alter itself, and it would not be able to directly alter the world; its only way to do either would be to trick us into doing it for it. There are many other ways in which we can restrict an AGI: if we build its brain so that any attempted measurement of it would be either incomplete or dangerous and damaging, as is the case with our own brain, the AGI could never "upload" itself to other machines, clone itself or modify its cognitive capabilities. Again, it would have to trick us into doing so. If we give our AGI a body but no direct access to data (e.g. the internet), it will be bound by the same restrictions as any human being -- although I will admit that, looking at our past, this might not be much of a reassurance. But at least, however incredible its mental speed, its physical speed of interaction would be the same as our own.
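
As an illustration of what such an output restriction could look like, here is another small, purely hypothetical sketch: the only channel the system is given is a function that accepts data and returns data, so anything it "wants" to do in the real world has to pass through a human reading its answers first.

    # Hypothetical sketch of a data-in, data-out interface: the system is
    # never handed actuators, file handles or network access; its answers
    # are plain data structures that people inspect before acting on them.

    class WeatherModel:
        """Toy stand-in for the AGI: it only ever sees weather data."""
        def answer(self, question):
            return {"asked_about": question.get("city"), "forecast": "rain"}

    def query_agi(model, question: dict) -> dict:
        answer = model.answer(question)
        # Only accept plain data; refuse anything else.
        if not isinstance(answer, dict):
            raise TypeError("the AGI may only return plain data")
        return dict(answer)

    print(query_agi(WeatherModel(), {"city": "Nijmegen"}))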

Even if we set this epistemological dilemma aside, however, we are left with another one: what is the point of the life of an AGI, and what would its motivations be? Assuming we are not brains in a vat, we know our brain's motivation is mostly controlled by its reward system. As all of us have come to experience in our daily lives, motivation is very hard to "control" manually; it is one of the brain's most basic mechanisms, because our survival crucially depends on it. Thus, evolutionarily, all humans share certain motivational traits -- like survival and reproductive instincts.

I think, however, that it makes little sense to project these traits onto an AGI, or to assume it would automatically exhibit them. In fact, if we do not provide our AGI with a proper equivalent of a reward center, we might end up with an extremely lazy and nihilistic AGI: its existence would be the most severe existential crisis imaginable. However, unless we ensure our AGI has human-like emotions (which would arguably qualify as a kind of reward system), it would also be the least crisis-like existential crisis ever.

So, then, if we do incorporate a reward system into our AGI, it seems clear that survival and reproductive instincts would not be among the obvious options. On top of that, if we're redesigning the reward system anyway, we can cherry-pick the human-like emotions that are least likely to cause any problems. And, perhaps, incorporate some more specific goals, like the preservation of all human life. Or something simpler, like curing illnesses. Or playing chess.

And to address anyone pointing at doomsday plots in which humanity is destroyed as an unintentional by-product of an AGI pursuing its otherwise well-formulated goals -- the way we unintentionally kill many bugs and insects while walking from A to B -- or in which the destruction of humanity is the means to an end, for example to "minimize human suffering": if we are to control the reward system, that implies we also control its "reward and punish" system, since the human brain does not just encourage "positive" actions, but also inhibits "negative" ones. After all, if we are to reproduce something as complex as a brain, it is unlikely that we won't take the time to ensure that the reward system has no loopholes.
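
For what it is worth, here is one more hypothetical sketch of what such a "reward and punish" system could look like in code: the score the AGI optimises does not just reward progress towards its goal, it also punishes forbidden side effects so heavily that they can never be worth it (all names and numbers below are made up for illustration).

    # Hypothetical sketch of a "reward and punish" signal: progress towards
    # the stated goal is rewarded, while forbidden side effects are punished
    # hard enough that they can never be worth it.

    FORBIDDEN = {"harm_humans", "self_modify", "acquire_unbounded_resources"}

    def reward_and_punish(goal_progress: float, side_effects: set) -> float:
        """Combine a bounded reward with large penalties for violations."""
        reward = max(0.0, min(1.0, goal_progress))      # reward capped at 1.0
        violations = side_effects & FORBIDDEN
        penalty = 1000.0 * len(violations)              # punishment dominates
        return reward - penalty

    # A plan that fully achieves its goal but harms people scores far worse
    # than a plan that makes modest progress with no forbidden side effects.
    print(reward_and_punish(1.0, {"harm_humans"}))      # -999.0
    print(reward_and_punish(0.3, set()))                #  0.3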

In conclusion, to come back to a point I made at the start: of course there are a lot of matters to be taken into account and a lot of things that could go wrong, but I am sure that any group of people capable of developing an AGI will carefully think through what they are doing. If they do not consider at least the above points, it is unlikely that they would get their AGI to work at all. Of course there could be mistakes, but those mistakes might just as well lead an AGI to destroy itself as pose any significant danger to us -- like how most mentally ill and disabled people are more likely to harm themselves than to pose a significant threat to others. As with any scientific progress, it is only made "good" or "bad" by mankind. Therefore, I think the "best-case" scenario (what that means is up for debate) might actually be a very positive one: for example, a world where synthetic, non-biological forms of life can co-exist with us, in a way that benefits everyone. I do agree that there are as many bad scenarios as there are good ones -- as is usually the case --, but I would argue that if an AGI is to lead us to an apocalypse, it would likely do so with human-programmed intent -- like many causes of death and destruction today. If anything, the ultimate cause of such an apocalypse would be humanity's proneness to error, corruption and emotion. It would be a human apocalypse, not a digital one.

Tags: apocalypse, digital, AI, AGI, artificial intelligence
