Saturday, 20 April 2019

The fear of AI is completely misleading

Many people are raising fears about AI, in particular that a superintelligence might not be aligned with human interests. There are many reasons to be skeptical of such a view. Some have been covered on this blog before. Here is another one.

Say it turns out that artificial superintelligence (ASI) is the outcome of the processes we see now. Say that such an ASI outperforms humans at most tasks. Why would it then be a bad thing that it supersedes us? If it is the next step in evolution, is it not a good thing that it materializes?

Very many of the arguments against ASI are really grounded in fear, and fear is not a good basis for making decisions.

About AGI

Alison Gopnik has pointed out something very important about AGI and general capabilities. As you increase certain capabilities, i.e. strengthen the priors, you at the same time reduce the capacity to generalize.
This could be used to derive some theorem about the limits of capability: widening it somewhere will of necessity narrow it somewhere else.

So creating a general intelligence like a human's will necessarily reduce some other capabilities, e.g. in playing chess.

This is an indication that there is a limit to how intelligent an entity can be, and also that AGI might not supersede us after all.
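The prior–generalization trade-off can be illustrated with a toy model (my own sketch, not Gopnik's): an agent estimates a task parameter by shrinking its observations toward a fixed prior. A strong prior does well on the task it encodes, but a weak prior generalizes better to a novel task.

```python
import random

random.seed(0)

def task_error(prior_strength, task_mean, n_obs=5, prior_mean=0.0, noise=1.0):
    """The agent estimates task_mean by shrinking the sample mean toward
    its prior. Returns the squared error of the estimate."""
    sample = [random.gauss(task_mean, noise) for _ in range(n_obs)]
    sample_mean = sum(sample) / n_obs
    estimate = prior_strength * prior_mean + (1 - prior_strength) * sample_mean
    return (estimate - task_mean) ** 2

def avg_error(prior_strength, task_mean, trials=2000):
    return sum(task_error(prior_strength, task_mean) for _ in range(trials)) / trials

# A strong prior (0.8) helps on the task it encodes (mean 0)...
familiar_strong = avg_error(0.8, task_mean=0.0)
familiar_weak = avg_error(0.1, task_mean=0.0)
# ...but hurts on a novel task (mean 3), where a weak prior generalizes better.
novel_strong = avg_error(0.8, task_mean=3.0)
novel_weak = avg_error(0.1, task_mean=3.0)

print(familiar_strong < familiar_weak)  # strong prior wins on the familiar task
print(novel_strong > novel_weak)        # strong prior loses on the novel task
```

The numbers (prior strength 0.8 vs. 0.1, task means 0 and 3) are arbitrary; the point is only that no single prior strength wins on both tasks, which is the trade-off in miniature.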

Integrated functional co-evolution

This might be a completely well-understood and known thing, but I just realized that there is co-evolution within a single organism. It is probably easiest to illustrate with an example. Why do humans have arms and hands? They are really not very impressive compared to, e.g., a cat's legs, with sharp claws and enough strength to jump 4-5 times the cat's own height.
But the hands and arms have a completely unique feature: they are very generic. We can do many things with them. This would not be a very useful feature for a cat, since cats are fairly dumb and would thus not know how to use such generic tools. So, to be useful, generic hands also have to go together with a very large brain. As it happens, humans do have large brains. This is not a coincidence. Our brains have evolved together with our generic arms and hands. As we became smarter, new advanced things could be done with our hands, and it then became beneficial to have even more generic and capable hands and arms, e.g. with finer motor control.

This kind of integrated functional co-evolution is a different kind of evolution compared to how a species evolves in relation to its surroundings. It is also different from ordinary co-evolution, where different organisms co-evolve to become, e.g., more and more specialized.

It would be interesting to hear if anyone can tell whether this is a well-known process or something novel. It would also be nice to test it in some kind of computer simulation.
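As a first stab at such a simulation, here is a minimal hill-climbing sketch (the fitness function and all numbers are my own assumptions, not an established model): the benefit of generic hands is capped by brain capacity and vice versa, so the benefit term is the lesser of the two traits, minus a metabolic cost for each.

```python
import random

random.seed(1)

def fitness(brain, hand, cost=0.2):
    """Generic hands only pay off insofar as the brain can use them (and
    vice versa), so the benefit is the *lesser* of the two traits; each
    trait also carries a metabolic cost. A toy model, assumed for illustration."""
    return min(brain, hand) - cost * (brain + hand)

def evolve(steps=5000, step_size=0.05):
    """Hill climbing: randomly mutate both traits, keep the mutation
    only if fitness improves."""
    brain, hand = 0.1, 0.1
    for _ in range(steps):
        b = max(0.0, brain + random.uniform(-step_size, step_size))
        h = max(0.0, hand + random.uniform(-step_size, step_size))
        if fitness(b, h) > fitness(brain, hand):
            brain, hand = b, h
    return brain, hand

brain, hand = evolve()
print(round(brain, 2), round(hand, 2))  # the two traits end up large and close
```

In this setup a mutation that enlarges only one trait is rejected, because it adds cost without benefit; the two traits can only ratchet upward together, which is the lockstep pattern of integrated functional co-evolution described above.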