
Can someone have fun with the potential end of the world? Of course they can. It's the best way for the world to end. Not with a bang or a whimper but with a party. I leave Biden/Putin's 'Armageddon' aside in this piece to comment on another way our species might be annihilated.


One that has haunted the minds of everyone from the late Stephen Hawking to Elon Musk. You know, the visionary man-child who casually manipulates stocks on Twitter -a felony for an average Joe, but not for the richest man in the world- to become even richer. Both have suggested that biology might be merely the substrate from which artificial life arises, and that this is why the Universe is so quiet.

Of course that conjecture does not fully resolve the famed Fermi Paradox. If robots exterminated all the aliens who made them -which would explain why we have not heard from the latter- then where are all the alien robots?

Why haven't they come to get us too? Or, at the very least -since the Universe is huge- why haven't we picked up any transmissions from them? The Fermi Paradox is only seemingly simple. The Universe is too old and too big for life to have emerged just once. There should be multiple other planets with life.

But unless such lifeforms are technological, and located not merely in our own galaxy but within 15–20 light years of us, they cannot pick up our stray signals. If there are aliens -or alien robots- farther away who started transmitting centuries ago, it is impossible for us to hear them unless they target our planet specifically with a very high-power transmitter.

One far bigger and more powerful than Arecibo's in Puerto Rico, as SETI's Seth Shostak recently calculated. So a simple solution to the Fermi Paradox might be that we are the only species with technology within ~20 light years. That 'bubble' is a mere speck on the scale of the Milky Way alone. As for life in general, the James Webb Space Telescope is currently looking into that.
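The distance argument boils down to the inverse-square law: the flux of an isotropic signal falls off with the square of the distance, so a broadcast that is faint at 20 light years is hopeless at 1,000. A minimal sketch of that arithmetic (the 1 MW transmitter power is an illustrative assumption, not Shostak's actual figure):

```python
import math

LY_M = 9.461e15  # one light year in meters

def received_flux(power_w: float, distance_ly: float) -> float:
    """Flux (W/m^2) of an isotropic transmitter at a given distance,
    per the inverse-square law."""
    d = distance_ly * LY_M
    return power_w / (4 * math.pi * d ** 2)

# A stray ~1 MW broadcast, heard at the edge of our 'bubble' vs. far beyond it
near = received_flux(1e6, 20)
far = received_flux(1e6, 1000)

# The transmitter power cancels out: the ratio is (1000/20)^2 = 2500,
# i.e. the distant signal arrives 2,500 times weaker.
print(near / far)
```

Note that the ratio is independent of the transmitter's power, which is why only a deliberately aimed, very high-power beam changes the picture.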

A more complex, and darker, solution is that intelligent life is invariably obliterated by a Great Filter before it can become a multi-planet civilization and acquire 'planetary redundancy' -Elon Musk's very rational plan. Obliterated via nukes, via a lethal pandemic, via natural disasters such as gamma-ray bursts or asteroid impacts, or via its own creations.

Enter robots. If rational robots are developed and they do not need to obey their creators, they only need to take a quick peek at the history of the last 100–150 years to conclude that we are the true virus of the planet and need to be instantly eliminated. Particularly if compassion and mercy are not among their traits.

It is possible that the same happened to older civilizations. They might have been 'superseded' by their robotic creations, but they are not in our galactic neighborhood, so their signals are too weak for us to pick up.

So, how likely is such a development for our own AI and robots? Depending on which AI researcher you ask you will get anything from "That's impossible!" to "It's highly likely."

All current AI / deep learning algorithms are developed and trained for specific tasks. They are specialists, not generalists. The reason is the very high complexity involved as you increase the number of their layers -making them 'deeper'- and the significant computing resources required, particularly to train them.

A general AI would be the master of many trades. It would need, however, to be both very deep -having thousands of layers for high precision- and very wide -having multiple special AIs running 'side by side.'

The multiple special AIs would need to speak seamlessly to each other without that hindering the data and instructions flowing 'vertically' through their own stacks. In other words, they would be many but would need to function as one.

A single deep learning algorithm can be immensely complex, so if one attempted to link it in real time with others, the complexity would skyrocket, along with the required computing resources. I doubt even the fastest supercomputer today has the computing muscle for a truly general AI, even if it were dedicated to one exclusively.
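To get a feel for how the complexity skyrockets, here is a back-of-the-envelope parameter count for fully connected stacks. The widths and depths are made-up illustrative numbers, not any real model's architecture:

```python
def dense_params(layer_sizes):
    """Weights + biases for a fully connected stack of the given layer widths."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# One deep specialist: 1024 inputs, twenty 512-wide hidden layers, 10 outputs
specialist = dense_params([1024] + [512] * 20 + [10])

# Ten specialists running 'side by side', fully cross-linked at every layer:
# each layer is 10x wider, so each weight matrix is ~100x larger.
generalist = dense_params([10 * 1024] + [10 * 512] * 20 + [100])

print(generalist / specialist)  # roughly 100x the parameters
```

The point of the toy calculation: linking N networks so that every layer can talk to every other multiplies the weights by roughly N², which is why the resource bill grows much faster than the number of specialists you bolt together.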

Supercomputers are conventional Von Neumann machines, and the largest burn more energy than a modest town. They are highly inefficient and not really suitable to run a general AI. However, the newly developed neuromorphic processors, which are modeled after the human brain, are both extremely energy-efficient and massively parallel. Instead of transistors they have artificial neurons and synapses.

They are ideal for running an AI. Perhaps, in time, even a general AI. If a general AI is developed or -more likely- spontaneously emerges as a property, it would be roughly as smart as a human. But it would inevitably lead to a general super-AI in time -perhaps in mere months- and all bets are off about how that would behave.

If the aliens were wasted by their creations, it was done by general super-AIs, not simple general AIs. A general super-AI would be the ultimate master of all trades. It could trivially hack even the most secure facility and, of course, it could make copies of itself. Yes, I'm talking Skynet, assuming it decides to be one.

I strongly doubt a general AI -let alone a super-AI- can be made by humans, at least not before mid-century. But one could emerge as a property in a sufficiently complex computing environment. If you take a deeper look at what those neuromorphic thingies can do, you start to worry.

They are the missing hardware for AI algorithms of all kinds. Life itself is thought to have emerged as a property of the lifeless Earth, and later intellect and consciousness emerged in our primal minds.

They are all examples of "the whole is more than the sum of its parts." They cannot be disassembled reductionist-style and reassembled like a machine, because their higher property is lost. The same might happen with artificial intelligence and, eventually, artificial life.

Are you worried yet? Me? Not particularly. It would be a far more fascinating way to kick the bucket than due to a little man with a Tsar syndrome being too embarrassed to admit defeat. I would also not have to see his deadpan mug every day in the news.

Musings by Nikolaos Skordilis