Abû Hurayrah relates that Allah’s Messenger (peace be upon him) said: “Islam began strange, and it will become strange again just like it was at the beginning, so blessed are the strangers.” [Sahîh Muslim (1/130)]

Thursday, July 9, 2015

Are we approaching Singularity?

And now, on a completely different note...

As a media guy, I make it a point to notice certain trends in television and cinema, which I feel are broader reflections of globalised society. Something that has struck me in the past few months is a resurgence of AI (artificial intelligence)-themed movies and literature. Last year's release of Transcendence, the latest Avengers movie, and the new Terminator all feature antagonists created when AI goes awry, machines taking on a life and intelligence of their own with disastrous results. Could this be a pushback against recent developments in robotics and computing? Or is it simply Hollywood regurgitating the themes of the 50s and 60s involving self-aware computers?

Singularity 

Research on artificial intelligence has been going on for decades. The gist of AI is to finally create a machine intelligent enough to carry out any task a human being can. More than that, the machine should be capable of 'recursive self-improvement': able to build better versions of itself without outside assistance.

Through successive cycles of self-improvement, the idea is that one day the machines will reach a tipping point, a sudden quantum leap in intelligence. This phenomenon is called the 'technological singularity' and is fodder for half the sci-fi movies made before and after The Matrix appeared.

This isn't some fringe quack science, though. Scores of futurists and scientists have dedicated themselves to the task of realising AI by the middle of the 21st century. There is even an annual conference dedicated just to exploring this issue. Rudimentary AI-based software is already on the market, used by sites such as Google and Facebook.



Mad Science

The pursuit of AI is driven by a form of technological utopianism, a belief that computers can and will tackle all of humanity's dilemmas. If you think that's a stretch, NASA scientist Richard Terrile said recently, "The benefits of AI are that it could solve all the world's problems. All of them. Seriously. Technology could probably solve all of them in one form or another."

Not all scientists are so sure. Among the doubters are the famous Stephen Hawking and even Bill Gates, who feel that once machines reach a critical point of intelligence, they could move beyond human control. Hawking speculates whether AI could be the last human invention, ever, especially if machines are put in the decision-makers' seat for crucial tasks. Many countries are already developing battlefield robots, for example. Any plan to put safeguards in place to inhibit the actions of the machines is as limited as our own intelligence.

Problems with AI

AI is based around bio-mimicry, using machines to copy biological patterns found in human beings. To achieve intelligence, researchers work on algorithms modelled on the neurological activity of our brains, in order to simulate what manifests as intelligence.
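To give a flavour of what these neuron-inspired algorithms look like in practice, here is a minimal sketch, in Python with made-up numbers, of a single artificial "neuron", the basic building block of the neural networks much of this research leans on. The weights and inputs below are purely illustrative, not any real system's values.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A single artificial 'neuron': a weighted sum of input signals
    passed through a squashing (sigmoid) function, loosely mimicking
    how a biological neuron fires once its inputs cross a threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # output between 0 and 1

# Illustrative example: three input signals and weights that, in a real
# system, would be adjusted ("learned") from data rather than hand-picked.
print(artificial_neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=-0.3))
```

Stack thousands of these together and tune the weights automatically, and you have the kind of pattern-matching software the big web companies already deploy; whether that ever amounts to intelligence is exactly the question at issue.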

The obvious problem is that defining what exactly constitutes intelligence is difficult. According to Gary Marcus, Professor of Psychology and Neuroscience at New York University, "intelligence is not a single-dimensional trait (like height or weight, something that can be measured with one number) but a complex amalgam of many different cognitive traits."

Another problem is that a machine replicating our patterns is, in the end, just a simulacrum, a replication, a photocopy. That is a far cry from a living, thinking entity capable of self-consciousness. Duplicating certain neural impulses in a computer doesn't mean that the computer is aware of its own identity the way we are.

Professor Noam Chomsky dismisses the possibility of singularity altogether, citing our limited understanding of the human brain as the biggest hurdle. For obvious reasons, we don't perform live testing on human brains the way researchers carve into mice and frogs, and there will always be a glass ceiling on our level of knowledge anyway.

As much fun or dread as we may gain from the thought of living in an AI world, I myself remain doubtful that it is even remotely feasible. I see consciousness as a unique gift from the Almighty that we experience, a cognitive acceptance of our material and spiritual existence. The thought that this can be duplicated on a hard drive offends my sensibilities somewhat. But then, it's a crazy world, so few things can be completely ruled out.