Hype vs feasibility: The fear of AI taking over humans
A couple of years back, I came across a great TED talk by Yann LeCun, Chief AI Scientist at Meta (formerly Facebook), about deep learning, neural networks and the future of AI. Yann talks at length about the fear of AI destroying mankind, and asks whether it stems from our tendency to evaluate every situation through the lens of human emotions. That is, are we projecting anger, jealousy, competition, the joy of success and the fear of failure onto AI simply because our own realm of consciousness is bounded by those emotions?
Recently, I have been reading a great book called The Extended Mind: The Power of Thinking Outside the Brain by Annie Murphy Paul. The focal point of this book is that learning and thinking are not confined to the brain, the central organ of our body with its melange of neurons and synapses. Annie brings out some critical factors that contribute to the human thinking process beyond the brain (hence “extended mind”):
- Thinking with our gestures, movements and senses
- Thinking with natural and built spaces
- Thinking with experts and peers
While reading this book, I remembered Yann’s TED talk, and specifically the point about applying the human “umwelt” of consciousness (the world as it is experienced by a particular organism) to machines: how relevant and feasible is that?
With great power comes great responsibility
This well-worn adage is very relevant to how destructive AI can be when not used constructively. Take the example of deepfakes: AI-generated videos and images. They can wreak havoc if hackers plaster a sensitive, religious or confidential message onto a fabricated video of a politician or other powerful figure. Deepfakes have been called one of the biggest threats to democracy. One study last year reported a 47% increase in deepfakes over the previous year, and Meta’s (then Facebook’s) Deepfake Detection Challenge in 2020 concluded that the best available ML model could detect deepfakes with only about 65% accuracy. The proliferation of deepfakes in the coming years will make this situation even worse. However, this problem is not created by AI but assisted by humans: humans (hackers) use the technology irresponsibly to their own advantage, and a model generates deepfakes because humans feed it data for unethical ends.
Another fear concerning AI is the loss of jobs to automation. According to research by McKinsey, automation will displace around 15 percent of the global workforce, or about 400 million workers, between 2016 and 2030. However, the same research predicts that new jobs will be created by rising incomes, increased spending on healthcare, and investment in infrastructure, energy, technology and so on. Consequently, the displaced workforce will need to invest in re-skilling to meet the new demand and thrive in the new workplace. Job descriptions will change ever more rapidly as partial automation becomes commonplace in labour-intensive industries.
Will AI be destructive on its own?
Some “what-if” concerns about AI are relevant. Stuart Russell, a leading computer scientist, asks a pertinent question in his book Human Compatible: AI and the Problem of Control:
“A question could be asked to the leading figures in AI: “What if you succeed?” The field’s goal had always been to create human-level or superhuman AI, but there was little or no consideration of what would happen if we did.”
But the questions I want to discuss in this article are more fundamental than those above. I will lay out a few facts about the process of human thinking, learning and cognition, and then contrast them with how AI learns. For AI to become destructive on its own and take over humans, it first needs to achieve the basic tenets of human intelligence: sensory perception, emotional intelligence, creativity, social engagement and so on. In this two-part series, I will discuss the following four areas and how each contrasts with AI:
- How emotional intelligence and evolutionary advantage are critical to human cognition
- How the brain’s model of learning is based on embodied cognition and not just data
- Why the brain’s ability as a general-purpose computing machine is unmatched
- What limitations are posed by every organism’s umwelt, and how to extrapolate them to AI
1. Emotional Intelligence and evolutionary advantage are critical to human cognition
One of the most celebrated intellectuals and historians of our time, Yuval Noah Harari, writes in his seminal book, Sapiens:
“You could never convince a monkey to give you a banana by promising him limitless bananas after death in monkey heaven.”
But we Homo sapiens bond over such shared myths, beliefs, emotions and identities. Our ability to come together as a community and share a common understanding of culture, religion, god, devil, heaven and hell, friendships, enemies, families and so on differentiates us from other animals. Our feelings about security, companionship, rivalry, jealousy and control have their roots back on the savannah, where our ancestors were trying to bond and survive as a community. Our emotions are an integral part of who we are and play an important role in decision-making at a subconscious level: they reflect what information means to us. The smell of freshly baked muffins, for example, evokes a very different emotional response than the foul smell of vomit.
AI algorithms learn from the data we provide. In supervised learning, we teach the machine how to identify patterns; in unsupervised learning, the machine identifies patterns on its own. Reinforcement learning is a little different: the algorithm learns from reward feedback as it acts. In every case, decision-making is based solely on data, with no sense of emotion (although bias does get introduced into an algorithm through bias in the underlying data). How would an emotion-deprived AI develop feelings and make empathetic decisions without any sense of “how it felt the last time I did this”? The sketch below makes the contrast concrete.
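Here is a minimal sketch of the three learning paradigms, using scikit-learn on invented toy data (the dataset and models are illustrative assumptions of mine, not from any particular system). Note that every update is driven purely by numbers; no emotional state ever enters the loop.

```python
# Toy illustration of the three learning paradigms (illustrative only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.random((100, 2))                # features: just numbers, no feelings
y = (X[:, 0] > 0.5).astype(int)         # labels that we, the humans, provide

# Supervised: we teach the machine the pattern through labelled examples.
clf = LogisticRegression().fit(X, y)

# Unsupervised: the machine looks for structure in the data on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)

# Reinforcement learning (outline only): the agent learns from reward
# feedback rather than labels, e.g. a tabular Q-learning update:
#   Q[s, a] += alpha * (reward + gamma * max(Q[s_next]) - Q[s, a])
# In all three cases the update is driven purely by data or reward;
# nowhere does an emotional state enter the computation.
```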
2. Brain’s model of learning is based on embodied cognition
The famous neuroscientist and technologist David Eagleman writes in his book, The Brain: The Story of You:
“You become who you are not because of what grows in your brain, but because of what is removed!”
Yes, you read that correctly! Adults and children have roughly the same number of brain cells. By age two, a child has more than a hundred trillion synapses (connections between neurons), roughly double the number in an adult. As the child grows, about half of those synapses are pruned or pared back: when a synapse is not used, it weakens and the connection is lost, just like an unused path in a forest. A child develops and learns from the cues, mimicry and feedback of her environment, family and community. The paths she uses stay; the unused ones are pruned away.
Now contrast this with how AI learns. This principle of learning by reduction simply does not apply to artificial neural networks. With every epoch, connections in the network strengthen and more meaning accumulates in the learned representation. In face recognition, for example, the features become progressively sharper: from the rough round shape of a face to the identification of nose and cheeks.
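To see this “learning by strengthening” in miniature, here is a single sigmoid neuron trained with gradient descent in NumPy (the data and target pattern are invented for the example):

```python
# Toy single-neuron example: weights start at zero and are strengthened
# toward the target pattern with every epoch, the opposite of pruning.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w > 0).astype(float)      # the pattern to be learned

w = np.zeros(3)                         # connections start out meaningless
lr = 0.5
for epoch in range(100):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))  # sigmoid "neuron" output
    w -= lr * X.T @ (p - y) / len(y)    # weights adjusted every epoch

print(w)  # the weights have grown in the direction of true_w
```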
Secondly, as Annie Murphy Paul emphasizes in her book, The Extended Mind, human learning is not limited to the brain but is an experience of embodied cognition: our thinking is extended by our bodies (gestures, movements), by the spaces around us (natural and built), and by our interactions with others (community, family, peers, friends, teachers, experts and so on). How will AI ever get the advantage of this embodied cognition when it depends solely on data?
Can we create AI bot farms where models learn from each other? Maybe we can. Multi-task learning in deep neural networks takes its motivation from how babies learn, training a model on a series of parallel tasks while sharing parameters among them. Generative adversarial learning goes a step further: a generator model and a discriminator model train each other, as sketched below.
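Here is a minimal sketch of that adversarial “mutual training” in PyTorch. The architecture, data distribution and hyperparameters are arbitrary choices of mine, just enough to show the mechanism:

```python
# A tiny GAN: the generator learns to mimic a 1-D Gaussian while the
# discriminator learns to tell real from fake; each drives the other.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "real" data: N(2.0, 0.5)
    fake = G(torch.randn(64, 4))            # generator's current attempt

    # Discriminator step: learn to label real as 1 and fake as 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to fool the (freshly updated) discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, G's samples should cluster near the real mean of 2.0.
```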
However, the fundamental question remains — how will AI surpass the advantage bestowed upon us by evolution?
3. Brain is a general-purpose computing machine
In his recent book, Livewired, David Eagleman presents a very interesting concept of the brain as a general computing machine. He argues that brain regions care only about problem-solving, irrespective of the sensory channel by which information arrives. Photons falling on the retina, air-compression waves at the ears, pressure on the skin: all of it is converted into the common currency of electrical signals at the level of neurons. As long as these incoming signals represent something about the external world, the brain will learn how to interpret them. For example, it figures out how to extract an object’s shape from incoming signals regardless of the path they take, whether through the eyes or the skin.
The human brain is liveware (a term coined by Eagleman), not software. It learns, interprets, finds patterns and creates correlations from any type of input it receives via any channel. Eagleman’s company Neosensory builds on this idea, creating devices such as a wearable vest or wristband that help deaf people “hear” sound through sensory substitution: sound waves are converted into patterns of vibration on the skin. The brain of a profoundly deaf person gradually learns to interpret these signals and understand their meaning! Watch this amazing TED talk by David Eagleman to appreciate how simply ingenious the concept is.
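To give a feel for the general idea (this is my own hedged sketch, not Neosensory’s actual algorithm), sensory substitution can be as simple as splitting audio into frequency bands and mapping each band’s energy to one vibration motor:

```python
# Hedged sketch of sensory substitution: frequency bands -> motor intensities.
import numpy as np

def audio_to_vibrations(samples: np.ndarray, n_motors: int = 8) -> np.ndarray:
    """Map one audio frame to per-motor vibration intensities in [0, 1]."""
    spectrum = np.abs(np.fft.rfft(samples))      # magnitude spectrum
    bands = np.array_split(spectrum, n_motors)   # one frequency band per motor
    energy = np.array([band.mean() for band in bands])
    return energy / (energy.max() + 1e-9)        # normalise to [0, 1]

# Example: a 440 Hz tone mostly activates the low-frequency motors.
rate = 16_000
t = np.arange(1024) / rate
frame = np.sin(2 * np.pi * 440 * t)
print(audio_to_vibrations(frame).round(2))
```

The brain then does the hard part: given a consistent mapping like this, it learns over time what the vibration patterns mean.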
Coming back to the topic of AI, we are still very far from AGI, artificial general intelligence. Rodney Brooks, an MIT roboticist and co-founder of iRobot, predicts that AGI won’t arrive until around the year 2300! However, small steps are being taken to make machine learning more human-like. Zero-shot learning is one such method: it enables a model to recognize things it has never seen before. When enough data is not available for every possible class, zero-shot, one-shot and few-shot methods classify objects based on similarity to known objects, prior knowledge, or learned constraints and structure in the dataset, as in the sketch below.
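Here is one illustrative zero-shot scheme, an attribute-similarity toy of my own construction rather than a specific published method: classes are described by attribute vectors, so an unseen class can be recognized by comparing an input’s predicted attributes against descriptions of classes never seen in training.

```python
# Toy zero-shot classification via attribute similarity (illustrative only).
import numpy as np

# Attribute descriptions (prior knowledge): [has_stripes, has_mane, is_grey]
class_attributes = {
    "tiger": np.array([1.0, 0.0, 0.0]),
    "horse": np.array([0.0, 1.0, 0.0]),
    "zebra": np.array([1.0, 1.0, 0.0]),  # never seen during training!
}

def classify_zero_shot(predicted_attrs: np.ndarray) -> str:
    """Pick the class whose attribute description is most similar."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max(class_attributes,
               key=lambda c: cos(class_attributes[c], predicted_attrs))

# Suppose an attribute predictor (trained only on tigers and horses)
# reports "striped and maned" for a new image:
print(classify_zero_shot(np.array([0.9, 0.8, 0.1])))  # -> "zebra"
```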
However, the fundamental question still remains: even if AI develops the ability to process any type of data and learn from it, it will still lack the emotional feedback of sensory perception, the ability to “feel” from the correlation of senses and, most importantly, the dopamine surge that follows successfully acquiring a skill!
4. Every organism’s umwelt has its limitations
Now, let’s talk about something more fundamental: the hard problem of consciousness. One of the most engaging and thought-provoking podcasts I have come across on this subject is On Consciousness with Annaka Harris, hosted by Rob Reid (the After On podcast). Rob and Annaka delve deep into what it means to have consciousness, and how certain combinations of matter in the brain seem to give rise to it.
They debate some interesting theories. For example, plants don’t have brains or nervous systems, yet they respond to sound and light; so are they conscious? Rob refers to an interesting study by Suzanne Simard on how trees communicate with one another by sharing minerals and sending signals about poisonous substances. Annaka refers to claims by some scientists that even transistors have consciousness, though one very different from human consciousness: their umwelt is limited to switching ON and OFF.
So if we apply the same theory to AI, it too may have an umwelt, a realm of consciousness different from human consciousness. Would we ever be able to understand it, or train it to “feel” or “perceive” like humans? Try imagining a colour outside the visible spectrum and you will understand this argument. Or try explaining purple to a colour-blind person with a red-green deficiency: you have to experience the colour to know what it is. No matter how advanced our brains are, our perceptions are limited by our umwelt. The same may apply to a future AGI: its perceptive abilities may be limited to its own realm of consciousness, one built around streams of data and artificial feedback from its surroundings, minus the emotional intelligence.
In presenting all the above arguments, I do not want to belittle any research in the field of artificial neural networks or the progress towards AGI. I have always been very curious about this field and have worked briefly in it. In fact, my journey of understanding how the human brain works stems from a quest to understand the basics of artificial neural networks back in 2018. In the process, I went deeper into the areas of neurons and synapses, intelligence, sensory substitution, robotics and so on, and came across wonderful work by leading intellectuals and scientists such as Dr. David Eagleman, Ray Kurzweil, Rob Reid, Andrew Huberman and Stuart Russell.
Please comment if you have similar or counter-arguments, or resources that can throw more light on this subject, or even simply to criticise this article; all comments are welcome!
Acknowledgements: Sincere thanks to Bhushan Garware, an AI expert, for peer-reviewing this article and providing valuable feedback and references!