A.I.: The Trouble With It and Human Intelligence

Artificial Intelligence is becoming more and more intriguing. The human brain – intelligent and unique – is challenging psychologists, who are determined to unravel its complexities and unlock its possibilities to enhance human life.

Using Artificial Intelligence (AI), scientists have already made breakthroughs in human-machine interaction through Siri, self-driving cars, chatbots and tools that can aid in diagnosing mental and medical conditions, and have even pushed forward artificial-womb laboratory facilities in an attempt to replace standard pregnancies.

The human brain has evolved over time to respond to survival instincts, harness intellectual curiosity and manage the demands of our surroundings. Once humans got an inkling of the order in our environment, we began our journey to replicate and emulate nature.

AI and The Human Brain

Our success in imitating nature has gone hand in hand with advances in science and technology. While the human brain finds ways to exceed our physical capabilities, the combination of mathematics, chemistry, algorithms, computational methods and statistical models is accelerating our scientific pursuit.

Today, AI has grown from simple data models for problem-solving to artificial neural networks: computational models based on the structure and function of the brain's biological neural networks.
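To make that analogy a little more concrete, here is a minimal sketch in Python (with NumPy) of what a single layer of an artificial neural network computes; the inputs, weights and function names are invented purely for illustration and are not drawn from any system mentioned in this article.

```python
import numpy as np

def neuron_layer(inputs, weights, biases):
    """One layer of artificial 'neurons': each neuron takes a weighted sum of its
    inputs plus a bias, then passes it through a non-linear activation (a sigmoid
    here), loosely echoing how a biological neuron integrates signals before firing."""
    z = inputs @ weights + biases        # weighted sum of incoming signals
    return 1.0 / (1.0 + np.exp(-z))      # squash to a 'firing rate' between 0 and 1

# Illustrative example: 3 input signals feeding 2 artificial neurons
rng = np.random.default_rng(0)
x = np.array([0.5, -1.2, 3.0])           # input signals
w = rng.normal(size=(3, 2))              # connection strengths (these are what training adjusts)
b = np.zeros(2)                          # biases
print(neuron_layer(x, w, b))             # the two neurons' outputs
```

Stacking many such layers and adjusting the connection strengths from data is, in essence, what the 'deep learning' mentioned later in this article refers to.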

Enter Dr. Matthew Botvinick – DeepMind

Dr. Matthew Botvinick, a professor at Princeton University and the head of the Neuroscience Laboratory at DeepMind, a company that works closely with Google, aims to solve the challenge of intelligence by developing more general and capable problem-solving systems using artificial intelligence.

More specifically, Dr. Botvinick focuses on uncovering the complex neuronal networks of the human brain, to understand how that knowledge can be used to replicate the natural system in advanced artificial ones.

This pioneering approach brings together machine learning, neuroscience, engineering, mathematics, simulation and computer infrastructure. DeepMind systems are now capable of diagnosing eye diseases more effectively, and can even predict protein shapes in ways that may change how drugs are created.

Brain Reward System 

Prior to DeepMind's research, scientists modelled reinforcement learning, the system that tells our brains 'how well something is going' by accumulating rewards, using a single value: a prediction of the average future reward. DeepMind's insight was that the system works better when, instead of taking that single number, you generalise it into a distributional representation of the possible outcomes.

For example, traditional reinforcement learning can be thought of as a gambler estimating their probability of winning a casino game. Usually that is captured in a single number representing the average expected return. The new approach to reinforcement learning proposes that the gambler instead keep track of a range of values, the probabilities of winning and of losing by different amounts, rather than reducing that distribution to a single average number.
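As a rough illustration of the difference, here is a short Python sketch with made-up payoff numbers (purely hypothetical, not taken from the DeepMind paper) comparing the traditional single-number estimate with the distributional view of the same game:

```python
import numpy as np

# Hypothetical casino game: possible payoffs and their probabilities (illustrative numbers only)
outcomes      = np.array([-10.0, -1.0, 0.0, 5.0, 50.0])   # lose big, lose small, break even, win, jackpot
probabilities = np.array([0.05, 0.45, 0.20, 0.25, 0.05])

# Traditional (scalar) view: collapse everything into one average number
expected_value = float(np.dot(outcomes, probabilities))
print(f"Single-number value estimate: {expected_value:.2f}")

# Distributional view: keep the whole spread of outcomes and their probabilities
for payoff, prob in zip(outcomes, probabilities):
    print(f"  payoff {payoff:>6.1f} with probability {prob:.2f}")
```

Here the single-number estimate works out to 2.80, which hides the fact that half of the probability mass sits on losing outcomes; the distributional view keeps that information, and it is this richer representation that the research described next credits with faster learning.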

In the paper 'A distributional code for value in dopamine-based reinforcement learning', neuroscientists at DeepMind found that this method significantly accelerates reinforcement learning because it drives richer representations.

Their research helped explain how the brain's reinforcement-learning system works, which in turn enables AI researchers to replicate it in computational algorithms and so create more intelligent programmes.

The Missing Human Element

There are certainly discussions about conscious machines and their potential to slip beyond our control. But as well as tackling the question of how computers can become more human, we have to think about how humans can work like computers.

A significant amount of AI research focuses on development and risk factors; however, little to no research considers how humans can develop alongside the technology.

There are phone apps such as 'Replica', a tool that can create art in any style within seconds of your typing what you want it to produce. Recent advances in generative AI mean the threat of being replaced by a computer has extended beyond artists to other kinds of creative and knowledge workers.

ChatGPT, a new text-based chatbot built by OpenAI, can pen essays, teach users about various complex subjects and even act as a therapist. 'Deep learning' is developing so rapidly that it is hard to imagine where it will end.

What if AI becomes capable of producing music, lyrics and sounds better than humans can? Will we want a concert sung by an AI robot just because it may do it better? Neuroscientists are building intelligent systems based on how our brain works while ignoring what humans actually want. Ironically, that is easier said than done, because humans usually want different things.

Global Regulation and Ethical Considerations of AI

'If you have an agent that can do something the human cannot do, under what conditions do you want them to do it?' Because artificial intelligence does things that would normally require human intelligence, moral guidelines deserve just as much consideration as the decision-making itself.

Experts who have expressed worries point to governance concerns. Whose ethical systems should be applied? Who gets to make that decision? Who is even responsible for implementing ethical AI? Geopolitical and economic competition are the main drivers of AI development, while moral concerns take a back seat.

AI technology does the 'heavy lifting' on sophisticated tasks, but because artificial intelligence is programmed and trained by humans to undertake those tasks, it can also replicate human bias.

For example, if predominantly white male scientists collect data on predominantly white males, the AI designed from that data will replicate their biases and prejudices.

AI presents three major areas of ethical concern: privacy; bias and discrimination; and, most difficult of all, the role of human judgement. Given the power of AI, some argue it should be tightly regulated, but there is little to no consensus on how that should be done or who should make the rules.

Conclusion

Currently, AI is a technological feast for scientists and engineers, but it raises questions that go beyond technology and engineering into social, political and global concerns: what type of society do we want?

As AI technology continues to develop, scientists must ensure that AI systems are governable, transparent and understandable; that they can work effectively among humans; and that their work and output remain consistent with human values and morals.

What are your thoughts on the rapidly evolving arena of artificial intelligence? Let us know in the comments below!

Natalia Bednarz

Natalia Bednarz is a 23-year-old first-class honours student at Staffordshire University, studying Forensic Psychology. Invested in the human condition, she has a variety of psychology research interests, including therapy methods, gender stereotypes, schizophrenia and major depression. She has a passion for delving into the human psyche. Her goal is to become a licensed psychologist and achieve a PhD in Clinical Psychology. She believes everyone deserves help and that it is far more exciting to find the good in people than to dwell on the bad in them.
