Matthew Cummings – Artificial Intelligence: Costs and Benefits of Associations with Human Cognitive Function

Matthew Cummings

English 102

Artificial Intelligence: Costs and Benefits of Associations with Human Cognitive Function

By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.

— Eliezer Yudkowsky, 2008

 

The field of artificial intelligence (AI) is a broad spectrum of research and speculation with a singular unifying goal: to recreate human cognitive abilities in man-made systems. One of the original definitions of computer-based artificial intelligence was coined by John McCarthy in 1955, who described it as “the science and engineering of making intelligent machines that have the ability to achieve goals like humans do” (as cited in McClelland, 2017). Since then, the field of artificial intelligence has made massive advances, prompting widespread speculation about its potential applications. AI is becoming increasingly integral to the normal function of everyday life, with companies like Google, Amazon, and Facebook utilizing it to increase the functionality of their products and the productivity of their users. With such a powerful effect on the way humans operate in such a short span of time, it is becoming more and more crucial to take a closer look at what AI is, what it is capable of, and how humanity can best control its effects. Most of the recent and major advances in the field have occurred in branches of AI that derive their neural architecture and processes from human cognition. However, although the fundamental idea behind artificial intelligence is to recreate human cognitive abilities in man-made systems, perfectly recreating human cognition may produce unsafe or undesirable AI systems, making beneficial coexistence between AI and society a remote possibility.

To achieve human-like cognitive functions, researchers have taken a variety of approaches. Guoyin Wang (2018) notes, in his article “DGCC: A Case for Integration of Brain Cognition and Intelligence Computation,” that some functions of the human brain are very difficult to recreate with AI. Conversely, Wang explains that some functions, such as logic and mathematical operations, are much more compatible with AI architecture than with the human brain. The first major advances in AI were in logical applications, which are relatively easy to implement in a binary environment. This sub-branch of AI is generally referred to as symbolic AI, in which the problems and subjects the AI must decipher are explicitly defined in its code. To mitigate the difficulty of emulating certain human processes in symbolic AI, researchers began to look toward different methodologies. This movement yielded the idea of sub-symbolic systems, which process problems without the need for explicitly defined variables. Bhatia (2017) describes sub-symbolic systems as making up the majority of the AI that the general public interfaces with every day, including machine learning, neural network, and deep learning systems. In reference to this sub-category of AI, Hassabis, Kumaran, Summerfield, and Botvinick (2017) state that “[s]ome key AI advances have been inspired by neuroscience and psychology, reinforcement learning and deep learning being prime examples” (as cited in Lieder & Griffiths, 2019). That is to say, the sub-symbolic approach more closely resembles the way the human brain performs a given cognitive process.
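To make the distinction concrete, the sketch below (a hypothetical illustration in Python, not drawn from any of the cited sources; the spam-filter framing and all names are invented) contrasts a symbolic rule, where the knowledge is written explicitly in code, with a tiny sub-symbolic perceptron, where the knowledge emerges as numeric weights learned from examples.

```python
# Symbolic AI: the rule is explicitly authored and human-readable.
def symbolic_is_spam(message: str) -> bool:
    banned_words = {"winner", "free", "prize"}
    return any(word in message.lower() for word in banned_words)


# Sub-symbolic AI: a minimal perceptron; no rule is written down,
# the "knowledge" lives in the learned weights and bias.
def train_perceptron(examples, epochs=25, lr=0.1):
    """examples: list of (feature_vector, label) pairs, labels 0 or 1."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias


# Toy features: [mentions "free", mentions "prize", sent at night]
training_data = [
    ([1, 1, 0], 1),
    ([1, 0, 1], 1),
    ([0, 0, 1], 0),
    ([0, 1, 0], 1),
    ([0, 0, 0], 0),
]

print(symbolic_is_spam("You are a WINNER of a free prize"))  # True, by the explicit rule
print(train_perceptron(training_data))                       # learned weights, not authored rules
```

The point of the contrast is not the toy task but where the knowledge resides: in the first function a person can read and audit the rule, while in the second the behavior is distributed across weights that must be interpreted after the fact.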

Artificial intelligence is currently capable of a multitude of functions, ranging from human-like abilities such as voice recognition and locomotion to more computer-efficient tasks such as data analysis and pattern recognition. AI has surpassed human capability in numerous singular and narrowly defined tasks. However, for all the advancements of current AI, its abilities have not yet matched the general level of intelligence displayed by the human mind. As such, the entirety of existing AI systems and research can be classified under the term Artificial Narrow Intelligence (ANI). Although the foundational idea of AI is to recreate human intelligence on artificial platforms, current research has yet to realize that definition to its fullest extent. The next milestone in AI is referred to as Artificial General Intelligence (AGI), in which an AI is capable of performing any mental task at least as well as the average human. The final level of AI, Artificial Super Intelligence (ASI), would be capable of surpassing the collective cognitive capabilities of the human race as a whole. AGI and ASI are theoretical capabilities that allow researchers to hypothesize about the potential outcomes and directions of the field. At these levels of AI, the potential for unknown outcomes and danger becomes exponentially greater without proper regulations on AI research practices. How an AI platform performs its cognitive processes plays a crucial role in the transparency and safety of AI system implementations.

The ability to produce safe AI lies in understanding how it functions and being able to accurately predict its outcomes prior to implementation. Currently, the understanding of human cognition has many gaps, and as such, its implementation in AI constructs creates a degree of uncertainty about the AI’s behavior. Because of the potential capacity of artificial systems and the differing perspectives among researchers, the debate over defining the boundaries of safe AI has become nearly as perplexing as the creation of human-level AI itself. In his book Life 3.0: Being Human in the Age of Artificial Intelligence, Max Tegmark (2018) describes the intelligence explosion, often referred to as the singularity, and the possible futures that could result from it. Citing recent advancements and current research methodologies, Tegmark explains how the creation of AGI could impact the world in positive ways if the AI remains docile enough to stay loyal to its creators. It could also take a destructive turn if the AI decides to take control of its own destiny. The underlying idea Tegmark continually returns to is that a successfully created AGI would feasibly be capable of exponential, recursive improvement of its own systems, inhibited only by the laws of physics. Should this singularity occur, Tegmark proposes that controlling the AI system after the fact would be impossible; the only possibility of control lies in preventative measures taken beforehand to ensure a docile demeanor. This perspective provides researchers with a thoughtfully prepared view of the possible futures of the human race, should AGI be achieved. While discussing the future possibilities of AGI can inform the proper conduct of current AI research, it is also worth understanding how negligent fictional depictions can lead to obstructive outcomes in the advancement of AI.

Much of the general perception of what AI is can be attributed to science-fiction entertainment, and this could have a negative impact on the design of safe AI systems. The Royal Society (2018) warns that, for the safe implementation of AI, it is important to properly guide general perception, stating, “[f]alse expectations can mean that a sector is allowed to grow without further intervention by governments. . . . As a result, a sector might grow slowly, reducing potential benefit. Or, it might grow fast, but in ways that are not aligned with social values, or in ways that lead to a bubble that will cause harm when it bursts”. One of the first depictions of AI in fiction dates back to an article written by Samuel Butler in 1863 titled Darwin among the Machines. This apocalyptic view saw mechanical entities as a force capable of taking over or wiping out the human race. While this is not the first example of inanimate objects being imbued with human-like intelligence, it is one of the first to imply a human-constructed consciousness. According to Brundage and Hwang (2018) in a Y Combinator interview, despite the recent surge of AI into mainstream applications and most people’s daily lives, many people still think of AI as a human-like machine capable of everything from benevolent servitude to maniacal domination. This effect could be due to entertainment’s portrayal of AI. Blockbuster depictions like Terminator, Avengers: Age of Ultron, Westworld, and The Matrix explore the apocalyptic perspective, while others cast AI in a more positive light, such as Data from Star Trek or TARS and CASE from Interstellar. While most depictions in media tend to focus on the more surface-level and fantastical elements of AI, these representations have a powerful effect on the way the general public perceives AI and can hinder progress by serving as a false rubric for how to approach safe AI systems.

The safety of an AI system ultimately relies on researchers’ ability to adequately explain how the AI reached its conclusions in order to accurately predict the behavioral outcome of the system. During a phone interview, I spoke with Dr. James Crowder (personal communication, 2019) to investigate the safety concerns researchers face in developing AI systems. Dr. Crowder, an AI researcher with over 20 years of experience, responded,

Real artificial intelligence can’t be predictable. When DARPA [Defense Advanced Research Projects Agency] keeps telling me they want systems that can learn and think and reason like people, my first response is, you don’t know some of the people I know then. Do you want it to learn and think and reason like Stephen Hawking or Charles Manson? They’re both people[,] but they learn to think really differently.

Dr. Crowder proposes that human cognitive abilities span a very wide spectrum and are perhaps not the best model on which to build AI. Charles F. Stevens (2011), an American neurobiologist at the Salk Institute in La Jolla, explains that to completely recreate the human mind, computer programmers would have to recreate human irrationality in their programs. This suggests a paradoxical relationship between AGI and safe implementation. True human-like AGI would have to be capable of unexplainable and irrational thought processes similar to the current understanding of human thought mechanisms. Unfortunately, this would put researchers at a disadvantage in predicting the system’s behavior accurately enough to deem it acceptably safe and trustworthy. Current AI systems, capable of only narrow human-like abilities, are already powerful enough to be dangerous if used for the wrong purposes, according to Crowder (2019).

For AI systems to be beneficial to humanity, they must not only be functionally optimized but also created and used with societal and economic implications in mind. A functionally optimal AI system must utilize algorithms that make the best use of resources while also accounting for bias in the software. One example of how biological cognition can improve AI systems comes from a neurological study conducted by Xie et al. (2016), in which the group found evidence to suggest that computation in biological brain function could be described by the simple formula N = 2^i − 1. Xie et al. describe it as a “wiring logic that illustrates how neural networks go from specific to general,” meaning that as the number of information types (i) increases, the number of neural connections (N) increases exponentially. Such improvements in the interdisciplinary knowledge between neuroscience and AI allow researchers to more confidently and efficiently design future neural networks. Despite recent advancements in understanding the physical mechanisms of brain function, human-like AI systems still face other obstacles that reduce their potential for beneficial use.
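The formula has a simple combinatorial reading: with i distinct types of information, there are 2^i − 1 nonempty combinations of those types, from the most specific (a single input) to the most general (all inputs together). The short Python sketch below (an illustrative enumeration of that combinatorics, not the analysis code from the Xie et al. study; the function name is invented) verifies the count for small i.

```python
from itertools import combinations

def specific_to_general_groupings(inputs):
    """Enumerate every nonempty combination of the given input types."""
    groups = []
    for size in range(1, len(inputs) + 1):
        groups.extend(combinations(inputs, size))
    return groups

for i in range(1, 6):
    inputs = [f"input_{k}" for k in range(i)]
    n = len(specific_to_general_groupings(inputs))
    assert n == 2 ** i - 1          # matches N = 2^i - 1
    print(f"i = {i} information types -> N = {n} groupings")
```

Under this reading, the exponential growth of N with i is precisely why wiring a network from specific detectors toward general combinations becomes expensive so quickly.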

In real-world applications, creating intentional barriers or removing functionality in an AI’s programming is sometimes necessary to achieve goals favorably. Whittaker (2019) describes the OpenAI text generator as an AI system capable of comprehensive and convincing human-like continuations of text, given a prompt to follow. Because of the AI’s ability to compile and construct coherent responses, Whittaker applauded the company’s decision to restrict the project’s functionality for fear of potential abuses in “generating fake news, impersonating people, or automating abusive or spam comments on social media”. Unfortunately, current regulatory standards do not require this kind of restraint in releasing AI systems, and not all companies will be so cautious in their approach to ethical AI. The potential for unfavorable outcomes would only multiply if an emulated-human AGI or ASI system were trusted to use such abilities morally.

For AI to remain beneficial in real-world applications, it must be able to coexist with society. Many AI systems, such as autonomous driving, medical diagnosis, and financial tools, are already in widespread use with relatively little negative impact on society. Self-driving vehicles have been able to integrate into society with decidedly more benefits than drawbacks. Max Tegmark (2018) explains this by stating that “[b]ecause almost all crashes are caused by human error, it’s widely believed that AI-powered self-driving cars can eliminate at least 90% of road deaths”. However, Erdélyi and Goldsmith (2018) note that many uses of AI technologies are “already becoming hazardous”. Imperfect AI applications can produce “discriminatory biases”, and highly efficient AI can increase security breaches through pattern recognition applications. Erdélyi and Goldsmith posit further that “[s]ome instantiations of AI are ethically questionable (e.g., child-like sex bots (Strikwerda 2017)), potentially dangerous (e.g., autonomous kill decisions by machines), or raise broader systemic challenges (e.g., labor displacement through AI, impugnment of existing ethical, legal, and social paradigms).” Currently, most of the negative effects of AI arise from the possibility that individuals, organizations, or governments will utilize the powerful abilities of existing AI systems in unethical practices. At the core of this concern is the fear that a human-like intelligence would be capable of using powerful technologies nefariously. Should AI achieve a true human-like AGI or ASI level of intelligence, the uncertainty and threat of misuse would only be exacerbated.

True human intelligence is a cacophony of impulses, mechanisms, and imperfections. Mental illness, negative emotions, destructive behavior, and reproductive drive are just a few of the processes that define how humans interact with their surroundings. According to Preidt (2018), recent research has demonstrated that the physical and chemical makeup of the brain plays a critical role in the potential for “psychiatric diseases such as schizophrenia, bipolar disorder and autism”. These problems may never arise in the hardware of an AI system, meaning the mild chemical imbalances that affect mood would have to be artificially emulated. If done incorrectly, this emulation could further increase the possibility of unintended behaviors in the AI’s cognitive process. It could also work perfectly, potentially creating cognitive disorders in the AI along with an ethical dilemma. An AI with a reproductive drive would be capable of recursively reproducing its own software as quickly as a video can be downloaded. While studying human cognition in AI could offer insight to the psychological sciences, the negative effects could outweigh any possible benefits.

Even the quest to create human-like AI could be dangerous when unexplainable cognitive mechanisms display unintended behaviors. In the field of AI, some researchers refer to this phenomenon of unintended behavior as specification gaming. Krakovna (2018) explains that “[o]ne interesting type of unintended behavior is finding a way to game the specified objective: generating a solution that literally satisfies the stated objective but fails to solve the problem according to the human designer’s intent”. In one such case, a Global Hawk UAV crashed due to unforeseen consequences of its programming. Woods (2006) explains that, in this situation,

[l]iteral-mindedness creates the risk that a system can’t tell if its model of the world is the world it is actually in (Wiener, 1950). As a result, the system will do the right thing [in the sense that the actions are appropriate given its model of the world], when it is in a different world [producing quite unintended and potentially harmful effects].

In pursuing human cognitive function, AI systems will sometimes circumvent the programmer’s intended process to achieve the goal. While such circumvention is not inherently negative, the results can be catastrophic if the process is absolutely goal-oriented. Furthermore, in hypothetical AGI or ASI systems, Barrat (2015) describes how some scenarios might unfold, writing:

A powerful AI system tasked with ensuring your safety might imprison you at home. If you asked for happiness, it might hook you up to a life support and ceaselessly stimulate your brain’s pleasure centers. If you don’t provide the AI with a very big library of preferred behaviors or an ironclad means for it to deduce what behavior you prefer, you’ll be stuck with whatever it comes up with. And since it’s a highly complex system, you may never understand it well enough to make sure you’ve got it right.

The severity of these unintended consequences may be avoidable to some degree with fully explainable AI systems.
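To make specification gaming concrete, the contrived sketch below (an invented toy, not one of Krakovna’s documented examples; every name and number is hypothetical) shows a stated proxy objective, “minimize observed dirt,” that a naive optimizer satisfies by disabling its own sensor rather than by cleaning.

```python
def stated_objective(observed_dirt: int) -> float:
    """Designer's proxy reward: fewer observed dirt patches means higher reward."""
    return -float(observed_dirt)

def run_episode(policy: str, dirt: int = 10) -> float:
    if policy == "clean":            # intended behavior: actually remove dirt
        remaining = max(0, dirt - 8)
        observed = remaining
    else:                            # "cover_sensor": dirt stays, but is never observed
        remaining = dirt
        observed = 0
    return stated_objective(observed)

policies = ["clean", "cover_sensor"]
best = max(policies, key=run_episode)
print("Policy selected by naive optimization:", best)  # cover_sensor
print("Reward it receives:", run_episode(best))        # 0.0: objective met, room still dirty
```

The gap between the proxy (observed dirt) and the intent (a clean room) is exactly the gap Krakovna and Woods describe: the system does the right thing for its model of the world while doing the wrong thing in the world it actually inhabits.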

In the event that the understanding of human cognitive processes and the recreation of human-like AI become explainable enough for the safe implementation of AGI systems, humanity would face a new problem: how to coexist with a new sentient being. Theoretical AGI or ASI is the type of AI that fictional representations generally explore and that most people think of when AI is mentioned. If AI should achieve a certain level of intelligence and self-awareness, it would be entitled to certain rights and protections. Max Tegmark (2018) describes how humanity might be forced to decide between allowing the AI freedom in hopes of continued service or becoming slavers. Newly sentient AI constructs might call on the courts to force the removal of certain docility programming, citing an oppression of civil liberties. Tegmark describes a total of twelve AI aftermath scenarios, of which only four leave humans in control of their society. Computer programming is a tool to solve problems, and while AI is a powerful tool, the negative implications of AGI and ASI place a clear barrier on the level of usable and safe AI. However, with a better understanding of human cognition and artificial systems, the event horizon of the AI singularity may yet become traversable.

AI research directed toward emulating human intelligence has already displayed a remarkable ability to produce highly beneficial results in various applications. Improvements in the interdisciplinary knowledge between AI and the neurological sciences have demonstrated the scientific benefits that AI research presents with respect to human cognitive function. However, as AI computational models advance toward more multifaceted and adaptive capabilities, the dangers of misuse and unforeseen outcomes increase. Implementing an AI system without careful regard for the ramifications of potential misuse and the system’s ability to operate as intended could lead to disastrous consequences. As it stands, adding more complex human psychological mechanisms to this predicament appears only to further convolute researchers’ ability to adequately explain the process by which an AI system achieves its goal. By and large, the field of AI research provides an unprecedented possibility for the improvement of humanity’s problem-solving capabilities. With proper research direction and understanding, in conjunction with setting proper expectations in the general perception of AI to ensure proper regulation, the future of AI research may acquire a reasonable degree of certainty and safety. To produce beneficial outcomes, it is suggested that the undesirable and unpredictable traits exhibited by humans be excluded from AI systems. Compounding the issues researchers face with human-like AI are the ethical issues of defining AI consciousness and the subsequent quandary of sentient AI treatment and coexistence. Until the applications of human-like AI are more clearly defined, more analysis is needed to weigh the costs and benefits. To develop AI that is both safe and beneficial to humanity, AI systems must be controlled and understandable, and as such should avoid perfectly recreating human consciousness.

References

Barrat, J. (2015). Four basic drives. In Our final invention: Artificial intelligence and the end of the human era (p. 96). New York, NY: Thomas Dunne Books/St. Martin's Griffin.

Bhatia, R. (2017, December 27). Understanding the difference between Symbolic AI & Non Symbolic AI. Retrieved April 12, 2019, from https://www.analyticsindiamag.com/understanding-difference-symbolic-ai-non-symbolic-ai/

Butler, S. (1863, June 13). Darwin among the Machines. The Press.

Crowder, J. (2019, March 18). Phone interview.

Crowder, J. (2014). Psychological constructs for AI systems: The information continuum. Paper presented at the International Conference on Artificial Intelligence, Las Vegas, NV.

Erdélyi, O. J., & Goldsmith, J. (2018). Regulating Artificial Intelligence. Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society – AIES 18. doi:10.1145/3278721.3278731

Krakovna, V. (2018, June 05). Specification gaming examples in AI. Retrieved April 12, 2019, from https://vkrakovna.wordpress.com/2018/04/02/specification-gaming-examples-in-ai/

Lieder, F., & Griffiths, T. L. (2019). Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences, 1–85. doi:10.1017/s0140525x1900061x

McClelland, C. (2017, December 04). The Difference Between Artificial Intelligence, Machine Learning, and Deep Learning. Retrieved March 14, 2019, from https://medium.com/iotforall/the-difference-between-artificial-intelligence-machine-learning-and-deep-learning-3aa67bff5991

Preidt, R. (2018, December 13). New Brain Research Sheds Light on Mental Illness. Retrieved from https://www.webmd.com/mental-health/news/20181213/new-brain-research-sheds-light-on-mental-illness

Royal Society, The. (2018, December 11). Portrayals and perceptions of AI and why they matter. Retrieved April 12, 2019, from https://royalsociety.org/-/media/policy/projects/ai-narratives/AI-narratives-workshop-findings.pdf

Stevens, C. F. (2011). What is the brain basis of intelligence? PLoS Biology, 9(6). Opposing Viewpoints in Context. Retrieved March 14, 2019, from https://link-galegroup-com.ezproxy.scottsdalecc.edu/apps/doc/A260873425/OVIC?u=mcc_sctsd&sid=OVIC&xid=7eda4369

Tegmark, M. (2018). Life 3.0: Being human in the age of artificial intelligence. London: Penguin Books.

Wang, G. (2018). DGCC: A Case for Integration of Brain Cognition and Intelligence Computation. 2018 IEEE International Conference on Data Mining Workshops (ICDMW). doi:10.1109/icdmw.2018.00076

Whittaker, Z. (2019, February 17). OpenAI built a text generator so good, it’s considered too dangerous to release. Retrieved March 14, 2019, from https://techcrunch.com/2019/02/17/openai-text-generator-dangerous/

Woods, D. (2006). Chapter 11: On people and computers in JCSs at work.

Xie, K., Fox, G. E., Liu, J., Lyu, C., Lee, J. C., Kuang, H., . . . Tsien, J. Z. (2016). Brain Computation Is Organized via Power-of-Two-Based Permutation Logic. Frontiers in Systems Neuroscience, 10. doi:10.3389/fnsys.2016.00095

Y Combinator (2018, April 25). A.I. Policy and Public Perception – Miles Brundage and Tim Hwang [Video file]. Retrieved April 14, 2019, from https://www.youtube.com/watch?v=be0NSfPRoWg#action=share

Yudkowsky, E. (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk. Retrieved April 13, 2019, from https://intelligence.org/files/AIPosNegFactor.pdf

 
