Introduction to Psychology

Maricopa Edition

Julie Lazzara

MCCCD

Phoenix, AZ

Contents

1

About This Text

Introduction to Psychology: Maricopa Edition

The content for this textbook is from OpenStax Psychology 2e. This text was imported from OpenStax into Pressbooks by Julie Lazzara on June 25, 2020. This Maricopa Edition of the text was remixed in April 2021 as part of a MOD Press Grant to meet Maricopa’s Course Competencies for PSY 101 and to be considered for the Gold Seal Quality Assurance. The book cover was designed by Sam Fraulino, using the image “nature butterfly” by cocoparisienne, which is licensed under a CC0 license.

Enhancements and Revisions to the Original OpenStax Psychology 2e Text

Diversity, Equity, and Inclusion (DEI) Alignment

With the help of researchers and teachers who focus on diversity- and identity-related issues, OpenStax has engaged in detailed diversity reviews to identify opportunities to improve the textbook. Reviewers were asked to follow a framework to evaluate the book’s terminology, research citations, key contributors to the field, photos and illustrations, and related aspects, commenting on the representation and consideration of diverse groups. Significant additions and revisions were made in this regard, and the review framework itself is available among the OpenStax Psychology 2e instructor resources.

Universal Design for Learning (UDL) Alignment

As with all OpenStax books, the first edition of Psychology was created with a focus on accessibility. We have emphasized and improved that approach in the second edition. Our goal is to ensure that all OpenStax websites and the web view versions of our learning materials follow accessible web design best practices, so that they will meet the W3C-WAI Web Content Accessibility Guidelines (WCAG) 2.0 at Level AA and Section 508 of the Rehabilitation Act. The WCAG 2.0 guidelines explain ways to make web content more accessible for people with disabilities and more user-friendly for everyone.

Media Attributions

In Psychology 2e, most art contains attribution to its title, creator or rights holder, host platform, and license within the caption. Because the art is openly licensed, anyone may reuse the art as long as they provide the same attribution to its original source.

To maximize readability and content flow, some art does not include attribution in the text. If you reuse art from Psychology 2e that does not have attribution provided, use the following attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license.

MCCCD Guidelines

MCCCD Course Description for PSY 101

Overview of the study and methods of psychological science. Includes an introduction to subfields such as biopsychology, learning, memory, development, social psychology, and psychological disorders.

MCCCD Course Competencies

  1. Describe the scientific method and how it is used to answer psychological questions about human thought and behavior. (I, II, III, IV, V, VI)
  2. Distinguish between the science of psychology and parapsychological, pseudoscientific, or popular representations of psychology that fall outside the scope of science. (I)
  3. Critically evaluate information to help make evidence-based decisions. (I, II, III, IV, V, VI)
  4. Apply biopsychosocial principles to real-world situations. (I, II, III, IV, V, VI)
  5. Use psychological principles to explain the diversity and complexity of the human experience. (I, II, III, IV, V, VI)
  6. Identify brain structures and how neuroscientific processes play a role in human thought and behavior. (II, III)
  7. Describe basic principles of consciousness, sensation, and perception. (II)
  8. Define personality and identify some of the fundamental debates in the study of personality, including the person-situation debate. (II, IV)
  9. Recognize and define three basic forms of learning—classical conditioning, operant conditioning, and observational learning.
  10. Describe cognitive processes including those related to learning, language, and intelligence. (III, IV)
  11. Analyze and explain how motivation and emotion affect, and are affected by, human behavior.
  12. Demonstrate an understanding of human development across the lifespan. (IV, V)
  13. Identify the major categories of psychological disorders and therapeutic approaches to their treatment. (V, VI)
  14. Discuss how the behavior of an individual is directly influenced by other people, groups, and social environments. (VI)
  15. Explain gender identity, sexual orientation, sexual functioning, and sexual behavior. (VI)

MCCCD Course Outline

  1. Science of psychology
    1. History of psychology
    2. Research methods
  2. Biological foundations of behavior
    1. Brain and nervous system
    2. Sensation and perception
    3. States of consciousness
  3. Behavioral learning
    1. Classical conditioning
    2. Operant conditioning
    3. Observational learning
  4. Cognitive processes
    1. Memory
    2. Language, thought, and intelligence
    3. Motivation and emotion
  5. Development and individual differences
    1. Human development
    2. Personality
  6. Psychological health
    1. Health psychology
    2. Psychological disorders
    3. Therapeutic approaches
  7. Social identities and interaction with others
    1. Interpersonal relations
    2. Social interactions
    3. Gender and sexuality
    4. Diversity and culture

About OpenStax

OpenStax is a nonprofit based at Rice University, and it’s our mission to improve student access to education. Our first openly licensed college textbook was published in 2012, and our library has since scaled to over 35 books for college and AP® courses used by hundreds of thousands of students. OpenStax Tutor, our low-cost personalized learning tool, is being piloted in college courses throughout the country. Through our partnerships with philanthropic foundations and our alliance with other educational resource organizations, OpenStax is breaking down the most common barriers to learning and empowering students and instructors to succeed.  This textbook was written to increase student access to high-quality learning materials, maintaining the highest standards of academic rigor at little to no cost.

About Psychology 2e

Psychology 2e is designed to meet scope and sequence requirements for the single-semester introduction to psychology course. The book offers a comprehensive treatment of core concepts, grounded in both classic studies and current and emerging research. The text also includes coverage of the DSM-5 in examinations of psychological disorders. Psychology 2e incorporates discussions that reflect the diversity within the discipline, as well as the diversity of cultures and communities across the globe.  Psychology 2e is licensed under a Creative Commons Attribution 4.0 International (CC BY) license, which means that you can distribute, remix, and build upon the content, as long as you provide attribution to OpenStax and its content contributors.

The first edition of Psychology has been used by thousands of faculty and hundreds of thousands of students since its publication in 2015. OpenStax mined our adopters’ extensive and helpful feedback to identify the most significant revision needs while maintaining the organization that many instructors had incorporated into their courses. Specific surveys, pre-revision reviews, and customization analysis, as well as analytical data from OpenStax partners and online learning environments, all aided in planning the revision.

The result is a book that thoroughly treats psychology’s foundational concepts while adding current and meaningful coverage in specific areas. Psychology 2e retains its manageable scope and contains ample features to draw learners into the discipline.  Structurally, the textbook remains similar to the first edition, with no chapter reorganization and very targeted changes at the section level.

OpenStax only undertakes second editions when significant modifications to the text are necessary. In the case of Psychology 2e, user feedback indicated that we needed to focus on a few key areas. The revision plan varied by chapter based on need. Some chapters were significantly updated for conceptual coverage, research-informed data, and clearer language. In other chapters, the revisions focused mostly on the currency of examples and updates to statistics.

Over 210 new research references have been added or updated in order to improve the scholarly underpinnings of the material and broaden the perspective for students. Dozens of examples and feature boxes have been changed or added to better explain concepts and/or increase relevance for students.

To engage students in stronger critical analysis and inform them about research reproducibility, substantial coverage has been added to the research chapter and strategically throughout the textbook whenever key studies are discussed. This material is presented in a balanced way and provides instructors with ample opportunity to discuss the importance of replication in a manner that best suits their course.

Pedagogical foundation

Psychology 2e engages students through inquiry, self-reflection, and investigation. Features in the second edition have been carefully updated to remain topical and relevant while deepening students’ relationship to the material. They include the following:

  • Everyday Connection features tie psychological topics to everyday issues and behaviors that students encounter in their lives and the world. Topics include the validity of scores on college entrance exams, the opioid crisis, the impact of social status on stress and healthcare, and cognitive mapping.
  • What Do You Think? features provide research-based information and ask students their views on controversial issues. Topics include “Brain Dead and on Life Support,” “Violent Media and Aggression,” and “Capital Punishment and Criminals with Intellectual Disabilities.”
  • Dig Deeper features discuss one specific aspect of a topic in greater depth so students can dig more deeply into the concept. Examples include discussions on the distinction between evolutionary psychology and behavioral genetics, recent findings on neuroplasticity, the field of forensic psychology, and a presentation of research on strategies for coping with prejudice and discrimination.
  • Connect the Concepts features revisit a concept learned in another chapter, expanding upon it within a different context. Features include “Emotional Expression and Emotional Regulation,” “Tweens, Teens, and Social Norms,” and “Conditioning and OCD.”
  • Link to Learning features direct students to online interactive exercises and animations that add a fuller context to core content and provide an opportunity for application.

Student and Instructor Resources

We have compiled additional resources for both students and instructors, including Getting Started Guides, an instructor solution guide, a test bank, and PowerPoint slides. Instructor resources require a verified instructor account, which you can apply for when you log in or create your account on openstax.org. Student resources are also available on openstax.org.

About the authors

Senior contributing authors

Rose M. Spielman (Content Lead), Quinnipiac University
William J. Jenkins, Mercer University
Marilyn D. Lovett, Spelman College

Contributing Authors

Mara Aruguete, Lincoln University
Laura Bryant, Eastern Gateway Community College
Barbara Chappell, Walden University
Kathryn Dumper, Bainbridge State College
Arlene Lacombe, Saint Joseph’s University
Julie Lazzara, Paradise Valley Community College
Tammy McClain, West Liberty University
Barbara B. Oswald, Miami University
Marion Perlmutter, University of Michigan
Mark D. Thomas, Albany State University

1

Introduction to Psychology

An illustration shows the outlines of two human heads facing toward one another, with several photographs of people spread across the background.
Figure 1.1 Psychology is the scientific study of mind and behavior.

Clive Wearing is an accomplished musician who lost his ability to form new memories when he became sick at the age of 46. While he can remember how to play the piano perfectly, he cannot remember what he ate for breakfast just an hour ago (Sacks, 2007). James Wannerton experiences a taste sensation that is associated with the sound of words. His former girlfriend’s name tastes like rhubarb (Mundasad, 2013). John Nash is a brilliant mathematician and Nobel Prize winner. However, while he was a professor at MIT, he would tell people that the New York Times contained coded messages from extraterrestrial beings that were intended for him. He also began to hear voices and became suspicious of the people around him. Soon thereafter, Nash was diagnosed with schizophrenia and admitted to a state-run mental institution (O’Connor & Robertson, 2002). Nash was the subject of the 2001 movie A Beautiful Mind. Why did these people have these experiences? How does the human brain work? And what is the connection between the brain’s internal processes and people’s external behaviors? This textbook will introduce you to various ways that the field of psychology has explored these questions.

MCCCD Course Competencies

After reading this textbook and successfully completing PSY 101, Maricopa Community Colleges hopes that you will have gained important knowledge and skills within the field of psychology. Think of the Course Competencies as goals to work toward and master as you move through the course. Some of the course competencies are chapter-specific, and others apply to several different chapters. The primary course competencies to keep in mind are listed in a shaded purple box at the beginning of each chapter and listed again at the end of each chapter for review. Here are the primary goals to make sure you learn by the end of this chapter.

  • Distinguish between the science of psychology and parapsychological, pseudoscientific, or popular representations of psychology that fall outside the scope of science.

(Hint: You will be working on the following three Course Competencies throughout each chapter. You will see them often!)

  • Critically evaluate information to help make evidence-based decisions.
  • Apply biopsychosocial principles to real-world situations.
  • Use psychological principles to explain the diversity and complexity of the human experience.

What Is Psychology? Learning Objectives

These green shaded boxes break down the text by section and list the specific learning objectives for each one. By the end of this section, you will be able to:

  • Define psychology
  • Understand the merits of an education in psychology

What is creativity? Why do some people become homeless? What are prejudice and discrimination? What is consciousness? The field of psychology explores questions like these. Psychology refers to the scientific study of the mind and behavior. Psychologists use the scientific method to acquire knowledge. To apply the scientific method, a researcher with a question about how or why something happens will propose a tentative explanation, called a hypothesis, to explain the phenomenon. A hypothesis should fit into the context of a scientific theory, which is a broad explanation or group of explanations for some aspect of the natural world that is consistently supported by evidence over time. A theory is the best understanding we have of that part of the natural world. The researcher then makes observations or carries out an experiment to test the validity of the hypothesis. Those results are then published or presented at research conferences so that others can replicate or build on the results.

Scientists test that which is perceivable and measurable. For example, the hypothesis that a bird sings because it is happy cannot be tested, since we have no way to measure the happiness of a bird. We must ask a different question, perhaps about the brain state of the bird, since that can be measured. We can, however, ask people whether they sing because they are happy, since they are able to tell us. Thus, psychological science is empirical, based on measurable data.

In general, science deals only with matter and energy, that is, those things that can be measured, and it cannot arrive at knowledge about values and morality. This is one reason why our scientific understanding of the mind is so limited, since thoughts, at least as we experience them, are neither matter nor energy. The scientific method is also a form of empiricism. An empirical method for acquiring knowledge is one based on observation, including experimentation, rather than a method based only on forms of logical argument or previous authorities.

It was not until the late 1800s that psychology became accepted as its own academic discipline. Before this time, the workings of the mind were considered under the auspices of philosophy. Given that any behavior is, at its roots, biological, some areas of psychology take on aspects of a natural science like biology. No biological organism exists in isolation, and our behavior is influenced by our interactions with others. Therefore, psychology is also a social science.

WHY STUDY PSYCHOLOGY?

Often, students take their first psychology course because they are interested in helping others and want to learn more about themselves and why they act the way they do. Sometimes, students take a psychology course because it either satisfies a general education requirement or is required for a program of study such as nursing or pre-med. Many of these students develop such an interest in the area that they go on to declare psychology as their major. As a result, psychology is one of the most popular majors on college campuses across the United States (Johnson & Lubin, 2011). A number of well-known individuals were psychology majors. Just a few famous names on this list are Facebook’s creator Mark Zuckerberg, television personality and political satirist Jon Stewart, actress Natalie Portman, and filmmaker Wes Craven (Halonen, 2011). About 6 percent of all bachelor’s degrees granted in the United States are in the discipline of psychology (U.S. Department of Education, 2016).

An education in psychology is valuable for a number of reasons. Psychology students hone critical thinking skills and are trained in the use of the scientific method. Critical thinking is the active application of a set of skills to information for the understanding and evaluation of that information. The evaluation of information—assessing its reliability and usefulness—is an important skill in a world full of competing “facts,” many of which are designed to be misleading. For example, critical thinking involves maintaining an attitude of skepticism, recognizing internal biases, making use of logical thinking, asking appropriate questions, and making observations. Psychology students also can develop better communication skills during the course of their undergraduate coursework (American Psychological Association, 2011). Together, these factors increase students’ scientific literacy and prepare students to critically evaluate the various sources of information they encounter.

In addition to these broad-based skills, psychology students come to understand the complex factors that shape one’s behavior. They appreciate the interaction of our biology, our environment, and our experiences in determining who we are and how we will behave. They learn about basic principles that guide how we think and behave, and they come to recognize the tremendous diversity that exists across individuals and across cultural boundaries (American Psychological Association, 2011).

Media Attributions

(credit "background": modification of work by Nattachai Noogure; credit "top left": modification of work by U.S. Navy; credit "top middle-left": modification of work by Peter Shanks; credit "top middle-right": modification of work by "devinf"/Flickr; credit "top right": modification of work by Alejandra Quintero Sinisterra; credit "bottom left": modification of work by Gabriel Rocha; credit "bottom middle-left": modification of work by Caleb Roenigk; credit "bottom middle-right": modification of work by Staffan Scherz; credit "bottom right": modification of work by Czech Provincial Reconstruction Team)

2

Psychological Research

Children sit in front of a bank of television screens. A sign on the wall says, “Some content may not be suitable for children.”
Figure 2.1 How does television content impact children’s behavior? 

Have you ever wondered whether the violence you see on television affects your behavior? Are you more likely to behave aggressively in real life after watching people behave violently in dramatic situations on the screen? Or, could seeing fictional violence actually get aggression out of your system, causing you to be more peaceful? How are children influenced by the media they are exposed to? A psychologist interested in the relationship between behavior and exposure to violent images might ask these very questions.

Since ancient times, humans have been concerned about the effects of new technologies on our behaviors and thinking processes. The Greek philosopher Socrates, for example, worried that writing—a new technology at that time—would diminish people’s ability to remember because they could rely on written records rather than committing information to memory. In our world of rapidly changing technologies, questions about their effects on our daily lives and their resulting long-term impacts continue to emerge. In addition to the impact of screen time (on smartphones, tablets, computers, and gaming), technology is emerging in our vehicles (such as GPS and smart cars) and residences (with devices like Alexa or Google Home and doorbell cameras). As these technologies become integrated into our lives, we are faced with questions about their positive and negative impacts. Many of us find ourselves with a strong opinion on these issues, only to find the person next to us bristling with the opposite view.

How can we go about finding answers that are supported not by mere opinion, but by evidence that we can all agree on? The findings of psychological research can help us navigate issues like this.

MCCCD Course Competencies

  • Describe the scientific method and how it is used to answer psychological questions about human thought and behavior.
  • Critically evaluate information to help make evidence-based decisions.
  • Apply biopsychosocial principles to real-world situations.
  • Use psychological principles to explain the diversity and complexity of the human experience.

 

Why Is Research Important? Learning Objectives

By the end of this section, you will be able to:

  • Explain how scientific research addresses questions about behavior
  • Discuss how scientific research guides public policy
  • Appreciate how scientific research can be important in making personal decisions

Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people’s authority, and blind luck. While many of us feel confident in our abilities to decipher and interact with the world around us, history is filled with examples of how very wrong we can be when we fail to recognize the need for evidence in supporting claims. At various times in history, we would have been certain that the sun revolved around a flat earth, that the earth’s continents did not move, and that mental illness was caused by possession (Figure 2.2). It is through systematic scientific research that we divest ourselves of our preconceived notions and superstitions and gain an objective understanding of ourselves and our world.

A skull has a large hole bored through the forehead.
Figure 2.2 Some of our ancestors, across the world and over the centuries, believed that trephination—the practice of making a hole in the skull, as shown here—allowed evil spirits to leave the body, thus curing mental illness and other disorders. 

The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical: It is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.

While behavior is observable, the mind is not. If someone is crying, we can see behavior. However, the reason for the behavior is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes we can learn the reason for someone’s behavior by simply asking a question, like “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In such circumstances, the psychologist must be creative in finding ways to better understand behavior. This chapter explores how scientific knowledge is generated, and how important that knowledge is in forming decisions in our personal lives and in the public domain.

Use of Research Information

Trying to determine which theories are and are not accepted by the scientific community can be difficult, especially in an area of research as broad as psychology. More than ever before, we have an incredible amount of information at our fingertips, and a simple internet search on any given research topic might result in a number of contradictory studies. In these cases, we are witnessing the scientific community going through the process of reaching a consensus, and it could be quite some time before a consensus emerges. For example, the explosion in our use of technology has led researchers to question whether this ultimately helps or hinders us. The use and implementation of technology in educational settings have become widespread over the last few decades. Researchers are coming to different conclusions regarding the use of technology. To illustrate this point, a study investigating a smartphone app targeting surgery residents (graduate students in surgery training) found that the use of this app can increase student engagement and raise test scores (Shaw & Tan, 2015). Conversely, another study found that the use of technology in undergraduate student populations had negative impacts on sleep, communication, and time management skills (Massimini & Peterson, 2009). Until sufficient amounts of research have been conducted, there will be no clear consensus on the effects that technology has on a student’s acquisition of knowledge, study skills, and mental health.

In the meantime, we should strive to think critically about the information we encounter by exercising a degree of healthy skepticism. When someone makes a claim, we should examine the claim from a number of different perspectives: what is the expertise of the person making the claim, what might they gain if the claim is valid, does the claim seem justified given the evidence, and what do other researchers think of the claim? This is especially important when we consider how much information in advertising campaigns and on the internet claims to be based on “scientific evidence” when in actuality it is a belief or perspective of just a few individuals trying to sell a product or draw attention to their perspectives.

We should be informed consumers of the information made available to us because decisions based on this information have significant consequences. One such consequence can be seen in politics and public policy. Imagine that you have been elected as the governor of your state. One of your responsibilities is to manage the state budget and determine how to best spend your constituents’ tax dollars. As the new governor, you need to decide whether to continue funding early intervention programs. These programs are designed to help children who come from low-income backgrounds, have special needs, or face other disadvantages. These programs may involve providing a wide variety of services to maximize the children’s development and position them for optimal levels of success in school and later in life (Blann, 2005). While such programs sound appealing, you would want to be sure that they also proved effective before investing additional money in these programs. Fortunately, psychologists and other scientists have conducted vast amounts of research on such programs and, in general, the programs are found to be effective (Neil & Christensen, 2009; Peters-Scheffer et al., 2011). While not all programs are equally effective, and the short-term effects of many such programs are more pronounced, there is reason to believe that many of these programs produce long-term benefits for participants (Barnett, 2011). If you are committed to being a good steward of taxpayer money, you would want to look at the research. Which programs are most effective? What characteristics of these programs make them effective? Which programs promote the best outcomes? After examining the research, you would be best equipped to make decisions about which programs to fund.

LINK TO LEARNING: Watch this video about early childhood program effectiveness to learn how scientists evaluate effectiveness and how best to invest money into programs that are most effective.

An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://open.maricopa.edu/intropsychme/?p=21#h5p-7

Ultimately, it is not just politicians who can benefit from using research in guiding their decisions. We all might look to research from time to time when making decisions in our lives. Imagine you just found out that a close friend has breast cancer or that one of your young relatives has recently been diagnosed with autism. In either case, you want to know which treatment options are most successful with the fewest side effects. How would you find that out? You would probably talk with your doctor and personally review the research that has been done on various treatment options—always with a critical eye to ensure that you are as informed as possible.

In the end, research is what makes the difference between facts and opinions. Facts are observable realities, and opinions are personal judgments, conclusions, or attitudes that may or may not be accurate. In the scientific community, facts can be established only using evidence collected through empirical research.

NOTABLE RESEARCHERS

Psychological research has a long history involving important figures from diverse backgrounds. While the introductory chapter discussed several researchers who made significant contributions to the discipline, there are many more individuals who deserve attention in considering how psychology has advanced as a science through their work (Figure 2.3). For instance, Margaret Floy Washburn (1871–1939) was the first woman to earn a PhD in psychology. Her research focused on animal behavior and cognition (Margaret Floy Washburn, PhD, n.d.). Mary Whiton Calkins (1863–1930) was a preeminent first-generation American psychologist who opposed the behaviorist movement, conducted significant research into memory, and established one of the earliest experimental psychology labs in the United States (Mary Whiton Calkins, n.d.).

Francis Sumner (1895–1954) was the first African American to receive a PhD in psychology in 1920. His dissertation focused on issues related to psychoanalysis. Sumner also had research interests in racial bias and educational justice. Sumner was one of the founders of Howard University’s department of psychology, and because of his accomplishments, he is sometimes referred to as the “Father of Black Psychology.” Thirteen years later, Inez Beverly Prosser (1895–1934) became the first African American woman to receive a PhD in psychology. Prosser’s research highlighted issues related to education in segregated versus integrated schools, and ultimately, her work was very influential in the landmark Brown v. Board of Education Supreme Court ruling that segregation of public schools was unconstitutional (Ethnicity and Health in America Series: Featured Psychologists, n.d.).

Figure a is a portrait of Margaret Floy Washburn. Figure b is the front page of the Implementation Decree from the Supreme Court for the Brown vs. Board of Education case.
Figure 2.3 (a) Margaret Floy Washburn was the first woman to earn a doctorate degree in psychology. (b) The outcome of Brown v. Board of Education was influenced by the research of psychologist Inez Beverly Prosser, who was the first African American woman to earn a PhD in psychology.

Although the establishment of psychology’s scientific roots occurred first in Europe and the United States, it did not take much time until researchers from around the world began to establish their own laboratories and research programs. For example, some of the first experimental psychology laboratories in South America were founded by Horacio Piñero (1869–1919) at two institutions in Buenos Aires, Argentina (Godoy & Brussino, 2010). In India, Gunamudian David Boaz (1908–1965) and Narendra Nath Sen Gupta (1889–1944) established the first independent departments of psychology at the University of Madras and the University of Calcutta, respectively. These developments provided an opportunity for Indian researchers to make important contributions to the field (Gunamudian David Boaz, n.d.; Narendra Nath Sen Gupta, n.d.).

When the American Psychological Association (APA) was first founded in 1892, all of the members were white males (Women and Minorities in Psychology, n.d.). However, by 1905, Mary Whiton Calkins was elected as the first female president of the APA, and by 1946, nearly one-quarter of American psychologists were female. Psychology became a popular degree option for students enrolled in the nation’s historically black higher education institutions, increasing the number of black Americans who went on to become psychologists. Given demographic shifts occurring in the United States and increased access to higher educational opportunities among historically underrepresented populations, there is reason to hope that the diversity of the field will increasingly match the larger population and that the research contributions made by the psychologists of the future will better serve people of all backgrounds (Women and Minorities in Psychology, n.d.).

The Process of Scientific Research

Scientific knowledge is advanced through a process known as the scientific method. Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those empirical observations lead to more ideas that are tested against the real world, and so on. In this sense, the scientific process is circular. The types of reasoning within the circle are called deductive and inductive. In deductive reasoning, ideas are tested in the real world; in inductive reasoning, real-world observations lead to new ideas (Figure 2.4). These processes are inseparable, like inhaling and exhaling, but different research approaches place different emphasis on the deductive and inductive aspects.

A diagram has a box at the top labeled “hypothesis or general premise” and a box at the bottom labeled “empirical observations.” On the left, an arrow labeled “inductive reasoning” goes from the bottom to top box. On the right, an arrow labeled “deductive reasoning” goes from the top to the bottom box.
Figure 2.4 Psychological research relies on both inductive and deductive reasoning.

In the scientific context, deductive reasoning begins with a generalization—one hypothesis—that is then used to reach logical conclusions about the real world. If the hypothesis is correct, then the logical conclusions reached through deductive reasoning should also be correct. A deductive reasoning argument might go something like this: All living things require energy to survive (this would be your hypothesis). Ducks are living things. Therefore, ducks require energy to survive (logical conclusion). In this example, the hypothesis is correct; therefore, the conclusion is correct as well. Sometimes, however, an incorrect hypothesis may lead to a logical but incorrect conclusion. Consider this argument: all ducks are born with the ability to see. Quackers is a duck. Therefore, Quackers was born with the ability to see. Here the hypothesis is incorrect (not all ducks are born with the ability to see), so although the logic is valid, the conclusion may be false. Scientists use deductive reasoning to empirically test their hypotheses. Returning to the example of the ducks, researchers might design a study to test the hypothesis that if all living things require energy to survive, then ducks will be found to require energy to survive.

Deductive reasoning starts with a generalization that is tested against real-world observations; however, inductive reasoning moves in the opposite direction. Inductive reasoning uses empirical observations to construct broad generalizations. Unlike deductive reasoning, conclusions drawn from inductive reasoning may or may not be correct, regardless of the observations on which they are based. For instance, you may notice that your favorite fruits—apples, bananas, and oranges—all grow on trees; therefore, you assume that all fruit must grow on trees. This would be an example of inductive reasoning, and, clearly, the existence of strawberries, blueberries, and kiwis demonstrates that this generalization is not correct despite its being based on a number of direct observations. Scientists use inductive reasoning to formulate theories, which in turn generate hypotheses that are tested with deductive reasoning. In the end, science involves both deductive and inductive processes.

For example, case studies, which you will read about in the next section, are heavily weighted on the side of empirical observations. Thus, case studies are closely associated with inductive processes as researchers gather massive amounts of observations and seek interesting patterns (new ideas) in the data. Experimental research, on the other hand, puts great emphasis on deductive reasoning.

We’ve stated that theories and hypotheses are ideas, but what sort of ideas are they, exactly? A theory is a well-developed set of ideas that proposes an explanation for observed phenomena. Theories are repeatedly checked against the world, but they tend to be too complex to be tested all at once; instead, researchers create hypotheses to test specific aspects of a theory.

A hypothesis is a testable prediction about how the world will behave if our idea is correct, and it is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the results of these tests (Figure 2.5).

A diagram has seven labeled boxes with arrows to show the progression in the flow chart. The chart starts at “Theory” and moves to “Generate hypothesis,” “Collect data,” “Analyze data,” and “Summarize data and report findings.” There are two arrows coming from “Summarize data and report findings” to show two options. The first arrow points to “Confirm theory.” The second arrow points to “Modify theory,” which has an arrow that points back to “Generate hypothesis.”
Figure 2.5 The scientific method involves deriving hypotheses from theories and then testing those hypotheses. If the results are consistent with the theory, then the theory is supported. If the results are not consistent, then the theory should be modified and new hypotheses will be generated.

To see how this process works, let’s consider a specific theory and a hypothesis that might be generated from that theory. As you’ll learn in a later chapter, the James-Lange theory of emotion asserts that emotional experience relies on the physiological arousal associated with the emotional state. If you walked out of your home and discovered a very aggressive snake waiting on your doorstep, your heart would begin to race and your stomach churn. According to the James-Lange theory, these physiological changes would result in your feeling of fear. A hypothesis that could be derived from this theory might be that a person who is unaware of the physiological arousal that the sight of the snake elicits will not feel fear.

A scientific hypothesis is also falsifiable or capable of being shown to be incorrect. Recall from the introductory chapter that Sigmund Freud had lots of interesting ideas to explain various human behaviors (Figure 2.6). However, a major criticism of Freud’s theories is that many of his ideas are not falsifiable; for example, it is impossible to imagine empirical observations that would disprove the existence of the id, the ego, and the superego—the three elements of personality described in Freud’s theories. Despite this, Freud’s theories are widely taught in introductory psychology texts because of their historical significance for personality psychology and psychotherapy, and these remain the root of all modern forms of therapy.

(a) A photograph shows Freud holding a cigar. (b) The mind’s conscious and unconscious states are illustrated as an iceberg floating in water. Beneath the water’s surface in the “unconscious” area are the id, ego, and superego. The area just below the water’s surface is labeled “preconscious.” The area above the water’s surface is labeled “conscious.”
Figure 2.6 Many of the specifics of (a) Freud’s theories, such as (b) his division of the mind into id, ego, and superego, have fallen out of favor in recent decades because they are not falsifiable. In broader strokes, his views set the stage for much of psychological thinking today, such as the unconscious nature of the majority of psychological processes.

In contrast, the James-Lange theory does generate falsifiable hypotheses, such as the one described above. Some individuals who suffer significant injuries to their spinal columns are unable to feel the bodily changes that often accompany emotional experiences. Therefore, we could test the hypothesis by determining how emotional experiences differ between individuals who have the ability to detect these changes in their physiological arousal and those who do not. In fact, this research has been conducted and while the emotional experiences of people deprived of an awareness of their physiological arousal may be less intense, they still experience emotion (Chwalisz et al., 1988).

Scientific research’s dependence on falsifiability allows for great confidence in the information that it produces. Typically, by the time information is accepted by the scientific community, it has been tested repeatedly.

Approaches to Research Learning Objectives

By the end of this section, you will be able to:

  • Describe the different research methods used by psychologists
  • Discuss the strengths and weaknesses of case studies, naturalistic observation, surveys, and archival research
  • Compare longitudinal and cross-sectional approaches to research
  • Compare and contrast correlation and causation

There are many research methods available to psychologists in their efforts to understand, describe, and explain behavior and the cognitive and biological processes that underlie it. Some methods rely on observational techniques. Other approaches involve interactions between the researcher and the individuals being studied, ranging from a series of simple questions, to extensive in-depth interviews, to well-controlled experiments.

Each of these research methods has unique strengths and weaknesses, and each method may only be appropriate for certain types of research questions. For example, studies that rely primarily on observation produce incredible amounts of information, but the ability to apply this information to the larger population is somewhat limited because of small sample sizes. Survey research, on the other hand, allows researchers to easily collect data from relatively large samples. While this allows for results to be generalized to the larger population more easily, the information that can be collected on any given survey is somewhat limited and subject to problems associated with any type of self-reported data. Some researchers conduct archival research by using existing records. While this can be a fairly inexpensive way to collect data that can provide insight into a number of research questions, researchers using this approach have no control over how or what kind of data were collected. All of the methods described thus far are correlational in nature. This means that researchers can speak to important relationships that might exist between two or more variables of interest. However, correlational data cannot be used to make claims about cause-and-effect relationships.

Correlational research can find a relationship between two variables, but the only way a researcher can claim that the relationship between the variables is cause and effect is to perform an experiment. In experimental research, which will be discussed later in this chapter, there is a tremendous amount of control over variables of interest. While this is a powerful approach, experiments are often conducted in very artificial settings. This calls into question the validity of experimental findings with regard to how they would apply in real-world settings. In addition, many of the questions that psychologists would like to answer cannot be pursued through experimental research because of ethical concerns.
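
To make this distinction concrete, below is a minimal sketch in Python (with made-up numbers, purely for illustration) of how a researcher might quantify the relationship between two variables using Pearson’s correlation coefficient, r. The coefficient summarizes the strength and direction of a relationship on a scale from -1 to +1, but, as noted above, even a very strong correlation says nothing about cause and effect.

    import statistics

    # Hypothetical data for eight children: hours of violent TV watched
    # per week and number of aggressive acts observed on the playground.
    tv_hours = [2, 5, 1, 8, 4, 9, 3, 7]
    aggression = [1, 3, 0, 6, 2, 7, 1, 5]

    # Pearson's r: the covariance of the two variables divided by the
    # product of their standard deviations.
    mean_x = statistics.mean(tv_hours)
    mean_y = statistics.mean(aggression)
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(tv_hours, aggression)) / (len(tv_hours) - 1)
    r = cov / (statistics.stdev(tv_hours) * statistics.stdev(aggression))
    print(round(r, 2))  # close to +1: a strong positive relationship

Even an r near +1 here would not tell us whether violent television causes aggression, whether aggressive children seek out violent programming, or whether some third variable drives both; only a controlled experiment can separate those possibilities.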

Clinical or Case Studies

In 2011, the New York Times published a feature story on Krista and Tatiana Hogan, Canadian twin girls. These particular twins are unique because Krista and Tatiana are conjoined twins, connected at the head. There is evidence that the two girls are connected in a part of the brain called the thalamus, which is a major sensory relay center. Most incoming sensory information is sent through the thalamus before reaching higher regions of the cerebral cortex for processing.

The implications of this potential connection mean that it might be possible for one twin to experience the sensations of the other twin. For instance, if Krista is watching a particularly funny television program, Tatiana might smile or laugh even if she is not watching the program. This particular possibility has piqued the interest of many neuroscientists who seek to understand how the brain uses sensory information.

These twins represent an enormous resource in the study of the brain, and since their condition is very rare, it is likely that as long as their family agrees, scientists will follow these girls very closely throughout their lives to gain as much information as possible (Dominus, 2011).

Over time, it has become clear that while Krista and Tatiana share some sensory experiences and motor control, they remain two distinct individuals, which provides tremendous insight for researchers interested in the mind and the brain (Egnor, 2017).

In observational research, scientists are conducting a clinical or case study when they focus on one person or just a few individuals. Indeed, some scientists spend their entire careers studying just 10–20 individuals. Why would they do this? Obviously, when they focus their attention on a very small number of people, they can gain a tremendous amount of insight into those cases. The richness of information that is collected in clinical or case studies is unmatched by any other single research method. This allows the researcher to have a very deep understanding of the individuals and the particular phenomenon being studied.

If clinical or case studies provide so much information, why are they not more frequent among researchers? As it turns out, the major benefit of this particular approach is also a weakness. As mentioned earlier, this approach is often used when studying individuals who are interesting to researchers because they have a rare characteristic. Therefore, the individuals who serve as the focus of case studies are not like most other people. If scientists ultimately want to explain all behavior, focusing attention on such a special group of people can make it difficult to generalize any observations to the larger population as a whole. Generalizing refers to the ability to apply the findings of a particular research project to larger segments of society. Again, case studies provide enormous amounts of information, but since the cases are so specific, the potential to apply what’s learned to the average person may be very limited.

Naturalistic Observation

If you want to understand how behavior occurs, one of the best ways to gain information is to simply observe the behavior in its natural context. However, people might change their behavior in unexpected ways if they know they are being observed. How do researchers obtain accurate information when people tend to hide their natural behavior? As an example, imagine that your professor asks everyone in your class to raise their hand if they always wash their hands after using the restroom. Chances are that almost everyone in the classroom will raise their hand, but do you think hand washing after every trip to the restroom is really that universal?

This is very similar to the phenomenon mentioned earlier in this chapter: many individuals do not feel comfortable answering a question honestly. But if we are committed to finding out the facts about handwashing, we have other options available to us.

Suppose we send a classmate into the restroom to actually watch whether everyone washes their hands after using the restroom. Will our observer blend into the restroom environment by wearing a white lab coat, sitting with a clipboard, and staring at the sinks? We want our researcher to be inconspicuous—perhaps standing at one of the sinks pretending to put in contact lenses while secretly recording the relevant information. This type of observational study is called naturalistic observation: observing behavior in its natural setting. To better understand peer exclusion, Suzanne Fanger collaborated with colleagues at the University of Texas to observe the behavior of preschool children on a playground. How did the observers remain inconspicuous over the duration of the study? They equipped a few of the children with wireless microphones (which the children quickly forgot about) and observed while taking notes from a distance. Also, the children in that particular preschool (a “laboratory preschool”) were accustomed to having observers on the playground (Fanger, Frankel, & Hazen, 2012).

It is critical that the observer be as unobtrusive and as inconspicuous as possible: when people know they are being watched, they are less likely to behave naturally. If you have any doubt about this, ask yourself how your driving behavior might differ in two situations: In the first situation, you are driving down a deserted highway during the middle of the day; in the second situation, you are being followed by a police car down the same deserted highway (Figure 2.7).

A photograph of a police car in the street.
Figure 2.7 Seeing a police car behind you would probably affect your driving behavior. 

It should be pointed out that naturalistic observation is not limited to research involving humans. Indeed, some of the best-known examples of naturalistic observation involve researchers going into the field to observe various kinds of animals in their own environments. As with human studies, the researchers maintain their distance and avoid interfering with the animal subjects so as not to influence their natural behaviors. Scientists have used this technique to study social hierarchies and interactions among animals ranging from ground squirrels to gorillas. The information provided by these studies is invaluable in understanding how those animals organize socially and communicate with one another. The anthropologist Jane Goodall, for example, spent nearly five decades observing the behavior of chimpanzees in Africa (Figure 2.8). As an illustration of the types of concerns that a researcher might encounter in naturalistic observation, some scientists criticized Goodall for giving the chimps names instead of referring to them by numbers—using names was thought to undermine the emotional detachment required for the objectivity of the study (McKie, 2010).

A photograph of a chimpanzee sitting and thinking.
Figure 2.8  Jane Goodall made a career of conducting naturalistic observations of chimpanzee behavior. 

The greatest benefit of naturalistic observation is the validity, or accuracy, of information collected unobtrusively in a natural setting. Having individuals behave as they normally would in a given situation means that we have a higher degree of ecological validity, or realism, than we might achieve with other research approaches. Therefore, our ability to generalize the findings of the research to real-world situations is enhanced. If done correctly, we need not worry about people or animals modifying their behavior simply because they are being observed. Sometimes, people may assume that reality programs give us a glimpse into authentic human behavior. However, the principle of inconspicuous observation is violated as reality stars are followed by camera crews and are interviewed on camera for personal confessionals. Given that environment, we must doubt how natural and realistic their behaviors are.

The major downside of naturalistic observation is that it is often difficult to set up and control. In our restroom study, what if you stood in the restroom all day prepared to record people’s handwashing behavior and no one came in? Or, what if you have been closely observing a troop of gorillas for weeks only to find that they migrated to a new place while you were sleeping in your tent? The benefit of realistic data comes at a cost. As a researcher, you have no control of when (or if) you have behavior to observe. In addition, this type of observational research often requires significant investments of time, money, and a good dose of luck.

Sometimes studies involve structured observation. In these cases, people are observed while engaging in set, specific tasks. An excellent example of structured observation comes from the Strange Situation procedure developed by Mary Ainsworth (you will read more about this in the chapter on lifespan development). The Strange Situation is a procedure used to evaluate attachment styles that exist between an infant and caregiver. In this scenario, caregivers bring their infants into a room filled with toys. The Strange Situation involves a number of phases, including a stranger coming into the room, the caregiver leaving the room, and the caregiver’s return to the room. The infant’s behavior is closely monitored at each phase, but it is the behavior of the infant upon being reunited with the caregiver that is most telling in terms of characterizing the infant’s attachment style with the caregiver.

Another potential problem in observational research is observer bias. Generally, people who act as observers are closely involved in the research project and may unconsciously skew their observations to fit their research goals or expectations. To protect against this type of bias, researchers should have clear criteria established for the types of behaviors recorded and how those behaviors should be classified. In addition, researchers often compare observations of the same event by multiple observers, in order to test inter-rater reliability: a measure of reliability that assesses the consistency of observations by different observers.
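
As a rough illustration of how inter-rater reliability can be quantified, here is a short sketch in Python (using hypothetical observer codes, not data from any study described here) that computes Cohen’s kappa, a common agreement statistic that corrects raw percent agreement for the agreement two observers would reach by chance alone.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters assigning categorical codes."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement: for each category, the probability that both
        # raters would choose it independently, summed over categories.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
        return (observed - expected) / (1 - expected)

    # Hypothetical codes for ten playground behaviors from two independent
    # observers: "A" = aggressive, "P" = prosocial, "N" = neutral.
    obs1 = ["A", "P", "N", "A", "P", "P", "N", "A", "N", "P"]
    obs2 = ["A", "P", "N", "A", "N", "P", "N", "A", "N", "P"]
    print(round(cohens_kappa(obs1, obs2), 2))  # 0.85; 1.0 is perfect agreement

Values near 1.0 indicate that the observers are applying the coding criteria consistently; values near 0 indicate agreement no better than chance, a signal that the behavioral categories need clearer definitions.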

Surveys

Often, psychologists develop surveys as a means of gathering data. Surveys are lists of questions to be answered by research participants, and can be delivered as paper-and-pencil questionnaires, administered electronically, or conducted verbally (Figure 2.9). Generally, the survey itself can be completed in a short time, and the ease of administering a survey makes it easy to collect data from a large number of people.

Surveys allow researchers to gather data from larger samples than may be afforded by other research methods. A sample is a subset of individuals selected from a population, which is the overall group of individuals that the researchers are interested in. Researchers study the sample and seek to generalize their findings to the population. Generally, researchers will begin this process by calculating various measures of central tendency from the data they have collected. These measures provide an overall summary of what a typical response looks like. There are three measures of central tendency: mode, median, and mean. The mode is the most frequently occurring response, the median lies at the middle of a given data set, and the mean is the arithmetic average of all data points. Means tend to be most useful in conducting additional analyses like those described below; however, means are very sensitive to the effects of outliers, and so one must be aware of those effects when making assessments of what measures of central tendency tell us about the data set in question.
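
As a small worked example (a sketch in Python with made-up survey responses, not data from any actual study), the code below computes all three measures and shows how a single outlier pulls the mean while leaving the median and mode essentially unchanged.

    import statistics

    # Hypothetical survey responses: self-reported hours of sleep per night.
    hours = [7, 7, 8, 6, 7, 8, 7]

    print(statistics.mode(hours))    # 7 (the most frequent response)
    print(statistics.median(hours))  # 7 (the middle value when sorted)
    print(statistics.mean(hours))    # about 7.14 (the arithmetic average)

    # Add one extreme outlier: only the mean shifts noticeably.
    hours.append(40)
    print(statistics.median(hours))          # 7.0
    print(round(statistics.mean(hours), 2))  # 11.25

This sensitivity is why a report of, say, average income can be misleading: a single extremely high value pulls the mean upward even though the median respondent is unaffected.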

A sample online survey reads, “Dear visitor, your opinion is important to us. We would like to invite you to participate in a short survey to gather your opinions and feedback on your news consumption habits. The survey will take approximately 10-15 minutes. Simply click the “Yes” button below to launch the survey. Would you like to participate?” Two buttons are labeled “yes” and “no.”
Figure 2.9 Surveys can be administered in a number of ways, including electronically administered research, like the survey shown here. (credit: Robert Nyman)

Surveys have both strengths and weaknesses in comparison to case studies. By using surveys, we can collect information from a larger sample of people. A larger sample is better able to reflect the actual diversity of the population, thus allowing better generalizability. Therefore, if our sample is sufficiently large and diverse, we can assume that the data we collect from the survey can be generalized to the larger population with more certainty than the information collected through a case study. However, given the greater number of people involved, we are not able to collect the same depth of information on each person that would be collected in a case study.

Another potential weakness of surveys is something we touched on earlier in this chapter: People don’t always give accurate responses. They may lie, misremember, or answer questions in a way that they think makes them look good. For example, people may report drinking less alcohol than is actually the case.

Any number of research questions can be answered through the use of surveys. One real-world example is the research conducted by Jenkins et al. (2012) about the backlash against the US Arab-American community following the terrorist attacks of September 11, 2001. Jenkins and colleagues wanted to determine to what extent these negative attitudes toward Arab-Americans still existed nearly a decade after the attacks occurred. In one study, 140 research participants filled out a survey with 10 questions, including questions asking directly about the participant’s overt prejudicial attitudes toward people of various ethnicities. The survey also asked indirect questions about how likely the participant would be to interact with a person of a given ethnicity in a variety of settings (such as, “How likely do you think it is that you would introduce yourself to a person of Arab-American descent?”). The results of the research suggested that participants were unwilling to report prejudicial attitudes toward any ethnic group. However, there were significant differences between their pattern of responses to questions about social interaction with Arab-Americans compared to other ethnic groups: they indicated less willingness for social interaction with Arab-Americans compared to the other ethnic groups. This suggested that the participants harbored subtle forms of prejudice against Arab-Americans, despite their assertions that this was not the case (Jenkins et al., 2012).

Archival Research

Some researchers gain access to large amounts of data without interacting with a single research participant. Instead, they use existing records to answer various research questions. This type of research approach is known as archival research. Archival research relies on examining past records or data sets for interesting patterns or relationships.

For example, a researcher might access the academic records of all individuals who enrolled in college within the past ten years and calculate how long it took them to complete their degrees, as well as course loads, grades, and extracurricular involvement. Archival research could provide important information about who is most likely to complete their education, and it could help identify important risk factors for struggling students (Figure 2.10).

(a) A photograph shows stacks of paper files on shelves. (b) A photograph shows a computer.
Figure 2.10 A researcher doing archival research examines records, whether archived as a (a) hardcopy or (b) electronically. (credit “paper files”: modification of work by “Newtown graffiti”/Flickr; “computer”: modification of work by INPIVIC Family/Flickr)

In comparing archival research to other research methods, there are several important distinctions. For one, the researcher employing archival research never directly interacts with research participants. Therefore, the investment of time and money to collect data is considerably less with archival research. Additionally, researchers have no control over what information was originally collected. Therefore, research questions have to be tailored so they can be answered within the structure of the existing data sets. There is also no guarantee of consistency between the records from one source to another, which might make comparing and contrasting different data sets problematic.

Longitudinal and Cross-Sectional Research

Sometimes we want to see how people change over time, as in studies of human development and lifespan. When we test the same group of individuals repeatedly over an extended period of time, we are conducting longitudinal research. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time. For example, we may survey a group of individuals about their dietary habits at age 20, retest them a decade later at age 30, and then again at age 40.

Another approach is cross-sectional research. In cross-sectional research, a researcher compares multiple segments of the population at the same time. Using the dietary habits example above, the researcher might directly compare different groups of people by age. Instead of studying a group of people for 20 years to see how their dietary habits changed from decade to decade, the researcher would study a group of 20-year-old individuals and compare them to a group of 30-year-old individuals and a group of 40-year-old individuals. While cross-sectional research requires a shorter-term investment, it is also limited by differences that exist between the different generations (or cohorts) that have nothing to do with age per se, but rather reflect the social and cultural experiences that make different generations of individuals different from one another.

To illustrate this concept, consider the following survey findings. In recent years there has been significant growth in the popular support of same-sex marriage. Many studies on this topic break down survey participants into different age groups. In general, younger people are more supportive of same-sex marriage than are those who are older (Jones, 2013). Does this mean that as we age we become less open to the idea of same-sex marriage, or does this mean that older individuals have different perspectives because of the social climates in which they grew up? Longitudinal research is a powerful approach because the same individuals are involved in the research project over time, which means that the researchers need to be less concerned with differences among cohorts affecting the results of their study.

Often longitudinal studies are employed when researching various diseases in an effort to understand particular risk factors. Such studies often involve tens of thousands of individuals who are followed for several decades. Given the enormous number of people involved in these studies, researchers can feel confident that their findings can be generalized to the larger population. The Cancer Prevention Study-3 (CPS-3) is one of a series of longitudinal studies sponsored by the American Cancer Society aimed at determining predictive risk factors associated with cancer. When participants enter the study, they complete a survey about their lives and family histories, providing information on factors that might cause or prevent the development of cancer. Then every few years the participants receive additional surveys to complete. In the end, hundreds of thousands of participants will be tracked over 20 years to determine which of them develop cancer and which do not.

Clearly, this type of research is important and potentially very informative. For instance, earlier longitudinal studies sponsored by the American Cancer Society provided some of the first scientific demonstrations of the now well-established links between increased rates of cancer and smoking (American Cancer Society, n.d.) (Figure 2.11).

A photograph shows a cigarette, an ashtray, and a surgeon's general warning.
Figure 2.11 Longitudinal research like the CPS-3 helps us to better understand how smoking is associated with cancer and other diseases.

As with any research strategy, longitudinal research is not without limitations. For one, these studies require an incredible time investment by the researcher and research participants. Given that some longitudinal studies take years, if not decades, to complete, the results will not be known for a considerable period of time. In addition to the time demands, these studies also require a substantial financial investment. Many researchers are unable to commit the resources necessary to see a longitudinal project through to the end.

Research participants must also be willing to continue their participation for an extended period of time, and this can be problematic. People move, get married and take new names, get ill, and eventually die. Even without significant life changes, some people may simply choose to discontinue their participation in the project. As a result, attrition rates, or reductions in the number of research participants due to dropouts, in longitudinal studies are quite high and increase over the course of a project. For this reason, researchers using this approach typically recruit many participants fully expecting that a substantial number will drop out before the end. As the study progresses, they continually check whether the sample still represents the larger population, and make adjustments as necessary.

Learning Objectives

By the end of this section, you will be able to:

  • Explain what a correlation coefficient tells us about the relationship between variables
  • Recognize that correlation does not indicate a cause-and-effect relationship between variables
  • Discuss our tendency to look for relationships between variables that do not really exist
  • Explain random sampling and assignment of participants into experimental and control groups
  • Discuss how experimenter or participant bias could affect the results of an experiment
  • Identify independent and dependent variables

Did you know that as sales in ice cream increase, so does the overall rate of crime? Is it possible that indulging in your favorite flavor of ice cream could send you on a crime spree? Or, after committing a crime, do you think you might decide to treat yourself to a cone? There is no question that a relationship exists between ice cream and crime (e.g., Harper, 2013), but it would be pretty foolish to decide that one thing actually caused the other to occur.

It is much more likely that both ice cream sales and crime rates are related to the temperature outside. When the temperature is warm, there are lots of people out of their houses, interacting with each other, getting annoyed with one another, and sometimes committing crimes. Also, when it is warm outside, we are more likely to seek a cool treat like ice cream. How do we determine if there is indeed a relationship between two things? And when there is a relationship, how can we discern whether it is attributable to coincidence or causation?

Correlational Research

Correlation means that there is a relationship between two or more variables (such as ice cream consumption and crime), but this relationship does not necessarily imply cause and effect. When two variables are correlated, it simply means that as one variable changes, so does the other. We can measure correlation by calculating a statistic known as a correlation coefficient. A correlation coefficient is a number from -1 to +1 that indicates the strength and direction of the relationship between variables. The correlation coefficient is usually represented by the letter r.

The number portion of the correlation coefficient indicates the strength of the relationship. The closer the number is to 1 (be it negative or positive), the more strongly related the variables are, and the more predictable changes in one variable will be as the other variable changes. The closer the number is to zero, the weaker the relationship, and the less predictable the relationship between the variables becomes. For instance, a correlation coefficient of 0.9 indicates a far stronger relationship than a correlation coefficient of 0.3. If the variables are not related to one another at all, the correlation coefficient is 0. Shoe size and hours of sleep, discussed below, are an example of two variables that we might expect to have no relationship to each other.

The sign—positive or negative—of the correlation coefficient indicates the direction of the relationship (Figure 2.12). A positive correlation means that the variables move in the same direction. Put another way, it means that as one variable increases so does the other, and conversely, when one variable decreases so does the other. A negative correlation means that the variables move in opposite directions. If two variables are negatively correlated, a decrease in one variable is associated with an increase in the other and vice versa.

The example of ice cream and crime rates is a positive correlation because both variables increase when temperatures are warmer. Other examples of positive correlations are the relationship between an individual’s height and weight or the relationship between a person’s age and number of wrinkles. One might expect a negative correlation to exist between someone’s tiredness during the day and the number of hours they slept the previous night: the amount of sleep decreases as the feelings of tiredness increase. In a real-world example of negative correlation, student researchers at the University of Minnesota found a weak negative correlation (r = -0.29) between the average number of days per week that students got fewer than 5 hours of sleep and their GPA (Lowry et al., 2010). Keep in mind that a negative correlation is not the same as no correlation. For example, we would probably find no correlation between hours of sleep and shoe size.
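As a concrete illustration, the following Python sketch computes r for a small, invented data set of hours slept and daytime tiredness ratings (Python 3.10 or later is assumed for statistics.correlation). The negative sign of the result reflects the inverse relationship described above.

```python
import statistics

# Hypothetical data: hours slept last night and a 1-10 tiredness rating.
hours_slept = [8, 7, 7, 6, 5, 9, 4, 8]
tiredness   = [2, 3, 4, 5, 7, 1, 8, 3]

# Pearson's r ranges from -1 to +1; the sign gives the direction of the
# relationship and the absolute value gives its strength.
r = statistics.correlation(hours_slept, tiredness)
print(f"r = {r:.2f}")  # strongly negative for these made-up data
```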

As mentioned earlier, correlations have predictive value. Imagine that you are on the admissions committee of a major university. You are faced with a huge number of applications, but you are able to accommodate only a small percentage of the applicant pool. How might you decide who should be admitted? You might try to correlate your current students’ college GPA with their scores on standardized tests like the SAT or ACT. By observing which correlations were strongest for your current students, you could use this information to predict relative success of those students who have applied for admission into the university.

Three scatterplots are shown. Scatterplot (a) is labeled “positive correlation” and shows scattered dots forming a rough line from the bottom left to the top right; the x-axis is labeled “weight” and the y-axis is labeled “height.” Scatterplot (b) is labeled “negative correlation” and shows scattered dots forming a rough line from the top left to the bottom right; the x-axis is labeled “tiredness” and the y-axis is labeled “hours of sleep.” Scatterplot (c) is labeled “no correlation” and shows scattered dots having no pattern; the x-axis is labeled “shoe size” and the y-axis is labeled “hours of sleep.”
Figure 2.12 Scatterplots are a graphical view of the strength and direction of correlations. The stronger the correlation, the closer the data points are to a straight line. In these examples, we see that there is (a) a positive correlation between weight and height, (b) a negative correlation between tiredness and hours of sleep, and (c) no correlation between shoe size and hours of sleep.

Correlation Does Not Indicate Causation

Correlational research is useful because it allows us to discover the strength and direction of relationships that exist between two variables. However, correlation is limited because establishing the existence of a relationship tells us little about cause and effect. While variables are sometimes correlated because one does cause the other, it could also be that some other factor, a confounding variable, is actually causing the systematic movement in our variables of interest. In the ice cream/crime rate example mentioned earlier, temperature is a confounding variable that could account for the relationship between the two variables.

Even when we cannot point to clear confounding variables, we should not assume that a correlation between two variables implies that one variable causes changes in another. This can be frustrating when a cause-and-effect relationship seems clear and intuitive. Think back to our discussion of the research done by the American Cancer Society and how their research projects were some of the first demonstrations of the link between smoking and cancer. It seems reasonable to assume that smoking causes cancer, but if we were limited to correlational research, we would be overstepping our bounds by making this assumption.

Unfortunately, people mistakenly make claims of causation as a function of correlations all the time. Such claims are especially common in advertisements and news stories. For example, recent research found that people who eat cereal on a regular basis achieve healthier weights than those who rarely eat cereal (Frantze et al., 2013; Barton et al., 2005). Guess how the cereal companies report this finding. Does eating cereal really cause an individual to maintain a healthy weight, or are there other possible explanations, such as that someone at a healthy weight is more likely to regularly eat a healthy breakfast than someone who is obese or someone who avoids meals in an attempt to diet (Figure 2.13)? While correlational research is invaluable in identifying relationships among variables, a major limitation is the inability to establish causality. Psychologists want to make statements about cause and effect, but the only way to do that is to conduct an experiment to answer a research question. The next section describes how scientific experiments incorporate methods that eliminate, or control for, alternative explanations, which allow researchers to explore how changes in one variable cause changes in another variable.

A photograph shows a bowl of cereal.
Figure 2.13 Does eating cereal really cause someone to be a healthy weight? (credit: Tim Skillern)

Illusory Correlations

The temptation to make erroneous cause-and-effect statements based on correlational research is not the only way we tend to misinterpret data. We also tend to make the mistake of illusory correlations, especially with unsystematic observations. Illusory correlations, or false correlations, occur when people believe that relationships exist between two things when no such relationship exists. One well-known illusory correlation is the supposed effect that the moon’s phases have on human behavior. Many people passionately assert that human behavior is affected by the phase of the moon, and specifically, that people act strangely when the moon is full (Figure 2.14).

A photograph shows the moon.
Figure 2.14 Many people believe that a full moon makes people behave oddly. (credit: Cory Zanker)

There is no denying that the moon exerts a powerful influence on our planet. The ebb and flow of the ocean’s tides are tightly tied to the gravitational forces of the moon. Many people believe, therefore, that it is logical that we are affected by the moon as well. After all, our bodies are largely made up of water. A meta-analysis of nearly 40 studies consistently demonstrated, however, that the relationship between the moon and our behavior does not exist (Rotton & Kelly, 1985). While we may pay more attention to odd behavior during the full phase of the moon, the rates of odd behavior remain constant throughout the lunar cycle.

Why are we so apt to believe in illusory correlations like this? Often we read or hear about them and simply accept the information as valid. Or, we have a hunch about how something works and then look for evidence to support that hunch, ignoring evidence that would tell us our hunch is false; this is known as confirmation bias. Other times, we find illusory correlations based on the information that comes most easily to mind, even if that information is severely limited. And while we may feel confident that we can use these relationships to better understand and predict the world around us, illusory correlations can have significant drawbacks. For example, research suggests that illusory correlations—in which certain behaviors are inaccurately attributed to certain groups—are involved in the formation of prejudicial attitudes that can ultimately lead to discriminatory behavior (Fiedler, 2004).

Causality: Conducting Experiments and Using the Data

As you’ve learned, the only way to establish that there is a cause-and-effect relationship between two variables is to conduct a scientific experiment. Experiment has a different meaning in the scientific context than in everyday life. In everyday conversation, we often use it to describe trying something for the first time, such as experimenting with a new hairstyle or a new food. However, in the scientific context, an experiment has precise requirements for design and implementation.

The Experimental Hypothesis

In order to conduct an experiment, a researcher must have a specific hypothesis to be tested. As you’ve learned, hypotheses can be formulated either through direct observation of the real world or after careful review of previous research. For example, if you think that the use of technology in the classroom has negative impacts on learning, then you have basically formulated a hypothesis—namely, that the use of technology in the classroom should be limited because it decreases learning. How might you have arrived at this particular hypothesis? You may have noticed that your classmates who take notes on their laptops perform at lower levels on class exams than those who take notes by hand, or those who receive a lesson via a computer program versus via an in-person teacher have different levels of performance when tested (Figure 2.15).

Many rows of students are in a classroom. One student has an open laptop on his desk.
Figure 2.15 How might the use of technology in the classroom impact learning? (credit: modification of work by Nikolay Georgiev/Pixabay)

These sorts of personal observations are what often lead us to formulate a specific hypothesis, but we cannot use limited personal observations and anecdotal evidence to rigorously test our hypothesis. Instead, to find out if real-world data supports our hypothesis, we have to conduct an experiment.

Designing an Experiment

The most basic experimental design involves two groups: the experimental group and the control group. The two groups are designed to be the same except for one difference—the experimental manipulation. The experimental group gets the experimental manipulation—that is, the treatment or variable being tested (in this case, the use of technology)—and the control group does not. Since the experimental manipulation is the only difference between the experimental and control groups, we can be sure that any differences between the two are due to the experimental manipulation rather than chance.

In our example of how technology use in the classroom affects learning, we have the experimental group learn algebra using a computer program and then test their learning. We measure the learning in our control group after they are taught algebra by a teacher in a traditional classroom. It is important for the control group to be treated similarly to the experimental group, with the exception that the control group does not receive the experimental manipulation.

We also need to precisely define, or operationalize, how we measure learning of algebra. An operational definition is a precise description of our variables, and it is important in allowing others to understand exactly how and what a researcher measures in a particular experiment. In operationalizing learning, we might choose to look at performance on a test covering the material on which the individuals were taught by the teacher or the computer program. We might also ask our participants to summarize the information that was just presented in some way. Whatever we determine, it is important that we operationalize learning in such a way that anyone who hears about our study for the first time knows exactly what we mean by learning. This aids peoples’ ability to interpret our data as well as their capacity to repeat our experiment should they choose to do so.

Once we have operationalized what is considered use of technology and what is considered learning in our experiment participants, we need to establish how we will run our experiment. In this case, we might have participants spend 45 minutes learning algebra (either through a computer program or with an in-person math teacher) and then give them a test on the material covered during the 45 minutes.

Ideally, the people who score the tests are unaware of who was assigned to the experimental or control group, in order to control for experimenter bias. Experimenter bias refers to the possibility that a researcher’s expectations might skew the results of the study. Remember, conducting an experiment requires a lot of planning, and the people involved in the research project have a vested interest in supporting their hypotheses. If the observers knew which participant was in which group, it might influence how they interpret ambiguous responses, such as sloppy handwriting or minor computational mistakes. By remaining blind to each participant’s group, observers protect against those biases. This situation is a single-blind study, meaning that one group (the participants) is unaware of which group they are in (experimental or control) while the researcher who developed the experiment knows which participants are in each group.

In a double-blind study, both the researchers and the participants are blind to group assignments. Why would a researcher want to run a study where no one knows who is in which group? Because by doing so, we can control for both experimenter and participant expectations. If you are familiar with the phrase placebo effect, you already have some idea as to why this is an important consideration. The placebo effect occurs when people’s expectations or beliefs influence or determine their experience in a given situation. In other words, simply expecting something to happen can actually make it happen.

The placebo effect is commonly described in terms of testing the effectiveness of a new medication. Imagine that you work in a pharmaceutical company, and you think you have a new drug that is effective in treating depression. To demonstrate that your medication is effective, you run an experiment with two groups: The experimental group receives the medication, and the control group does not. But you don’t want participants to know whether they received the drug or not.

Why is that? Imagine that you are a participant in this study, and you have just taken a pill that you think will improve your mood. Because you expect the pill to have an effect, you might feel better simply because you took the pill and not because of any drug actually contained in the pill—this is the placebo effect.

To make sure that any effects on mood are due to the drug and not due to expectations, the control group receives a placebo (in this case a sugar pill). Now everyone gets a pill, and once again neither the researcher nor the experimental participants know who got the drug and who got the sugar pill. Any differences in mood between the experimental and control groups can now be attributed to the drug itself rather than to experimenter bias or participant expectations (Figure 2.16).

A photograph shows three glass bottles of pills labeled as placebos.
Figure 2.16 Providing the control group with a placebo treatment protects against bias caused by expectancy. (credit: Elaine and Arthur Shapiro)

Independent and Dependent Variables

In a research experiment, we strive to study whether changes in one thing cause changes in another. To achieve this, we must pay attention to two important variables, or things that can be changed, in any experimental study: the independent variable and the dependent variable. An independent variable is manipulated or controlled by the experimenter. In a well-designed experimental study, the independent variable is the only important difference between the experimental and control groups. In our example of how technology use in the classroom affects learning, the independent variable is the type of instruction participants receive (Figure 2.17). A dependent variable is what the researcher measures to see how much effect the independent variable had. In our example, the dependent variable is the learning exhibited by our participants.

A box labeled “independent variable: taking notes on a laptop or by hand” contains a photograph of a classroom of students with an open laptop on one student's desk. An arrow labeled “influences change in the…” leads to a second box. The second box is labeled “dependent variable: performance on measure of learning” and has a photograph of a student at a desk, taking a test.
Figure 2.17 In an experiment, manipulations of the independent variable are expected to result in changes in the dependent variable. (credit: “classroom” modification of work by Nikolay Georgiev/Pixabay; credit “note taking”: modification of work by KF/Wikimedia)

We expect that the dependent variable will change as a function of the independent variable. In other words, the dependent variable depends on the independent variable. A good way to think about the relationship between the independent and dependent variables is with this question: What effect does the independent variable have on the dependent variable? Returning to our example, what is the effect of being taught a lesson through a computer program versus through an in-person instructor?

Selecting and Assigning Experimental Participants

Now that our study is designed, we need to obtain a sample of individuals to include in our experiment. Our study involves human participants, so we need to determine whom to include. Participants are the subjects of psychological research, and as the name implies, individuals who are involved in psychological research actively participate in the process. Often, psychological research projects rely on college students to serve as participants. In fact, the vast majority of research in psychology subfields has historically involved students as research participants (Sears, 1986; Arnett, 2008). But are college students truly representative of the general population? College students tend to be younger, more educated, more liberal, and less diverse than the general population. Although using students as test subjects is an accepted practice, relying on such a limited pool of research participants can be problematic because it is difficult to generalize findings to the larger population.

Our hypothetical experiment involves high school students, and we must first generate a sample of students. Samples are used because populations are usually too large to reasonably involve every member in our particular experiment (Figure 2.18). If possible, we should use a random sample (there are other types of samples, but for the purposes of this chapter, we will focus on random samples). A random sample is a subset of a larger population in which every member of the population has an equal chance of being selected. Random samples are preferred because if the sample is large enough we can be reasonably sure that the participating individuals are representative of the larger population. This means that the percentages of characteristics in the sample—sex, ethnicity, socioeconomic level, and any other characteristics that might affect the results—are close to those percentages in the larger population.

In our example, let’s say we decide our population of interest is algebra students. But all algebra students is a very large population, so we need to be more specific; instead we might say our population of interest is all algebra students in a particular city. We should include students from various income brackets, family situations, races, ethnicities, religions, and geographic areas of town. With this more manageable population, we can work with the local schools in selecting a random sample of around 200 algebra students who we want to participate in our experiment.

In summary, because we cannot test all of the algebra students in a city, we want to find a group of about 200 that reflects the composition of that city. With a representative group, we can generalize our findings to the larger population without fear of our sample being biased in some way.
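In practice, researchers typically use software to draw such a sample. Here is a minimal Python sketch, assuming a hypothetical roster in which each of the city's 2,000 algebra students is identified by an ID number; random.sample gives every student an equal chance of selection.

```python
import random

# Hypothetical sampling frame: ID numbers for all algebra students in the city.
population = list(range(1, 2001))  # 2,000 students

# Draw a simple random sample of 200 students.
sample = random.sample(population, k=200)
print(len(sample))  # 200
```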

(a) A photograph shows an aerial view of crowds on a street. (b) A photograph shows s small group of children.
Figure 2.18 Researchers may work with (a) a large population or (b) a sample group that is a subset of the larger population. (credit “crowd”: modification of work by James Cridland; credit “students”: modification of work by Laurie Sullivan)

Now that we have a sample, the next step of the experimental process is to split the participants into experimental and control groups through random assignment. With random assignment, all participants have an equal chance of being assigned to either group. There is statistical software that will randomly assign each of the algebra students in the sample to either the experimental or the control group.

Random assignment is critical for sound experimental design. With sufficiently large samples, random assignment makes it unlikely that there are systematic differences between the groups. So, for instance, it would be very unlikely that we would get one group composed entirely of males, a given ethnic identity, or a given religious ideology. This is important because if the groups were systematically different before the experiment began, we would not know the origin of any differences we find between the groups: Were the differences preexisting, or were they caused by manipulation of the independent variable? Random assignment allows us to assume that any differences observed between experimental and control groups result from the manipulation of the independent variable.
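Random assignment is just as easy to automate. Continuing the hypothetical example above, this sketch shuffles the 200 sampled students and splits the list in half, so every student has an equal chance of ending up in either group.

```python
import random

sample = list(range(1, 201))  # the 200 sampled student IDs

random.shuffle(sample)             # put the IDs in random order
experimental_group = sample[:100]  # e.g., taught by the computer program
control_group = sample[100:]       # e.g., taught by the in-person teacher
```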

Issues to Consider

While experiments allow scientists to make cause-and-effect claims, they are not without problems. True experiments require the experimenter to manipulate an independent variable, and that can complicate many questions that psychologists might want to address. For instance, imagine that you want to know what effect sex (the independent variable) has on spatial memory (the dependent variable). Although you can certainly look for differences between males and females on a task that taps into spatial memory, you cannot directly control a person’s sex. We categorize this type of research approach as quasi-experimental and recognize that we cannot make cause-and-effect claims in these circumstances.

Experimenters are also limited by ethical constraints. For instance, you would not be able to conduct an experiment designed to determine if experiencing abuse as a child leads to lower levels of self-esteem among adults. To conduct such an experiment, you would need to randomly assign some experimental participants to a group that receives abuse, and that experiment would be unethical.

Interpreting Experimental Findings

Once data is collected from both the experimental and the control groups, a statistical analysis is conducted to find out if there are meaningful differences between the two groups. A statistical analysis determines how likely it is that any difference found is due to chance (and thus not meaningful). For example, if an experiment is done on the effectiveness of a nutritional supplement, and those taking a placebo pill (and not the supplement) have the same result as those taking the supplement, then the experiment has shown that the nutritional supplement is not effective. Generally, psychologists consider differences to be statistically significant if there is less than a five percent chance of observing them if the groups did not actually differ from one another. Stated another way, psychologists want to limit the chances of making “false positive” claims to five percent or less.
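One common analysis for a two-group design is an independent-samples t-test, sketched below in Python with invented test scores (the SciPy library is assumed to be installed). The p-value it produces estimates how likely a difference this large would be if the groups did not actually differ; values below .05 meet the conventional criterion for statistical significance.

```python
from scipy import stats

# Hypothetical exam scores (0-100) for the two groups.
experimental = [78, 85, 69, 91, 74, 88, 80, 77, 83, 90]
control      = [72, 70, 65, 80, 68, 75, 71, 66, 74, 79]

# An independent-samples t-test compares the two group means.
t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"p = {p_value:.3f}")  # compare against the conventional .05 cutoff
```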

The greatest strength of experiments is the ability to assert that any significant differences in the findings are caused by the independent variable. This occurs because random selection, random assignment, and a design that limits the effects of both experimenter bias and participant expectancy should create groups that are similar in composition and treatment. Therefore, any difference between the groups is attributable to the independent variable, and now we can finally make a causal statement. If we find that watching a violent television program results in more violent behavior than watching a nonviolent program, we can safely say that watching violent television programs causes an increase in the display of violent behavior.

Reporting Research

When psychologists complete a research project, they generally want to share their findings with other scientists. The American Psychological Association (APA) publishes a manual detailing how to write a paper for submission to scientific journals. Unlike an article that might be published in a magazine like Psychology Today, which targets a general audience with an interest in psychology, scientific journals generally publish peer-reviewed journal articles aimed at an audience of professionals and scholars who are actively involved in research themselves.

A peer-reviewed journal article is read by several other scientists (generally anonymously) with expertise in the subject matter. These peer reviewers provide feedback—to both the author and the journal editor—regarding the quality of the draft. Peer reviewers look for a strong rationale for the research being described, a clear description of how the research was conducted, and evidence that the research was conducted in an ethical manner. They also look for flaws in the study’s design, methods, and statistical analyses. They check that the conclusions drawn by the authors seem reasonable given the observations made during the research. Peer reviewers also comment on how valuable the research is in advancing the discipline’s knowledge. This helps prevent unnecessary duplication of research findings in the scientific literature and, to some extent, ensures that each research article provides new information. Ultimately, the journal editor will compile all of the peer reviewer feedback and determine whether the article will be published in its current state (a rare occurrence), published with revisions, or not accepted for publication.

Peer review provides some degree of quality control for psychological research. Poorly conceived or executed studies can be weeded out, and even well-designed research can be improved by the revisions suggested. Peer review also ensures that the research is described clearly enough to allow other scientists to replicate it, meaning they can repeat the experiment using different samples to determine reliability. Sometimes replications involve additional measures that expand on the original finding. In any case, each replication serves to provide more evidence to support the original research findings. Successful replications of published research make scientists more apt to adopt those findings, while repeated failures tend to cast doubt on the legitimacy of the original article and lead scientists to look elsewhere. For example, it would be a major advancement in the medical field if a published study indicated that taking a new drug helped individuals achieve a healthy weight without changing their diet. But if other scientists could not replicate the results, the original study’s claims would be questioned.

In recent years, there has been increasing concern about a “replication crisis” that has affected a number of scientific fields, including psychology. Some of the most well-known studies and scientists have produced research that has failed to be replicated by others (as discussed in Shrout & Rodgers, 2018). In fact, even a famous Nobel Prize-winning scientist has recently retracted a published paper because she had difficulty replicating her results (Nobel Prize-winning scientist Frances Arnold retracts paper, 2020 January 3). These kinds of outcomes have prompted some scientists to begin to work together and more openly, and some would argue that the current “crisis” is actually improving the ways in which science is conducted and in how its results are shared with others (Aschwanden, 2018).

DIG DEEPER: The Vaccine-Autism Myth and Retraction of Published Studies

Some scientists have claimed that routine childhood vaccines cause some children to develop autism, and, in fact, several peer-reviewed journals published research making these claims. Since the initial reports, large-scale epidemiological research has suggested that vaccinations are not responsible for causing autism and that it is much safer to have your child vaccinated than not. Furthermore, several of the original studies making this claim have since been retracted.

A published piece of work can be retracted when data is called into question because of falsification, fabrication, or serious research design problems. Once retracted, the scientific community is informed that there are serious problems with the original publication. Retractions can be initiated by the researcher who led the study, by research collaborators, by the institution that employed the researcher, or by the editorial board of the journal in which the article was originally published. In the vaccine-autism case, the retraction was made because of a significant conflict of interest in which the leading researcher had a financial interest in establishing a link between childhood vaccines and autism (Offit, 2008). Unfortunately, the initial studies received so much media attention that many parents around the world became hesitant to have their children vaccinated (Figure 2.19). Continued reliance on such debunked studies has significant consequences. For instance, between January and October of 2019, there were 22 measles outbreaks across the United States and more than a thousand cases of individuals contracting measles (Patel et al., 2019). This is likely due to the anti-vaccination movements that have arisen from the debunked research.

A photograph shows a child being given an oral vaccine.
Figure 2.19 Some people still think vaccinations cause autism. (credit: modification of work by UNICEF Sverige)

Reliability and Validity

Reliability and validity are two important considerations that must be made with any type of data collection. Reliability refers to the ability to consistently produce a given result. In the context of psychological research, this would mean that any instruments or tools used to collect data do so in consistent, reproducible ways. There are a number of different types of reliability. Some of these include inter-rater reliability (the degree to which two or more different observers agree on what has been observed), internal consistency (the degree to which different items on a survey that measure the same thing correlate with one another), and test-retest reliability (the degree to which the outcomes of a particular measure remain consistent over multiple administrations).

Unfortunately, being consistent in measurement does not necessarily mean that you have measured something correctly. To illustrate this concept, consider a kitchen scale that would be used to measure the weight of cereal that you eat in the morning. If the scale is not properly calibrated, it may consistently under- or overestimate the amount of cereal that’s being measured. While the scale is highly reliable in producing consistent results (e.g., the same amount of cereal poured onto the scale produces the same reading each time), those results are incorrect. This is where validity comes into play. Validity refers to the extent to which a given instrument or tool accurately measures what it’s supposed to measure, and once again, there are a number of ways in which validity can be expressed. Ecological validity (the degree to which research results generalize to real-world applications), construct validity (the degree to which a given variable actually captures or measures what it is intended to measure), and face validity (the degree to which a given variable seems valid on the surface) are just a few types that researchers consider. While any valid measure is by necessity reliable, the reverse is not necessarily true. Researchers strive to use instruments that are both highly reliable and valid.
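The kitchen-scale example can be made concrete with a tiny, hypothetical Python sketch: the miscalibrated scale below returns the identical reading every time (perfectly reliable) yet never reports the true weight (not valid).

```python
TRUE_WEIGHT = 40.0  # grams of cereal actually on the scale

def miscalibrated_scale(weight):
    """A hypothetical scale that consistently reads 25% low."""
    return round(weight * 0.75, 1)

readings = [miscalibrated_scale(TRUE_WEIGHT) for _ in range(3)]
print(readings)  # [30.0, 30.0, 30.0]: consistent (reliable) but wrong (not valid)
```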

An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://open.maricopa.edu/intropsychme/?p=21#h5p-5

EVERYDAY CONNECTION

How Valid Are the SAT and ACT?

Standardized tests like the SAT and ACT are supposed to measure an individual’s aptitude for a college education, but how reliable and valid are such tests? Research conducted by the College Board suggests that scores on the SAT have high predictive validity for first-year college students’ GPA (Kobrin et al., 2008). In this context, predictive validity refers to the test’s ability to effectively predict the GPA of college freshmen. Given that many institutions of higher education require the SAT or ACT for admission, this high degree of predictive validity might be comforting.

However, the emphasis placed on SAT or ACT scores in college admissions has generated some controversy on a number of fronts. For one, some researchers assert that these tests are biased, place minority students at a disadvantage, and unfairly reduce their likelihood of being admitted into a college (Santelices & Wilson, 2010). Additionally, some research has suggested that the predictive validity of these tests is grossly exaggerated in how well they are able to predict the GPA of first-year college students. In fact, it has been suggested that the SAT’s predictive validity may be overestimated by as much as 150% (Rothstein, 2004). Many institutions of higher education are beginning to consider de-emphasizing the significance of SAT scores in making admission decisions (Rimer, 2008).

Recent examples of high profile cheating scandals both domestically and abroad have only increased the scrutiny being placed on these types of tests, and as of March 2019, more than 1000 institutions of higher education have either relaxed or eliminated the requirements for SAT or ACT testing for admissions (Strauss, 2019, March 19).

Learning Objectives

By the end of this section, you will be able to:

  • Discuss how research involving human subjects is regulated
  • Summarize the processes of informed consent and debriefing
  • Explain how research involving animal subjects is regulated

Today, scientists agree that good research is ethical in nature and is guided by a basic respect for human dignity and safety. However, as you will read in the feature box, this has not always been the case. Modern researchers must demonstrate that the research they perform is ethically sound. This section presents how ethical considerations affect the design and implementation of research conducted today.

Research Involving Human Participants

Any experiment involving the participation of human subjects is governed by extensive, strict guidelines designed to ensure that the experiment does not result in harm. Any research institution that receives federal support for research involving human participants must have access to an institutional review board (IRB). The IRB is a committee of individuals often made up of members of the institution’s administration, scientists, and community members (Figure 2.20). The purpose of the IRB is to review proposals for research that involves human participants. The IRB reviews these proposals with the principles mentioned above in mind, and generally, approval from the IRB is required in order for the experiment to proceed.

A photograph shows a group of people seated around tables in a meeting room.
Figure 2.20 An institution’s IRB meets regularly to review experimental proposals that involve human participants. (credit: International Hydropower Association/Flickr)

An institution’s IRB requires several components in any experiment it approves. For one, each participant must sign an informed consent form before they can participate in the experiment. An informed consent form provides a written description of what participants can expect during the experiment, including potential risks and implications of the research. It also lets participants know that their involvement is completely voluntary and can be discontinued without penalty at any time. Furthermore, the informed consent guarantees that any data collected in the experiment will remain completely confidential. In cases where research participants are under the age of 18, the parents or legal guardians are required to sign the informed consent form.

While the informed consent form should be as honest as possible in describing exactly what participants will be doing, sometimes deception is necessary to prevent participants’ knowledge of the exact research question from affecting the results of the study. Deception involves purposely misleading experiment participants in order to maintain the integrity of the experiment, but not to the point where the deception could be considered harmful. For example, if we are interested in how our opinion of someone is affected by their attire, we might use deception in describing the experiment to prevent that knowledge from affecting participants’ responses. In cases where deception is involved, participants must receive a full debriefing upon conclusion of the study—complete, honest information about the purpose of the experiment, how the data collected will be used, the reasons why deception was necessary, and information about how to obtain additional information about the study.

DIG DEEPER: Ethics and the Tuskegee Syphilis Study

Unfortunately, the ethical guidelines that exist for research today were not always applied in the past. In 1932, poor, rural, black, male sharecroppers from Tuskegee, Alabama, were recruited to participate in an experiment conducted by the U.S. Public Health Service, with the aim of studying syphilis in black men (Figure 2.21). In exchange for free medical care, meals, and burial insurance, 600 men agreed to participate in the study. A little more than half of the men tested positive for syphilis, and they served as the experimental group (given that the researchers could not randomly assign participants to groups, this represents a quasi-experiment). The remaining syphilis-free individuals served as the control group. However, those individuals that tested positive for syphilis were never informed that they had the disease.

While there was no treatment for syphilis when the study began, by 1947 penicillin was recognized as an effective treatment for the disease. Despite this, no penicillin was administered to the participants in this study, and the participants were not allowed to seek treatment at any other facility if they continued in the study. Over the course of 40 years, many of the participants unknowingly spread syphilis to their wives (and subsequently their children born from their wives) and eventually died because they never received treatment for the disease. This study was discontinued in 1972 when the experiment was discovered by the national press (Tuskegee University, n.d.). The resulting outrage over the experiment led directly to the National Research Act of 1974 and the strict ethical guidelines for research on humans described in this chapter. Why is this study unethical? How were the men who participated and their families harmed as a function of this research?

A photograph shows a person administering an injection.
Figure 2.21 A participant in the Tuskegee Syphilis Study receives an injection.

Research Involving Animal Subjects

Many psychologists conduct research involving animal subjects. Often, these researchers use rodents (Figure 2.22) or birds as the subjects of their experiments—the APA estimates that 90% of all animal research in psychology uses these species (American Psychological Association, n.d.). Because many basic processes in animals are sufficiently similar to those in humans, these animals are acceptable substitutes for research that would be considered unethical in human participants.

A photograph shows a rat.
Figure 2.22 Rats, like the one shown here, often serve as the subjects of animal research.

This does not mean that animal researchers are immune to ethical concerns. Indeed, the humane and ethical treatment of animal research subjects is a critical aspect of this type of research. Researchers must design their experiments to minimize any pain or distress experienced by animals serving as research subjects.

Whereas IRBs review research proposals that involve human participants, animal experimental proposals are reviewed by an Institutional Animal Care and Use Committee (IACUC). An IACUC consists of institutional administrators, scientists, veterinarians, and community members. This committee is charged with ensuring that all experimental proposals require the humane treatment of animal research subjects. It also conducts semi-annual inspections of all animal facilities to ensure that the research protocols are being followed. No animal research project can proceed without the committee’s approval.

Review of MCCCD Course Competencies

After reading this chapter, are you better able to do the following?

  • Describe the scientific method and how it is used to answer psychological questions about human thought and behavior.
  • Critically evaluate information to help make evidence-based decisions.
  • Apply biopsychosocial principles to real-world situations.
  • Use psychological principles to explain the diversity and complexity of the human experience.

Chapter Review Quiz

An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://open.maricopa.edu/intropsychme/?p=21#h5p-23

Access for free at https://openstax.org/books/psychology-2e/pages/1-introduction

3

Biopsychology

Three brain-imaging scans are shown.
Figure 3.1 Different brain imaging techniques provide scientists with insight into different aspects of how the human brain functions. Left to right, PET scan (positron emission tomography), CT scan (computerized tomography), and fMRI (functional magnetic resonance imaging) are three types of scans. 

Have you ever taken a device apart to find out how it works? Many of us have done so, whether to attempt a repair or simply to satisfy our curiosity. A device’s internal workings are often distinct from its user interface on the outside. For example, we don’t think about microchips and circuits when we turn up the volume on a mobile phone; instead, we think about getting the volume just right. Similarly, the inner workings of the human body are often distinct from the external expression of those workings. It is the job of psychologists to find the connection between these—for example, to figure out how the firings of millions of neurons become a thought.

This chapter strives to explain the biological mechanisms that underlie behavior. These physiological and anatomical foundations are the basis for many areas of psychology. In this chapter, you will learn how genetics influence both physiological and psychological traits. You will become familiar with the structure and function of the nervous system. And, finally, you will learn how the nervous system interacts with the endocrine system.

MCCCD Course Competencies

  • Identify brain structures and how neuroscientific processes play a role in human thought and behavior.
  • Critically evaluate information to help make evidence-based decisions.
  • Apply biopsychosocial principles to real-world situations.
  • Use psychological principles to explain the diversity and complexity of the human experience.

 

Learning Objectives

By the end of this section, you will be able to:

  • Explain the basic principles of the theory of evolution by natural selection
  • Describe the differences between genotype and phenotype
  • Discuss how gene-environment interactions are critical for expression of physical and psychological characteristics

Psychological researchers study genetics in order to better understand the biological factors that contribute to certain behaviors. While all humans share certain biological mechanisms, we are each unique. And while our bodies have many of the same parts—brains and hormones and cells with genetic codes—these are expressed in a wide variety of behaviors, thoughts, and reactions.

Why do two people infected by the same disease have different outcomes: one surviving and one succumbing to the ailment? How are genetic diseases passed through family lines? Are there genetic components to psychological disorders, such as depression or schizophrenia? To what extent might there be a psychological basis to health conditions such as childhood obesity?

To explore these questions, let’s start by focusing on a specific genetic disorder, sickle-cell anemia, and how it might manifest in two sisters. Sickle-cell anemia is a genetic condition in which red blood cells, which are normally round, take on a crescent-like shape (Figure 3.2). The changed shape of these cells affects how they function: sickle-shaped cells can clog blood vessels and block blood flow, leading to high fever, severe pain, swelling, and tissue damage.

An illustration shows round and sickle-shaped blood cells.
Figure 3.2 Normal blood cells travel freely through the blood vessels, while sickle-shaped cells form blockages preventing blood flow.

Many people with sickle-cell anemia—and the particular genetic mutation that causes it—die at an early age. While the notion of “survival of the fittest” may suggest that people suffering from this disorder have a low survival rate and therefore the disorder will become less common, this is not the case. Despite the negative evolutionary effects associated with this genetic mutation, the sickle-cell gene remains relatively common among people of African descent. Why is this? The explanation is illustrated with the following scenario.

Imagine two young women—Luwi and Sena—sisters in rural Zambia, Africa. Luwi carries the gene for sickle-cell anemia; Sena does not carry the gene. Sickle-cell carriers have one copy of the sickle-cell gene but do not have full-blown sickle-cell anemia. They experience symptoms only if they are severely dehydrated or are deprived of oxygen (as in mountain climbing). Carriers are thought to be immune to malaria (an often deadly disease that is widespread in tropical climates) because changes in their blood chemistry and immune functioning prevent the malaria parasite from having its effects (Gong et al., 2013). However, full-blown sickle-cell anemia, with two copies of the sickle-cell gene, does not provide immunity to malaria.

While walking home from school, both sisters are bitten by mosquitos carrying the malaria parasite. Luwi is protected against malaria because she carries the sickle-cell mutation. Sena, on the other hand, develops malaria and dies just two weeks later. Luwi survives and eventually has children, to whom she may pass on the sickle-cell mutation.

Malaria is rare in the United States, so in that environment the sickle-cell gene benefits nobody: carriers with one copy experience primarily minor health problems, and those with two copies suffer the severe full-blown disease with no offsetting health benefit. However, the situation is quite different in other parts of the world. In parts of Africa where malaria is prevalent, having the sickle-cell mutation does provide health benefits for carriers (protection from malaria).

The story of malaria fits with Charles Darwin’s theory of evolution by natural selection (Figure 3.3). In simple terms, the theory states that organisms that are better suited for their environment will survive and reproduce, while those that are poorly suited for their environment will die off. In our example, we can see that, as a carrier, Luwi’s mutation is highly adaptive in her African homeland; however, if she resided in the United States (where malaria is rare), her mutation could prove costly—with a high probability of the disease in her descendants and minor health problems of her own.

Image (a) is a painted portrait of Darwin. Image (b) is a sketch of lines that split apart into branched structures.
Figure 3.3 (a) In 1859, Charles Darwin proposed his theory of evolution by natural selection in his book, On the Origin of Species. (b) The book contains just one illustration: this diagram that shows how species evolve over time through natural selection.

DIG DEEPER: Two Perspectives on Genetics and Behavior

It’s easy to confuse the two fields that study the interaction of genes and the environment: evolutionary psychology and behavioral genetics. How can we tell them apart?

In both fields, it is understood that genes not only code for particular traits, but also contribute to certain patterns of cognition and behavior. Evolutionary psychology focuses on how universal patterns of behavior and cognitive processes have evolved over time. Therefore, variations in cognition and behavior would make individuals more or less successful in reproducing and passing those genes on to their offspring. Evolutionary psychologists study a variety of psychological phenomena that may have evolved as adaptations, including fear response, food preferences, mate selection, and cooperative behaviors (Confer et al., 2010).

Whereas evolutionary psychologists focus on universal patterns that evolved over millions of years, behavioral geneticists study how individual differences arise, in the present, through the interaction of genes and the environment. When studying human behavior, behavioral geneticists often employ twin and adoption studies to research questions of interest. Twin studies compare the likelihood that a given behavioral trait is shared among identical and fraternal twins; adoption studies compare those rates among biologically related relatives and adopted relatives. Both approaches provide some insight into the relative importance of genes and environment for the expression of a given trait.
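
For readers who want to see the twin-study logic in numbers: a classic back-of-the-envelope method, Falconer’s formula, estimates how much of a trait’s variation is genetic by comparing how strongly identical versus fraternal twins resemble each other. The formula is not presented in this text; the sketch below (in Python, with made-up correlation values) is offered only to show why the identical-versus-fraternal comparison is informative.

    def falconer_estimates(r_mz, r_dz):
        """Rough decomposition of trait variance from twin correlations.

        r_mz: trait correlation among identical (monozygotic) twin pairs
        r_dz: trait correlation among fraternal (dizygotic) twin pairs
        """
        h2 = 2 * (r_mz - r_dz)   # rough genetic contribution
        c2 = 2 * r_dz - r_mz     # rough shared-environment contribution
        e2 = 1 - r_mz            # everything else, including measurement error
        return {"h2": round(h2, 2), "c2": round(c2, 2), "e2": round(e2, 2)}

    # Hypothetical values: identical twins correlate 0.70 on some trait, fraternal twins 0.45.
    print(falconer_estimates(0.70, 0.45))  # {'h2': 0.5, 'c2': 0.2, 'e2': 0.3}

Real behavioral-genetic analyses use far more sophisticated models, but the intuition is the same: the more identical twins outstrip fraternal twins in similarity, the larger the estimated genetic contribution.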

 

Genetic Variation

Genetic variation, the genetic difference between individuals, is what contributes to a species’ adaptation to its environment. In humans, genetic variation begins with an egg, about 100 million sperm, and fertilization. Fertile women ovulate roughly once per month, releasing an egg from follicles in the ovary. During the egg’s journey from the ovary through the fallopian tubes, to the uterus, a sperm may fertilize the egg.

The egg and the sperm each contain 23 chromosomes. Chromosomes are long strings of genetic material known as deoxyribonucleic acid (DNA). DNA is a helix-shaped molecule made up of nucleotide base pairs. In each chromosome, sequences of DNA make up genes that control or partially control a number of visible characteristics, known as traits, such as eye color, hair color, and so on. A single gene may have multiple possible variations, or alleles. An allele is a specific version of a gene. So, a given gene may code for the trait of hair color, and the different alleles of that gene affect which hair color an individual has.

When a sperm and egg fuse, their 23 chromosomes combine to create a zygote with 46 chromosomes (23 pairs). Therefore, each parent contributes half the genetic information carried by the offspring; the resulting physical characteristics of the offspring (called the phenotype) are determined by the interaction of genetic material supplied by the parents (called the genotype). A person’s genotype is the genetic makeup of that individual. Phenotype, on the other hand, refers to the individual’s inherited physical characteristics, which are a combination of genetic and environmental influences (Figure 3.4).

Image (a) shows the helical structure of DNA. Image (b) shows a person’s face.
Figure 3.4 (a) Genotype refers to the genetic makeup of an individual based on the genetic material (DNA) inherited from one’s parents. (b) Phenotype describes an individual’s observable characteristics, such as hair color, skin color, height, and build. (credit a: modification of work by Caroline Davis; credit b: modification of work by Cory Zanker)

Most traits are controlled by multiple genes, but some traits are controlled by one gene. A characteristic like cleft chin, for example, is influenced by a single gene from each parent. In this example, we will call the gene for cleft chin “B,” and the gene for smooth chin “b.” Cleft chin is a dominant trait, which means that having the dominant allele either from one parent (Bb) or both parents (BB) will always result in the phenotype associated with the dominant allele. When someone has two copies of the same allele, they are said to be homozygous for that allele. When someone has a combination of alleles for a given gene, they are said to be heterozygous. For example, a smooth chin is a recessive trait, which means that an individual will only display the smooth chin phenotype if they are homozygous for that recessive allele (bb).

Imagine that a woman with a cleft chin mates with a man with a smooth chin. What type of chin will their child have? The answer to that depends on which alleles each parent carries. If the woman is homozygous for cleft chin (BB), her offspring will always have a cleft chin. It gets a little more complicated, however, if the mother is heterozygous for this gene (Bb). Since the father has a smooth chin—therefore homozygous for the recessive allele (bb)—we can expect the offspring to have a 50% chance of having a cleft chin and a 50% chance of having a smooth chin (Figure 3.5).

Image (a) is a Punnett square showing the four possible combinations (Bb, bb, Bb, bb) resulting from the pairing of a bb father and a Bb mother. Image (b) is a close-up photograph showing a cleft chin.
Figure 3.5 (a) A Punnett square is a tool used to predict how genes will interact in the production of offspring. The capital B represents the dominant allele, and the lowercase b represents the recessive allele. In the example of the cleft chin, where B is cleft chin (dominant allele), wherever a pair contains the dominant allele, B, you can expect a cleft chin phenotype. You can expect a smooth chin phenotype only when there are two copies of the recessive allele, bb. (b) A cleft chin, shown here, is an inherited trait.

In sickle-cell anemia, heterozygous carriers (like Luwi from the example) can develop resistance to malaria infection, while those who are homozygous for the sickle-cell allele have a potentially lethal blood disorder. (Sena, who carries no copies of the allele, has neither the disorder nor the protection.) Sickle-cell anemia is just one of many genetic disorders caused by the pairing of two recessive alleles. For example, phenylketonuria (PKU) is a condition in which individuals lack an enzyme that normally converts harmful amino acids into harmless byproducts. If someone with this condition goes untreated, he or she will experience significant deficits in cognitive function, seizures, and an increased risk of various psychiatric disorders. Because PKU is a recessive trait, each parent must have at least one copy of the recessive allele in order to produce a child with the condition (Figure 3.6).

So far, we have discussed traits that involve just one gene, but few human characteristics are controlled by a single gene. Most traits are polygenic: controlled by more than one gene. Height is one example of a polygenic trait, as are skin color and weight.

A Punnett square shows the four possible combinations (NN, Np, Np, pp) resulting from the pairing of two Np parents.
Figure 3.6 In this Punnett square, N represents the normal allele, and p represents the recessive allele that is associated with PKU. If two individuals mate who are both heterozygous for the allele associated with PKU, their offspring have a 25% chance of expressing the PKU phenotype.
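
A Punnett square is simply an enumeration of the possible allele pairings, so both crosses just described can be reproduced in a few lines of code. The sketch below (Python) uses the allele symbols from the text; it illustrates the counting logic, nothing more.

    from itertools import product

    def cross(parent1, parent2):
        """List every pairing of one allele from each parent (a Punnett square)
        and return the expected fraction of offspring with each genotype."""
        offspring = ["".join(sorted(pair)) for pair in product(parent1, parent2)]
        return {g: offspring.count(g) / len(offspring) for g in set(offspring)}

    # Cleft-chin example (Figure 3.5): heterozygous mother (Bb), smooth-chinned father (bb).
    print(cross("Bb", "bb"))  # Bb: 0.5, bb: 0.5 -> 50% cleft chin, 50% smooth chin

    # PKU example (Figure 3.6): two heterozygous (Np) parents.
    print(cross("Np", "Np"))  # NN: 0.25, Np: 0.5, pp: 0.25 -> 25% express PKU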

Where do harmful genes that contribute to diseases like PKU come from? Gene mutations provide one source of harmful genes. A mutation is a sudden, permanent change in a gene. While many mutations can be harmful or lethal, once in a while, a mutation benefits an individual by giving that person an advantage over those who do not have the mutation. Recall that the theory of evolution asserts that individuals best adapted to their particular environments are more likely to reproduce and pass on their genes to future generations. In order for this process to occur, there must be competition—more technically, there must be variability in genes (and resultant traits) that allow for variation in adaptability to the environment. If a population consisted of identical individuals, then any dramatic changes in the environment would affect everyone in the same way, and there would be no variation in selection. In contrast, diversity in genes and associated traits allows some individuals to perform slightly better than others when faced with environmental change. This creates a distinct advantage for individuals best suited for their environments in terms of successful reproduction and genetic transmission.

DIG DEEPER: Human Diversity

This chapter focuses on biology. Later in this course, you will learn about social psychology and issues of race, prejudice, and discrimination. When we focus strictly on biology, race becomes a weak construct. After the sequencing of the human genome at the turn of the millennium, many scientists began to argue that race was not a useful variable in genetic research and that its continued use represents a potential source of confusion and harm. The racial categories that some believed to be helpful in studying genetic diversity in humans are largely irrelevant. A person’s skin tone, eye color, and hair texture are functions of their genetic makeups, but there is actually more genetic variation within a given racial category than there is between racial categories. In some cases, focus on race has led to difficulties with misdiagnoses and/or under-diagnoses of diseases ranging from sickle cell anemia to cystic fibrosis. Some argue that we need to distinguish between ancestry and race and then focus on ancestry. This approach would facilitate a greater understanding of human genetic diversity (Yudell et al., 2016).

Gene-Environment Interactions

Genes do not exist in a vacuum. Although we are all biological organisms, we also exist in an environment that is incredibly important in determining not only when and how our genes express themselves, but also in what combination. Each of us represents a unique interaction between our genetic makeup and our environment; range of reaction is one way to describe this interaction. Range of reaction asserts that our genes set the boundaries within which we can operate, and our environment interacts with the genes to determine where in that range we will fall. For example, if an individual’s genetic makeup predisposes her to high levels of intellectual potential and she is reared in a rich, stimulating environment, then she will be more likely to achieve her full potential than if she were raised under conditions of significant deprivation. According to the concept of range of reaction, genes set definite limits on potential, and environment determines how much of that potential is achieved. Some disagree with this theory, arguing that genes do not set a limit on a person’s potential and that reaction norms are instead determined by the environment. For example, when individuals experience neglect or abuse early in life, they are more likely to exhibit adverse psychological and/or physical conditions that can last throughout their lives. These conditions may develop as a function of the negative environmental experiences in individuals from dissimilar genetic backgrounds (Miguel et al., 2019; Short & Baram, 2019).

Another perspective on the interaction between genes and the environment is the concept of genetic environmental correlation. Stated simply, our genes influence our environment, and our environment influences the expression of our genes (Figure 3.7). Not only do our genes and environment interact, as in range of reaction, but they also influence one another bidirectionally. For example, the child of an NBA player would probably be exposed to basketball from an early age. Such exposure might allow the child to realize his or her full genetic, athletic potential. Thus, the parents’ genes, which the child shares, influence the child’s environment, and that environment, in turn, is well suited to support the child’s genetic potential.

Two jigsaw puzzle pieces are shown; one depicts images of houses, and the other depicts a helical DNA strand.
Figure 3.7 Nature and nurture work together like complex pieces of a human puzzle. The interaction of our environment and genes makes us the individuals we are. (credit “puzzle”: modification of work by Cory Zanker; credit “houses”: modification of work by Ben Salter; credit “DNA”: modification of work by NHGRI)

In another approach to gene-environment interactions, the field of epigenetics looks beyond the genotype itself and studies how the same genotype can be expressed in different ways. In other words, researchers study how the same genotype can lead to very different phenotypes. As mentioned earlier, gene expression is often influenced by environmental context in ways that are not entirely obvious. For instance, identical twins share the same genetic information (identical twins develop from a single fertilized egg that splits, so the genetic material is exactly the same in each; in contrast, fraternal twins usually result from two different eggs fertilized by different sperm, so the genetic material varies as with non-twin siblings). But even with identical genes, there remains an incredible amount of variability in how gene expression can unfold over the course of each twin’s life. Sometimes, one twin will develop a disease and the other will not. In one example, Aliya, an identical twin, died from cancer at age 7, but her twin, now 19 years old, has never had cancer. Although these individuals share an identical genotype, their phenotypes differ as a result of how that genetic information is expressed over time and through their unique environmental interactions. The epigenetic perspective is very different from range of reaction, because here the genotype is not fixed and limited.

Genes affect more than our physical characteristics. Indeed, scientists have found genetic linkages to a number of behavioral characteristics, ranging from basic personality traits to sexual orientation to spirituality (for examples, see Mustanski et al., 2005; Comings et al., 2000). Genes are also associated with temperament and a number of psychological disorders, such as depression and schizophrenia. So while it is true that genes provide the biological blueprints for our cells, tissues, organs, and body, they also have a significant impact on our experiences and our behaviors.

Let’s look at the following findings regarding schizophrenia in light of our three views of gene-environment interactions. Which view do you think best explains this evidence?

In a 2004 adoption study by Tienari and colleagues, adoptees whose biological mothers had schizophrenia and who had been raised in a disturbed family environment were much more likely to develop schizophrenia or another psychotic disorder than were any of the other groups in the study:

  • Of adoptees whose biological mothers had schizophrenia (high genetic risk) and who were raised in disturbed family environments, the likelihood of developing schizophrenia was 36.8%.
  • Of adoptees whose biological mothers had schizophrenia (high genetic risk) and who were raised in healthy family environments, the likelihood was 5.8%.
  • Of adoptees with low genetic risk (whose mothers did not have schizophrenia) who were raised in disturbed family environments, the likelihood was 5.3%.
  • Of adoptees with low genetic risk (whose mothers did not have schizophrenia) who were raised in healthy family environments, the likelihood was 4.8%.

The study shows that adoptees with high genetic risk were most likely to develop schizophrenia if they were raised in disturbed home environments. This research lends credibility to the notion that both genetic vulnerability and environmental stress are necessary for schizophrenia to develop, and that genes alone do not tell the full tale.
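
To make the size of this interaction concrete, we can compare each group’s rate to the lowest-risk group. The relative-risk comparison below is illustrative, not a statistic reported by the study; the percentages are taken from the list above.

    # Rates from the four Tienari et al. (2004) groups, as proportions.
    rates = {
        ("high genetic risk", "disturbed home"): 0.368,
        ("high genetic risk", "healthy home"): 0.058,
        ("low genetic risk", "disturbed home"): 0.053,
        ("low genetic risk", "healthy home"): 0.048,
    }

    baseline = rates[("low genetic risk", "healthy home")]
    for group, rate in rates.items():
        # How many times the lowest-risk group's rate this group shows.
        print(group, f"~{rate / baseline:.1f}x the baseline rate")

Neither risk factor alone does much (roughly 1.1 to 1.2 times the baseline rate), but together they multiply to roughly 7.7 times the baseline, which is the signature of a gene-environment interaction.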

Learning Objectives

By the end of this section, you will be able to:

  • Identify the basic parts of a neuron
  • Describe how neurons communicate with each other
  • Explain how drugs act as agonists or antagonists for a given neurotransmitter system

Psychologists striving to understand the human mind may study the nervous system. Learning how the body’s cells and organs function can help us understand the biological basis of human psychology. The nervous system is composed of two basic cell types: glial cells (also known as glia) and neurons. Glial cells are traditionally thought to play a supportive role to neurons, both physically and metabolically. Glial cells provide scaffolding on which the nervous system is built, help neurons line up close with each other to allow neuronal communication, provide insulation to neurons, transport nutrients and waste products, and mediate immune responses. For years, researchers believed that there were many more glial cells than neurons; however, more recent work from Suzanna Herculano-Houzel’s laboratory has called this long-standing assumption into question and has provided important evidence that there may be nearly a 1:1 ratio of glial cells to neurons. This is important because it suggests that human brains are more similar to other primate brains than previously thought (Azevedo et al., 2009; Herculano-Houzel, 2012; Herculano-Houzel, 2009). Neurons, on the other hand, serve as interconnected information processors that are essential for all of the tasks of the nervous system. This section briefly describes the structure and function of neurons.

Neuron Structure

Neurons are the central building blocks of the nervous system, 100 billion strong at birth. Like all cells, neurons consist of several different parts, each serving a specialized function (Figure 3.8). A neuron’s outer surface is made up of a semipermeable membrane. This membrane allows smaller molecules and molecules without an electrical charge to pass through it while stopping larger or highly charged molecules.

An illustration shows a neuron with labeled parts for the cell membrane, dendrite, cell body, axon, and terminal buttons. A myelin sheath covers part of the neuron.
Figure 3.8 This illustration shows a prototypical neuron, which is being myelinated by a glial cell.

The nucleus of the neuron is located in the soma, or cell body. The soma has branching extensions known as dendrites. The neuron is a small information processor, and dendrites serve as input sites where signals are received from other neurons. These signals are transmitted electrically across the soma and down a major extension from the soma known as the axon, which ends at multiple terminal buttons. The terminal buttons contain synaptic vesicles that house neurotransmitters, the chemical messengers of the nervous system.

Axons range in length from a fraction of an inch to several feet. In some axons, glial cells form a fatty substance known as the myelin sheath, which coats the axon and acts as an insulator, increasing the speed at which the signal travels. The myelin sheath is not continuous, and there are small gaps that occur down the length of the axon. These gaps in the myelin sheath are known as the nodes of Ranvier. The myelin sheath is crucial for the normal operation of the neurons within the nervous system: the loss of the insulation it provides can be detrimental to normal function. To understand how this works, let’s consider an example. PKU, a genetic disorder discussed earlier, causes a reduction in myelin and abnormalities in white matter cortical and subcortical structures. The disorder is associated with a variety of issues including severe cognitive deficits, exaggerated reflexes, and seizures (Anderson & Leuzzi, 2010; Huttenlocher, 2000). Another disorder, multiple sclerosis (MS), an autoimmune disorder, involves a large-scale loss of the myelin sheath on axons throughout the nervous system. The resulting interference in the electrical signal prevents the quick transmittal of information by neurons and can lead to a number of symptoms, such as dizziness, fatigue, loss of motor control, and sexual dysfunction. While some treatments may help to modify the course of the disease and manage certain symptoms, there is currently no known cure for multiple sclerosis.

In healthy individuals, the neuronal signal moves rapidly down the axon to the terminal buttons, where synaptic vesicles release neurotransmitters into the synaptic cleft (Figure 3.9). The synaptic cleft is a very small space between two neurons and is an important site where communication between neurons occurs. Once neurotransmitters are released into the synaptic cleft, they travel across it and bind with corresponding receptors on the dendrite of an adjacent neuron. Receptors, proteins on the cell surface where neurotransmitters attach, vary in shape, with different shapes “matching” different neurotransmitters.

How does a neurotransmitter “know” which receptor to bind to? The neurotransmitter and the receptor have what is referred to as a lock-and-key relationship—specific neurotransmitters fit specific receptors similar to how a key fits a lock. The neurotransmitter binds to any receptor that it fits.

Image (a) shows the synaptic space between two neurons, with neurotransmitters being released into the synapse and attaching to receptors. Image (b) is a micrograph showing a spherical terminal button with part of the exterior removed, revealing a solid interior of small round parts.
Figure 3.9 (a) The synaptic cleft is the space between the terminal button of one neuron and the dendrite of another neuron. (b) In this pseudo-colored image from a scanning electron microscope, a terminal button (green) has been opened to reveal the synaptic vesicles (orange and blue) inside. Each vesicle contains about 10,000 neurotransmitter molecules. (credit b: modification of work by Tina Carvalho, NIH-NIGMS; scale-bar data from Matt Russell)

Neuronal Communication

Now that we have learned about the basic structures of the neuron and the role that these structures play in neuronal communication, let’s take a closer look at the signal itself—how it moves through the neuron and then jumps to the next neuron, where the process is repeated.

We begin at the neuronal membrane. The neuron exists in a fluid environment—it is surrounded by extracellular fluid and contains intracellular fluid (i.e., cytoplasm). The neuronal membrane keeps these two fluids separate—a critical role because the electrical signal that passes through the neuron depends on the intra- and extracellular fluids being electrically different. This difference in charge across the membrane, called the membrane potential, provides energy for the signal.

The electrical charge of the fluids is caused by charged molecules (ions) dissolved in the fluid. The semipermeable nature of the neuronal membrane somewhat restricts the movement of these charged molecules, and, as a result, some of the charged particles tend to become more concentrated either inside or outside the cell.

Between signals, the neuron membrane’s potential is held in a state of readiness, called the resting potential. Like a rubber band stretched out and waiting to spring into action, ions line up on either side of the cell membrane, ready to rush across the membrane when the neuron goes active and the membrane opens its gates (ion channels). In the meantime, a sodium-potassium pump works continuously to maintain the concentration differences across the membrane. Ions in high-concentration areas are ready to move to low-concentration areas, and positive ions are ready to move to areas with a negative charge.

In the resting state, sodium (Na+) is at higher concentrations outside the cell, so it will tend to move into the cell. Potassium (K+), on the other hand, is more concentrated inside the cell, and will tend to move out of the cell (Figure 3.10). In addition, the inside of the cell is slightly negatively charged compared to the outside. This provides an additional force on sodium, causing it to move into the cell.

A close-up illustration depicts the difference in charges across the cell membrane, and shows how Na+ and K+ cells concentrate more closely near the membrane.
Figure 3.10 At resting potential, Na+ (blue pentagons) is more highly concentrated outside the cell in the extracellular fluid (shown in blue), whereas K+ (purple squares) is more highly concentrated near the membrane in the cytoplasm or intracellular fluid. Other molecules, such as chloride ions (yellow circles) and negatively charged proteins (brown squares), help contribute to a positive net charge in the extracellular fluid and a negative net charge in the intracellular fluid.

From this resting potential state, the neuron receives a signal and its state changes abruptly (Figure 3.11). When a neuron receives signals at the dendrites—due to neurotransmitters from an adjacent neuron binding to its receptors—small pores, or gates, open on the neuronal membrane, allowing Na+ ions, propelled by both charge and concentration differences, to move into the cell. With this influx of positive ions, the internal charge of the cell becomes more positive. If that charge reaches a certain level, called the threshold of excitation, the neuron becomes active and the action potential begins.

Many additional pores open, causing a massive influx of Na+ ions and a huge positive spike in the membrane potential, the peak action potential. At the peak of the spike, the sodium gates close and the potassium gates open. As positively charged potassium ions leave, the cell quickly begins repolarization. At first, it hyperpolarizes, becoming slightly more negative than the resting potential, and then it levels off, returning to the resting potential.

A graph shows the increase, peak, and decrease in membrane potential. The millivolts through the phases are approximately -70mV at resting potential, -55mV at threshold of excitation, 30mV at peak action potential, 5mV at repolarization, and -80mV at hyperpolarization.
Figure 3.11 During the action potential, the electrical charge across the membrane changes dramatically.

This positive spike constitutes the action potential: the electrical signal that typically moves from the cell body down the axon to the axon terminals. The signal travels down the axon with the impulses jumping in a leapfrog fashion between the nodes of Ranvier, the natural gaps in the myelin sheath. At each node, some of the sodium ions that enter the cell diffuse to the next section of the axon, raising the charge past the threshold of excitation and triggering a new influx of sodium ions. The action potential moves all the way down the axon in this fashion until reaching the terminal buttons.

The action potential is an all-or-none phenomenon. In simple terms, this means that an incoming signal from another neuron is either sufficient or insufficient to reach the threshold of excitation. There is no in-between, and there is no turning off an action potential once it starts. Think of it like sending an email or a text message. You can think about sending it all you want, but the message is not sent until you hit the send button. Furthermore, once you send the message, there is no stopping it.

Because it is all or none, the action potential is recreated, or propagated, at its full strength at every point along the axon. Much like the lit fuse of a firecracker, it does not fade away as it travels down the axon. It is this all-or-none property that explains the fact that your brain perceives an injury to a distant body part like your toe as equally painful as one to your nose.
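
The all-or-none principle can be stated numerically using the approximate voltages in Figure 3.11. The toy sketch below (Python) illustrates only the threshold logic, not a physiological model; real membranes are governed by continuous channel dynamics.

    REST = -70.0       # resting potential, mV (approximate; see Figure 3.11)
    THRESHOLD = -55.0  # threshold of excitation, mV
    PEAK = 30.0        # peak of the action potential, mV

    def respond_to_input(depolarization_mv):
        """If incoming signals depolarize the membrane past threshold, the neuron
        fires the same full-sized spike no matter how strong the input was."""
        v = REST + depolarization_mv
        if v < THRESHOLD:
            return [REST, v, REST]  # sub-threshold: voltage drifts back to rest
        return [REST, THRESHOLD, PEAK, -80.0, REST]  # spike, brief hyperpolarization, rest

    print(respond_to_input(10))  # too weak: no action potential
    print(respond_to_input(20))  # reaches threshold: full spike
    print(respond_to_input(40))  # stronger input, identical spike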

As noted earlier, when the action potential arrives at the terminal button, the synaptic vesicles release their neurotransmitters into the synaptic cleft. The neurotransmitters travel across the synapse and bind to receptors on the dendrites of the adjacent neuron, and the process repeats itself in the new neuron (assuming the signal is sufficiently strong to trigger an action potential). Once the signal is delivered, excess neurotransmitters in the synaptic cleft drift away, are broken down into inactive fragments, or are reabsorbed in a process known as reuptake. Reuptake involves the neurotransmitter being pumped back into the neuron that released it, in order to clear the synapse (Figure 3.12). Clearing the synapse serves both to provide a clear “on” and “off” state between signals and to regulate the production of neurotransmitter (full synaptic vesicles provide signals that no additional neurotransmitters need to be produced).

The synaptic space between two neurons is shown. Some neurotransmitters that have been released into the synapse are attaching to receptors while others undergo reuptake into the axon terminal.
Figure 3.12 Reuptake involves moving a neurotransmitter from the synapse back into the axon terminal from which it was released.

Neuronal communication is often referred to as an electrochemical event. The movement of the action potential down the length of the axon is an electrical event, and movement of the neurotransmitter across the synaptic space represents the chemical portion of the process. However, there are some specialized connections between neurons that are entirely electrical. In such cases, the neurons are said to communicate via an electrical synapse. In these cases, two neurons physically connect to one another via gap junctions, which allows the current from one cell to pass into the next. There are far fewer electrical synapses in the brain, but those that do exist are much faster than the chemical synapses that have been described above (Connors & Long, 2004).

There are several different types of neurotransmitters released by different neurons, and we can speak in broad terms about the kinds of functions associated with different neurotransmitters (Table 3.1). Much of what psychologists know about the functions of neurotransmitters comes from research on the effects of drugs in psychological disorders. Psychologists who take a biological perspective and focus on the physiological causes of behavior assert that psychological disorders like depression and schizophrenia are associated with imbalances in one or more neurotransmitter systems. In this perspective, psychotropic medications can help improve the symptoms associated with these disorders. Psychotropic medications are drugs that treat psychiatric symptoms by restoring neurotransmitter balance.

Major Neurotransmitters and How They Affect Behavior
Neurotransmitter | Involved in | Potential Effect on Behavior
Acetylcholine | Muscle action, memory | Increased arousal, enhanced cognition
Beta-endorphin | Pain, pleasure | Decreased anxiety, decreased tension
Dopamine | Mood, sleep, learning | Increased pleasure, suppressed appetite
Gamma-aminobutyric acid (GABA) | Brain function, sleep | Decreased anxiety, decreased tension
Glutamate | Memory, learning | Increased learning, enhanced memory
Norepinephrine | Heart, intestines, alertness | Increased arousal, suppressed appetite
Serotonin | Mood, sleep | Modulated mood, suppressed appetite
Table 3.1

Psychoactive drugs can act as agonists or antagonists for a given neurotransmitter system. Agonists are chemicals that mimic a neurotransmitter at the receptor site. An antagonist, on the other hand, blocks or impedes the normal activity of a neurotransmitter at the receptor. Agonists and antagonists represent drugs that are prescribed to correct the specific neurotransmitter imbalances underlying a person’s condition. For example, Parkinson’s disease, a progressive nervous system disorder, is associated with low levels of dopamine. Therefore, a common treatment strategy for Parkinson’s disease involves using dopamine agonists, which mimic the effects of dopamine by binding to dopamine receptors.

Certain symptoms of schizophrenia are associated with overactive dopamine neurotransmission. The antipsychotics used to treat these symptoms are antagonists for dopamine—they block dopamine’s effects by binding its receptors without activating them. Thus, they prevent dopamine released by one neuron from signaling information to adjacent neurons.

In contrast to agonists and antagonists, which both operate by binding to receptor sites, reuptake inhibitors prevent unused neurotransmitters from being transported back to the neuron. This allows neurotransmitters to remain active in the synaptic cleft for longer durations, increasing their effectiveness. Depression, which has been consistently linked with reduced serotonin levels, is commonly treated with selective serotonin reuptake inhibitors (SSRIs). By preventing reuptake, SSRIs strengthen the effect of serotonin, giving it more time to interact with serotonin receptors on dendrites. Common SSRIs on the market today include Prozac, Paxil, and Zoloft. The drug LSD is structurally very similar to serotonin, and it affects the same neurons and receptors as serotonin.

Psychotropic drugs are not instant solutions for people suffering from psychological disorders. Often, an individual must take a drug for several weeks before seeing improvement, and many psychoactive drugs have significant negative side effects. Furthermore, individuals vary dramatically in how they respond to the drugs. To improve chances for success, it is not uncommon for people receiving pharmacotherapy to undergo psychological and/or behavioral therapies as well. Some research suggests that combining drug therapy with other forms of therapy tends to be more effective than any one treatment alone (for one such example, see March et al., 2007).
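
To see why slowing reuptake strengthens a signal, as described above, imagine that neurotransmitter released into the cleft is cleared at a constant rate. The sketch below (Python) is a toy decay model with made-up rate values, not pharmacological data.

    import math

    def fraction_remaining(t, clearance_rate):
        """Fraction of released neurotransmitter still in the synaptic cleft at
        time t, under simple exponential clearance (a toy model)."""
        return math.exp(-clearance_rate * t)

    NORMAL = 1.0      # arbitrary units: reuptake operating normally
    INHIBITED = 0.25  # hypothetical: reuptake largely blocked, as by an SSRI

    for t in (0.5, 1.0, 2.0):
        print(f"t={t}: normal {fraction_remaining(t, NORMAL):.2f}, "
              f"inhibited {fraction_remaining(t, INHIBITED):.2f}")

With clearance slowed, the transmitter lingers in the cleft far longer, giving it more opportunities to bind receptors; this is the intuition behind the statement that reuptake inhibitors increase a neurotransmitter’s effectiveness.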

Learning Objectives

By the end of this section, you will be able to:

  • Describe the difference between the central and peripheral nervous systems
  • Explain the difference between the somatic and autonomic nervous systems
  • Differentiate between the sympathetic and parasympathetic divisions of the autonomic nervous system

The nervous system can be divided into two major subdivisions: the central nervous system (CNS) and the peripheral nervous system (PNS), shown in Figure 3.13. The CNS is comprised of the brain and spinal cord; the PNS connects the CNS to the rest of the body. In this section, we focus on the peripheral nervous system; later, we look at the brain and spinal cord.

Image (a) shows an outline of a human body with the brain and spinal cord illustrated. Image (b) shows an outline of a human body with a network of nerves depicted.
Figure 3.13 The nervous system is divided into two major parts: (a) the Central Nervous System and (b) the Peripheral Nervous System.

Peripheral Nervous System

The peripheral nervous system is made up of thick bundles of axons, called nerves, carrying messages back and forth between the CNS and the muscles, organs, and senses in the periphery of the body (i.e., everything outside the CNS). The PNS has two major subdivisions: the somatic nervous system and the autonomic nervous system.

The somatic nervous system is associated with activities traditionally thought of as conscious or voluntary. It is involved in the relay of sensory and motor information to and from the CNS; therefore, it consists of motor neurons and sensory neurons. Motor neurons, carrying instructions from the CNS to the muscles, are efferent fibers (efferent means “moving away from”). Sensory neurons, carrying sensory information to the CNS, are afferent fibers (afferent means “moving toward”). A helpful way to remember this is that efferent = exit and afferent = arrive. Each nerve is basically a bundle of neurons forming a two-way superhighway, containing thousands of axons, both efferent and afferent.

The autonomic nervous system controls our internal organs and glands and is generally considered to be outside the realm of voluntary control. It can be further subdivided into the sympathetic and parasympathetic divisions (Figure 3.14). The sympathetic nervous system is involved in preparing the body for stress-related activities; the parasympathetic nervous system is associated with returning the body to routine, day-to-day operations. The two systems have complementary functions, operating in tandem to maintain the body’s homeostasis. Homeostasis is a state of equilibrium, or balance, in which biological conditions (such as body temperature) are maintained at optimal levels.

A diagram of a human body lists the different functions of the sympathetic and parasympathetic nervous system. The parasympathetic system can constrict pupils, stimulate salivation, slow heart rate, constrict bronchi, stimulate digestion, stimulate bile secretion, and cause the bladder to contract. The sympathetic nervous system can dilate pupils, inhibit salivation, increase heart rate, dilate bronchi, inhibit digestion, stimulate the breakdown of glycogen, stimulate secretion of adrenaline and noradrenaline, and inhibit contraction of the bladder.
Figure 3.14 The sympathetic and parasympathetic divisions of the autonomic nervous system have opposing effects on various body systems.

The sympathetic nervous system is activated when we are faced with stressful or high-arousal situations. The activity of this system was adaptive for our ancestors, increasing their chances of survival. Imagine, for example, that one of our early ancestors, out hunting small game, suddenly disturbs a large bear with her cubs. At that moment, his body undergoes a series of changes—a direct function of sympathetic activation—preparing him to face the threat. His pupils dilate, his heart rate and blood pressure increase, his bladder relaxes, his liver releases glucose, and adrenaline surges into his bloodstream. This constellation of physiological changes, known as the fight or flight response, allows the body access to energy reserves and heightened sensory capacity so that it might fight off a threat or run away to safety.

While it is clear that such a response would be critical for survival for our ancestors, who lived in a world full of real physical threats, many of the high-arousal situations we face in the modern world are more psychological in nature. For example, think about how you feel when you have to stand up and give a presentation in front of a roomful of people, or right before taking a big test. You are in no real physical danger in those situations, and yet you have evolved to respond to a perceived threat with the fight or flight response. This kind of response is not nearly as adaptive in the modern world; in fact, we suffer negative health consequences when faced constantly with psychological threats that we can neither fight nor flee. Recent research suggests that an increase in susceptibility to heart disease (Chandola et al., 2006) and impaired function of the immune system (Glaser & Kiecolt-Glaser, 2005) are among the many negative consequences of persistent and repeated exposure to stressful situations. Some of this tendency for stress reactivity can be wired by early experiences of trauma.

Once the threat has been resolved, the parasympathetic nervous system takes over and returns bodily functions to a relaxed state. Our hunter’s heart rate and blood pressure return to normal, his pupils constrict, he regains control of his bladder, and the liver begins to store glucose in the form of glycogen for future use. These restorative processes are associated with the activation of the parasympathetic nervous system.

Learning Objectives

By the end of this section, you will be able to:

  • Explain the functions of the spinal cord
  • Identify the hemispheres and lobes of the brain
  • Describe the types of techniques available to clinicians and researchers to image or scan the brain

The brain is a remarkably complex organ comprised of billions of interconnected neurons and glia. It is a bilateral, or two-sided, structure that can be separated into distinct lobes. Each lobe is associated with certain types of functions, but, ultimately, all of the areas of the brain interact with one another to provide the foundation for our thoughts and behaviors. In this section, we discuss the overall organization of the brain and the functions associated with different brain areas, beginning with what can be seen as an extension of the brain, the spinal cord.

The Spinal Cord

It can be said that the spinal cord is what connects the brain to the outside world. Because of it, the brain can act. The spinal cord is like a relay station, but a very smart one. It not only routes messages to and from the brain, but it also has its own system of automatic processes, called reflexes.

The top of the spinal cord is a bundle of nerves that merges with the brain stem, where the basic processes of life are controlled, such as breathing and digestion. In the opposite direction, the spinal cord ends just below the ribs—contrary to what we might expect, it does not extend all the way to the base of the spine.

The spinal cord is functionally organized in 30 segments, corresponding with the vertebrae. Each segment is connected to a specific part of the body through the peripheral nervous system. Nerves branch out from the spine at each vertebra. Sensory nerves bring messages in; motor nerves send messages out to the muscles and organs. Messages travel to and from the brain through every segment.

Some sensory messages are immediately acted on by the spinal cord, without any input from the brain. Withdrawal from a hot object and the knee jerk are two examples. When a sensory message meets certain parameters, the spinal cord initiates an automatic reflex. The signal passes from the sensory nerve to a simple processing center, which initiates a motor command. Valuable time is saved, because messages don’t have to travel to the brain, be processed, and then be sent back. In matters of survival, the spinal reflexes allow the body to react extraordinarily fast.

The spinal cord is protected by bony vertebrae and cushioned in cerebrospinal fluid, but injuries still occur. When the spinal cord is damaged in a particular segment, all lower segments are cut off from the brain, causing paralysis. Therefore, the lower on the spine damage occurs, the fewer functions an injured individual will lose.

Neuroplasticity

Bob Woodruff, a reporter for ABC, suffered a traumatic brain injury after a bomb exploded next to the vehicle he was in while covering a news story in Iraq. As a consequence of these injuries, Woodruff experienced many cognitive deficits including difficulties with memory and language. However, over time and with the aid of intensive amounts of cognitive and speech therapy, Woodruff has shown an incredible recovery of function (Fernandez, 2008, October 16).

One of the factors that made this recovery possible was neuroplasticity. Neuroplasticity refers to how the nervous system can change and adapt. Neuroplasticity can occur in a variety of ways including personal experiences, developmental processes, or, as in Woodruff’s case, in response to some sort of damage or injury that has occurred. Neuroplasticity can involve the creation of new synapses, pruning of synapses that are no longer used, changes in glial cells, and even the birth of new neurons. Because of neuroplasticity, our brains are constantly changing and adapting, and while our nervous system is most plastic when we are very young, as Woodruff’s case suggests, it is still capable of remarkable changes later in life.

The Two Hemispheres

The surface of the brain, known as the cerebral cortex, is very uneven, characterized by a distinctive pattern of folds or bumps, known as gyri (singular: gyrus), and grooves, known as sulci (singular: sulcus), shown in Figure 3.15. These gyri and sulci form important landmarks that allow us to separate the brain into functional centers. The most prominent sulcus, known as the longitudinal fissure, is the deep groove that separates the brain into two halves or hemispheres: the left hemisphere and the right hemisphere.

An illustration of the brain’s exterior surface shows the ridges and depressions, and the deep fissure that runs through the center.
Figure 3.15 The surface of the brain is covered with gyri and sulci. A deep sulcus is called a fissure, such as the longitudinal fissure that divides the brain into left and right hemispheres. (credit: modification of work by Bruce Blaus)

There is evidence of specialization of function—referred to as lateralization—in each hemisphere, mainly regarding differences in language functions. The left hemisphere controls the right half of the body, and the right hemisphere controls the left half of the body. Decades of research on lateralization of function by Michael Gazzaniga and his colleagues suggest that a variety of functions ranging from cause-and-effect reasoning to self-recognition may follow patterns that suggest some degree of hemispheric dominance (Gazzaniga, 2005). For example, the left hemisphere has been shown to be superior for forming associations in memory, selective attention, and positive emotions. The right hemisphere, on the other hand, has been shown to be superior in pitch perception, arousal, and negative emotions (Ehret, 2006). However, it should be pointed out that research on which hemisphere is dominant in a variety of different behaviors has produced inconsistent results, and therefore, it is probably better to think of how the two hemispheres interact to produce a given behavior rather than attributing certain behaviors to one hemisphere versus the other (Banich & Heller, 1998).

The two hemispheres are connected by a thick band of neural fibers known as the corpus callosum, consisting of about 200 million axons. The corpus callosum allows the two hemispheres to communicate with each other and allows for information being processed on one side of the brain to be shared with the other side.

Normally, we are not aware of the different roles that our two hemispheres play in day-to-day functions, but there are people who come to know the capabilities and functions of their two hemispheres quite well. In some cases of severe epilepsy, doctors elect to sever the corpus callosum as a means of controlling the spread of seizures (Figure 3.16). While this is an effective treatment option, it results in individuals who have “split brains.” After surgery, these split-brain patients show a variety of interesting behaviors. For instance, a split-brain patient is unable to name a picture that is shown in the patient’s left visual field because the information is only available in the largely nonverbal right hemisphere. However, they are able to recreate the picture with their left hand, which is also controlled by the right hemisphere. When the more verbal left hemisphere sees the picture that the hand drew, the patient is able to name it (assuming the left hemisphere can interpret what was drawn by the left hand).

Illustrations (a) and (b) show the corpus callosum’s location in the brain in front and side views. Photograph (c) shows the corpus callosum in a dissected brain.
Figure 3.16 (a, b) The corpus callosum connects the left and right hemispheres of the brain. (c) A scientist spreads this dissected sheep brain apart to show the corpus callosum between the hemispheres. (credit c: modification of work by Aaron Bornstein)

Much of what we know about the functions of different areas of the brain comes from studying changes in the behavior and ability of individuals who have suffered damage to the brain. For example, researchers study the behavioral changes caused by strokes to learn about the functions of specific brain areas. A stroke, caused by an interruption of blood flow to a region in the brain, causes a loss of brain function in the affected region. The damage can be in a small area, and, if it is, this gives researchers the opportunity to link any resulting behavioral changes to a specific area. The types of deficits displayed after a stroke will be largely dependent on where in the brain the damage occurred.

Consider Theona, an intelligent, self-sufficient woman, who is 62 years old. Recently, she suffered a stroke in the front portion of her right hemisphere. As a result, she has great difficulty moving her left leg. (As you learned earlier, the right hemisphere controls the left side of the body; also, the brain’s main motor centers are located at the front of the head, in the frontal lobe.) Theona has also experienced behavioral changes. For example, while in the produce section of the grocery store, she sometimes eats grapes, strawberries, and apples directly from their bins before paying for them. This behavior—which would have been very embarrassing to her before the stroke—is consistent with damage in another region in the frontal lobe—the prefrontal cortex, which is associated with judgment, reasoning, and impulse control.

Forebrain Structures

The two hemispheres of the cerebral cortex are part of the forebrain (Figure 3.17), which is the largest part of the brain. The forebrain contains the cerebral cortex and a number of other structures that lie beneath the cortex (called subcortical structures): thalamus, hypothalamus, pituitary gland, and the limbic system (a collection of structures). The cerebral cortex, which is the outer surface of the brain, is associated with higher-level processes such as consciousness, thought, emotion, reasoning, language, and memory. Each cerebral hemisphere can be subdivided into four lobes, each associated with different functions.

An illustration shows the position and size of the forebrain (the largest portion), midbrain (a small central portion), and hindbrain (a portion in the lower back part of the brain).
Figure 3.17 The brain and its parts can be divided into three main categories: the forebrain, midbrain, and hindbrain.

Lobes of the Brain

The four lobes of the brain are the frontal, parietal, temporal, and occipital lobes (Figure 3.18). The frontal lobe is located in the forward part of the brain, extending back to a fissure known as the central sulcus. The frontal lobe is involved in reasoning, motor control, emotion, and language. It contains the motor cortex, which is involved in planning and coordinating movement; the prefrontal cortex, which is responsible for higher-level cognitive functioning; and Broca’s area, which is essential for language production.

An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://open.maricopa.edu/intropsychme/?p=23#h5p-17

An illustration shows the four lobes of the brain.
Figure 3.18 The lobes of the brain are shown.

People who suffer damage to Broca’s area have great difficulty producing language of any form (Figure 3.18). For example, Padma was an electrical engineer who was socially active and a caring, involved parent. About twenty years ago, she was in a car accident and suffered damage to her Broca’s area. She completely lost the ability to speak and form any kind of meaningful language. There is nothing wrong with her mouth or her vocal cords, but she is unable to produce words. She can follow directions but can’t respond verbally, and she can read but no longer write. She can do routine tasks like running to the market to buy milk, but she cannot communicate verbally if a situation calls for it.

Probably the most famous case of frontal lobe damage is that of a man by the name of Phineas Gage. On September 13, 1848, Gage (age 25) was working as a railroad foreman in Vermont. He and his crew were using an iron rod to tamp explosives down into a blasting hole to remove rock along the railway’s path. Unfortunately, the iron rod created a spark that ignited the explosives, launching the rod out of the blasting hole, into Gage’s face, and through his skull (Figure 3.19). Although lying in a pool of his own blood with brain matter emerging from his head, Gage was conscious and able to get up, walk, and speak. But in the months following his accident, people noticed that his personality had changed. Many of his friends described him as no longer being himself. Before the accident, it was said that Gage was a well-mannered, soft-spoken man, but he began to behave in odd and inappropriate ways after the accident. Such changes in personality would be consistent with loss of impulse control—a frontal lobe function.

Beyond the damage to the frontal lobe itself, subsequent investigations into the rod’s path also identified probable damage to pathways between the frontal lobe and other brain structures, including the limbic system. With connections between the planning functions of the frontal lobe and the emotional processes of the limbic system severed, Gage had difficulty controlling his emotional impulses.

However, there is some evidence suggesting that the dramatic changes in Gage’s personality were exaggerated and embellished. Gage’s case occurred in the midst of a 19th-century debate over localization—regarding whether certain areas of the brain are associated with particular functions. On the basis of extremely limited information about Gage, the extent of his injury, and his life before and after the accident, scientists tended to find support for their own views, on whichever side of the debate they fell (Macmillan, 1999).

Image (a) is a photograph of Phineas Gage holding a metal rod. Image (b) is an illustration of a skull with a metal rod passing through it from the cheek area to the top of the skull.
Figure 3.19 (a) Phineas Gage holds the iron rod that penetrated his skull in an 1848 railroad construction accident. (b) Gage’s prefrontal cortex was severely damaged in the left hemisphere. The rod entered Gage’s face on the left side, passed behind his eye, and exited through the top of his skull, before landing about 80 feet away. (credit a: modification of work by Jack and Beverly Wilgus)

The brain’s parietal lobe is located immediately behind the frontal lobe and is involved in processing information from the body’s senses. It contains the somatosensory cortex, which is essential for processing sensory information from across the body, such as touch, temperature, and pain. The somatosensory cortex is organized topographically, which means that spatial relationships that exist in the body are generally maintained on the surface of the somatosensory cortex (Figure 3.20). For example, the portion of the cortex that processes sensory information from the hand is adjacent to the portion that processes information from the wrist.

A diagram shows the organization in the somatosensory cortex, with functions for these parts in this proximal sequential order: toes, ankles, knees, hips, trunk, shoulders, elbows, wrists, hands, fingers, thumbs, neck, eyebrows and eyelids, eyeballs, face, lips, jaw, tongue, salivation, chewing, and swallowing.
Figure 3.20 Spatial relationships in the body are mirrored in the organization of the somatosensory cortex.

The temporal lobe is located on the side of the head (temporal means “near the temples”) and is associated with hearing, memory, emotion, and some aspects of language. The auditory cortex, the main area responsible for processing auditory information, is located within the temporal lobe. Wernicke’s area, important for speech comprehension, is also located here. Whereas individuals with damage to Broca’s area have difficulty producing language, those with damage to Wernicke’s area can produce sensible language, but they are unable to understand it (Figure 3.21).

An illustration shows the locations of Broca’s and Wernicke’s areas.
Figure 3.21 Damage to either Broca’s area or Wernicke’s area can result in language deficits. The types of deficits are very different, however, depending on which area is affected.

The occipital lobe is located at the very back of the brain, and contains the primary visual cortex, which is responsible for interpreting incoming visual information. The occipital cortex is organized retinotopically, which means there is a close relationship between the position of an object in a person’s visual field and the position of that object’s representation on the cortex. You will learn much more about how visual information is processed in the occipital lobe when you study sensation and perception.

Other Areas of the Forebrain

Other areas of the forebrain, located beneath the cerebral cortex, include the thalamus and the limbic system. The thalamus is a sensory relay for the brain. All of our senses, with the exception of smell, are routed through the thalamus before being directed to other areas of the brain for processing (Figure 3.22).

An illustration shows the location of the thalamus in the brain.
Figure 3.22 The thalamus serves as the relay center of the brain where most senses are routed for processing.

The limbic system is involved in processing both emotion and memory. Interestingly, the sense of smell projects directly to the limbic system; therefore, not surprisingly, smell can evoke emotional responses in ways that other sensory modalities cannot. The limbic system is made up of a number of different structures, but three of the most important are the hippocampus, the amygdala, and the hypothalamus (Figure 3.23). The hippocampus is an essential structure for learning and memory. The amygdala is involved in our experience of emotion and in tying emotional meaning to our memories. The hypothalamus regulates a number of homeostatic processes, including the regulation of body temperature, appetite, and blood pressure. The hypothalamus also serves as an interface between the nervous system and the endocrine system and is involved in regulating sexual motivation and behavior.

An illustration shows the locations of parts of the brain involved in the limbic system: the hypothalamus, amygdala, and hippocampus.
Figure 3.23 The limbic system is involved in mediating emotional response and memory.

The Case of Henry Molaison (H.M.)

In 1953, Henry Gustav Molaison (H. M.) was a 27-year-old man who experienced severe seizures. In an attempt to control his seizures, H. M. underwent brain surgery to remove his hippocampus and amygdala. Following the surgery, H. M.’s seizures became much less severe, but he also suffered some unexpected—and devastating—consequences of the surgery: he lost his ability to form many types of new memories. For example, he was unable to learn new facts, such as who was president of the United States. He was able to learn new skills, but afterward he had no recollection of learning them. For example, while he might learn to use a computer, he would have no conscious memory of ever having used one. He could not remember new faces, and he was unable to remember events, even immediately after they occurred. Researchers were fascinated by his experience, and he is considered one of the most studied cases in medical and psychological history (Hardt, Einarsson, & Nader, 2010; Squire, 2009). Indeed, his case has provided tremendous insight into the role that the hippocampus plays in the consolidation of new learning into explicit memory.

Midbrain and Hindbrain Structures

The midbrain comprises structures located deep within the brain, between the forebrain and the hindbrain. The reticular formation is centered in the midbrain, but it actually extends up into the forebrain and down into the hindbrain. The reticular formation is important in regulating the sleep/wake cycle, arousal, alertness, and motor activity.

The substantia nigra (Latin for “black substance”) and the ventral tegmental area (VTA) are also located in the midbrain (Figure 3.24). Both regions contain cell bodies that produce the neurotransmitter dopamine, and both are critical for movement. Degeneration of the substantia nigra and VTA is involved in Parkinson’s disease. In addition, these structures are involved in mood, reward, and addiction (Berridge & Robinson, 1998; Gardner, 2011; George et al., 2012).

An illustration shows the location of the substantia nigra and VTA in the brain.
Figure 3.24 The substantia nigra and ventral tegmental area (VTA) are located in the midbrain.

The hindbrain is located at the back of the head and looks like an extension of the spinal cord. It contains the medulla, pons, and cerebellum (Figure 3.25). The medulla controls the automatic processes of the autonomic nervous system, such as breathing, blood pressure, and heart rate. The word pons literally means “bridge,” and as the name suggests, the pons serves to connect the hindbrain to the rest of the brain. It also is involved in regulating brain activity during sleep. The medulla and pons, together with the midbrain, make up the brainstem, which therefore spans both the midbrain and the hindbrain.

An illustration shows the location of the pons, medulla, and cerebellum.
Figure 3.25 The pons, medulla, and cerebellum make up the hindbrain.

The cerebellum (Latin for “little brain”) receives messages from muscles, tendons, joints, and structures in our ear to control balance, coordination, movement, and motor skills. The cerebellum is also thought to be an important area for processing some types of memories. In particular, procedural memory, or memory involved in learning and remembering how to perform tasks, is thought to be associated with the cerebellum. Recall that H. M. was unable to form new explicit memories, but he could learn new tasks. This is likely due to the fact that H. M.’s cerebellum remained intact.

WHAT DO YOU THINK? Brain Dead and on Life Support

What would you do if your spouse or loved one was declared brain dead but his or her body was being kept alive by medical equipment? Whose decision should it be to remove a feeding tube? Should medical care costs be a factor?

On February 25, 1990, a Florida woman named Terri Schiavo went into cardiac arrest, apparently triggered by a bulimic episode. She was eventually revived, but her brain had been deprived of oxygen for a long time. Brain scans indicated that there was no activity in her cerebral cortex, and she suffered from severe and permanent cerebral atrophy. Basically, Schiavo was in a vegetative state. Medical professionals determined that she would never again be able to move, talk, or respond in any way. To remain alive, she required a feeding tube, and there was no chance that her situation would ever improve.

On occasion, Schiavo’s eyes would move, and sometimes she would groan. Despite the doctors’ insistence to the contrary, her parents believed that these were signs that she was trying to communicate with them.

After 12 years, Schiavo’s husband argued that his wife would not have wanted to be kept alive with no feelings, sensations, or brain activity. Her parents, however, were very much against removing her feeding tube. Eventually, the case made its way to the courts, both in the state of Florida and at the federal level. By 2005, the courts found in favor of Schiavo’s husband, and the feeding tube was removed on March 18, 2005. Schiavo died 13 days later.

Why did Schiavo’s eyes sometimes move, and why did she groan? Although the parts of her brain that control thought, voluntary movement, and feeling were completely damaged, her brainstem was still intact. Her medulla and pons maintained her breathing and caused involuntary movements of her eyes and the occasional groans. Over the 15-year period that she was on a feeding tube, Schiavo’s medical costs may have topped $7 million (Arnst, 2003).

These questions were brought to popular consciousness decades ago in the case of Terri Schiavo, and they have persisted. In 2013, a 13-year-old girl who suffered complications after tonsil surgery was declared brain dead. There was a battle between her family, who wanted her to remain on life support, and the hospital’s policies regarding persons declared brain dead. In another complicated 2013–14 case in Texas, a pregnant EMT professional who had been declared brain dead was kept alive for weeks, despite her spouse’s directives, which were based on her previously expressed wishes for such a situation. In this case, state laws designed to protect an unborn fetus came into consideration until doctors determined the fetus unviable.

Decisions surrounding the medical response to patients declared brain dead are complex. What do you think about these issues?

Brain Imaging

You have learned how brain injury can provide information about the functions of different parts of the brain. Increasingly, however, we are able to obtain that information using brain imaging techniques on individuals who have not suffered brain injury. In this section, we take a more in-depth look at some of the techniques that are available for imaging the brain, including techniques that rely on radiation, magnetic fields, or electrical activity within the brain.

Techniques Involving Radiation

A computerized tomography (CT) scan involves taking a number of x-rays of a particular section of a person’s body or brain (Figure 3.26). The x-rays pass through tissues of different densities at different rates, allowing a computer to construct an overall image of the area of the body being scanned. A CT scan is often used to determine whether someone has a tumor or significant brain atrophy.

Image (a) shows a brain scan where the brain matter’s appearance is fairly uniform. Image (b) shows a section of the brain that looks different from the surrounding tissue and is labeled “tumor.”
Figure 3.26 A CT scan can be used to show brain tumors. (a) The image on the left shows a healthy brain, whereas (b) the image on the right indicates a brain tumor in the left frontal lobe. (credit a: modification of work by “Aceofhearts1968”/Wikimedia Commons; credit b: modification of work by Roland Schmitt et al)

Positron emission tomography (PET) scans create pictures of the living, active brain (Figure 3.27). An individual receiving a PET scan drinks or is injected with a mildly radioactive substance, called a tracer. Once in the bloodstream, the amount of tracer in any given region of the brain can be monitored. As a brain area becomes more active, more blood flows to that area. A computer monitors the movement of the tracer and creates a rough map of active and inactive areas of the brain during a given behavior. PET scans show little detail, are unable to pinpoint events precisely in time, and require that the brain be exposed to radiation; therefore, this technique has largely been replaced by fMRI as a diagnostic tool. However, combined with CT, PET technology is still being used in certain contexts. For example, CT/PET scans allow better imaging of the activity of neurotransmitter receptors and open new avenues in schizophrenia research. In this hybrid CT/PET technology, CT contributes clear images of brain structures, while PET shows the brain’s activity.

A brain scan shows different parts of the brain in different colors.
Figure 3.27 A PET scan is helpful for showing activity in different parts of the brain. (credit: Health and Human Services Department, National Institutes of Health)

Techniques Involving Magnetic Fields

In magnetic resonance imaging (MRI), a person is placed inside a machine that generates a strong magnetic field. The magnetic field causes the hydrogen atoms in the body’s cells to move. When the magnetic field is turned off, the hydrogen atoms emit electromagnetic signals as they return to their original positions. Tissues of different densities give off different signals, which a computer interprets and displays on a monitor. Functional magnetic resonance imaging (fMRI) operates on the same principles, but it shows changes in brain activity over time by tracking blood flow and oxygen levels. The fMRI provides more detailed images of the brain’s structure, as well as better accuracy in time, than is possible in PET scans (Figure 3.28). With their high level of detail, MRI and fMRI are often used to compare the brains of healthy individuals to the brains of individuals diagnosed with psychological disorders. This comparison helps determine what structural and functional differences exist between these populations.

A brain scan shows brain tissue in gray with some small areas highlighted red.
Figure 3.28 An fMRI shows activity in the brain over time. This image represents a single frame from an fMRI. (credit: modification of work by Kim J, Matthews NL, Park S.)

In some situations, it is helpful to gain an understanding of the overall activity of a person’s brain, without needing information on the actual location of the activity. Electroencephalography (EEG) serves this purpose by providing a measure of a brain’s electrical activity. An array of electrodes is placed around a person’s head (Figure 3.29). The signals received by the electrodes result in a printout of the electrical activity of his or her brain, or brainwaves, showing both the frequency (number of waves per second) and amplitude (height) of the recorded brainwaves, with an accuracy within milliseconds. Such information is especially helpful to researchers studying sleep patterns among individuals with sleep disorders.

A photograph depicts a person looking at a computer screen and using the keyboard and mouse. The person wears a white cap covered in electrodes and wires.
Figure 3.29 Using caps with electrodes, modern EEG research can study the precise timing of overall brain activities. (credit: SMI Eye Tracking)

Learning Objectives

By the end of this section, you will be able to:

  • Identify the major glands of the endocrine system
  • Identify the hormones secreted by each gland
  • Describe each hormone’s role in regulating bodily functions

The endocrine system consists of a series of glands that produce chemical substances known as hormones (Figure 3.30). Like neurotransmitters, hormones are chemical messengers that must bind to a receptor in order to send their signal. However, unlike neurotransmitters, which are released in close proximity to cells with their receptors, hormones are secreted into the bloodstream and travel throughout the body, affecting any cells that contain receptors for them. Thus, whereas neurotransmitters’ effects are localized, the effects of hormones are widespread. Also, hormones are slower to take effect, and tend to be longer lasting.

An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://open.maricopa.edu/intropsychme/?p=23#h5p-21

Figure 3.30 The major glands of the endocrine system are shown.

Hormones are involved in regulating all sorts of bodily functions, and they are ultimately controlled through interactions between the hypothalamus (in the central nervous system) and the pituitary gland (in the endocrine system). Imbalances in hormones are related to a number of disorders. This section explores some of the major glands that make up the endocrine system and the hormones secreted by these glands (Table 3.2).

Major Glands

The pituitary gland descends from the hypothalamus at the base of the brain, and acts in close association with it. The pituitary is often referred to as the “master gland” because its messenger hormones control all the other glands in the endocrine system, although it mostly carries out instructions from the hypothalamus. In addition to messenger hormones, the pituitary also secretes growth hormone, endorphins for pain relief, and a number of key hormones that regulate fluid levels in the body.

Located in the neck, the thyroid gland releases hormones that regulate growth, metabolism, and appetite. In hyperthyroidism, or Graves’ disease, the thyroid secretes too much of the hormone thyroxine, causing agitation, bulging eyes, and weight loss. In hypothyroidism, reduced hormone levels cause sufferers to experience tiredness, and they often complain of feeling cold. Fortunately, thyroid disorders are often treatable with medications that help reestablish a balance in the hormones secreted by the thyroid.

The adrenal glands sit atop our kidneys and secrete hormones involved in the stress response, such as epinephrine (adrenaline) and norepinephrine (noradrenaline). The pancreas is an internal organ that secretes hormones that regulate blood sugar levels: insulin and glucagon. These pancreatic hormones are essential for maintaining stable levels of blood sugar throughout the day by lowering blood glucose levels (insulin) or raising them (glucagon). People who suffer from diabetes do not produce enough insulin; therefore, they must take medications that stimulate or replace insulin production, and they must closely control the amount of sugars and carbohydrates they consume.

The gonads secrete sexual hormones, which are important in reproduction, and mediate both sexual motivation and behavior. The female gonads are the ovaries; the male gonads are the testes. Ovaries secrete estrogens and progesterone, and the testes secrete androgens, such as testosterone.

Major Endocrine Glands and Associated Hormone Functions
Endocrine Gland | Associated Hormones | Function
Hypothalamus | Releasing and inhibiting hormones, such as oxytocin | Regulate hormone release from pituitary gland
Pituitary | Growth hormone, releasing and inhibiting hormones (such as thyroid stimulating hormone) | Regulate growth, regulate hormone release
Thyroid | Thyroxine, triiodothyronine | Regulate metabolism and appetite
Pineal | Melatonin | Regulate some biological rhythms such as sleep cycles
Adrenal | Epinephrine, norepinephrine | Stress response, increase metabolic activities
Pancreas | Insulin, glucagon | Regulate blood sugar levels
Ovaries | Estrogen, progesterone | Mediate sexual motivation and behavior, reproduction
Testes | Androgens, such as testosterone | Mediate sexual motivation and behavior, reproduction
Table 3.2

DIG DEEPER: Athletes and Anabolic Steroids

Although their use violates federal law and many professional athletic associations (the National Football League, for example) have banned them, anabolic steroid drugs continue to be used by amateur and professional athletes. The drugs are believed to enhance athletic performance. Anabolic steroid drugs mimic the effects of the body’s own steroid hormones, like testosterone and its derivatives. These drugs have the potential to provide a competitive edge by increasing muscle mass, strength, and endurance, although not all users may experience these results. Moreover, use of performance-enhancing drugs (PEDs) does not come without risks. Anabolic steroid use has been linked with a wide variety of potentially negative outcomes, ranging in severity from largely cosmetic (acne) to life-threatening (heart attack). Furthermore, use of these substances can result in profound changes in mood and can increase aggressive behavior (National Institute on Drug Abuse, 2001).

Baseball player Alex Rodriguez (A-Rod) has been at the center of a media storm regarding his use of illegal PEDs. Rodriguez’s performance on the field was unparalleled while using the drugs; his success played a large role in negotiating a contract that made him the highest paid player in professional baseball. Although Rodriguez maintains that he has not used PEDs for several years, he received a substantial suspension in 2013 that, if upheld, would cost him more than 20 million dollars in earnings (Gaines, 2013). What are your thoughts on athletes and doping? Should the use of PEDs be banned? Why or why not? What advice would you give an athlete who was considering using PEDs?

Review of MCCCD Course Competencies

After reading this chapter are you better able to do the following?

  • Identify brain structures and how neuroscientific processes play a role in human thought and behavior.
  • Critically evaluate information to help make evidence-based decisions.
  • Apply biopsychosocial principles to real-world situations.
  • Use psychological principles to explain the diversity and complexity of the human experience.

Chapter Review Quiz

An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://open.maricopa.edu/intropsychme/?p=23#h5p-24

(credit “left”: modification of work by Health and Human Services Department, National Institutes of Health; credit “center”: modification of work by “Aceofhearts1968”/Wikimedia Commons; credit “right”: modification of work by Kim J, Matthews NL, Park S.)

4

Sensation and Perception

A photograph shows a person playing a piano on the sidewalk near a busy intersection in a city.
Figure 5.1 If you were standing in the midst of this street scene, you would be absorbing and processing numerous pieces of sensory input. (credit: modification of work by Cory Zanker)

Imagine standing on a city street corner. You might be struck by movement everywhere as cars and people go about their business, by the sound of a street musician’s melody or a horn honking in the distance, by the smell of exhaust fumes or of food being sold by a nearby vendor, and by the sensation of hard pavement under your feet.

We rely on our sensory systems to provide important information about our surroundings. We use this information to successfully navigate and interact with our environment so that we can find nourishment, seek shelter, maintain social relationships, and avoid potentially dangerous situations.

This chapter will provide an overview of how sensory information is received and processed by the nervous system and how that affects our conscious experience of the world. We begin by learning the distinction between sensation and perception. Then we consider the physical properties of light and sound stimuli, along with an overview of the basic structure and function of the major sensory systems. The chapter will close with a discussion of the historically important Gestalt theory of perception.

MCCCD Course Competencies

  • Describe basic principles of consciousness, sensation, and perception.
  • Critically evaluate information to help make evidence-based decisions.
  • Apply biopsychosocial principles to real-world situations.
  • Use psychological principles to explain the diversity and complexity of the human experience.

Learning Objectives

By the end of this section, you will be able to:

  • Distinguish between sensation and perception
  • Describe the concepts of absolute threshold and difference threshold
  • Discuss the roles attention, motivation, and sensory adaptation play in perception

Sensation

What does it mean to sense something? Sensory receptors are specialized neurons that respond to specific types of stimuli. When sensory information is detected by a sensory receptor, sensation has occurred. For example, light that enters the eye causes chemical changes in cells that line the back of the eye. These cells relay messages, in the form of action potentials (as you learned when studying biopsychology), to the central nervous system. The conversion from sensory stimulus energy to action potential is known as transduction.

You have probably known since elementary school that we have five senses: vision, hearing (audition), smell (olfaction), taste (gustation), and touch (somatosensation). It turns out that this notion of five senses is oversimplified. We also have sensory systems that provide information about balance (the vestibular sense), body position and movement (proprioception and kinesthesia), pain (nociception), and temperature (thermoception).

The sensitivity of a given sensory system to the relevant stimuli can be expressed as an absolute threshold. Absolute threshold refers to the minimum amount of stimulus energy that must be present for the stimulus to be detected 50% of the time. Another way to think about this is to ask how dim a light can be, or how soft a sound, and still be detected half of the time. The sensitivity of our sensory receptors can be quite amazing. It has been estimated that on a clear night, the most sensitive sensory cells in the back of the eye can detect a candle flame 30 miles away (Okawa & Sampath, 2007). Under quiet conditions, the hair cells (the receptor cells of the inner ear) can detect the tick of a clock 20 feet away (Galanter, 1962).

It is also possible for us to get messages that are presented below the threshold for conscious awareness—these are called subliminal messages. A stimulus reaches a physiological threshold when it is strong enough to excite sensory receptors and send nerve impulses to the brain: This is an absolute threshold. A message below that threshold is said to be subliminal: We receive it, but we are not consciously aware of it. Over the years there has been a great deal of speculation about the use of subliminal messages in advertising, rock music, and self-help audio programs. Research evidence shows that in laboratory settings, people can process and respond to information outside of awareness. But this does not mean that we obey these messages like zombies; in fact, hidden messages have little effect on behavior outside the laboratory (Kunst-Wilson & Zajonc, 1980; Rensink, 2004; Nelson, 2008; Radel et al., 2009; Loersch et al., 2013).

Absolute thresholds are generally measured under carefully controlled conditions in situations that are optimal for sensitivity. Sometimes, we are more interested in how much difference in stimuli is required to detect a difference between them. This is known as the just noticeable difference (jnd) or difference threshold. Unlike the absolute threshold, the difference threshold changes depending on the stimulus intensity. As an example, imagine yourself in a very dark movie theater. If an audience member were to receive a text message that caused the cell phone screen to light up, chances are that many people would notice the change in illumination in the theater. However, if the same thing happened in a brightly lit arena during a basketball game, very few people would notice. The cell phone brightness does not change, but its ability to be detected as a change in illumination varies dramatically between the two contexts. Ernst Weber proposed this theory of change in difference threshold in the 1830s, and it has become known as Weber’s law: The difference threshold is a constant fraction of the original stimulus, as the example illustrates.
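
Weber’s law is often written symbolically. The notation below does not appear in this text, but it restates the same idea:

\[ \frac{\Delta I}{I} = k \]

where \(I\) is the intensity of the original stimulus, \(\Delta I\) is the difference threshold (the jnd), and \(k\) is a constant that differs across sensory modalities. For illustration only, if \(k\) were 0.02 for brightness (a hypothetical value), a 100-unit light would need to change by about 2 units to be noticed, while a 1,000-unit light would need to change by about 20 units.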

Perception

While our sensory receptors are constantly collecting information from the environment, it is ultimately how we interpret that information that affects how we interact with the world. Perception refers to the way sensory information is organized, interpreted, and consciously experienced. Perception involves both bottom-up and top-down processing. Bottom-up processing refers to sensory information from a stimulus in the environment driving a process, and top-down processing refers to knowledge and expectancy driving a process, as shown in Figure 5.2 (Egeth & Yantis, 1997; Fine & Minnery, 2009; Yantis & Egeth, 1999).

The figure includes two vertical arrows. The first arrow comes from the word “Top” and points downward to the word “Down.” The explanation reads, “Top-down processing occurs when previous experience and expectations are first used to recognize stimuli.” The second arrow comes from the word “bottom” and points upward to the word “up.” The explanation reads, “Bottom-up processing occurs when we sense basic features of stimuli and then integrate them.”
Figure 5.2 Top-down and bottom-up are ways we process our perceptions.

Imagine that you and some friends are sitting in a crowded restaurant eating lunch and talking. It is very noisy, and you are concentrating on your friend’s face to hear what she is saying. Then the sound of breaking glass and the clang of metal pans hitting the floor rings out: the server has dropped a large tray of food. Although you were attending to your meal and conversation, that crashing sound would likely get through your attentional filters and capture your attention. You would have no choice but to notice it. That attentional capture would be caused by the sound from the environment: it would be bottom-up.

Alternatively, top-down processes are generally goal directed, slow, deliberate, effortful, and under your control (Fine & Minnery, 2009; Miller & Cohen, 2001; Miller & D’Esposito, 2005). For instance, if you misplaced your keys, how would you look for them? If you had a yellow key fob, you would probably look for yellowness of a certain size in specific locations, such as on the counter, coffee table, and other similar places. You would not look for yellowness on your ceiling fan, because you know keys are not normally lying on top of a ceiling fan. That act of searching for a certain size of yellowness in some locations and not others would be top-down—under your control and based on your experience.

One way to think of this concept is that sensation is a physical process, whereas perception is psychological. For example, upon walking into a kitchen and smelling the scent of baking cinnamon rolls, the sensation is the scent receptors detecting the odor of cinnamon, but the perception may be “Mmm, this smells like the bread Grandma used to bake when the family gathered for holidays.”

Although our perceptions are built from sensations, not all sensations result in perception. In fact, we often don’t perceive stimuli that remain relatively constant over prolonged periods of time. This is known as sensory adaptation. Imagine going to a city that you have never visited. You check in to the hotel, but when you get to your room, there is a road construction sign with a bright flashing light outside your window. Unfortunately, there are no other rooms available, so you are stuck with a flashing light. You decide to watch television to unwind. The flashing light was extremely annoying when you first entered your room. It was as if someone was continually turning a bright yellow spotlight on and off in your room, but after watching television for a short while, you no longer notice the light flashing. The light is still flashing and filling your room with yellow light every few seconds, and the photoreceptors in your eyes still sense the light, but you no longer perceive the rapid changes in lighting conditions. That you no longer perceive the flashing light demonstrates sensory adaptation and shows that while closely associated, sensation and perception are different.

There is another factor that affects sensation and perception: attention. Attention plays a significant role in determining what is sensed versus what is perceived. Imagine you are at a party full of music, chatter, and laughter. You get involved in an interesting conversation with a friend, and you tune out all the background noise. If someone interrupted you to ask what song had just finished playing, you would probably be unable to answer that question.

One of the most interesting demonstrations of how important attention is in determining our perception of the environment occurred in a famous study conducted by Daniel Simons and Christopher Chabris (1999). In this study, participants watched a video of people dressed in black and white passing basketballs. Participants were asked to count the number of times the team dressed in white passed the ball. During the video, a person dressed in a black gorilla costume walks between the two teams. You would think that someone would notice the gorilla, right? Nearly half of the people who watched the video didn’t notice the gorilla at all, despite the fact that he was clearly visible for nine seconds. Because participants were so focused on the number of times the team dressed in white was passing the ball, they completely tuned out other visual information. Inattentional blindness is the failure to notice something that is completely visible because the person was actively attending to something else and did not pay attention to other things (Mack & Rock, 1998; Simons & Chabris, 1999).

In a similar experiment, researchers tested inattentional blindness by asking participants to observe images moving across a computer screen. They were instructed to focus on either white or black objects, disregarding the other color. When a red cross passed across the screen, about one-third of subjects did not notice it (Figure 5.3) (Most et al., 2000).

A photograph shows a person staring at a screen that displays one red cross toward the left side and numerous black and white shapes all over.
Figure 5.3 Nearly one-third of participants in a study did not notice that a red cross passed on the screen because their attention was focused on the black or white figures. (credit: Cory Zanker)

Motivation can also affect perception. Have you ever been expecting a really important phone call and, while taking a shower, you think you hear the phone ringing, only to discover that it is not? If so, then you have experienced how motivation to detect a meaningful stimulus can shift our ability to discriminate between a true sensory stimulus and background noise. Signal detection theory describes the ability to identify a stimulus when it is embedded in a distracting background. This might also explain why a mother is awakened by a quiet murmur from her baby but not by other sounds that occur while she is asleep. Signal detection theory has practical applications, such as increasing air traffic controller accuracy. Controllers need to be able to detect planes among many signals (blips) that appear on the radar screen and follow those planes as they move through the sky. In fact, the original work of the researcher who developed signal detection theory was focused on improving the sensitivity of air traffic controllers to plane blips (Swets, 1964).
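
Researchers working in the signal detection framework commonly summarize how well an observer separates signal from noise with the sensitivity index d′. The formula below is standard in that literature, though it is not developed in this text:

\[ d' = z(\text{hit rate}) - z(\text{false-alarm rate}) \]

where \(z\) is the inverse of the standard normal cumulative distribution. For example, an air traffic controller who correctly flags 85% of true plane blips (hit rate = .85) but also responds to 20% of noise blips (false-alarm rate = .20) would have \(d' \approx 1.04 - (-0.84) = 1.88\); larger values indicate sharper discrimination between signal and background noise.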

Our perceptions can also be affected by our beliefs, values, prejudices, expectations, and life experiences. As you will see later in this chapter, individuals who are deprived of the experience of binocular vision during critical periods of development have trouble perceiving depth (Fawcett et al., 2005). The shared experiences of people within a given cultural context can have pronounced effects on perception. For example, Segall et al. (1963) published the results of a multinational study in which they demonstrated that individuals from Western cultures were more prone to experience certain types of visual illusions than individuals from non-Western cultures, and vice versa. One such illusion that Westerners were more likely to experience was the Müller-Lyer illusion (Figure 5.4): The lines appear to be different lengths, but they are actually the same length.

Two vertical lines are shown on the left in (a). They each have V-shaped brackets on their ends, but one line has the brackets angled toward its center, and the other has the brackets angled away from its center. The lines are the same length, but the second line appears longer due to the orientation of the brackets on its endpoints. To the right of these lines is a two-dimensional drawing of walls meeting at 90-degree angles. Within this drawing are two lines that are the same length but appear to be different lengths. Because one line borders a window on a wall that appears farther away from the viewer, it appears shorter than the other line, which marks the 90-degree angle where the facing wall appears closer to the viewer’s perspective point.
Figure 5.4 In the Müller-Lyer illusion, lines appear to be of different lengths although they are identical. (a) Arrows at the ends of lines may make the line on the right appear longer, although the lines are the same length. (b) When applied to a three-dimensional image, the line on the right again may appear longer although both black lines are the same length.

These perceptual differences were consistent with differences in the types of environmental features experienced on a regular basis by people in a given cultural context. People in Western cultures, for example, have a perceptual context of buildings with straight lines, what Segall’s study called a carpentered world (Segall et al., 1966). In contrast, people from certain non-Western cultures with an uncarpentered view, such as the Zulu of South Africa, whose villages are made up of round huts arranged in circles, are less susceptible to this illusion (Segall et al., 1999). It is not just vision that is affected by cultural factors. Indeed, research has demonstrated that the ability to identify an odor, and rate its pleasantness and its intensity, varies cross-culturally (Ayabe-Kanamura et al., 1998).

Children described as thrill seekers are more likely to show taste preferences for intense sour flavors (Liem et al., 2004), which suggests that basic aspects of personality might affect perception. Furthermore, individuals who hold positive attitudes toward reduced-fat foods are more likely to rate foods labeled as reduced fat as tasting better than people who have less positive attitudes about these products (Aaron et al., 1994).

Learning Objectives

By the end of this section, you will be able to:

  • Describe important physical features of wave forms
  • Show how physical properties of light waves are associated with perceptual experience
  • Show how physical properties of sound waves are associated with perceptual experience

Visual and auditory stimuli both occur in the form of waves. Although the two stimuli are very different in terms of composition, wave forms share similar characteristics that are especially important to our visual and auditory perceptions. In this section, we describe the physical properties of the waves as well as the perceptual experiences associated with them.

Amplitude and Wavelength

Two physical characteristics of a wave are amplitude and wavelength (Figure 5.5). The amplitude of a wave is the distance from the center line to the top point of the crest or the bottom point of the trough. Wavelength refers to the length of a wave from one peak to the next.

A diagram illustrates the basic parts of a wave. Moving from left to right, the wavelength line begins above a straight horizontal line and falls and rises equally above and below that line. One of the areas where the wavelength line reaches its highest point is labeled “Peak.” A horizontal bracket, labeled “Wavelength,” extends from this area to the next peak. One of the areas where the wavelength reaches its lowest point is labeled “Trough.” A vertical bracket, labeled “Amplitude,” extends from a “Peak” to a “Trough.”
Figure 5.5 The amplitude or height of a wave is measured from the peak to the trough. The wavelength is measured from peak to peak.

Wavelength is directly related to the frequency of a given wave form. Frequency refers to the number of waves that pass a given point in a given time period and is often expressed in terms of hertz (Hz), or cycles per second. Longer wavelengths will have lower frequencies, and shorter wavelengths will have higher frequencies (Figure 5.6).

Stacked vertically are 5 waves of different colors and wavelengths. The top wave is red with a long wavelengths, which indicate a low frequency. Moving downward, the color of each wave is different: orange, yellow, green, and blue. Also moving downward, the wavelengths become shorter as the frequencies increase.
Figure 5.6 This figure illustrates waves of differing wavelengths/frequencies. At the top of the figure, the red wave has a long wavelength/short frequency. Moving from top to bottom, the wavelengths decrease and frequencies increase.
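
Frequency and wavelength are tied together by the speed at which a wave travels. The relation below is basic wave physics rather than anything specific to this text:

\[ f = \frac{v}{\lambda} \]

where \(f\) is frequency, \(v\) is the wave’s speed, and \(\lambda\) is its wavelength. For light in a vacuum, \(v = c \approx 3 \times 10^{8}\) m/s, so a 500 nm (green) light wave has a frequency of about \((3 \times 10^{8})/(5 \times 10^{-7}) = 6 \times 10^{14}\) Hz.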

Light Waves

The visible spectrum is the portion of the larger electromagnetic spectrum that we can see. As Figure 5.7 shows, the electromagnetic spectrum encompasses all of the electromagnetic radiation that occurs in our environment and includes gamma rays, x-rays, ultraviolet light, visible light, infrared light, microwaves, and radio waves. The visible spectrum in humans is associated with wavelengths that range from 380 to 740 nm—a very small distance, since a nanometer (nm) is one-billionth of a meter. Other species can detect other portions of the electromagnetic spectrum. For instance, honeybees can see light in the ultraviolet range (Wakakuwa et al., 2007), and some snakes can detect infrared radiation in addition to more traditional visual light cues (Chen et al., 2012; Hartline et al., 1978).

This illustration shows the wavelength, frequency, and size of objects across the electromagnetic spectrum. At the top, various wavelengths are given in sequence from small to large, with a parallel illustration of a wave with increasing frequency. These are the provided wavelengths, measured in meters: “Gamma ray 10 to the negative twelfth power,” “x-ray 10 to the negative tenth power,” “ultraviolet 10 to the negative eighth power,” “visible .5 times 10 to the negative sixth power,” “infrared 10 to the negative fifth power,” “microwave 10 to the negative second power,” and “radio 10 cubed.” Another section is labeled “About the size of” and lists from left to right: “Atomic nuclei,” “Atoms,” “Molecules,” “Protozoans,” “Pinpoints,” “Honeybees,” “Humans,” and “Buildings” with an illustration of each. At the bottom is a line labeled “Frequency” with the following measurements in hertz: 10 to the powers of 20, 18, 16, 15, 12, 8, and 4. From left to right the line changes in color from purple to red with the remaining colors of the visible spectrum in between.
Figure 5.7 Light that is visible to humans makes up only a small portion of the electromagnetic spectrum.

In humans, light wavelength is associated with perception of color (Figure 5.8). Within the visible spectrum, our experience of red is associated with longer wavelengths, greens are intermediate, and blues and violets are shorter in wavelength. (An easy way to remember this is the mnemonic ROYGBIV: red, orange, yellow, green, blue, indigo, violet.) The amplitude of light waves is associated with our experience of brightness or intensity of color, with larger amplitudes appearing brighter.

A line provides Wavelength in nanometers for “400,” “500,” “600,” and “700” nanometers. Within this line are all of the colors of the visible spectrum. Below this line, labeled from left to right are “Cosmic radiation,” “Gamma rays,” “X-rays,” “Ultraviolet,” then a small callout area for the line above containing the colors in the visual spectrum, followed by “Infrared,” “Terahertz radiation,” “Radar,” “Television and radio broadcasting,” and “AC circuits.”
Figure 5.8 Different wavelengths of light are associated with our perception of different colors. (credit: modification of work by Johannes Ahlmann)

Sound Waves

Like light waves, the physical properties of sound waves are associated with various aspects of our perception of sound. The frequency of a sound wave is associated with our perception of that sound’s pitch. High-frequency sound waves are perceived as high-pitched sounds, while low-frequency sound waves are perceived as low-pitched sounds. The audible range of sound frequencies is between 20 and 20000 Hz, with greatest sensitivity to those frequencies that fall in the middle of this range.

As was the case with the visible spectrum, other species show differences in their audible ranges. For instance, chickens have a very limited audible range, from 125 to 2000 Hz. Mice have an audible range from 1000 to 91000 Hz, and the beluga whale’s audible range is from 1000 to 123000 Hz. Our pet dogs and cats have audible ranges of about 70–45000 Hz and 45–64000 Hz, respectively (Strain, 2003).

The loudness of a given sound is closely associated with the amplitude of the sound wave. Higher amplitudes are associated with louder sounds. Loudness is measured in terms of decibels (dB), a logarithmic unit of sound intensity. A typical conversation would correlate with 60 dB; a rock concert might check in at 120 dB (Figure 5.9). A whisper 5 feet away or rustling leaves are at the low end of our hearing range; sounds like a window air conditioner, a normal conversation, and even heavy traffic or a vacuum cleaner are within a tolerable range. However, there is the potential for hearing damage from about 80 dB to 130 dB: These are sounds of a food processor, power lawnmower, heavy truck (25 feet away), subway train (20 feet away), live rock music, and a jackhammer. About one-third of all hearing loss is due to noise exposure, and the louder the sound, the shorter the exposure needed to cause hearing damage (Le et al., 2017). Listening to music through earbuds at maximum volume (around 100–105 decibels) can cause noise-induced hearing loss after 15 minutes of exposure. Although listening to music at maximum volume may not seem to cause damage, it increases the risk of age-related hearing loss (Kujawa & Liberman, 2006). The threshold for pain is about 130 dB, a jet plane taking off or a revolver firing at close range (Dunkle, 1982).

This illustration has a vertical bar in the middle labeled Decibels (dB) numbered 0 to 150 in intervals from the bottom to the top. To the left of the bar, the “sound intensity” of different sounds is labeled: “Hearing threshold” is 0; “Whisper” is 30, “soft music” is 40, “Refrigerator” is 45, “Safe” and “normal conversation” is 60, “Heavy city traffic” with “permanent damage after 8 hours of exposure” is 85, “Motorcycle” with “permanent damage after 6 hours exposure” is 95, “Earbuds max volume” with “permanent damage after 15 minutes exposure” is 105, “Risk of hearing loss” is 110, “pain threshold” is 130, “harmful” is 140, and “firearms” with “immediate permanent damage” is 150. To the right of the bar are photographs depicting “common sound”: at 20 decibels is a picture of rustling leaves; at 60 is two people talking, at 85 is traffic, at 105 is ear buds, at 120 is a music concert, and at 130 are jets.
Figure 5.9 This figure illustrates the loudness of common sounds. (credit “planes”: modification of work by Max Pfandl; credit “crowd”: modification of work by Christian Holmér; credit: “earbuds”: modification of work by “Skinny Guy Lover_Flickr”/Flickr; credit “traffic”: modification of work by “quinntheislander_Pixabay”/Pixabay; credit “talking”: modification of work by Joi Ito; credit “leaves”: modification of work by Aurelijus Valeiša)
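
Because the decibel scale is logarithmic, equal steps in dB correspond to equal multiplications of sound intensity. The standard definition, not spelled out in this text, is:

\[ L = 10 \log_{10}\!\left(\frac{I}{I_0}\right) \text{ dB} \]

where \(I\) is the sound’s intensity and \(I_0 = 10^{-12}\) W/m² is the conventional reference intensity near the threshold of hearing. Doubling the intensity adds only about 3 dB (\(10 \log_{10} 2 \approx 3\)), and every 10 dB step is a tenfold increase in intensity, so a 120 dB rock concert delivers a million times the intensity of a 60 dB conversation.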

Although wave amplitude is generally associated with loudness, there is some interaction between frequency and amplitude in our perception of loudness within the audible range. For example, a 10 Hz sound wave is inaudible no matter the amplitude of the wave. A 1000 Hz sound wave, on the other hand, would vary dramatically in terms of perceived loudness as the amplitude of the wave increased.

Of course, different musical instruments can play the same musical note at the same level of loudness, yet they still sound quite different. This is known as the timbre of a sound. Timbre refers to a sound’s purity, and it is affected by the complex interplay of frequency, amplitude, and timing of sound waves.

Learning Objectives

By the end of this section, you will be able to:

  • Describe the basic anatomy of the visual system
  • Discuss how rods and cones contribute to different aspects of vision
  • Describe how monocular and binocular cues are used in the perception of depth

The visual system constructs a mental representation of the world around us (Figure 5.10). This contributes to our ability to successfully navigate through physical space and interact with important individuals and objects in our environments. This section will provide an overview of the basic anatomy and function of the visual system. In addition, we will explore our ability to perceive color and depth.

Several photographs of peoples’ eyes are shown.
Figure 5.10 Our eyes take in sensory information that helps us understand the world around us. (credit “top left”: modification of work by “rajkumar1220″/Flickr”; credit “top right”: modification of work by Thomas Leuthard; credit “middle left”: modification of work by Demietrich Baker; credit “middle right”: modification of work by “kaybee07″/Flickr; credit “bottom left”: modification of work by “Isengardt”/Flickr; credit “bottom right”: modification of work by Willem Heerbaart)

Anatomy of the Visual System

The eye is the major sensory organ involved in vision (Figure 5.11). Light waves are transmitted across the cornea and enter the eye through the pupil. The cornea is the transparent covering over the eye. It serves as a barrier between the inner eye and the outside world, and it is involved in focusing light waves that enter the eye. The pupil is the small opening in the eye through which light passes, and the size of the pupil can change as a function of light levels as well as emotional arousal. When light levels are low, the pupil will become dilated, or expanded, to allow more light to enter the eye. When light levels are high, the pupil will constrict, or become smaller, to reduce the amount of light that enters the eye. The pupil’s size is controlled by muscles that are connected to the iris, which is the colored portion of the eye.

An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://open.maricopa.edu/intropsychme/?p=27#h5p-20

Figure 5.11 The anatomy of the eye is illustrated in this diagram.

After passing through the pupil, light crosses the lens, a curved, transparent structure that serves to provide additional focus. The lens is attached to muscles that can change its shape to aid in focusing light that is reflected from near or far objects. In a normal-sighted individual, the lens will focus images perfectly on a small indentation in the back of the eye known as the fovea, which is part of the retina, the light-sensitive lining of the eye. The fovea contains densely packed specialized photoreceptor cells (Figure 5.12). These photoreceptor cells, known as cones, are light-detecting cells. The cones are specialized types of photoreceptors that work best in bright light conditions. Cones are very sensitive to acute detail and provide tremendous spatial resolution. They also are directly involved in our ability to perceive color.

While cones are concentrated in the fovea, where images tend to be focused, rods, another type of photoreceptor, are located throughout the remainder of the retina. Rods are specialized photoreceptors that work well in low light conditions, and while they lack the spatial resolution and color function of the cones, they are involved in our vision in dimly lit environments as well as in our perception of movement on the periphery of our visual field.

This illustration shows light reaching the optic nerve, beneath which are Ganglion cells, and then rods and cones.
Figure 5.12 The two types of photoreceptors are shown in this image. Cones are colored green and rods are blue.

We have all experienced the different sensitivities of rods and cones when making the transition from a brightly lit environment to a dimly lit environment. Imagine going to see a blockbuster movie on a clear summer day. As you walk from the brightly lit lobby into the dark theater, you notice that you immediately have difficulty seeing much of anything. After a few minutes, you begin to adjust to the darkness and can see the interior of the theater. In the bright environment, your vision was dominated primarily by cone activity. As you move to the dark environment, rod activity dominates, but there is a delay in transitioning between the phases. If your rods do not transform light into nerve impulses as easily and efficiently as they should, you will have difficulty seeing in dim light, a condition known as night blindness.

Rods and cones are connected (via several interneurons) to retinal ganglion cells. Axons from the retinal ganglion cells converge and exit through the back of the eye to form the optic nerve. The optic nerve carries visual information from the retina to the brain. There is a point in the visual field called the blind spot: Even when light from a small object is focused on the blind spot, we do not see it. We are not consciously aware of our blind spots for two reasons: First, each eye gets a slightly different view of the visual field; therefore, the blind spots do not overlap. Second, our visual system fills in the blind spot so that although we cannot respond to visual information that occurs in that portion of the visual field, we are also not aware that information is missing.

The optic nerve from each eye merges just below the brain at a point called the optic chiasm. As Figure 5.13 shows, the optic chiasm is an X-shaped structure that sits just below the cerebral cortex at the front of the brain. At the point of the optic chiasm, information from the right visual field (which comes from both eyes) is sent to the left side of the brain, and information from the left visual field is sent to the right side of the brain.

An illustration shows the location of the occipital lobe, optic chiasm, optic nerve, and the eyes in relation to their position in the brain and head.
Figure 5.13 This illustration shows the optic chiasm at the front of the brain and the pathways to the occipital lobe at the back of the brain, where visual sensations are processed into meaningful perceptions.

Once inside the brain, visual information is sent via a number of structures to the occipital lobe at the back of the brain for processing. Visual information is processed in parallel pathways, which can generally be described as the “what pathway” and the “where/how” pathway. The “what pathway” is involved in object recognition and identification, while the “where/how pathway” is involved with location in space and how one might interact with a particular visual stimulus (Milner & Goodale, 2008; Ungerleider & Haxby, 1994). For example, when you see a ball rolling down the street, the “what pathway” identifies what the object is, and the “where/how pathway” identifies its location or movement in space.

WHAT DO YOU THINK? The Ethics of Research Using Animals

David Hubel and Torsten Wiesel were awarded the Nobel Prize in Medicine in 1981 for their research on the visual system. They collaborated for more than twenty years and made significant discoveries about the neurology of visual perception (Hubel & Wiesel, 1959, 1962, 1963, 1970; Wiesel & Hubel, 1963). They studied animals, mostly cats and monkeys. Although they used several techniques, they did considerable single-unit recording, during which tiny electrodes were inserted in the animal’s brain to determine when a single cell was activated. Among their many discoveries, they found that specific brain cells respond to lines with specific orientations (a property known as orientation selectivity), and they mapped the way those cells are arranged in areas of the visual cortex known as columns and hypercolumns.

In some of their research, they sutured one eye of newborn kittens closed and followed the development of the kittens’ vision. They discovered there was a critical period of development for vision. If kittens were deprived of input from one eye, other areas of their visual cortex filled in the area that was normally used by the eye that was sewn closed. In other words, neural connections that exist at birth can be lost if they are deprived of sensory input.

What do you think about sewing a kitten’s eye closed for research? To many animal advocates, this would seem brutal, abusive, and unethical. What if you could do research that would help ensure babies and children born with certain conditions could develop normal vision instead of becoming blind? Would you want that research done? Would you conduct that research, even if it meant causing some harm to cats? Would you think the same way if you were the parent of such a child? What if you worked at the animal shelter?

Like virtually every other industrialized nation, the United States permits medical experimentation on animals, with few limitations (assuming sufficient scientific justification). The goal of any laws that exist is not to ban such tests but rather to limit unnecessary animal suffering by establishing standards for the humane treatment and housing of animals in laboratories.

As explained by Stephen Latham, the director of the Interdisciplinary Center for Bioethics at Yale (2012), possible legal and regulatory approaches to animal testing vary on a continuum from strong government regulation and monitoring of all experimentation at one end, to a self-regulated approach that depends on the ethics of the researchers at the other end. The United Kingdom has the most significant regulatory scheme, whereas Japan uses the self-regulation approach. The U.S. approach is somewhere in the middle, the result of a gradual blending of the two approaches.

There is no question that medical research is a valuable and important practice. The question is whether the use of animals is a necessary or even best practice for producing the most reliable results. Alternatives include the use of patient-drug databases, virtual drug trials, computer models and simulations, and noninvasive imaging techniques such as magnetic resonance imaging and computed tomography scans (“Animals in Science/Alternatives,” n.d.). Other techniques, such as microdosing, use humans not as test animals but as a means to improve the accuracy and reliability of test results. In vitro methods based on human cell and tissue cultures, stem cells, and genetic testing methods are also increasingly available.

Today, at the local level, any facility that uses animals and receives federal funding must have an Institutional Animal Care and Use Committee (IACUC) that ensures that the NIH guidelines are being followed. The IACUC must include researchers, administrators, a veterinarian, and at least one person with no ties to the institution: that is, a concerned citizen. This committee also performs inspections of laboratories and protocols.

Color and Depth Perception

We do not see the world in black and white; neither do we see it as two-dimensional (2-D) or flat (just height and width, no depth). Let’s look at how color vision works and how we perceive three dimensions (height, width, and depth).

Color Vision

Normal-sighted individuals have three different types of cones that mediate color vision. Each of these cone types is maximally sensitive to a slightly different wavelength of light. According to the trichromatic theory of color vision, shown in Figure 5.14, all colors in the spectrum can be produced by combining red, green, and blue. The three types of cones are each receptive to one of the colors.

A graph is shown with “sensitivity” plotted on the y-axis and “Wavelength” in nanometers plotted along the x-axis with measurements of 400, 500, 600, and 700. Three lines in different colors move from the base to the peak of the y axis, and back to the base. The blue line begins at 400 nm and hits its peak of sensitivity around 455 nanometers, before the sensitivity drops off at roughly the same rate at which it increased, returning to the lowest sensitivity around 530 nm . The green line begins at 400 nm and reaches its peak of sensitivity around 535 nanometers. Its sensitivity then decreases at roughly the same rate at which it increased, returning to the lowest sensitivity around 650 nm. The red line follows the same pattern as the first two, beginning at 400 nm, increasing and decreasing at the same rate, and it hits its height of sensitivity around 580 nanometers. Below this graph is a horizontal bar showing the colors of the visible spectrum.
Figure 5.14 This figure illustrates the different sensitivities for the three cone types found in a normal-sighted individual. (credit: modification of work by Vanessa Ezekowitz)
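Digital displays exploit the same principle: every on-screen color is specified by the relative intensity of three channels, loosely analogous to the three cone types. The short Python sketch below is a toy illustration of additive mixing, not a model of cone physiology; the function name and values are invented for clarity.

```python
# Toy illustration of the trichromatic idea: a screen color is just the
# relative activation of three channels (red, green, blue), loosely
# analogous to the three cone types. Values here are hypothetical.

def mix(red, green, blue):
    """Combine three channel intensities (0.0-1.0) into an 8-bit RGB triple."""
    return tuple(round(255 * max(0.0, min(1.0, c))) for c in (red, green, blue))

print(mix(1.0, 1.0, 0.0))  # red + green, no blue -> yellow: (255, 255, 0)
print(mix(0.0, 0.0, 1.0))  # blue channel alone -> blue: (0, 0, 255)
print(mix(1.0, 1.0, 1.0))  # all three together -> white: (255, 255, 255)
```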

CONNECT THE CONCEPTS

Colorblindness: A Personal Story

Several years ago, I dressed to go to a public function and walked into the kitchen where my 7-year-old daughter sat. She looked up at me, and in her most stern voice, said, “You can’t wear that.” I asked, “Why not?” and she informed me the colors of my clothes did not match. She had complained frequently that I was bad at matching my shirts, pants, and ties, but this time, she sounded especially alarmed. As a single father with no one else to ask at home, I drove us to the nearest convenience store and asked the store clerk if my clothes matched. She said my pants were a bright green color, my shirt was a reddish-orange, and my tie was brown. She looked at me quizzically and said, “No way do your clothes match.” Over the next few days, I started asking my coworkers and friends if my clothes matched. After several days of being told that my coworkers just thought I had “a really unique style,” I made an appointment with an eye doctor and was tested (Figure 5.15). It was then that I found out that I was colorblind. I cannot differentiate between most greens, browns, and reds. Fortunately, other than unknowingly being badly dressed, my colorblindness rarely harms my day-to-day life.

The figure includes three large circles that are made up of smaller circles of varying shades and sizes. Inside each large circle is a number that is made visible only by its different color. The first circle has an orange number 12 in a background of green. The second color has a green number 74 in a background of orange. The third circle has a red and brown number 42 in a background of black and gray.
Figure 5.15 The Ishihara test evaluates color perception by assessing whether individuals can discern numbers that appear in a circle of dots of varying colors and sizes.

Some forms of color deficiency are rare. Seeing in grayscale (only shades of black and white) is extremely rare, and people who do so only have rods, which means they have very low visual acuity and cannot see very well. The most common X-linked inherited abnormality is red-green color blindness (Birch, 2012). Approximately 8% of males of European Caucasian descent, 5% of Asian males, 4% of African males, and less than 2% of indigenous American males, Australian males, and Polynesian males have red-green color deficiency (Birch, 2012). Comparatively, only about 0.4% of females of European Caucasian descent have red-green color deficiency (Birch, 2012).

The trichromatic theory of color vision is not the only theory; another major theory of color vision is known as the opponent-process theory. According to this theory, color is coded in opponent pairs: black-white, yellow-blue, and green-red. The basic idea is that some cells of the visual system are excited by one of the opponent colors and inhibited by the other. So, a cell that was excited by wavelengths associated with green would be inhibited by wavelengths associated with red, and vice versa. One implication of opponent processing is that we do not experience greenish-reds or yellowish-blues as colors. Another is that opponent processing leads to the experience of negative afterimages. An afterimage is the continuation of a visual sensation after the removal of the stimulus. For example, when you stare briefly at the sun and then look away from it, you may still perceive a spot of light although the stimulus (the sun) has been removed. When color is involved in the stimulus, the color pairings identified in the opponent-process theory lead to a negative afterimage. You can test this concept using the flag in Figure 5.16.

An illustration shows a green flag with thick, black-bordered yellow lines meeting slightly to the left of center. A small white dot sits within the yellow space in the exact center of the flag.
Figure 5.16 Stare at the white dot for 30–60 seconds and then move your eyes to a blank piece of white paper. What do you see? This is known as a negative afterimage, and it provides empirical support for the opponent-process theory of color vision.

But these two theories—the trichromatic theory of color vision and the opponent-process theory—are not mutually exclusive. Research has shown that they just apply to different levels of the nervous system. For visual processing on the retina, trichromatic theory applies: the cones are responsive to three different wavelengths that represent red, blue, and green. But once the signal moves past the retina on its way to the brain, the cells respond in a way consistent with opponent-process theory (Land, 1959; Kaiser, 1997).

Depth Perception

Our ability to perceive spatial relationships in three-dimensional (3-D) space is known as depth perception. With depth perception, we can describe things as being in front, behind, above, below, or to the side of other things.

Our world is three-dimensional, so it makes sense that our mental representation of the world has three-dimensional properties. We use a variety of cues in a visual scene to establish our sense of depth. Some of these are binocular cues, which means that they rely on the use of both eyes. One example of a binocular depth cue is binocular disparity, the slightly different view of the world that each of our eyes receives. To experience this slightly different view, try this simple exercise: fully extend one arm, hold up a finger, and focus on that finger. Now, close your left eye without moving your head; then open your left eye and close your right eye, again without moving your head. You will notice that your finger seems to shift as you alternate between the two eyes because of the slightly different view each eye has of your finger.

A 3-D movie works on the same principle: the special glasses you wear allow the two slightly different images projected onto the screen to be seen separately by your left and your right eye. As your brain processes these images, you have the illusion that the leaping animal or running person is coming right toward you.
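Computer stereo vision formalizes this same cue with a simple pinhole-camera relation: depth = focal length × baseline ÷ disparity. The sketch below uses that relation with assumed numbers (an arbitrary focal length in pixels and a baseline approximating human eye spacing) to show why nearer objects produce larger disparities; it is an engineering analogy, not a description of the brain's computation.

```python
# Sketch of the pinhole-camera stereo relation used in computer vision:
# depth = focal_length * baseline / disparity. The focal length (in pixels)
# is arbitrary and the baseline approximates the distance between human
# eyes; this is an analogy, not a physiological model.

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Distance to a point given the shift (disparity) between two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

FOCAL_LENGTH_PX = 1000.0   # assumed, arbitrary
BASELINE_M = 0.063         # roughly the spacing of human eyes

for disparity in (40.0, 20.0, 10.0, 5.0):
    z = depth_from_disparity(FOCAL_LENGTH_PX, BASELINE_M, disparity)
    print(f"disparity {disparity:4.1f} px -> depth {z:5.2f} m")
# Disparity shrinks as depth grows, which is why binocular depth judgments
# are most precise for nearby objects.
```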

Although we rely on binocular cues to experience depth in our 3-D world, we can also perceive depth in 2-D arrays. Think about all the paintings and photographs you have seen. Generally, you pick up on depth in these images even though the visual stimulus is 2-D. When we do this, we are relying on a number of monocular cues, or cues that require only one eye. If you think you can’t see depth with one eye, note that you don’t bump into things when using only one eye while walking—and, in fact, we have more monocular cues than binocular cues.

An example of a monocular cue would be what is known as linear perspective. Linear perspective refers to the fact that we perceive depth when we see two parallel lines that seem to converge in an image (Figure 5.17). Some other monocular depth cues are interposition, the partial overlap of objects, and the relative size and closeness of images to the horizon.

A photograph shows an empty road that continues toward the horizon.
Figure 5.17 We perceive depth in a two-dimensional figure like this one through the use of monocular cues such as linear perspective: the parallel lines appear to converge as the road narrows in the distance. (credit: Marc Dalmulder)

DIG DEEPER: Stereoblindness

Bruce Bridgeman was born with an extreme case of lazy eye that resulted in him being stereoblind, or unable to respond to binocular cues of depth. He relied heavily on monocular depth cues, but he never had a true appreciation of the 3-D nature of the world around him. This all changed one night in 2012 while Bruce was seeing a movie with his wife.

The movie the couple was going to see was shot in 3-D, and even though he thought it was a waste of money, Bruce paid for the 3-D glasses when he purchased his ticket. As soon as the film began, Bruce put on the glasses and experienced something completely new. For the first time in his life, he appreciated the true depth of the world around him. Remarkably, his ability to perceive depth persisted outside of the movie theater.

There are cells in the nervous system that respond to binocular depth cues. Normally, these cells require activation during early development in order to persist, so experts familiar with Bruce’s case (and others like his) assume that at some point in his development, Bruce must have experienced at least a fleeting moment of binocular vision. It was enough to ensure the survival of the cells in the visual system tuned to binocular cues. The mystery now is why it took Bruce nearly 70 years to have these cells activated (Peck, 2012).


Learning Objectives

By the end of this section, you will be able to:
  • Describe the basic anatomy and function of the auditory system
  • Explain how we encode and perceive pitch
  • Discuss how we localize sound

Our auditory system converts pressure waves into meaningful sounds. This translates into our ability to hear the sounds of nature, to appreciate the beauty of music, and to communicate with one another through spoken language. This section will provide an overview of the basic anatomy and function of the auditory system. It will include a discussion of how the sensory stimulus is translated into neural impulses, where in the brain that information is processed, how we perceive pitch, and how we know where sound is coming from.

Anatomy of the Auditory System

The ear can be separated into multiple sections. The outer ear includes the pinna, which is the visible part of the ear that protrudes from our heads, the auditory canal, and the tympanic membrane, or eardrum. The middle ear contains three tiny bones known as the ossicles, which are named the malleus (or hammer), incus (or anvil), and the stapes (or stirrup). The inner ear contains the semicircular canals, which are involved in balance and movement (the vestibular sense), and the cochlea. The cochlea is a fluid-filled, snail-shaped structure that contains the sensory receptor cells (hair cells) of the auditory system (Figure 5.18).

An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://open.maricopa.edu/intropsychme/?p=27#h5p-15

Figure 5.18 The ear is divided into outer (pinna and tympanic membrane), middle (the three ossicles: malleus, incus, and stapes), and inner (cochlea and basilar membrane) divisions.

Sound waves travel along the auditory canal and strike the tympanic membrane, causing it to vibrate. This vibration results in movement of the three ossicles. As the ossicles move, the stapes presses into a thin membrane of the cochlea known as the oval window. As the stapes presses into the oval window, the fluid inside the cochlea begins to move, which in turn stimulates hair cells, which are auditory receptor cells of the inner ear embedded in the basilar membrane. The basilar membrane is a thin strip of tissue within the cochlea.

The activation of hair cells is a mechanical process: the stimulation of the hair cell ultimately leads to activation of the cell. As hair cells become activated, they generate neural impulses that travel along the auditory nerve to the brain. Auditory information is shuttled to the inferior colliculus, the medial geniculate nucleus of the thalamus, and finally to the auditory cortex in the temporal lobe of the brain for processing. As in the visual system, there is also evidence suggesting that information about auditory recognition and localization is processed in parallel streams (Rauschecker & Tian, 2000; Renier et al., 2009).

Pitch Perception

Different frequencies of sound waves are associated with differences in our perception of the pitch of those sounds. Low-frequency sounds are lower pitched, and high-frequency sounds are higher pitched. How does the auditory system differentiate among various pitches?

Several theories have been proposed to account for pitch perception. We’ll discuss two of them here: temporal theory and place theory. The temporal theory of pitch perception asserts that frequency is coded by the activity level of a sensory neuron. This would mean that a given hair cell would fire action potentials related to the frequency of the sound wave. While this is a very intuitive explanation, we detect such a broad range of frequencies (20–20,000 Hz) that the frequency of action potentials fired by hair cells cannot account for the entire range. Because of properties related to sodium channels on the neuronal membrane that are involved in action potentials, there is a point at which a cell cannot fire any faster (Shamma, 2001).
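A back-of-the-envelope calculation makes this limitation concrete. The sketch below assumes a textbook-level refractory period of about 1 millisecond between action potentials, which caps sustained firing near 1,000 spikes per second, far short of a 20,000 Hz tone; the exact figure is an assumption for illustration.

```python
# Back-of-the-envelope check on temporal coding. Assume a neuron needs
# roughly 1 ms to recover between action potentials (an assumed,
# textbook-level refractory period), capping firing near 1000 spikes/s.

REFRACTORY_PERIOD_S = 0.001
max_firing_rate_hz = 1 / REFRACTORY_PERIOD_S  # = 1000 spikes per second

for tone_hz in (200, 1000, 4000, 20000):
    feasible = tone_hz <= max_firing_rate_hz
    print(f"{tone_hz:>6} Hz tone: one spike per cycle "
          f"{'possible' if feasible else 'impossible'} for a single cell")
# Single cells can phase-lock to low frequencies but cannot keep up with
# the top of the 20-20,000 Hz range, which is where place coding takes over.
```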

The place theory of pitch perception suggests that different portions of the basilar membrane are sensitive to sounds of different frequencies. More specifically, the base of the basilar membrane responds best to high frequencies and the tip of the basilar membrane responds best to low frequencies. Therefore, hair cells that are in the base portion would be labeled as high-pitch receptors, while those in the tip of the basilar membrane would be labeled as low-pitch receptors (Shamma, 2001).
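Place coding can be made concrete with the Greenwood function, a published empirical fit relating position along the human basilar membrane to the frequency that best excites it. The constants below are approximate values from that fit, so treat the output as illustrative rather than exact.

```python
# Sketch of the cochlea's tonotopic map using the Greenwood function, an
# empirical fit for humans: F(x) = 165.4 * (10**(2.1 * x) - 0.88), where x
# is the fractional distance from the apex (tip, x = 0) to the base (x = 1).
# The constants are approximate published values.

def greenwood_hz(x):
    """Best frequency (Hz) at fractional position x along the basilar membrane."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"position {x:4.2f} (0 = tip, 1 = base): ~{greenwood_hz(x):7.0f} Hz")
# The tip comes out near 20 Hz and the base near 20,000 Hz, matching place
# theory's claim that where the membrane vibrates encodes pitch.
```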

In reality, both theories explain different aspects of pitch perception. At frequencies up to about 4000 Hz, it is clear that both the rate of action potentials and place contribute to our perception of pitch. However, much higher frequency sounds can only be encoded using place cues (Shamma, 2001).

Sound Localization

The ability to locate sound in our environments is an important part of hearing. Localizing sound could be considered similar to the way that we perceive depth in our visual fields. Like the monocular and binocular cues that provide information about depth, the auditory system uses both monaural (one-eared) and binaural (two-eared) cues to localize sound.

Each pinna interacts with incoming sound waves differently, depending on the sound’s source relative to our bodies. This interaction provides a monaural cue that is helpful in locating sounds that occur above or below and in front or behind us. The sound waves received by your two ears from sounds that come from directly above, below, in front, or behind you would be identical; therefore, monaural cues are essential (Grothe et al., 2010).

Binaural cues, on the other hand, provide information on the location of a sound along a horizontal axis by relying on differences in patterns of vibration of the eardrum between our two ears. If a sound comes from an off-center location, it creates two types of binaural cues: interaural level differences and interaural timing differences. Interaural level difference refers to the fact that a sound coming from the right side of your body is more intense at your right ear than at your left ear because of the attenuation of the sound wave as it passes through your head. Interaural timing difference refers to the small difference in the time at which a given sound wave arrives at each ear (Figure 5.19). Certain brain areas monitor these differences to construct where along a horizontal axis a sound originates (Grothe et al., 2010).
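The sizes of these timing differences can be estimated with Woodworth's classic spherical-head approximation. The sketch below assumes a head radius of about 8.75 cm and a speed of sound of 343 m/s; real heads are not spheres, so the numbers are rough estimates.

```python
import math

# Rough interaural timing differences from Woodworth's spherical-head
# approximation: ITD = (r / c) * (theta + sin(theta)). Head radius and
# speed of sound are assumed typical values; real heads are not spheres.

HEAD_RADIUS_M = 0.0875
SPEED_OF_SOUND_M_S = 343.0

def itd_seconds(azimuth_deg):
    """Arrival-time difference between the ears for a distant source."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (theta + math.sin(theta))

for azimuth in (0, 15, 45, 90):
    microseconds = itd_seconds(azimuth) * 1e6
    print(f"source {azimuth:2d} deg off-center -> ITD ~{microseconds:4.0f} microseconds")
# A source straight ahead produces no difference; one at 90 degrees yields
# roughly 650 microseconds, the scale of cue the auditory system resolves.
```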

A photograph of jets has an illustration of arced waves labeled “sound” coming from the jets. These extend to an outline of a human head, with arrows from the jets identifying the location of each ear.
Figure 5.19 Localizing sound involves the use of both monaural and binaural cues. (credit “plane”: modification of work by Max Pfandl)

Hearing Loss

Deafness is the partial or complete inability to hear. Some people are born without hearing, which is known as congenital deafness. Other people suffer from conductive hearing loss, which is due to a problem delivering sound energy to the cochlea. Causes for conductive hearing loss include blockage of the ear canal, a hole in the tympanic membrane, problems with the ossicles, or fluid in the space between the eardrum and cochlea. Another group of people suffer from sensorineural hearing loss, which is the most common form of hearing loss. Sensorineural hearing loss can be caused by many factors, such as aging, head or acoustic trauma, infections and diseases (such as measles or mumps), medications, environmental effects such as noise exposure (noise-induced hearing loss, as shown in Figure 5.20), tumors, and toxins (such as those found in certain solvents and metals).

Photograph A shows Beyoncé performing at a concert. Photograph B shows a construction worker operating a jackhammer.
Figure 5.20 Environmental factors that can lead to sensorineural hearing loss include regular exposure to loud music or construction equipment. (a) Musical performers and (b) construction workers are at risk for this type of hearing loss. (credit a: modification of work by “GillyBerlin_Flickr”/Flickr; credit b: modification of work by Nick Allen)

Given the mechanical nature by which the sound wave stimulus is transmitted from the eardrum through the ossicles to the oval window of the cochlea, some degree of hearing loss is inevitable. With conductive hearing loss, hearing problems are associated with a failure in the vibration of the eardrum and/or movement of the ossicles. These problems are often dealt with through devices like hearing aids that amplify incoming sound waves to make vibration of the eardrum and movement of the ossicles more likely to occur.

When the hearing problem is associated with a failure to transmit neural signals from the cochlea to the brain, it is called sensorineural hearing loss. One disease that results in sensorineural hearing loss is Ménière’s disease. Although not well understood, Ménière’s disease results in a degeneration of inner ear structures that can lead to hearing loss, tinnitus (constant ringing or buzzing), vertigo (a sense of spinning), and an increase in pressure within the inner ear (Semaan & Megerian, 2011). This kind of loss cannot be treated with hearing aids, but some individuals might be candidates for a cochlear implant as a treatment option. Cochlear implants are electronic devices that consist of a microphone, a speech processor, and an electrode array. The device receives incoming sound information and directly stimulates the auditory nerve to transmit information to the brain.

WHAT DO YOU THINK? Deaf Culture

In the United States and other places around the world, deaf people have their own language, schools, and customs. This is called deaf culture. In the United States, deaf individuals often communicate using American Sign Language (ASL); ASL has no verbal component and is based entirely on visual signs and gestures, making signing the primary mode of communication. One of the values of deaf culture is to continue traditions like using sign language rather than teaching deaf children to try to speak, read lips, or have cochlear implant surgery.

When a child is diagnosed as deaf, parents have difficult decisions to make. Should the child be enrolled in mainstream schools and taught to verbalize and read lips? Or should the child be sent to a school for deaf children to learn ASL and have significant exposure to deaf culture? Do you think there might be differences in the way that parents approach these decisions depending on whether or not they are also deaf?

Learning Objectives

By the end of this section, you will be able to:

  • Describe the basic functions of the chemical senses
  • Explain the basic functions of the somatosensory, nociceptive, and thermoceptive sensory systems
  • Describe the basic functions of the vestibular, proprioceptive, and kinesthetic sensory systems

Vision and hearing have received an incredible amount of attention from researchers over the years. While there is still much to be learned about how these sensory systems work, we have a much better understanding of them than of our other sensory modalities. In this section, we will explore our chemical senses (taste and smell) and our body senses (touch, temperature, pain, balance, and body position).

The Chemical Senses

Taste (gustation) and smell (olfaction) are called chemical senses because both have sensory receptors that respond to molecules in the food we eat or in the air we breathe. There is a pronounced interaction between our chemical senses. For example, when we describe the flavor of a given food, we are really referring to both gustatory and olfactory properties of the food working in combination.

Taste (Gustation)

You have learned since elementary school that there are four basic groupings of taste: sweet, salty, sour, and bitter. Research demonstrates, however, that we have at least five taste groupings, and perhaps a sixth. Umami is our fifth taste. Umami is actually a Japanese word that roughly translates to yummy, and it is associated with a taste for monosodium glutamate (Kinnamon & Vandenbeuch, 2009). There is also a growing body of experimental evidence suggesting a sixth: a taste for the fatty content of a given food (Mizushige et al., 2007).

Molecules from the food and beverages we consume dissolve in our saliva and interact with taste receptors on our tongue and in our mouth and throat. Taste buds are formed by groupings of taste receptor cells with hair-like extensions that protrude into the central pore of the taste bud (Figure 5.21). Taste buds have a life cycle of ten days to two weeks, so even destroying some by burning your tongue won’t have any long-term effect; they just grow right back. Taste molecules bind to receptors on this extension and cause chemical changes within the sensory cell that result in neural impulses being transmitted to the brain via different nerves, depending on where the receptor is located. Taste information is transmitted to the medulla, thalamus, and limbic system, and to the gustatory cortex, which is tucked underneath the overlap between the frontal and temporal lobes (Maffei et al., 2012; Roper, 2013).

Illustration A shows a taste bud in an opening of the tongue, with the “tongue surface,” “taste pore,” “taste receptor cell” and “nerves” labeled. Part B is a micrograph showing taste buds on a human tongue.
Figure 5.21 (a) Taste buds are composed of a number of individual taste receptors cells that transmit information to nerves. (b) This micrograph shows a close-up view of the tongue’s surface. (credit a: modification of work by Jonas Töle; credit b: scale-bar data from Matt Russell)

Smell (Olfaction)

Olfactory receptor cells are located in a mucous membrane at the top of the nose. Small hair-like extensions from these receptors serve as the sites for odor molecules dissolved in the mucus to interact with chemical receptors located on these extensions (Figure 5.22). Once an odor molecule has bound a given receptor, chemical changes within the cell result in signals being sent to the olfactory bulb: a bulb-like structure at the tip of the frontal lobe where the olfactory nerves begin. From the olfactory bulb, information is sent to regions of the limbic system and to the primary olfactory cortex, which is located very near the gustatory cortex (Lodovichi & Belluscio, 2012; Spors et al., 2013).

An illustration shows a side view of a human head and the location of the “nasal cavity,” “olfactory receptors,” and “olfactory bulb.”
Figure 5.22 Olfactory receptors are the hair-like parts that extend from the olfactory bulb into the mucous membrane of the nasal cavity.

There is tremendous variation in the sensitivity of the olfactory systems of different species. We often think of dogs as having olfactory systems far superior to our own, and indeed, dogs can do some remarkable things with their noses. There is some evidence to suggest that dogs can “smell” dangerous drops in blood glucose levels as well as cancerous tumors (Wells, 2010). Dogs’ extraordinary olfactory abilities may be due to the increased number of functional genes for olfactory receptors (between 800 and 1200), compared to the fewer than 400 observed in humans and other primates (Niimura & Nei, 2007).

Many species respond to chemical messages, known as pheromones, sent by another individual (Wysocki & Preti, 2004). Pheromonal communication often involves providing information about the reproductive status of a potential mate. So, for example, when a female rat is ready to mate, she secretes pheromonal signals that draw attention from nearby male rats. Pheromonal activation is actually an important component in eliciting sexual behavior in the male rat (Furlow, 1996, 2012; Purvis & Haynes, 1972; Sachs, 1997). There has also been a good deal of research (and controversy) about pheromones in humans (Comfort, 1971; Russell, 1976; Wolfgang-Kimball, 1992; Weller, 1998).

Touch, Thermoception, and Nociception

A number of receptors are distributed throughout the skin to respond to various touch-related stimuli (Figure 5.23). These receptors include Meissner’s corpuscles, Pacinian corpuscles, Merkel’s disks, and Ruffini corpuscles. Meissner’s corpuscles respond to pressure and lower frequency vibrations, and Pacinian corpuscles detect transient pressure and higher frequency vibrations. Merkel’s disks respond to light pressure, while Ruffini corpuscles detect stretch (Abraira & Ginty, 2013).

An illustration shows “skin surface” underneath which different receptors are identified: the “Pacinian corpuscle,” “Ruffini corpuscle,” “Merkel’s disk,” and “Meissner’s corpuscle.”
Figure 5.23 There are many types of sensory receptors located in the skin, each attuned to specific touch-related stimuli.

In addition to the receptors located in the skin, there are also a number of free nerve endings that serve sensory functions. These nerve endings respond to a variety of different types of touch-related stimuli and serve as sensory receptors for both thermoception (temperature perception) and nociception (a signal indicating potential harm and maybe pain) (Garland, 2012; Petho & Reeh, 2012; Spray, 1986). Sensory information collected from the receptors and free nerve endings travels up the spinal cord and is transmitted to regions of the medulla, thalamus, and ultimately to the somatosensory cortex, which is located in the postcentral gyrus of the parietal lobe.

Pain Perception

Pain is an unpleasant experience that involves both physical and psychological components. Feeling pain is quite adaptive because it makes us aware of an injury, and it motivates us to remove ourselves from the cause of that injury. In addition, pain also makes us less likely to suffer additional injury because we will be gentler with our injured body parts.

Generally speaking, pain can be considered to be neuropathic or inflammatory in nature. Pain that signals some type of tissue damage is known as inflammatory pain. In some situations, pain results from damage to neurons of either the peripheral or central nervous system. As a result, pain signals that are sent to the brain get exaggerated. This type of pain is known as neuropathic pain. Multiple treatment options for pain relief range from relaxation therapy to the use of analgesic medications to deep brain stimulation. The most effective treatment option for a given individual will depend on a number of considerations, including the severity and persistence of the pain and any medical/psychological conditions.

Some individuals are born without the ability to feel pain. This very rare genetic disorder is known as congenital insensitivity to pain (or congenital analgesia). While those with congenital analgesia can detect differences in temperature and pressure, they cannot experience pain. As a result, they often suffer significant injuries. Young children with the disorder often have serious mouth and tongue injuries because they have bitten themselves repeatedly. Not surprisingly, individuals suffering from this disorder have much shorter life expectancies due to their injuries and secondary infections of injured sites (U.S. National Library of Medicine, 2013).

The Vestibular Sense, Proprioception, and Kinesthesia

The vestibular sense contributes to our ability to maintain balance and body posture. As Figure 5.24 shows, the major sensory organs (utricle, saccule, and the three semicircular canals) of this system are located next to the cochlea in the inner ear. The vestibular organs are fluid-filled and have hair cells, similar to the ones found in the auditory system, which respond to movement of the head and gravitational forces. When these hair cells are stimulated, they send signals to the brain via the vestibular nerve. Although we may not be consciously aware of our vestibular system’s sensory information under normal circumstances, its importance is apparent when we experience motion sickness and/or dizziness related to infections of the inner ear (Khan & Chang, 2013).

An illustration of the vestibular system shows the locations of the three canals (“posterior canal,” “horizontal canal,” and “superior canal”) and the locations of the “utricle,” “oval window,” “cochlea,” “basilar membrane and hair cells,” “saccule,” and “vestibule.”
Figure 5.24 The major sensory organs of the vestibular system are located next to the cochlea in the inner ear. These include the utricle, saccule, and the three semicircular canals (posterior, superior, and horizontal).

In addition to maintaining balance, the vestibular system collects information critical for controlling movement and the reflexes that move various parts of our bodies to compensate for changes in body position. Therefore, both proprioception (perception of body position) and kinesthesia (perception of the body’s movement through space) interact with information provided by the vestibular system.

These sensory systems also gather information from receptors that respond to stretch and tension in muscles, joints, skin, and tendons (Lackner & DiZio, 2005; Proske, 2006; Proske & Gandevia, 2012). Proprioceptive and kinesthetic information travels to the brain via the spinal column. Several cortical regions in addition to the cerebellum receive information from and send information to the sensory organs of the proprioceptive and kinesthetic systems.

Learning Objectives

By the end of this section, you will be able to:

  • Explain the figure-ground relationship
  • Define Gestalt principles of grouping
  • Describe how perceptual set is influenced by an individual’s characteristics and mental state

In the early part of the 20th century, Max Wertheimer published a paper demonstrating that individuals perceived motion in rapidly flickering static images—an insight that came to him as he used a child’s toy stroboscope. Wertheimer, and his assistants Wolfgang Köhler and Kurt Koffka, who later became his partners, believed that perception involved more than simply combining sensory stimuli. This belief led to a new movement within the field of psychology known as Gestalt psychology. The word gestalt literally means form or pattern, but its use reflects the idea that the whole is different from the sum of its parts. In other words, the brain creates a perception that is more than simply the sum of available sensory inputs, and it does so in predictable ways. Gestalt psychologists translated these predictable ways into principles by which we organize sensory information. As a result, Gestalt psychology has been extremely influential in the area of sensation and perception (Rock & Palmer, 1990).

One Gestalt principle is the figure-ground relationship. According to this principle, we tend to segment our visual world into figure and ground. Figure is the object or person that is the focus of the visual field, while the ground is the background. As Figure 5.25 shows, our perception can vary tremendously, depending on what is perceived as figure and what is perceived as ground. Presumably, our ability to interpret sensory information depends on what we label as figure and what we label as ground in any particular case, although this assumption has been called into question (Peterson & Gibson, 1994; Vecera & O’Reilly, 1998).

An illustration shows two identical black face-like shapes that face towards one another, and one white vase-like shape that occupies all of the space in between them. Depending on which part of the illustration is focused on, either the black shapes or the white shape may appear to be the object of the illustration, leaving the other(s) perceived as negative space.
Figure 5.25 The concept of figure-ground relationship explains why this image can be perceived either as a vase or as a pair of faces.

Another Gestalt principle for organizing sensory stimuli into meaningful perception is proximity. This principle asserts that things that are close to one another tend to be grouped together, as Figure 5.26 illustrates.

Illustration A shows thirty-six dots in six evenly-spaced rows and columns. Illustration B shows thirty-six dots in six evenly-spaced rows but with the columns separated into three sets of two columns.
Figure 5.26 The Gestalt principle of proximity suggests that you see (a) one block of dots on the left side and (b) three columns on the right side.

How we read something provides another illustration of the proximity concept. For example, we read this sentence like this, notl iket hiso rt hat. We group the letters of a given word together because there are no spaces between the letters, and we perceive words because there are spaces between each word. Here are some more examples: Cany oum akes enseo ft hiss entence? What doth es e wor dsmea n?
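One way to make the proximity principle concrete is a toy grouping rule: items join the same group whenever the gap to their neighbor is small. The sketch below applies such a rule to the dot columns of Figure 5.26; it is an analogy for the principle, not a claim about how the visual system implements grouping, and the positions and threshold are invented.

```python
# Toy grouping rule as an analogy for the proximity principle: items join
# the same group when the gap to their neighbor is below a threshold. Not
# a claim about how the visual system implements grouping.

def group_by_proximity(positions, max_gap):
    """Split sorted 1-D positions into runs separated by gaps > max_gap."""
    groups = [[positions[0]]]
    for prev, cur in zip(positions, positions[1:]):
        if cur - prev <= max_gap:
            groups[-1].append(cur)
        else:
            groups.append([cur])
    return groups

# Six dot columns at these (made-up) horizontal positions, paired as in
# Figure 5.26(b): columns 0-1, 3-4, and 6-7 sit close together.
columns = [0.0, 1.0, 3.0, 4.0, 6.0, 7.0]
print(group_by_proximity(columns, max_gap=1.5))
# -> [[0.0, 1.0], [3.0, 4.0], [6.0, 7.0]]: three perceived pairs of columns.
```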

We might also use the principle of similarity to group things in our visual fields. According to this principle, things that are alike tend to be grouped together (Figure 5.27). For example, when watching a football game, we tend to group individuals based on the colors of their uniforms. When watching an offensive drive, we can get a sense of the two teams simply by grouping along this dimension.

An illustration shows six rows of six dots each. The rows of dots alternate between blue and white colored dots.
Figure 5.27 When looking at this array of dots, we likely perceive alternating rows of colors. We are grouping these dots according to the principle of similarity.

Two additional Gestalt principles are the law of continuity (or good continuation) and closure. The law of continuity suggests that we are more likely to perceive continuous, smooth flowing lines rather than jagged, broken lines (Figure 5.28). The principle of closure states that we organize our perceptions into complete objects rather than as a series of parts (Figure 5.29).

An illustration shows two lines of diagonal dots that cross in the middle in the general shape of an “X.”
Figure 5.28 Good continuation would suggest that we are more likely to perceive this as two overlapping lines, rather than four lines meeting in the center.
An illustration shows fragmented lines that would form a circle if they were connected. Another illustration shows fragmented lines that would form a rectangle if they were connected.
Figure 5.29 Closure suggests that we will perceive a complete circle and rectangle rather than a series of segments.

According to Gestalt theorists, pattern perception, or our ability to discriminate among different figures and shapes, occurs by following the principles described above. You probably feel fairly certain that your perception accurately matches the real world, but this is not always the case. Our perceptions are based on perceptual hypotheses: educated guesses that we make while interpreting sensory information. These hypotheses are informed by a number of factors, including our personalities, experiences, and expectations. We use these hypotheses to generate our perceptual set. For instance, research has demonstrated that those who are given verbal priming produce a biased interpretation of complex ambiguous figures (Goolkasian & Woodbury, 2010).

DIG DEEPER: The Depths of Perception: Bias, Prejudice, and Cultural Factors

In this chapter, you have learned that perception is a complex process. Built from sensations, but influenced by our own experiences, biases, prejudices, and cultures, perceptions can be very different from person to person. Research suggests that implicit racial prejudice and stereotypes affect perception. For instance, several studies have demonstrated that non-Black participants identify weapons faster and are more likely to identify non-weapons as weapons when the image of the weapon is paired with the image of a Black person (Payne, 2001; Payne et al., 2005). Furthermore, White individuals’ decisions to shoot an armed target in a video game are made more quickly when the target is Black (Correll et al., 2002; Correll et al., 2006). This research is important, considering the number of very high-profile cases in the last few decades in which young Black people were killed by people who claimed to believe that the unarmed individuals were armed and/or represented some threat to their personal safety.

Review of MCCCD Course Competencies

After reading this chapter, are you better able to do the following?

  • Describe basic principles of consciousness, sensation, and perception.
  • Critically evaluate information to help make evidence-based decisions.
  • Apply biopsychosocial principles to real-world situations.
  • Use psychological principles to explain the diversity and complexity of the human experience.

Chapter Review Quiz

An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://open.maricopa.edu/intropsychme/?p=27#h5p-26

Access for free at https://openstax.org/books/psychology-2e/pages/1-introduction

5

States of Consciousness

A painting shows two children sleeping.
Figure 4.1 Sleep, which we all experience, is a quiet and mysterious pause in our daily lives. Two sleeping children are depicted in this 1895 oil painting titled Zwei schlafende Mädchen auf der Ofenbank, which translates as “two sleeping girls on the stove bench,” by Swiss painter Albert Anker.

Our lives involve regular, dramatic changes in the degree to which we are aware of our surroundings and our internal states. While awake, we feel alert and aware of the many important things going on around us. Our experiences change dramatically while we are in deep sleep and once again when we are dreaming. Some people also experience altered states of consciousness through meditation, hypnosis, or alcohol and other drugs.

This chapter will discuss states of consciousness with a particular emphasis on sleep. The different stages of sleep will be identified, and sleep disorders will be described. The chapter will close with discussions of altered states of consciousness produced by psychoactive drugs, hypnosis, and meditation.

MCCCD Course Competencies

  • Describe basic principles of consciousness, sensation, and perception.
  • Critically evaluate information to help make evidence-based decisions.
  • Apply biopsychosocial principles to real-world situations.
  • Use psychological principles to explain the diversity and complexity of the human experience.

Learning Objectives

By the end of this section, you will be able to:

  • Understand what is meant by consciousness
  • Explain how circadian rhythms are involved in regulating the sleep-wake cycle, and how circadian cycles can be disrupted
  • Discuss the concept of sleep debt

Consciousness describes our awareness of internal and external stimuli. Awareness of internal stimuli includes feeling pain, hunger, thirst, sleepiness, and being aware of our thoughts and emotions. Awareness of external stimuli includes experiences such as seeing the light from the sun, feeling the warmth of a room, and hearing the voice of a friend.

We experience different states of consciousness and different levels of awareness on a regular basis. We might even describe consciousness as a continuum that ranges from full awareness to a deep sleep. Sleep is a state marked by relatively low levels of physical activity and reduced sensory awareness that is distinct from periods of rest that occur during wakefulness. Wakefulness is characterized by high levels of sensory awareness, thought, and behavior. Beyond being awake or asleep, there are many other states of consciousness people experience, including daydreaming, intoxication, and unconsciousness due to drug-induced anesthesia for medical purposes. Often, we are not completely aware of our surroundings, even when we are fully awake. For instance, have you ever daydreamed while driving home from work or school without really thinking about the drive itself? You were capable of engaging in all of the complex tasks involved with operating a motor vehicle even though you were not aware of doing so. Many of these processes, like much of psychological behavior, are rooted in our biology.

Biological Rhythms

Biological rhythms are internal rhythms of biological activity. A woman’s menstrual cycle is an example of a biological rhythm—a recurring, cyclical pattern of bodily changes. One complete menstrual cycle takes about 28 days—a lunar month—but many biological cycles are much shorter. For example, body temperature fluctuates cyclically over a 24-hour period (Figure 4.2). Alertness is associated with higher body temperatures, and sleepiness with lower body temperatures.

A line graph is titled “Circadian Change in Body Temperature (Source: Waterhouse et al., 2012).” The y-axis, labeled “temperature (degrees Fahrenheit),” ranges from 97.2 to 99.3. The x-axis, which is labeled “time,” begins at 12:00 A.M. and ends at 4:00 A.M. the following day. The subjects slept from 12:00 A.M. until 8:00 A.M. during which time their average body temperatures dropped from around 98.8 degrees at midnight to 97.6 degrees at 4:00 A.M. and then gradually rose back to nearly the same starting temperature by 8:00 A.M. The average body temperature fluctuated slightly throughout the day with an upward tilt, until the next sleep cycle where the temperature again dropped.
Figure 4.2 This chart illustrates the circadian change in body temperature over 28 hours in a group of eight young men. Body temperature rises throughout the waking day, peaking in the afternoon, and falls during sleep with the lowest point occurring during the very early morning hours.

This pattern of temperature fluctuation, which repeats every day, is one example of a circadian rhythm. A circadian rhythm is a biological rhythm that takes place over a period of about 24 hours. Our sleep-wake cycle, which is linked to our environment’s natural light-dark cycle, is perhaps the most obvious example of a circadian rhythm, but we also have daily fluctuations in heart rate, blood pressure, blood sugar, and body temperature. Some circadian rhythms play a role in changes in our state of consciousness.
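The rhythm in Figure 4.2 is roughly sinusoidal, which invites a toy model. The sketch below uses a cosine with a mean, amplitude, and time of minimum chosen by eye to echo the figure; the parameters are illustrative assumptions, not fitted values from the underlying study.

```python
import math

# Toy cosine model of the daily body-temperature rhythm. Mean, amplitude,
# and time of minimum are rough values chosen by eye to echo Figure 4.2,
# not fitted parameters.

MEAN_TEMP_F = 98.2
AMPLITUDE_F = 0.7
MINIMUM_HOUR = 4  # lowest temperature around 4:00 a.m. in Figure 4.2

def body_temp_f(hour):
    """Approximate body temperature at a given hour (0-24)."""
    phase = 2 * math.pi * (hour - MINIMUM_HOUR) / 24
    return MEAN_TEMP_F - AMPLITUDE_F * math.cos(phase)

for hour in (4, 10, 16, 22):
    print(f"{hour:2d}:00 -> {body_temp_f(hour):.1f} deg F")
# The minimum falls in the early morning and the peak in the late afternoon,
# mirroring the link between body temperature and alertness described above.
```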

If we have biological rhythms, then is there some sort of biological clock? In the brain, the hypothalamus, which lies above the pituitary gland, is a main center of homeostasis. Homeostasis is the tendency to maintain a balance, or optimal level, within a biological system.

The brain’s clock mechanism is located in an area of the hypothalamus known as the suprachiasmatic nucleus (SCN). The axons of light-sensitive neurons in the retina provide information to the SCN based on the amount of light present, allowing this internal clock to be synchronized with the outside world (Klein et al., 1991; Welsh et al., 2010) (Figure 4.3).

In this graphic, the outline of a person’s head facing left is situated to the right of a picture of the sun, which is labeled ”light” with an arrow pointing to a location in the brain where light input is processed. Inside the head is an illustration of a brain with the following parts’ locations identified: Suprachiasmatic nucleus (SCN), Hypothalamus, Pituitary gland, Pineal gland, and Output rhythms: Physiology and Behavior.
Figure 4.3 The suprachiasmatic nucleus (SCN) serves as the brain’s clock mechanism. The clock sets itself with light information received through projections from the retina.

Problems With Circadian Rhythms

Generally, and for most people, our circadian cycles are aligned with the outside world. For example, most people sleep during the night and are awake during the day. One important regulator of sleep-wake cycles is the hormone melatonin. The pineal gland, an endocrine structure located inside the brain that releases melatonin, is thought to be involved in the regulation of various biological rhythms and of the immune system during sleep (Hardeland et al., 2006). Melatonin release is stimulated by darkness and inhibited by light.

There are individual differences in regard to our sleep-wake cycle. For instance, some people would say they are morning people, while others would consider themselves to be night owls. These individual differences in circadian patterns of activity are known as a person’s chronotype, and research demonstrates that morning larks and night owls differ with regard to sleep regulation (Taillard et al., 2003). Sleep regulation refers to the brain’s control of switching between sleep and wakefulness as well as coordinating this cycle with the outside world.

Whether lark, owl, or somewhere in between, there are situations in which a person’s circadian clock gets out of synchrony with the external environment. One way that this happens involves traveling across multiple time zones. When we do this, we often experience jet lag. Jet lag is a collection of symptoms that results from the mismatch between our internal circadian cycles and our environment. These symptoms include fatigue, sluggishness, irritability, and insomnia (i.e., a consistent difficulty in falling or staying asleep for at least three nights a week over a month’s time) (Roth, 2007).

Individuals who do rotating shift work are also likely to experience disruptions in circadian cycles. Rotating shift work refers to a work schedule that changes from early to late on a daily or weekly basis. For example, a person may work from 7:00 a.m. to 3:00 p.m. on Monday, 3:00 p.m. to 11:00 p.m. on Tuesday, and 11:00 p.m. to 7:00 a.m. on Wednesday. In such instances, the individual’s schedule changes so frequently that it becomes difficult for a normal circadian rhythm to be maintained. This often results in sleeping problems, and it can lead to signs of depression and anxiety. These kinds of schedules are common for individuals working in health care professions and service industries, and they are associated with persistent feelings of exhaustion and agitation that can make someone more prone to making mistakes on the job (Gold et al., 1992; Presser, 1995).

Rotating shift work has pervasive effects on the lives and experiences of individuals engaged in that kind of work, which is clearly illustrated in stories reported in a qualitative study that researched the experiences of middle-aged nurses who worked rotating shifts (West et al., 2009). Several of the nurses interviewed commented that their work schedules affected their relationships with their family. One of the nurses said,

If you’ve had a partner who does work regular job 9 to 5 office hours . . . the ability to spend time, good time with them when you’re not feeling absolutely exhausted . . . that would be one of the problems that I’ve encountered. (West et al., 2009, p. 114)

While disruptions in circadian rhythms can have negative consequences, there are things we can do to help us realign our biological clocks with the external environment. Some of these approaches, such as using a bright light as shown in Figure 4.4, have been shown to alleviate some of the problems experienced by individuals suffering from jet lag or from the consequences of rotating shift work. Because the biological clock is driven by light, exposure to bright light during working shifts and dark exposure when not working can help combat insomnia and symptoms of anxiety and depression (Huang et al., 2013).

A photograph shows a bright lamp.
Figure 4.4 Devices like this are designed to provide exposure to bright light to help people maintain a regular circadian cycle. They can be helpful for people working night shifts or for people affected by seasonal variations in light.

When people have difficulty getting sleep due to their work or the demands of day-to-day life, they accumulate a sleep debt. A person with a sleep debt does not get sufficient sleep on a chronic basis. The consequences of sleep debt include decreased levels of alertness and mental efficiency. Interestingly, since the advent of electric light, the amount of sleep that people get has declined. While we certainly welcome the convenience of having the darkness lit up, we also suffer the consequences of reduced amounts of sleep because we are more active during the nighttime hours than our ancestors were. As a result, many of us sleep less than 7–8 hours a night and accrue a sleep debt. While there is tremendous variation in any given individual’s sleep needs, the National Sleep Foundation (n.d.) cites research to estimate that newborns require the most sleep (between 12 and 18 hours a night) and that this amount declines to just 7–9 hours by the time we are adults.
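The arithmetic of sleep debt is simple enough to sketch. The example below assumes a fixed nightly need of 8 hours and an invented week of nightly totals; it simply sums the shortfalls night by night.

```python
# Minimal sketch of sleep-debt arithmetic, assuming a fixed nightly need of
# 8 hours. The week of nightly totals below is invented for illustration.

NIGHTLY_NEED_HOURS = 8.0

def sleep_debt(hours_slept_per_night):
    """Total shortfall across a sequence of nights (this toy version does
    not let longer nights repay earlier debt)."""
    return sum(max(0.0, NIGHTLY_NEED_HOURS - slept)
               for slept in hours_slept_per_night)

week = [6.5, 7.0, 5.5, 6.0, 7.5, 8.0, 6.0]  # hypothetical student's week
print(f"Accumulated debt after one week: {sleep_debt(week):.1f} hours")
# Losing an hour or two most nights adds up to more than a full night's
# sleep (9.5 hours here) by week's end.
```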

If you lie down to take a nap and fall asleep very easily, chances are you may have sleep debt. Given that college students are notorious for suffering from significant sleep debt (Hicks et al., 2001; Hicks et al., 1992; Miller et al., 2010), chances are you and your classmates deal with sleep debt-related issues on a regular basis. In 2015, the National Sleep Foundation updated their sleep duration recommendations to better accommodate individual differences. Table 4.1 shows the new recommendations, which describe sleep durations that are “recommended,” “may be appropriate,” and “not recommended.”

Sleep Needs at Different Ages
Age | Recommended | May be appropriate | Not recommended
0–3 months | 14–17 hours | 11–13 hours; 18–19 hours | Fewer than 11 hours; more than 19 hours
4–11 months | 12–15 hours | 10–11 hours; 16–18 hours | Fewer than 10 hours; more than 18 hours
1–2 years | 11–14 hours | 9–10 hours; 15–16 hours | Fewer than 9 hours; more than 16 hours
3–5 years | 10–13 hours | 8–9 hours; 14 hours | Fewer than 8 hours; more than 14 hours
6–13 years | 9–11 hours | 7–8 hours; 12 hours | Fewer than 7 hours; more than 12 hours
14–17 years | 8–10 hours | 7 hours; 11 hours | Fewer than 7 hours; more than 11 hours
18–25 years | 7–9 hours | 6 hours; 10–11 hours | Fewer than 6 hours; more than 11 hours
26–64 years | 7–9 hours | 6 hours; 10 hours | Fewer than 6 hours; more than 10 hours
≥65 years | 7–8 hours | 5–6 hours; 9 hours | Fewer than 5 hours; more than 9 hours
Table 4.1

Sleep debt and sleep deprivation have significant negative psychological and physiological consequences (Figure 4.5). As mentioned earlier, lack of sleep can result in decreased mental alertness and cognitive function. In addition, sleep deprivation often results in depression-like symptoms. These effects can occur as a function of accumulated sleep debt or in response to more acute periods of sleep deprivation. It may surprise you to know that sleep deprivation is associated with obesity, increased blood pressure, increased levels of stress hormones, and reduced immune functioning (Banks & Dinges, 2007). A sleep-deprived individual generally will fall asleep more quickly than one who is not sleep deprived. Some sleep-deprived individuals have difficulty staying awake when they stop moving (for example, when sitting and watching television or driving a car). That is why individuals suffering from sleep deprivation can also put themselves and others at risk when they get behind the wheel of a car or work with dangerous machinery. Some research suggests that sleep deprivation affects cognitive and motor function as much as, if not more than, alcohol intoxication (Williamson & Feyer, 2000). Research shows that the most severe effects of sleep deprivation occur when a person stays awake for more than 24 hours (Killgore & Weber, 2014; Killgore et al., 2007), or following repeated nights with fewer than four hours in bed (Wickens et al., 2015). For example, irritability, distractibility, and impairments in cognitive and moral judgment can occur with fewer than four hours of sleep. If someone stays awake for 48 consecutive hours, they could start to hallucinate.

An illustration of the top half of a human body identifies the locations in the body that correspond with various adverse effects of sleep deprivation. The brain is labeled with “Irritability,” “Cognitive impairment,” “Memory lapses or loss,” “Impaired moral judgment,” “Severe yawning,” “Hallucinations,” and “Symptoms similar to ADHD.” The heart is labeled with “Risk of heart disease.” The muscles are labeled with “Increased reaction time,” “Decreased accuracy,” “Tremors,” and “Aches.” There is an organ near the stomach labeled “Risk of diabetes Type 2.” Various parts of the neck, arm, and underarm are labeled “Impaired immune system.” Other risks include “Growth suppression,” “Risk of obesity,” “Decreased temperature.”
Figure 4.5 This figure illustrates some of the negative consequences of sleep deprivation. While cognitive deficits may be the most obvious, many body systems are negatively impacted by lack of sleep. (credit: modification of work by Mikael Häggström)

The amount of sleep we get varies across the lifespan. When we are very young, we spend up to 16 hours a day sleeping. As we grow older, we sleep less. In fact, a meta-analysis, which is a study that combines the results of many related studies, indicates that by the time we are 65 years old, we average fewer than 7 hours of sleep per day (Ohayon et al., 2004).

Learning Objectives

By the end of this section, you will be able to:

  • Describe areas of the brain involved in sleep
  • Understand hormone secretions associated with sleep
  • Describe several theories aimed at explaining the function of sleep

We spend approximately one-third of our lives sleeping. Given that the average life expectancy for U.S. citizens falls between 73 and 79 years (Singh & Siahpush, 2006), we can expect to spend approximately 25 years of our lives sleeping. Some animals never sleep (e.g., some fish and amphibian species); other animals sleep very little without apparent negative consequences (e.g., giraffes); yet some animals (e.g., rats) die after two weeks of sleep deprivation (Siegel, 2008). Why do we devote so much time to sleeping? Is it absolutely essential that we sleep? This section will consider these questions and explore various explanations for why we sleep.
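As a quick check of that estimate (taking 76 years, roughly the midpoint of the cited range, as a convenient round number):

\[\tfrac{1}{3} \times 76\ \text{years} \approx 25\ \text{years spent asleep}\]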

What is Sleep?

You have read that sleep is distinguished by low levels of physical activity and reduced sensory awareness. As discussed by Siegel (2008), a definition of sleep must also include mention of the interplay of the circadian and homeostatic mechanisms that regulate sleep. Homeostatic regulation of sleep is evidenced by sleep rebound following sleep deprivation. Sleep rebound refers to the fact that a sleep-deprived individual will fall asleep more quickly during subsequent opportunities for sleep. Sleep is characterized by certain patterns of activity of the brain that can be visualized using electroencephalography (EEG), and different phases of sleep can be differentiated using EEG as well.

Sleep-wake cycles seem to be controlled by multiple brain areas acting in conjunction with one another. Some of these areas include the thalamus, the hypothalamus, and the pons. As already mentioned, the hypothalamus contains the SCN—the biological clock of the body—in addition to other nuclei that, in conjunction with the thalamus, regulate slow-wave sleep. The pons is important for regulating rapid eye movement (REM) sleep (National Institutes of Health, n.d.).

Sleep is also associated with the secretion and regulation of a number of hormones from several endocrine glands, including melatonin, follicle-stimulating hormone (FSH), luteinizing hormone (LH), and growth hormone (National Institutes of Health, n.d.). You have read that the pineal gland releases melatonin during sleep (Figure 4.6). Melatonin is thought to be involved in the regulation of various biological rhythms and the immune system (Hardeland et al., 2006). During sleep, the pituitary gland secretes both FSH and LH, which are important in regulating the reproductive system (Christensen et al., 2012; Sofikitis et al., 2008). The pituitary gland also secretes growth hormone during sleep; growth hormone plays a role in physical growth and maturation as well as other metabolic processes (Bartke et al., 2013).

An illustration of a brain shows the locations of the hypothalamus, thalamus, pons, suprachiasmatic nucleus, pituitary gland, and pineal gland.
Figure 4.6 The pineal and pituitary glands secrete a number of hormones during sleep.

Why Do We Sleep?

Given the central role that sleep plays in our lives and the number of adverse consequences that have been associated with sleep deprivation, one would think that we would have a clear understanding of why it is that we sleep. Unfortunately, this is not the case; however, several hypotheses have been proposed to explain the function of sleep.

Adaptive Function of Sleep

One popular hypothesis of sleep incorporates the perspective of evolutionary psychology. Evolutionary psychology is a discipline that studies how universal patterns of behavior and cognitive processes have evolved over time as a result of natural selection. Variations and adaptations in cognition and behavior make individuals more or less successful in reproducing and passing their genes to their offspring. One hypothesis from this perspective might argue that sleep is essential to restore resources that are expended during the day. Just as bears hibernate in the winter when resources are scarce, perhaps people sleep at night to reduce their energy expenditures. While this is an intuitive explanation of sleep, there is little research to support it. In fact, it has been suggested that there is no reason to think that energetic demands could not be addressed with periods of rest and inactivity (Frank, 2006; Rial et al., 2007), and some research has actually found a negative correlation between energetic demands and the amount of time spent sleeping (Capellini et al., 2008).

Another evolutionary hypothesis of sleep holds that our sleep patterns evolved as an adaptive response to predatory risks, which increase in darkness. Thus, we sleep in safe areas to reduce the chance of harm. Again, this is an intuitive and appealing explanation for why we sleep. Perhaps our ancestors spent extended periods of time asleep to reduce attention to themselves from potential predators. Comparative research indicates, however, that the relationship between predatory risk and sleep is complex and equivocal. Some research suggests that species that face higher predatory risks sleep fewer hours than other species (Capellini et al., 2008), while other researchers suggest there is no relationship between the amount of time a given species spends in deep sleep and its predation risk (Lesku et al., 2006).

It is quite possible that sleep serves no single universally adaptive function, and different species have evolved different patterns of sleep in response to their unique evolutionary pressures. While we have discussed the negative outcomes associated with sleep deprivation, it should be pointed out that there are many benefits that are associated with adequate amounts of sleep. A few such benefits listed by the National Sleep Foundation (n.d.) include maintaining a healthy weight, lowering stress levels, improving mood, and increasing motor coordination, as well as a number of benefits related to cognition and memory formation.

Cognitive Function of Sleep

Another theory regarding why we sleep involves sleep’s importance for cognitive function and memory formation (Rattenborg et al., 2007). Indeed, we know sleep deprivation results in disruptions in cognition and memory deficits (Brown, 2012), leading to impairments in our abilities to maintain attention, make decisions, and recall long-term memories. Moreover, these impairments become more severe as the amount of sleep deprivation increases (Alhola & Polo-Kantola, 2007). Furthermore, slow-wave sleep after learning a new task can improve subsequent performance on that task (Huber et al., 2004) and seems essential for effective memory formation (Stickgold, 2005). Understanding the impact of sleep on cognitive function should help you understand that cramming all night for a test may not be effective and can even prove counterproductive.

Learning Objectives

By the end of this section, you will be able to:

  • Differentiate between REM and non-REM sleep
  • Describe the differences between the three stages of non-REM sleep
  • Understand the role that REM and non-REM sleep play in learning and memory

Sleep is not a uniform state of being. Instead, sleep is composed of several different stages that can be differentiated from one another by the patterns of brain wave activity that occur during each stage. These changes in brain wave activity can be visualized using EEG and are distinguished from one another by both the frequency and amplitude of brain waves (Figure 4.7). Sleep can be divided into two different general phases: REM sleep and non-REM (NREM) sleep. Rapid eye movement (REM) sleep is characterized by darting movements of the eyes under closed eyelids. Brain waves during REM sleep appear very similar to brain waves during wakefulness. In contrast, non-REM (NREM) sleep is subdivided into three stages distinguished from each other and from wakefulness by characteristic patterns of brain waves. The first three stages of sleep are NREM sleep, while the fourth and final stage of sleep is REM sleep. In this section, we will discuss each of these stages of sleep and their associated patterns of brain wave activity.

A photograph shows a person sleeping. Superimposed across the top of the picture is a line representing brainwave activity across the four stages of sleep. Above the line, from left to right, it reads stage 1, stage 2, stage 3, and stage 4. The wave amplitude is highest in late stage 2, and in the middle of stage 3 until stage 4. The wavelength is longer from late stage 2 through stage 3.
Figure 4.7 Brainwave activity changes dramatically across the different stages of sleep. (credit “sleeping”: modification of work by Ryan Vaarsi)

NREM Stages of Sleep

The first stage of NREM sleep is known as stage 1 sleep. Stage 1 sleep is a transitional phase that occurs between wakefulness and sleep, the period during which we drift off to sleep. During this time, there is a slowdown in both the rates of respiration and heartbeat. In addition, stage 1 sleep involves a marked decrease in both overall muscle tension and core body temperature.

In terms of brain wave activity, stage 1 sleep is associated with both alpha and theta waves. The early portion of stage 1 sleep produces alpha waves, which are relatively low frequency (8–13 Hz), high amplitude patterns of electrical activity (waves) that become synchronized (Figure 4.8). This pattern of brain wave activity resembles that of someone who is very relaxed, yet awake. As an individual continues through stage 1 sleep, there is an increase in theta wave activity. Theta waves are even lower frequency (4–7 Hz), higher amplitude brain waves than alpha waves. It is relatively easy to wake someone from stage 1 sleep; in fact, people often report that they have not been asleep if they are awoken during stage 1 sleep.

A graph has a y-axis labeled “EEG” and an x-axis labeled “time (seconds.) Plotted along the y-axis and moving upward are the stages of sleep. First is REM, followed by Stage 3 NREM Delta, Stage 2 NREM Theta (sleep spindles; K-complexes), Stage 1 NREM Alpha, and Awake. Charted on the x axis is Time in seconds from 2–20 in 2 second intervals. Each sleep stage has associated wavelengths of varying amplitude and frequency. Relative to the others, “awake” has a very close wavelength and a medium amplitude. Stage 1 is characterized by a generally uniform wavelength and a relatively low amplitude which doubles and quickly reverts to normal every 2 seconds. Stage 2 is comprised of a similar wavelength as stage 1. It introduces the K-complex from seconds 10 through 12 which is a short burst of doubled or tripled amplitude and decreased wavelength. Stage 3 has a more uniform wave with gradually increasing amplitude. Finally, REM sleep looks much like stage 2 without the K-complex.
Figure 4.8 Brainwave activity changes dramatically across the different stages of sleep.

As we move into stage 2 sleep, the body goes into a state of deep relaxation. Theta waves still dominate the activity of the brain, but they are interrupted by brief bursts of activity known as sleep spindles (Figure 4.9). A sleep spindle is a rapid burst of higher frequency brain waves that may be important for learning and memory (Fogel & Smith, 2011; Poe et al., 2010). In addition, the appearance of K-complexes is often associated with stage 2 sleep. A K-complex is a very high amplitude pattern of brain activity that may in some cases occur in response to environmental stimuli. Thus, K-complexes might serve as a bridge to higher levels of arousal in response to what is going on in our environments (Halász, 1993; Steriade & Amzica, 1998).

A graph has an x-axis labeled “time” and a y-axis labeled “voltage. A line illustrates brainwaves, with two areas labeled “sleep spindle” and “k-complex”. The area labeled “sleep spindle” has decreased wavelength and moderately increased amplitude, while the area labeled “k-complex” has significantly high amplitude and longer wavelength.
Figure 4.9 Stage 2 sleep is characterized by the appearance of both sleep spindles and K-complexes.

Stage 3 is often referred to as deep sleep or slow-wave sleep because this stage is characterized by low frequency (less than 3 Hz), high amplitude delta waves (Figure 4.10). During this time, an individual’s heart rate and respiration slow dramatically. It is much more difficult to awaken someone from sleep during stage 3 than during earlier stages. Interestingly, individuals who have increased levels of alpha brain wave activity (more often associated with wakefulness and transition into stage 1 sleep) during stage 3 often report that they do not feel refreshed upon waking, regardless of how long they slept (Stone et al., 2008).

Polysomnograph (a) shows the pattern of delta waves, which are low frequency and high amplitude. Delta waves are found mostly in stage 3 of sleep. Chart (b) shows brainwaves at various stages of sleep, with stage 3 highlighted.
Figure 4.10 (a) Delta waves, which are low frequency and high amplitude, characterize (b) slow-wave stage 3 sleep.

REM Sleep

As mentioned earlier, REM sleep is marked by rapid movements of the eyes. The brain waves associated with this stage of sleep are very similar to those observed when a person is awake, as shown in Figure 4.11, and this is the period of sleep in which dreaming occurs. It is also associated with paralysis of muscle systems in the body with the exception of those that make circulation and respiration possible. Therefore, no movement of voluntary muscles occurs during REM sleep in a normal individual; REM sleep is often referred to as paradoxical sleep because of this combination of high brain activity and lack of muscle tone. Like NREM sleep, REM has been implicated in various aspects of learning and memory (Wagner et al., 2001).

Chart (a) is a polysomnograph with the period of rapid eye movement (REM) highlighted. Chart (b) shows brainwaves at various stages of sleep, with the “awake” stage highlighted to show its similarity to the wave pattern of “REM” in chart (a).
Figure 4.11 (a) A period of rapid eye movement is marked by the short red line segment. The brain waves associated with REM sleep, outlined in the red box in (a), look very similar to those seen (b) during wakefulness.

If people are deprived of REM sleep and then allowed to sleep without disturbance, they will spend more time in REM sleep in what would appear to be an effort to recoup the lost time in REM. This is known as the REM rebound, and it suggests that REM sleep is also homeostatically regulated. Aside from the role that REM sleep may play in processes related to learning and memory, REM sleep may also be involved in emotional processing and regulation. In such instances, REM rebound may actually represent an adaptive response to stress in non-depressed individuals by suppressing the emotional salience of aversive events that occurred in wakefulness (Suchecki et al., 2012). Sleep deprivation, in general, is associated with a number of negative consequences (Brown, 2012).

The hypnogram below (Figure 4.12) shows a person’s passage through the stages of sleep.

This is a hypnogram showing the transitions of the sleep cycle during a typical eight hour period of sleep. During the first hour, the person goes through stages 1 and 2 and ends at 3. In the second hour, sleep oscillates in stage 3 before attaining a 30-minute period of REM sleep. The third hour follows the same pattern as the second, but ends with a brief awake period. The fourth hour follows a similar pattern as the third, with a slightly longer REM stage. In the fifth hour, stage 3 is no longer reached. The sleep stages are fluctuating from 2, to 1, to REM, to awake, and then they repeat with shortening intervals until the end of the eighth hour when the person awakens.
Figure 4.12 A hypnogram is a diagram of the stages of sleep as they occur during a period of sleep. This hypnogram illustrates how an individual moves through the various stages of sleep.

Dreams and their associated meanings vary across different cultures and periods of time. By the late 19th century, the Austrian neurologist Sigmund Freud had become convinced that dreams represented an opportunity to gain access to the unconscious. By analyzing dreams, Freud thought people could increase self-awareness and gain valuable insight to help them deal with the problems they faced in their lives. Freud made distinctions between the manifest content and the latent content of dreams. Manifest content is the actual content, or storyline, of a dream. Latent content, on the other hand, refers to the hidden meaning of a dream. For instance, if a woman dreams about being chased by a snake, Freud might have argued that this represents the woman’s fear of sexual intimacy, with the snake serving as a symbol of a man’s penis.

Freud was not the only theorist to focus on the content of dreams. The 20th-century Swiss psychiatrist Carl Jung believed that dreams allowed us to tap into the collective unconscious. The collective unconscious, as described by Jung, is a theoretical repository of information he believed to be shared by everyone. According to Jung, certain symbols in dreams reflected universal archetypes with meanings that are similar for all people regardless of culture or location.

The sleep and dreaming researcher Rosalind Cartwright, however, believes that dreams simply reflect life events that are important to the dreamer. Unlike Freud and Jung, Cartwright’s ideas about dreaming have found empirical support. For example, she and her colleagues published a study in which women going through divorce were asked several times over a five-month period to report the degree to which their former spouses were on their minds. These same women were awakened during REM sleep in order to provide a detailed account of their dream content. There was a significant positive correlation between the degree to which women thought about their former spouses during waking hours and the number of times their former spouses appeared as characters in their dreams (Cartwright et al., 2006). Recent research (Horikawa, Tamaki, Miyawaki, & Kamitani, 2013) has uncovered new techniques by which researchers may effectively detect and classify the visual images that occur during dreaming by using fMRI for neural measurement of brain activity patterns, opening the way for additional research in this area.

Alan Hobson, a neuroscientist, is credited with developing the activation-synthesis theory of dreaming. Early versions of this theory proposed that dreams were not the meaning-filled representations of angst proposed by Freud and others, but were rather the result of our brain attempting to make sense of (“synthesize”) the neural activity (“activation”) that was happening during REM sleep. Recent adaptations (e.g., Hobson, 2002) continue to update the theory based on accumulating evidence. For example, Hobson (2009) suggests that dreaming may represent a state of protoconsciousness. In other words, dreaming involves constructing a virtual reality in our heads that we might use to help us during wakefulness. Among a variety of neurobiological evidence, Hobson cites research on lucid dreams as an opportunity to better understand dreaming in general. Lucid dreams are dreams in which certain aspects of wakefulness are maintained during a dream state. In a lucid dream, a person becomes aware of the fact that they are dreaming, and as such, they can control the dream’s content (LaBerge, 1990).

Learning Objectives

By the end of this section, you will be able to:

  • Describe the symptoms and treatments of insomnia
  • Recognize the symptoms of several parasomnias
  • Describe the symptoms and treatments for sleep apnea
  • Recognize risk factors associated with sudden infant death syndrome (SIDS) and steps to prevent it
  • Describe the symptoms and treatments for narcolepsy

Many people experience disturbances in their sleep at some point in their lives. Depending on the population and sleep disorder being studied, between 30% and 50% of the population suffers from a sleep disorder at some point in their lives (Bixler et al., 1979; Hossain & Shapiro, 2002; Ohayon, 1997, 2002; Ohayon & Roth, 2002). This section will describe several sleep disorders as well as some of their treatment options.

Insomnia

Insomnia, a consistent difficulty in falling or staying asleep, is the most common of the sleep disorders. Individuals with insomnia often experience long delays between the times that they go to bed and actually fall asleep. In addition, these individuals may wake up several times during the night only to find that they have difficulty getting back to sleep. As mentioned earlier, one of the criteria for insomnia involves experiencing these symptoms for at least three nights a week for at least one month (Roth, 2007).

It is not uncommon for people suffering from insomnia to experience increased levels of anxiety about their inability to fall asleep. This becomes a self-perpetuating cycle because increased anxiety leads to increased arousal, and higher levels of arousal make the prospect of falling asleep even more unlikely. Chronic insomnia is almost always associated with feeling overtired and may be associated with symptoms of depression.

There may be many factors that contribute to insomnia, including age, drug use, exercise, mental status, and bedtime routines. Not surprisingly, insomnia treatment may take one of several different approaches. People who suffer from insomnia might limit their use of stimulant drugs (such as caffeine) or increase their amount of physical exercise during the day. Some people might turn to over-the-counter (OTC) or prescribed sleep medications to help them sleep, but this should be done sparingly because many sleep medications result in dependence and alter the nature of the sleep cycle, and they can increase insomnia over time. Those who continue to have insomnia, particularly if it affects their quality of life, should seek professional treatment.

Some forms of psychotherapy, such as cognitive-behavioral therapy, can help sufferers of insomnia. Cognitive-behavioral therapy is a type of psychotherapy that focuses on cognitive processes and problem behaviors. The treatment of insomnia likely would include stress management techniques and changes in problematic behaviors that could contribute to insomnia (e.g., spending more waking time in bed). Cognitive-behavioral therapy has been demonstrated to be quite effective in treating insomnia (Savard et al., 2005; Williams et al., 2013).

EVERYDAY CONNECTION: Solutions to Support Healthy Sleep

Has something like this ever happened to you? My college housemate got so stressed out during finals sophomore year that he drank almost a whole bottle of Nyquil to try to fall asleep. When he told me, I made him go see the college therapist.

Many college students struggle to get the recommended 7–9 hours of sleep each night. However, for some, it’s not because of all-night partying or late-night study sessions; it’s simply that they feel so overwhelmed and stressed that they cannot fall asleep or stay asleep. One or two nights of sleep difficulty is not unusual, but if you experience anything more than that, you should seek a doctor’s advice.

Here are some tips to maintain healthy sleep:

  • Stick to a sleep schedule, even on the weekends. Try going to bed and waking up at the same time every day to keep your biological clock in sync so your body gets in the habit of sleeping every night.
  • Avoid anything stimulating for an hour before bed. That includes exercise and bright light from devices.
  • Exercise daily.
  • Avoid naps.
  • Keep your bedroom temperature between 60 and 67 degrees Fahrenheit (about 16–19 degrees Celsius). People sleep better in cooler temperatures.
  • Avoid alcohol, cigarettes, caffeine, and heavy meals before bed. It may feel like alcohol helps you sleep, but it actually disrupts REM sleep and leads to frequent awakenings. Heavy meals may make you sleepy, but they can also lead to frequent awakenings due to gastric distress.
  • If you cannot fall asleep, leave your bed and do something else until you feel tired again. Train your body to associate the bed with sleeping rather than other activities like studying, eating, or watching television shows.

Parasomnias

A parasomnia is one of a group of sleep disorders in which unwanted, disruptive motor activity and/or experiences during sleep play a role. Parasomnias can occur in either REM or NREM phases of sleep. Sleepwalking, restless leg syndrome, and night terrors are all examples of parasomnias (Mahowald & Schenck, 2000).

Sleepwalking

In sleepwalking, or somnambulism, the sleeper engages in relatively complex behaviors ranging from wandering about to driving an automobile. During periods of sleepwalking, sleepers often have their eyes open, but they are not responsive to attempts to communicate with them. Sleepwalking most often occurs during slow-wave sleep, but it can occur at any time during a sleep period in some affected individuals (Mahowald & Schenck, 2000).

Historically, somnambulism has been treated with a variety of pharmacotherapies ranging from benzodiazepines to antidepressants. However, the success rate of such treatments is questionable. Guilleminault et al. (2005) found that sleepwalking was not alleviated with the use of benzodiazepines. However, all of their somnambulistic patients who also suffered from sleep-related breathing problems showed a marked decrease in sleepwalking when their breathing problems were effectively treated.

DIG DEEPER: A Sleepwalking Defense?

On January 16, 1997, Scott Falater sat down to dinner with his wife and children and told them about difficulties he was experiencing on a project at work. After dinner, he prepared some materials to use in leading a church youth group the following morning, and then he attempted to repair the family’s swimming pool pump before retiring to bed. The following morning, he awoke to barking dogs and unfamiliar voices from downstairs. As he went to investigate what was going on, he was met by a group of police officers who arrested him for the murder of his wife (Cartwright, 2004; CNN, 1999).

Yarmila Falater’s body was found in the family’s pool with 44 stab wounds. A neighbor called the police after witnessing Falater standing over his wife’s body before dragging her into the pool. Upon a search of the premises, police found blood-stained clothes and a bloody knife in the trunk of Falater’s car, and he had blood stains on his neck.

Remarkably, Falater insisted that he had no recollection of hurting his wife in any way. His children and his wife’s parents all agreed that Falater had an excellent relationship with his wife and they couldn’t think of a reason that would provide any sort of motive to murder her (Cartwright, 2004).

Scott Falater had a history of regular episodes of sleepwalking as a child, and he had even behaved violently toward his sister once when she tried to prevent him from leaving their home in his pajamas during a sleepwalking episode. He suffered from no apparent anatomical brain anomalies or psychological disorders. It appeared that Scott Falater had killed his wife in his sleep, or at least, that is the defense he used when he was tried for his wife’s murder (Cartwright, 2004; CNN, 1999). In Falater’s case, a jury found him guilty of first degree murder in June of 1999 (CNN, 1999); however, there are other murder cases where the sleepwalking defense has been used successfully. As scary as it sounds, many sleep researchers believe that homicidal sleepwalking is possible in individuals suffering from the types of sleep disorders described below (Broughton et al., 1994; Cartwright, 2004; Mahowald et al., 2005; Pressman, 2007).


REM Sleep Behavior Disorder (RBD)

REM sleep behavior disorder (RBD) occurs when the muscle paralysis associated with the REM sleep phase does not occur. Individuals who suffer from RBD have high levels of physical activity during REM sleep, especially during disturbing dreams. These behaviors vary widely, but they can include kicking, punching, scratching, yelling, and behaving like an animal that has been frightened or attacked. People who suffer from this disorder can injure themselves or their sleeping partners when engaging in these behaviors. Furthermore, these types of behaviors ultimately disrupt sleep, although affected individuals have no memories that these behaviors have occurred (Arnulf, 2012).

This disorder is associated with a number of neurodegenerative diseases such as Parkinson’s disease. In fact, this relationship is so robust that some view the presence of RBD as a potential aid in the diagnosis and treatment of a number of neurodegenerative diseases (Ferini-Strambi, 2011). Clonazepam, an anti-anxiety medication with sedative properties, is most often used to treat RBD. It is administered alone or in conjunction with doses of melatonin (the hormone secreted by the pineal gland). As part of treatment, the sleeping environment is often modified to make it a safer place for those suffering from RBD (Zangini et al., 2011).

Other Parasomnias

A person with restless leg syndrome has uncomfortable sensations in the legs during periods of inactivity or when trying to fall asleep. This discomfort is relieved by deliberately moving the legs, which, not surprisingly, contributes to difficulty in falling or staying asleep. Restless leg syndrome is quite common and has been associated with a number of other medical diagnoses, such as chronic kidney disease and diabetes (Mahowald & Schenck, 2000). There are a variety of drugs that treat restless leg syndrome: benzodiazepines, opiates, and anticonvulsants (Restless Legs Syndrome Foundation, n.d.).

Night terrors result in a sense of panic in the sufferer and are often accompanied by screams and attempts to escape from the immediate environment (Mahowald & Schenck, 2000). Although individuals suffering from night terrors appear to be awake, they generally have no memories of the events that occurred, and attempts to console them are ineffective. Typically, individuals suffering from night terrors will fall back asleep again within a short time. Night terrors apparently occur during the NREM phase of sleep (Provini et al., 2011). Generally, treatment for night terrors is unnecessary unless there is some underlying medical or psychological condition that is contributing to the night terrors (Mayo Clinic, n.d.).

Sleep Apnea

Sleep apnea is defined by episodes during which a sleeper’s breathing stops. These episodes can last 10–20 seconds or longer and often are associated with brief periods of arousal. While individuals suffering from sleep apnea may not be aware of these repeated disruptions in sleep, they do experience increased levels of fatigue. Many individuals diagnosed with sleep apnea first seek treatment because their sleeping partners indicate that they snore loudly and/or stop breathing for extended periods of time while sleeping (Henry & Rosenthal, 2013). Sleep apnea is much more common in overweight people and is often associated with loud snoring. Surprisingly, sleep apnea may exacerbate cardiovascular disease (Sánchez-de-la-Torre et al., 2012). While sleep apnea is less common in thin people, anyone who snores loudly or gasps for air while sleeping, regardless of their weight, should be checked for sleep apnea.

While people are often unaware of their sleep apnea, they are keenly aware of some of the adverse consequences of insufficient sleep. Consider a patient who believed that as a result of his sleep apnea he “had three car accidents in six weeks. They were ALL my fault. Two of them I didn’t even know I was involved in until afterwards” (Henry & Rosenthal, 2013, p. 52). It is not uncommon for people suffering from undiagnosed or untreated sleep apnea to fear that their careers will be affected by the lack of sleep, illustrated by this statement from another patient, “I’m in a job where there’s a premium on being mentally alert. I was really sleepy… and having trouble concentrating…. It was getting to the point where it was kind of scary” (Henry & Rosenthal, 2013, p. 52).

There are two types of sleep apnea: obstructive sleep apnea and central sleep apnea. Obstructive sleep apnea occurs when an individual’s airway becomes blocked during sleep, and air is prevented from entering the lungs. In central sleep apnea, disruption in the signals sent from the brain that regulate breathing causes periods of interrupted breathing (White, 2005).

One of the most common treatments for sleep apnea involves the use of a special device during sleep. A continuous positive airway pressure (CPAP) device includes a mask that fits over the sleeper’s nose and mouth, connected to a pump that pushes air into the person’s airways, forcing them to remain open, as shown in Figure 4.13. Some newer CPAP masks are smaller and cover only the nose. This treatment option has proven to be effective for people suffering from mild to severe cases of sleep apnea (McDaid et al., 2009). However, alternative treatment options are being explored because consistent compliance by users of CPAP devices is a problem. Recently, a new EPAP (expiratory positive air pressure) device has shown promise in double-blind trials as one such alternative (Berry et al., 2011).

Photograph A shows a CPAP device. Photograph B shows a clear full face CPAP mask attached to a mannequin's head with straps.
Figure 4.13 (a) A typical CPAP device used in the treatment of sleep apnea is (b) affixed to the head with straps, and a mask that covers the nose and mouth.

SIDS

In sudden infant death syndrome (SIDS), an infant stops breathing during sleep and dies. Infants younger than 12 months appear to be at the highest risk for SIDS, and boys have a greater risk than girls. A number of risk factors have been associated with SIDS, including premature birth, smoking within the home, and hyperthermia. There may also be differences in both brain structure and function in infants who die from SIDS (Berkowitz, 2012; Mage & Donner, 2006; Thach, 2005).

The substantial amount of research on SIDS has led to a number of recommendations to parents to protect their children (Figure 4.14). For one, research suggests that infants should be placed on their backs when put down to sleep, and their cribs should not contain any items that pose suffocation threats, such as blankets, pillows, or padded crib bumpers (cushions that cover the bars of a crib). Infants should not have caps placed on their heads when put down to sleep, in order to prevent overheating, and people in the child’s household should abstain from smoking in the home. Recommendations like these have helped to decrease the number of infant deaths from SIDS in recent years (Mitchell, 2009; Task Force on Sudden Infant Death Syndrome, 2011).

The “Safe to Sleep” campaign logo shows a baby sleeping and the words “safe to sleep.”
Figure 4.14 The Safe to Sleep campaign educates the public about how to minimize risk factors associated with SIDS. This campaign is sponsored in part by the National Institute of Child Health and Human Development.

Narcolepsy

Unlike the other sleep disorders described in this section, a person with narcolepsy cannot resist falling asleep at inopportune times. These sleep episodes are often associated with cataplexy, which is a lack of muscle tone or muscle weakness, and in some cases involves complete paralysis of the voluntary muscles. This is similar to the kind of paralysis experienced by healthy individuals during REM sleep (Burgess & Scammell, 2012; Hishikawa & Shimizu, 1995; Luppi et al., 2011). Narcoleptic episodes take on other features of REM sleep. For example, around one third of individuals diagnosed with narcolepsy experience vivid, dream-like hallucinations during narcoleptic attacks (Chokroverty, 2010).

Surprisingly, narcoleptic episodes are often triggered by states of heightened arousal or stress. The typical episode can last from a minute or two to half an hour. Once awakened from a narcoleptic attack, people report that they feel refreshed (Chokroverty, 2010). Obviously, regular narcoleptic episodes could interfere with the ability to perform one’s job or complete schoolwork, and in some situations, narcolepsy can result in significant harm and injury (e.g., driving a car or operating machinery or other potentially dangerous equipment).

Generally, narcolepsy is treated using psychomotor stimulant drugs, such as amphetamines (Mignot, 2012). These drugs promote increased levels of neural activity. Narcolepsy is associated with reduced levels of the signaling molecule hypocretin in some areas of the brain (De la Herrán-Arita & Drucker-Colín, 2012; Han, 2012), and the traditional stimulant drugs do not have direct effects on this system. Therefore, it is quite likely that new medications that are developed to treat narcolepsy will be designed to target the hypocretin system.

There is a tremendous amount of variability among sufferers, both in terms of how symptoms of narcolepsy manifest and the effectiveness of currently available treatment options. This is illustrated by McCarty’s (2010) case study of a 50-year-old woman who sought help for the excessive sleepiness during normal waking hours that she had experienced for several years. She indicated that she had fallen asleep at inappropriate or dangerous times, including while eating, while socializing with friends, and while driving her car. During periods of emotional arousal, the woman complained that she felt some weakness on the right side of her body. Although she did not experience any dream-like hallucinations, she was diagnosed with narcolepsy as a result of sleep testing. In her case, the fact that her cataplexy was confined to the right side of her body was quite unusual. Early attempts to treat her condition with a stimulant drug alone were unsuccessful. However, when a stimulant drug was used in conjunction with a popular antidepressant, her condition improved dramatically.

Learning Objectives

By the end of this section, you will be able to:

  • Describe the diagnostic criteria for substance use disorders
  • Identify the neurotransmitter systems impacted by various categories of drugs
  • Describe how different categories of drugs affect behavior and experience

While we all experience altered states of consciousness in the form of sleep on a regular basis, some people use drugs and other substances that result in altered states of consciousness as well. This section will present information relating to the use of various psychoactive drugs and problems associated with such use. This will be followed by brief descriptions of the effects of some of the more well-known drugs commonly used today.

Substance Use Disorders

The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) is used by clinicians to diagnose individuals suffering from various psychological disorders. Drug use disorders are addictive disorders, and the criteria for specific substance (drug) use disorders are described in DSM-5. A person who has a substance use disorder often uses more of the substance than they originally intended to and continues to use that substance despite experiencing significant adverse consequences. In individuals diagnosed with a substance use disorder, there is a compulsive pattern of drug use that is often associated with both physical and psychological dependence.

Physical dependence involves changes in normal bodily functions—the user will experience withdrawal from the drug upon cessation of use. In contrast, a person who has psychological dependence has an emotional, rather than physical, need for the drug and may use the drug to relieve psychological distress. Tolerance is linked to physiological dependence, and it occurs when a person requires more and more drug to achieve effects previously experienced at lower doses. Tolerance can cause the user to increase the amount of drug used to a dangerous level—even to the point of overdose and death.

Drug withdrawal includes a variety of negative symptoms experienced when drug use is discontinued. These symptoms usually are the opposite of the drug’s effects. For example, withdrawal from sedative drugs often produces unpleasant arousal and agitation. In addition to withdrawal, many individuals who are diagnosed with substance use disorders will also develop tolerance to these substances. Psychological dependence, or drug craving, is a recent addition to the diagnostic criteria for substance use disorder in DSM-5. This is an important factor because we can develop tolerance and experience withdrawal from any number of drugs that we do not abuse. In other words, physical dependence in and of itself is of limited utility in determining whether or not someone has a substance use disorder.

Drug Categories

The effects of all psychoactive drugs occur through their interactions with our endogenous neurotransmitter systems. Many of these drugs, and their relationships, are shown in Table 4.2. As you have learned, drugs can act as agonists or antagonists of a given neurotransmitter system. An agonist facilitates the activity of a neurotransmitter system; an antagonist impedes it.

Drugs and Their Effects
Class of Drug | Examples | Effects on the Body | Effects When Used | Psychologically Addicting?
Stimulants | Cocaine, amphetamines (including some ADHD medications such as Adderall), methamphetamines, MDMA (“Ecstasy” or “Molly”) | Increased heart rate, blood pressure, body temperature | Increased alertness, mild euphoria, decreased appetite in low doses. High doses increase agitation and paranoia and can cause hallucinations. Some can cause heightened sensitivity to physical stimuli. High doses of MDMA can cause brain toxicity and death. | Yes
Sedative-Hypnotics (“Depressants”) | Alcohol, barbiturates (e.g., secobarbital, pentobarbital), benzodiazepines (e.g., Xanax) | Decreased heart rate, blood pressure | Low doses increase relaxation, decrease inhibitions. High doses can induce sleep, cause motor disturbance, memory loss, decreased respiratory function, and death. | Yes
Opiates | Opium, heroin, fentanyl, morphine, oxycodone, Vicodin, methadone, and other prescription pain relievers | Decreased pain, pupil dilation, decreased gut motility, decreased respiratory function | Pain relief, euphoria, sleepiness. High doses can cause death due to respiratory depression. | Yes
Hallucinogens | Marijuana, LSD, peyote, mescaline, DMT, dissociative anesthetics including ketamine and PCP | Increased heart rate and blood pressure that may dissipate over time | Mild to intense perceptual changes with high variability in effects based on strain, method of ingestion, and individual differences | Yes
Table 4.2

Alcohol and Other Depressants

Ethanol, which we commonly refer to as alcohol, is in a class of psychoactive drugs known as depressants (Figure 4.15). A depressant is a drug that tends to suppress central nervous system activity. Other depressants include barbiturates and benzodiazepines. These drugs share the ability to serve as agonists of the gamma-aminobutyric acid (GABA) neurotransmitter system. Because GABA has a quieting effect on the brain, GABA agonists also have a quieting effect; these types of drugs are often prescribed to treat both anxiety and insomnia.

An illustration of a GABA-gated chloride channel in a cell membrane shows receptor sites for barbiturate, benzodiazepine, GABA, alcohol, and neurosteroids, as well as three negatively-charged chloride ions passing through the channel. Each drug type has a specific shape, such as triangular, rectangular or square, which corresponds to a similarly shaped receptor spot.
Figure 4.15 The GABA-gated chloride (Cl−) channel is embedded in the cell membrane of certain neurons. The channel has multiple receptor sites where alcohol, barbiturates, and benzodiazepines bind to exert their effects. The binding of these molecules opens the chloride channel, allowing negatively charged chloride ions (Cl−) into the neuron’s cell body. Changing its charge in a negative direction pushes the neuron away from firing; thus, activating a GABA neuron has a quieting effect on the brain.

Acute alcohol administration results in a variety of changes to consciousness. At rather low doses, alcohol use is associated with feelings of euphoria. As the dose increases, people report feeling sedated. Generally, alcohol is associated with slowed reaction time, reduced visual acuity, lowered levels of alertness, and reduced behavioral control. With excessive alcohol use, a person might experience a complete loss of consciousness and/or difficulty remembering events that occurred during a period of intoxication (McKim & Hancock, 2013). In addition, if a pregnant woman consumes alcohol, her infant may be born with a cluster of birth defects and symptoms collectively called fetal alcohol spectrum disorder (FASD) or fetal alcohol syndrome (FAS).

With repeated use of many central nervous system depressants, such as alcohol, a person becomes physically dependent upon the substance and will exhibit signs of both tolerance and withdrawal. Psychological dependence on these drugs is also possible. Therefore, the abuse potential of central nervous system depressants is relatively high.

Drug withdrawal is usually an aversive experience, and it can be a life-threatening process in individuals who have a long history of very high doses of alcohol and/or barbiturates. This is of such concern that people who are trying to overcome addiction to these substances should only do so under medical supervision.

Stimulants

Stimulants are drugs that tend to increase overall levels of neural activity. Many of these drugs act as agonists of the dopamine neurotransmitter system. Dopamine activity is often associated with reward and craving; therefore, drugs that affect dopamine neurotransmission often have abuse liability. Drugs in this category include cocaine, amphetamines (including methamphetamine), cathinones (i.e., bath salts), MDMA (ecstasy), nicotine, and caffeine.

Cocaine can be taken in multiple ways. While many users snort cocaine, intravenous injection and inhalation (smoking) are also common. The freebase version of cocaine, known as crack, is a potent, smokable version of the drug. Like many other stimulants, cocaine agonizes the dopamine neurotransmitter system by blocking the reuptake of dopamine in the neuronal synapse.

DIG DEEPER: Methamphetamine

Methamphetamine in its smokable form, often called “crystal meth” due to its resemblance to rock crystal formations, is highly addictive. The smokable form reaches the brain very quickly to produce an intense euphoria that dissipates almost as fast as it arrives, prompting users to continue taking the drug. Users often consume the drug every few hours across days-long binges called “runs,” in which the user forgoes food and sleep. In the wake of the opiate epidemic, many drug cartels in Mexico are shifting from producing heroin to producing highly potent but inexpensive forms of methamphetamine. The low cost coupled with lower risk of overdose than with opiate drugs is making crystal meth a popular choice among drug users today (NIDA, 2019). Using crystal meth poses a number of serious long-term health issues, including dental problems (often called “meth mouth”), skin abrasions caused by excessive scratching, memory loss, sleep problems, violent behavior, paranoia, and hallucinations. Methamphetamine addiction produces an intense craving that is difficult to treat.

Amphetamines have a mechanism of action quite similar to cocaine in that they block the reuptake of dopamine in addition to stimulating its release (Figure 4.16). While amphetamines are often abused, they are also commonly prescribed to children diagnosed with attention deficit hyperactivity disorder (ADHD). It may seem counterintuitive that stimulant medications are prescribed to treat a disorder that involves hyperactivity, but the therapeutic effect comes from increases in neurotransmitter activity within certain areas of the brain associated with impulse control. These brain areas include the prefrontal cortex and basal ganglia.

An illustration of a presynaptic cell and a postsynaptic cell shows these cells’ interactions with cocaine and dopamine molecules. The presynaptic cell contains two cylinder-shaped channels, one on each side near where it faces the postsynaptic cell. The postsynaptic cell contains several receptors, side-by-side across the area that faces the presynaptic cell. In the space between the two cells, there are both cocaine and dopamine molecules. One of the cocaine molecules attaches to one of the presynaptic cell’s channels. This cocaine molecule is labeled “bound cocaine.” An X-shape is shown over the top of the bound cocaine and the channel to indicate that the cocaine does not enter the presynaptic cell. A dopamine molecule is shown inside of the presynaptic cell’s other channel. Arrows connect this dopamine molecule to several others inside of the presynaptic cell. More arrows connect to more dopamine molecules, tracing their paths from the channel into the presynaptic cell, and out into the space between the presynaptic cell and the postsynaptic cell. Arrows extend from two of the dopamine molecules in this in-between space to the postsynaptic cell’s receptors. Only the dopamine molecules are shown binding to the postsynaptic cell’s receptors.
Figure 4.16 As one of their mechanisms of action, cocaine and amphetamines block the reuptake of dopamine from the synapse into the presynaptic cell.

In recent years, methamphetamine (meth) use has become increasingly widespread. Methamphetamine is a type of amphetamine that can be made from ingredients that are readily available (e.g., medications containing pseudoephedrine, a compound found in many over-the-counter cold and flu remedies). Despite recent changes in laws designed to make obtaining pseudoephedrine more difficult, methamphetamine continues to be an easily accessible and relatively inexpensive drug option (Shukla, Crump, & Chrisco, 2012).

Stimulant users seek a euphoric high, feelings of intense elation and pleasure, especially in those users who take the drug via intravenous injection or smoking. MDMA (3,4-methylenedioxy-methamphetamine, commonly known as “ecstasy” or “Molly”) is a mild stimulant with perception-altering effects. It is typically consumed in pill form. Users experience increased energy, feelings of pleasure, and emotional warmth. Repeated use of these stimulants can have significant adverse consequences. Users can experience physical symptoms that include nausea, elevated blood pressure, and increased heart rate. In addition, these drugs can cause feelings of anxiety, hallucinations, and paranoia (Fiorentini et al., 2011). Normal brain functioning is altered after repeated use of these drugs. For example, repeated use can lead to overall depletion among the monoamine neurotransmitters (dopamine, norepinephrine, and serotonin). Depletion of certain neurotransmitters can lead to dysphoric mood, cognitive problems, and other adverse effects. This can lead people to use stimulants such as cocaine and amphetamines compulsively, in part to try to reestablish their physical and psychological pre-use baseline (Jayanthi & Ramamoorthy, 2005; Rothman et al., 2007).

Caffeine is another stimulant drug. While it is probably the most commonly used drug in the world, the potency of this particular drug pales in comparison to the other stimulant drugs described in this section. Generally, people use caffeine to maintain increased levels of alertness and arousal. Caffeine is found in many common medicines (such as weight loss drugs), beverages, foods, and even cosmetics (Herman & Herman, 2013). While caffeine may have some indirect effects on dopamine neurotransmission, its primary mechanism of action involves antagonizing adenosine activity (Porkka-Heiskanen, 2011). Adenosine is a neurotransmitter that promotes sleep. Because caffeine is an adenosine antagonist, it blocks adenosine receptors, decreasing sleepiness and promoting wakefulness.

While caffeine is generally considered a relatively safe drug, high blood levels of caffeine can result in insomnia, agitation, muscle twitching, nausea, irregular heartbeat, and even death (Reissig et al., 2009; Wolt et al., 2012). In 2012, Kromann and Nielson reported on a case study of a 40-year-old woman who suffered significant ill effects from her use of caffeine. The woman had used caffeine in the past to boost her mood and to provide energy, but over the course of several years, she increased her caffeine consumption to the point that she was consuming three liters of soda each day. Although she had been taking a prescription antidepressant, her symptoms of depression continued to worsen and she began to suffer physically, displaying significant warning signs of cardiovascular disease and diabetes. Upon admission to an outpatient clinic for treatment of mood disorders, she met all of the diagnostic criteria for substance dependence and was advised to dramatically limit her caffeine intake. Once she was able to limit her use to less than 12 ounces of soda a day, both her mental and physical health gradually improved. Despite the prevalence of caffeine use and the large number of people who confess to suffering from caffeine addiction, this was the first published description of soda dependence to appear in the scientific literature.
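To get a rough sense of the dose involved in this case, consider an illustrative estimate; the caffeine concentration used here is an assumption based on typical colas (roughly 10 mg of caffeine per 100 mL), not a figure reported in the case study:

\[3\ \text{L/day} \times \frac{10\ \text{mg}}{100\ \text{mL}} \approx 300\ \text{mg of caffeine per day}\]

Under that assumption, her daily intake was roughly equivalent to the caffeine in three cups of brewed coffee.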

Nicotine is highly addictive, and the use of tobacco products is associated with increased risks of heart disease, stroke, and a variety of cancers. Nicotine exerts its effects through its interaction with acetylcholine receptors. Acetylcholine functions as a neurotransmitter in motor neurons. In the central nervous system, it plays a role in arousal and reward mechanisms. Nicotine is most commonly used in the form of tobacco products like cigarettes or chewing tobacco; therefore, there is a tremendous interest in developing effective smoking cessation techniques. To date, people have used a variety of nicotine replacement therapies in addition to various psychotherapeutic options in an attempt to discontinue their use of tobacco products. In general, smoking cessation programs may be effective in the short term, but it is unclear whether these effects persist (Cropley et al., 2008; Levitt et al., 2007; Smedslund et al., 2004). Vaping as a means to deliver nicotine is becoming increasingly popular, especially among teens and young adults. Vaping uses battery-powered devices, sometimes called e-cigarettes, that deliver liquid nicotine and flavorings as a vapor. Originally reported as a safe alternative to the known cancer-causing agents found in cigarettes, vaping is now known to be very dangerous and has led to serious lung disease and death in users.

Opioids

An opioid is one of a category of drugs that includes heroin, morphine, methadone, and codeine. Opioids have analgesic properties; that is, they decrease pain. Humans have an endogenous opioid neurotransmitter system—the body makes small quantities of opioid compounds that bind to opioid receptors, reducing pain and producing euphoria. Thus, opioid drugs, which mimic this endogenous painkilling mechanism, have an extremely high potential for abuse. Natural opioids, called opiates, are derivatives of opium, which is a naturally occurring compound found in the poppy plant. There are now several synthetic versions of opiate drugs (correctly called opioids) that have very potent painkilling effects, and they are often abused. For example, the National Institute on Drug Abuse has sponsored research suggesting that the misuse and abuse of the prescription painkillers hydrocodone and oxycodone are significant public health concerns (Maxwell, 2006). In 2013, the U.S. Food and Drug Administration recommended tighter controls on their medical use.

Historically, heroin has been a major opioid drug of abuse (Figure 4.17). Heroin can be snorted, smoked, or injected intravenously. Heroin produces intense feelings of euphoria and pleasure, which are amplified when the heroin is injected intravenously. Following the initial “rush,” users experience 4–6 hours of “going on the nod,” alternating between conscious and semiconscious states. Heroin users often shoot the drug directly into their veins. Some people who have injected many times into their arms will show “track marks,” while others inject into areas between their fingers or between their toes so as not to show obvious track marks. Like everyone who injects drugs intravenously, heroin users have an increased risk of contracting both tuberculosis and HIV.

Photograph A shows various paraphernalia spread out on a black surface. The items include a tourniquet, three syringes of varying widths, three cotton-balls, a tiny cooking vessel, a condom, a capsule of sterile water, and an alcohol swab. Photograph B shows a hand holding a spoon containing heroin tar above a small candle.
Figure 4.17 (a) Common paraphernalia for heroin preparation and use are shown here in a needle exchange kit. (b) Heroin is cooked on a spoon over a candle. (credit a: modification of work by Todd Huffman)

Aside from their utility as analgesic drugs, opioid-like compounds are often found in cough suppressants and in anti-nausea and anti-diarrhea medications. Given that withdrawal from a drug often involves an experience opposite to the effect of the drug, it should be no surprise that opioid withdrawal resembles a severe case of the flu. While opioid withdrawal can be extremely unpleasant, it is not life-threatening (Julien, 2005). Still, people experiencing opioid withdrawal may be given methadone to make withdrawal from the drug less difficult. Methadone is a synthetic opioid that is less euphorigenic than heroin and similar drugs. Methadone clinics help people who previously struggled with opioid addiction manage withdrawal symptoms through the use of methadone. Other drugs, including the opioid buprenorphine, have also been used to alleviate symptoms of opiate withdrawal.

Codeine is an opioid with relatively low potency. It is often prescribed for minor pain, and it is available over the counter in some other countries. Like all opioids, codeine does have abuse potential. In fact, abuse of prescription opioid medications is becoming a major concern worldwide (Aquina et al., 2009; Casati et al., 2012).

EVERYDAY CONNECTION: The Opioid Crisis

Few people in the United States remain untouched by the recent opioid epidemic. It seems like everyone knows a friend, family member, or neighbor who has died of an overdose. Opioid addiction reached crisis levels in the United States such that by 2019, an average of 130 people died each day of an opioid overdose (NIDA, 2019).

The crisis actually began in the 1990s, when pharmaceutical companies began mass-marketing pain-relieving opioid drugs like OxyContin with the promise (now known to be false) that they were non-addictive. Increased prescriptions led to greater rates of misuse, along with greater incidence of addiction, even among patients who used these drugs as prescribed. Physiologically, the body can become addicted to opiate drugs in less than a week, including when taken as prescribed. Withdrawal from opioids includes pain, which patients often misinterpret as pain caused by the problem that led to the original prescription, and which motivates patients to continue using the drugs.

The FDA’s 2013 recommendation for tighter controls on opiate prescriptions left many patients addicted to prescription drugs like OxyContin unable to obtain legitimate prescriptions. This created a black market for the drug, where prices soared to $80 or more for a single pill. To prevent withdrawal, many people turned to cheaper heroin, which could be bought for $5 a dose or less. To keep heroin affordable, many dealers began adding more potent synthetic opioids, including fentanyl and carfentanil, to increase the effects of heroin. These synthetic drugs are so potent that even small doses can cause overdose and death.

Large-scale public health campaigns by the National Institutes of Health and the National Institute on Drug Abuse have contributed to recent declines in opioid overdose deaths. These initiatives include increasing access to treatment and recovery services, increasing access to overdose-reversal drugs like naloxone, and implementing better public health monitoring systems (NIDA, 2019).

Hallucinogens

A hallucinogen is one of a class of drugs that results in profound alterations in sensory and perceptual experiences (Figure 4.18). In some cases, users experience vivid visual hallucinations. It is also common for these types of drugs to cause hallucinations of body sensations (e.g., feeling as if you are a giant) and a skewed perception of the passage of time.

An illustration shows a colorful spiral pattern.
Figure 4.18 Psychedelic images like this are often associated with hallucinogenic compounds. (credit: modification of work by “new 1lluminati”/Flickr)

As a group, hallucinogens are incredibly varied in terms of the neurotransmitter systems they affect. Mescaline and LSD are serotonin agonists, and PCP (angel dust) and ketamine (an animal anesthetic) act as antagonists of the NMDA glutamate receptor. In general, these drugs are not thought to possess the same sort of abuse potential as other classes of drugs discussed in this section.

A photograph shows a window with a neon sign. The sign includes the word “medical” above the shape of a marijuana leaf.
Figure 4.19 Medical marijuana shops are becoming more and more common in the United States. (credit: Laurie Avocado)

While medical marijuana laws have been passed on a state-by-state basis, federal laws still classify marijuana as an illicit substance, which makes conducting research on its potentially beneficial medicinal uses problematic. Because of this lack of large-scale, controlled research, there is quite a bit of controversy within the scientific community as to the extent to which marijuana might have medicinal benefits (Bostwick, 2012). As a result, many scientists have urged the federal government to relax current marijuana laws and classifications in order to facilitate more widespread study of the drug’s effects (Aggarwal et al., 2009; Bostwick, 2012; Kogan & Mechoulam, 2007).

Until recently, the United States Department of Justice routinely arrested people involved with, and seized, marijuana used in medicinal settings. In the latter part of 2013, however, the United States Department of Justice issued statements indicating that it would no longer challenge state medical marijuana laws. This shift in policy may be a response to the scientific community’s recommendations and/or reflect changing public opinion regarding marijuana.

Learning Objectives

By the end of this section, you will be able to:

  • Define hypnosis and meditation
  • Understand the similarities and differences between hypnosis and meditation

Our states of consciousness change as we move from wakefulness to sleep. We also alter our consciousness through the use of various psychoactive drugs. This final section will consider hypnotic and meditative states as additional examples of altered states of consciousness experienced by some individuals.

Hypnosis

Hypnosis is a state of extreme self-focus and attention in which minimal attention is given to external stimuli. In the therapeutic setting, a clinician may use relaxation and suggestion in an attempt to alter the thoughts and perceptions of a patient. Hypnosis has also been used to draw out information believed to be buried deeply in someone’s memory. For individuals who are especially open to the power of suggestion, hypnosis can prove to be a very effective technique, and brain imaging studies have demonstrated that hypnotic states are associated with global changes in brain functioning (Del Casale et al., 2012; Guldenmund et al., 2012).

Historically, hypnosis has been viewed with some suspicion because of its portrayal in popular media and entertainment (Figure 4.20). Therefore, it is important to make a distinction between hypnosis as an empirically based therapeutic approach versus as a form of entertainment. Contrary to popular belief, individuals undergoing hypnosis usually have clear memories of the hypnotic experience and are in control of their own behaviors. While hypnosis may be useful in enhancing memory or a skill, such enhancements are very modest in nature (Raz, 2011).

A poster titled “Barnum the Hypnotist” shows illustrations of a person performing hypnotism.
Figure 4.20 Popular portrayals of hypnosis have led to some widely-held misconceptions.

How exactly does a hypnotist bring a participant to a state of hypnosis? While there are variations, there are four parts that appear consistent in bringing people into the state of suggestibility associated with hypnosis (National Research Council, 1994). These components include:

  • The participant is guided to focus on one thing, such as the hypnotist’s words or a ticking watch.
  • The participant is made comfortable and is directed to be relaxed and sleepy.
  • The participant is told to be open to the process of hypnosis, trust the hypnotist and let go.
  • The participant is encouraged to use their imagination.

These steps are conducive to being open to the heightened suggestibility of hypnosis.

People vary in terms of their ability to be hypnotized, but a review of available research suggests that most people are at least moderately hypnotizable (Kihlstrom, 2013). Hypnosis in conjunction with other techniques is used for a variety of therapeutic purposes and has been shown to be at least somewhat effective for pain management, treatment of depression and anxiety, smoking cessation, and weight loss (Alladin, 2012; Elkins et al., 2012; Golden, 2012; Montgomery et al., 2012).

How does hypnosis work? Two theories attempt to answer this question: One theory views hypnosis as dissociation and the other theory views it as the performance of a social role. According to the dissociation view, hypnosis is effectively a dissociated state of consciousness, much like our earlier example where you may drive to work, but you are only minimally aware of the process of driving because your attention is focused elsewhere. This theory is supported by Ernest Hilgard’s research into hypnosis and pain. In his experiments, Hilgard induced participants into a state of hypnosis and placed their arms in ice water. Participants were told they would not feel pain, but they could press a button if they did; while they reported not feeling pain, they did, in fact, press the button, suggesting a dissociation of consciousness while in the hypnotic state (Hilgard & Hilgard, 1994).

Taking a different approach to explain hypnosis, the social-cognitive theory of hypnosis sees people in hypnotic states as performing the social role of a hypnotized person. As you will learn when you study social roles, people’s behavior can be shaped by their expectations of how they should act in a given situation. Some view a hypnotized person’s behavior not as an altered or dissociated state of consciousness, but as their fulfillment of the social expectations for that role (Coe, 2009; Coe & Sarbin, 1966).

Meditation

Meditation is the act of focusing on a single target (such as the breath or a repeated sound) to increase awareness of the moment. While hypnosis is generally achieved through the interaction of a therapist and the person being treated, an individual can perform meditation alone. Often, however, people wishing to learn to meditate receive some training in techniques to achieve a meditative state.

Although there are a number of different techniques in use, the central feature of all meditation is clearing the mind in order to achieve a state of relaxed awareness and focus (Chen et al., 2013; Lang et al., 2012). Mindfulness meditation has recently become popular. In mindfulness meditation, the meditator’s attention is focused on some internal process or an external object (Zeidan et al., 2012).

LINK TO LEARNING: Watch this video that explains the Scientific Power of Meditation.

Meditative techniques have their roots in religious practices (Figure 4.21), but their use has grown in popularity among practitioners of alternative medicine. Research indicates that meditation may help reduce blood pressure, and the American Heart Association suggests that meditation might be used in conjunction with more traditional treatments as a way to manage hypertension, although there is not sufficient data for a recommendation to be made (Brook et al., 2013). Like hypnosis, meditation also shows promise in stress management, sleep quality (Caldwell et al., 2010), treatment of mood and anxiety disorders (Chen et al., 2013; Freeman et al., 2010; Vøllestad et al., 2012), and pain management (Reiner et al., 2013).

Photograph A shows a statue of Buddha with eyes closed and legs crisscrossed. Photograph B shows a person in a similar position.
Figure 4.21 (a) This is a statue of a meditating Buddha, representing one of the many religious traditions in which meditation plays a part. (b) People practicing meditation may experience an alternate state of consciousness. (credit a: modification of work by Jim Epler; credit b: modification of work by Caleb Roenigk)

6

Learning

A photograph shows a baby turtle moving across sand toward the ocean. A photograph shows a young child standing on a surfboard in a small wave.
Figure 6.1 Loggerhead sea turtle hatchlings are born knowing how to find the ocean and how to swim. Unlike the sea turtle, humans must learn how to swim (and surf).

The summer sun shines brightly on a deserted stretch of beach. Suddenly, a tiny grey head emerges from the sand, then another and another. Soon the beach is teeming with loggerhead sea turtle hatchlings (Figure 6.1). Although only minutes old, the hatchlings know exactly what to do. Their flippers are not very efficient for moving across the hot sand, yet they continue onward, instinctively. Some are quickly snapped up by gulls circling overhead and others become lunch for hungry ghost crabs that dart out of their holes. Despite these dangers, the hatchlings are driven to leave the safety of their nest and find the ocean.

Not far down this same beach, Ben and his son, Julian, paddle out into the ocean on surfboards. A wave approaches. Julian crouches on his board, then jumps up and rides the wave for a few seconds before losing his balance. He emerges from the water in time to watch his father ride the face of the wave.

Unlike baby sea turtles, which know how to find the ocean and swim with no help from their parents, we are not born knowing how to swim (or surf). Yet we humans pride ourselves on our ability to learn. In fact, over thousands of years and across cultures, we have created institutions devoted entirely to learning. But have you ever asked yourself how exactly it is that we learn? What processes are at work as we come to know what we know? This chapter focuses on the primary ways in which learning occurs.

MCCCD Course Competencies

  • Recognize and define three basic forms of learning—classical conditioning, operant conditioning, and observational learning.
  • Critically evaluate information to help make evidence-based decisions.
  • Apply biopsychosocial principles to real-world situations.
  • Use psychological principles to explain the diversity and complexity of the human experience.

Learning Objectives

By the end of this section, you will be able to:

  • Explain how learned behaviors are different from instincts and reflexes
  • Define learning
  • Recognize and define three basic forms of learning—classical conditioning, operant conditioning, and observational learning

Birds build nests and migrate as winter approaches. Infants suckle at their mother’s breast. Dogs shake water off wet fur. Salmon swim upstream to spawn, and spiders spin intricate webs. What do these seemingly unrelated behaviors have in common? They all are unlearned behaviors. Both instincts and reflexes are innate (unlearned) behaviors that organisms are born with. Reflexes are motor or neural reactions to specific stimuli in the environment. They tend to be simpler than instincts, involve the activity of specific body parts and systems (e.g., the knee-jerk reflex and the contraction of the pupil in bright light), and involve more primitive centers of the central nervous system (e.g., the spinal cord and the medulla). In contrast, instincts are innate behaviors that are triggered by a broader range of events, such as maturation and the change of seasons. They are more complex patterns of behavior, involve movement of the organism as a whole (e.g., sexual activity and migration), and involve higher brain centers.

Both reflexes and instincts help an organism adapt to its environment and do not have to be learned. For example, every healthy human baby has a sucking reflex, present at birth. Babies are born knowing how to suck on a nipple, whether artificial (from a bottle) or human. Nobody teaches the baby to suck, just as no one teaches a sea turtle hatchling to move toward the ocean. Learning, like reflexes and instincts, allows an organism to adapt to its environment. But unlike instincts and reflexes, learned behaviors involve change and experience: learning is a relatively permanent change in behavior or knowledge that results from experience. In contrast to the innate behaviors discussed above, learning involves acquiring knowledge and skills through experience. Looking back at our surfing scenario, Julian will have to spend much more time training with his surfboard before he learns how to ride the waves like his father.

Learning to surf, as well as any complex learning process (e.g., learning about the discipline of psychology), involves a complex interaction of conscious and unconscious processes. Learning has traditionally been studied in terms of its simplest components—the associations our minds automatically make between events. Our minds have a natural tendency to connect events that occur closely together or in sequence. Associative learning occurs when an organism makes connections between stimuli or events that occur together in the environment. You will see that associative learning is central to all three basic learning processes discussed in this chapter: classical conditioning tends to involve unconscious processes, operant conditioning tends to involve conscious processes, and observational learning adds social and cognitive layers to all the basic associative processes, both conscious and unconscious. These learning processes will be discussed in detail later in the chapter, but it is helpful to have a brief overview of each as you begin to explore how learning is understood from a psychological perspective.

In classical conditioning, also known as Pavlovian conditioning, organisms learn to associate events—or stimuli—that repeatedly happen together. We experience this process throughout our daily lives. For example, you might see a flash of lightning in the sky during a storm and then hear a loud boom of thunder. The sound of the thunder naturally makes you jump (loud noises have that effect by reflex). Because lightning reliably predicts the impending boom of thunder, you may associate the two and jump when you see lightning. Psychological researchers study this associative process by focusing on what can be seen and measured—behaviors. Researchers ask: if one stimulus triggers a reflex, can we train a different stimulus to trigger that same reflex? In operant conditioning, organisms learn, again, to associate events—a behavior and its consequence (reinforcement or punishment). A pleasant consequence encourages more of that behavior in the future, whereas a punishment deters the behavior. Imagine you are teaching your dog, Hodor, to sit. You tell Hodor to sit, and give him a treat when he does. After repeated experiences, Hodor begins to associate the act of sitting with receiving a treat. He learns that the consequence of sitting is that he gets a doggie biscuit (Figure 6.2). Conversely, if the dog is punished when exhibiting a behavior, it becomes conditioned to avoid that behavior (e.g., receiving a small shock when crossing the boundary of an invisible electric fence).

A photograph shows a dog standing at attention and smelling a treat in a person's hand.
Figure 6.2 In operant conditioning, a response is associated with a consequence. This dog has learned that certain behaviors result in receiving a treat. 

Observational learning extends the effective range of both classical and operant conditioning. In contrast to classical and operant conditioning, in which learning occurs only through direct experience, observational learning is the process of watching others and then imitating what they do. A lot of learning among humans and other animals comes from observational learning. To get an idea of the extra effective range that observational learning brings, consider Ben and his son Julian from the introduction. How might observation help Julian learn to surf, as opposed to learning by trial and error alone? By watching his father, he can imitate the moves that bring success and avoid the moves that lead to failure. Can you think of something you have learned how to do after watching someone else?

All of the approaches covered in this chapter are part of a particular tradition in psychology, called behaviorism, which we discuss in the next section. However, these approaches do not represent the entire study of learning. Separate traditions of learning have taken shape within different fields of psychology, such as memory and cognition, so you will find that other chapters will round out your understanding of the topic. Over time these traditions tend to converge. For example, in this chapter you will see how cognition has come to play a larger role in behaviorism, whose more extreme adherents once insisted that behaviors are triggered by the environment with no intervening thought.

Learning Objectives

By the end of this section, you will be able to:

  • Explain how classical conditioning occurs
  • Summarize the processes of acquisition, extinction, spontaneous recovery, generalization, and discrimination

Does the name Ivan Pavlov ring a bell? Even if you are new to the study of psychology, chances are that you have heard of Pavlov and his famous dogs.

Pavlov (1849–1936), a Russian scientist, performed extensive research on dogs and is best known for his experiments in classical conditioning (Figure 6.3). As we discussed briefly in the previous section, classical conditioning is a process by which we learn to associate stimuli and, consequently, to anticipate events.

A portrait shows Ivan Pavlov.
Figure 6.3 Ivan Pavlov’s research on the digestive system of dogs unexpectedly led to his discovery of the learning process now known as classical conditioning.

Pavlov came to his conclusions about how learning occurs completely by accident. Pavlov was a physiologist, not a psychologist. Physiologists study the life processes of organisms, from the molecular level to the level of cells, organ systems, and entire organisms. Pavlov’s area of interest was the digestive system (Hunt, 2007). In his studies with dogs, Pavlov measured the amount of saliva produced in response to various foods. Over time, Pavlov (1927) observed that the dogs began to salivate not only at the taste of food, but also at the sight of food, at the sight of an empty food bowl, and even at the sound of the laboratory assistants’ footsteps. Salivating to food in the mouth is reflexive, so no learning is involved. However, dogs don’t naturally salivate at the sight of an empty bowl or the sound of footsteps.

These unusual responses intrigued Pavlov, and he wondered what accounted for what he called the dogs’ “psychic secretions” (Pavlov, 1927). To explore this phenomenon in an objective manner, Pavlov designed a series of carefully controlled experiments to see which stimuli would cause the dogs to salivate. He was able to train the dogs to salivate in response to stimuli that clearly had nothing to do with food, such as the sound of a bell, a light, and a touch on the leg. Through his experiments, Pavlov realized that an organism has two types of responses to its environment: (1) unconditioned (unlearned) responses, or reflexes, and (2) conditioned (learned) responses.

In Pavlov’s experiments, the dogs salivated each time meat powder was presented to them. The meat powder in this situation was an unconditioned stimulus (UCS): a stimulus that elicits a reflexive response in an organism. The dogs’ salivation was an unconditioned response (UCR): a natural (unlearned) reaction to a given stimulus. Before conditioning, think of the dogs’ stimulus and response like this:

 

Meat powder (UCS) → Salivation (UCR)

In classical conditioning, a neutral stimulus is presented immediately before an unconditioned stimulus. Pavlov would sound a tone (like ringing a bell) and then give the dogs the meat powder (Figure 6.4). The tone was the neutral stimulus (NS), which is a stimulus that does not naturally elicit a response. Prior to conditioning, the dogs did not salivate when they just heard the tone because the tone had no association for the dogs.

Tone (NS) + Meat powder (UCS) → Salivation (UCR)

When Pavlov paired the tone with the meat powder over and over again, the previously neutral stimulus (the tone) also began to elicit salivation from the dogs. Thus, the neutral stimulus became the conditioned stimulus (CS), which is a stimulus that elicits a response after repeatedly being paired with an unconditioned stimulus. Eventually, the dogs began to salivate to the tone alone, just as they previously had salivated at the sound of the assistants’ footsteps. The behavior caused by the conditioned stimulus is called the conditioned response (CR). In the case of Pavlov’s dogs, they had learned to associate the tone (CS) with being fed, and they began to salivate (CR) in anticipation of food.

 

Tone (CS) → Salivation (CR)

 

Two illustrations are labeled “before conditioning” and show a dog salivating over a dish of food, and a dog not salivating while a bell is rung. An illustration labeled “during conditioning” shows a dog salivating over a bowl of food while a bell is rung. An illustration labeled “after conditioning” shows a dog salivating while a bell is rung.
Figure 6.4 Before conditioning, an unconditioned stimulus (food) produces an unconditioned response (salivation), and a neutral stimulus (bell) does not produce a response. During conditioning, the unconditioned stimulus (food) is presented repeatedly just after the presentation of the neutral stimulus (bell). After conditioning, the neutral stimulus alone produces a conditioned response (salivation), thus becoming a conditioned stimulus.

Real World Application of Classical Conditioning

How does classical conditioning work in the real world? Consider the case of Moisha, who was diagnosed with cancer. When she received her first chemotherapy treatment, she vomited shortly after the chemicals were injected. In fact, on every trip to the doctor for chemotherapy treatment, she vomited shortly after the drugs were injected. Moisha’s treatment was a success and her cancer went into remission. Now, when she visits her oncologist’s office every 6 months for a check-up, she becomes nauseous. In this case, the chemotherapy drugs are the unconditioned stimulus (UCS), vomiting is the unconditioned response (UCR), the doctor’s office is the conditioned stimulus (CS) after being paired with the UCS, and nausea is the conditioned response (CR). Let’s assume that the chemotherapy drugs that Moisha takes are given through a syringe injection. After entering the doctor’s office, Moisha sees a syringe, and then gets her medication. In addition to the doctor’s office, Moisha will learn to associate the syringe with the medication and will respond to syringes with nausea. This is an example of higher-order (or second-order) conditioning, when the conditioned stimulus (the doctor’s office) serves to condition another stimulus (the syringe). It is hard to achieve anything above second-order conditioning. For example, if someone rang a bell every time Moisha received a syringe injection of chemotherapy drugs in the doctor’s office, Moisha would likely never get sick in response to the bell.

Consider another example of classical conditioning. Let’s say you have a cat named Tiger, who is quite spoiled. You keep her food in a separate cabinet, and you also have a special electric can opener that you use only to open cans of cat food. For every meal, Tiger hears the distinctive sound of the electric can opener (“zzhzhz”) and then gets her food. Tiger quickly learns that when she hears “zzhzhz” she is about to get fed. What do you think Tiger does when she hears the electric can opener? She will likely get excited and run to where you are preparing her food. This is an example of classical conditioning. In this case, what are the UCS, CS, UCR, and CR?

What if the cabinet holding Tiger’s food becomes squeaky? In that case, Tiger hears “squeak” (the cabinet), “zzhzhz” (the electric can opener), and then she gets her food. Tiger will learn to get excited when she hears the “squeak” of the cabinet. Pairing a new neutral stimulus (“squeak”) with the conditioned stimulus (“zzhzhz”) is called higher-order conditioning, or second-order conditioning. This means you are using the conditioned stimulus of the can opener to condition another stimulus: the squeaky cabinet (Figure 6.5). It is hard to achieve anything above second-order conditioning. For example, if you ring a bell, open the cabinet (“squeak”), use the can opener (“zzhzhz”), and then feed Tiger, Tiger will likely never get excited when hearing the bell alone.

A diagram is labeled “Higher-Order / Second-Order Conditioning” and has three rows. The first row shows an electric can opener labeled “conditioned stimulus” followed by a plus sign and then a dish of food labeled “unconditioned stimulus,” followed by an equal sign and a picture of a salivating cat labeled “unconditioned response.” The second row shows a squeaky cabinet door labeled “second-order stimulus” followed by a plus sign and then an electric can opener labeled “conditioned stimulus,” followed by an equal sign and a picture of a salivating cat labeled “conditioned response.” The third row shows a squeaky cabinet door labeled “second-order stimulus” followed by an equal sign and a picture of a salivating cat labeled “conditioned response.”
Figure 6.5 In higher-order conditioning, an established conditioned stimulus is paired with a new neutral stimulus (the second-order stimulus), so that eventually the new stimulus also elicits the conditioned response, without the initial conditioned stimulus being presented.

EVERYDAY CONNECTION: Classical Conditioning at Stingray City

Kate and her spouse recently vacationed in the Cayman Islands, and booked a boat tour to Stingray City, where they could feed and swim with the southern stingrays. The boat captain explained how the normally solitary stingrays have become accustomed to interacting with humans. About 40 years ago, fishermen began to clean fish and conch (unconditioned stimulus) at a particular sandbar near a barrier reef, and large numbers of stingrays would swim in to eat (unconditioned response) what the fishermen threw into the water; this continued for years. By the late 1980s, word of the large group of stingrays spread among scuba divers, who then started feeding them by hand. Over time, the southern stingrays in the area were classically conditioned much like Pavlov’s dogs. When they hear the sound of a boat engine (neutral stimulus that becomes a conditioned stimulus), they know that they will get to eat (conditioned response).

As soon as they reached Stingray City, over two dozen stingrays surrounded their tour boat. The couple slipped into the water with bags of squid, the stingrays’ favorite treat. The swarm of stingrays bumped and rubbed up against their legs like hungry cats (Figure 6.6). Kate was able to feed, pet, and even kiss (for luck) these amazing creatures. Then all the squid was gone, and so were the stingrays.

A photograph shows a woman standing in the ocean holding a stingray.
Figure 6.6 Kate holds a southern stingray at Stingray City in the Cayman Islands. These stingrays have been classically conditioned to associate the sound of a boat motor with food provided by tourists. (credit: Kathryn Dumper)

Classical conditioning also applies to humans, even babies. For example, Sara buys formula in blue canisters for her six-month-old daughter, Angelina. Whenever Sara takes out a formula container, Angelina gets excited, tries to reach toward the food, and most likely salivates. Why does Angelina get excited when she sees the formula canister? What are the UCS, CS, UCR, and CR here?

So far, all of the examples have involved food, but classical conditioning extends beyond the basic need to be fed. Consider our earlier example of a dog whose owners install an invisible electric dog fence. A small electrical shock (unconditioned stimulus) elicits discomfort (unconditioned response). When the unconditioned stimulus (shock) is paired with a neutral stimulus (the edge of a yard), the dog associates the discomfort (unconditioned response) with the edge of the yard (conditioned stimulus) and stays within the set boundaries. In this example, the edge of the yard elicits fear and anxiety in the dog. Fear and anxiety are the conditioned response.

Now that you know how classical conditioning works and have seen several examples, let’s take a look at some of the general processes involved. In classical conditioning, the initial period of learning is known as acquisition, when an organism learns to connect a neutral stimulus and an unconditioned stimulus. During acquisition, the neutral stimulus begins to elicit the conditioned response, and eventually, the neutral stimulus becomes a conditioned stimulus capable of eliciting the conditioned response by itself. Timing is important for conditioning to occur. Typically, there should only be a brief interval between presentation of the conditioned stimulus and the unconditioned stimulus. Depending on what is being conditioned, sometimes this interval is as little as five seconds (Chance, 2009). However, with other types of conditioning, the interval can be up to several hours.

Taste aversion is a type of conditioning in which an interval of several hours may pass between the conditioned stimulus (something ingested) and the unconditioned stimulus (nausea or illness). Here’s how it works. Between classes, you and a friend grab a quick lunch from a food cart on campus. You share a dish of chicken curry and head off to your next class. A few hours later, you feel nauseous and become ill. Although your friend is fine and you determine that you have intestinal flu (the food is not the culprit), you’ve developed a taste aversion; the next time you are at a restaurant and someone orders curry, you immediately feel ill. While the chicken dish is not what made you sick, you are experiencing taste aversion: you’ve been conditioned to be averse to a food after a single, bad experience.

How does this occur—conditioning based on a single instance and involving an extended time lapse between the event and the negative stimulus? Research into taste aversion suggests that this response may be an evolutionary adaptation designed to help organisms quickly learn to avoid harmful foods (Garcia & Rusiniak, 1980; Garcia & Koelling, 1966). Not only may this contribute to species survival via natural selection, but it may also help us develop strategies for challenges such as helping cancer patients through the nausea induced by certain treatments (Holmes, 1993; Jacobsen et al., 1993; Hutton et al., 2007; Skolin et al., 2006). Garcia and Koelling (1966) showed not only that taste aversions could be conditioned, but also that there were biological constraints to learning. In their study, separate groups of rats were conditioned to associate either a flavor with illness, or lights and sounds with illness. Results showed that all rats exposed to flavor-illness pairings learned to avoid the flavor, but none of the rats exposed to lights and sounds with illness learned to avoid lights or sounds. This added evidence to the idea that classical conditioning could contribute to species survival by helping organisms learn to avoid stimuli that posed real dangers to health and welfare.

Robert Rescorla demonstrated how powerfully an organism can learn to predict the UCS from the CS. Take, for example, the following two situations. Ari’s dad always has dinner on the table every day at 6:00. Soraya’s mom switches it up so that some days they eat dinner at 6:00, some days they eat at 5:00, and other days they eat at 7:00. For Ari, 6:00 reliably and consistently predicts dinner, so Ari will likely start feeling hungry every day right before 6:00, even if he’s had a late snack. Soraya, on the other hand, will be less likely to associate 6:00 with dinner, since 6:00 does not always predict that dinner is coming. Rescorla, along with his colleague at Yale University, Alan Wagner, developed a mathematical formula that could be used to calculate the probability that an association would be learned given the ability of a conditioned stimulus to predict the occurrence of an unconditioned stimulus and other factors; today this is known as the Rescorla-Wagner model (Rescorla & Wagner, 1972).
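Although the details of the model are beyond the scope of this chapter, its core learning rule can be stated in one line. The notation below follows the common textbook presentation of the model (the particular symbols are an expository convention, not a quotation from the 1972 article). On each conditioning trial, the associative strength V of a conditioned stimulus changes by:

ΔV = αβ(λ − ΣV)

Here α reflects the salience of the conditioned stimulus, β is a learning-rate parameter tied to the unconditioned stimulus, λ is the maximum associative strength the unconditioned stimulus can support, and ΣV is the combined strength of all conditioned stimuli present on the trial. Because each pairing closes only a fraction of the gap between λ and ΣV, learning is rapid at first and levels off as the conditioned stimulus comes to fully predict the unconditioned stimulus. This is why Ari’s reliably timed dinner produces strong conditioning while Soraya’s variable dinnertime does not.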

Once we have established the connection between the unconditioned stimulus and the conditioned stimulus, how do we break that connection and get the dog, cat, or child to stop responding? In Tiger’s case, imagine what would happen if you stopped using the electric can opener for her food and began to use it only for human food. Now, Tiger would hear the can opener, but she would not get food. In classical conditioning terms, you would be giving the conditioned stimulus, but not the unconditioned stimulus. Pavlov explored this scenario in his experiments with dogs: sounding the tone without giving the dogs the meat powder. Soon the dogs stopped responding to the tone. Extinction is the decrease in the conditioned response when the unconditioned stimulus is no longer presented with the conditioned stimulus. When presented with the conditioned stimulus alone, the dog, cat, or other organism would show a weaker and weaker response, and finally no response. In classical conditioning terms, there is a gradual weakening and disappearance of the conditioned response.

What happens when learning is not used for a while—when what was learned lies dormant? As we just discussed, Pavlov found that when he repeatedly presented the bell (conditioned stimulus) without the meat powder (unconditioned stimulus), extinction occurred; the dogs stopped salivating to the bell. However, after a couple of hours of resting from this extinction training, the dogs again began to salivate when Pavlov rang the bell. What do you think would happen with Tiger’s behavior if your electric can opener broke, and you did not use it for several months? When you finally got it fixed and started using it to open Tiger’s food again, Tiger would remember the association between the can opener and her food—she would get excited and run to the kitchen when she heard the sound. The behavior of Pavlov’s dogs and Tiger illustrates a concept Pavlov called spontaneous recovery: the return of a previously extinguished conditioned response following a rest period (Figure 6.7).

A chart has an x-axis labeled “time” and a y-axis labeled “strength of CR;” there are four columns of graphed data. The first column is labeled “acquisition (CS + UCS)” and the line rises steeply from the bottom to the top. The second column is labeled “Extinction (CS alone)” and the line drops rapidly from the top to the bottom. The third column is labeled “Pause” and has no line. The fourth column has a line that begins midway and drops sharply to the bottom. At the point where the line begins, it is labeled “Spontaneous recovery of CR”; the halfway point on the line is labeled “Extinction (CS alone).”
Figure 6.7 This is the curve of acquisition, extinction, and spontaneous recovery. The rising curve shows the conditioned response quickly getting stronger through the repeated pairing of the conditioned stimulus and the unconditioned stimulus (acquisition). Then the curve decreases, which shows how the conditioned response weakens when only the conditioned stimulus is presented (extinction). After a break or pause from conditioning, the conditioned response reappears (spontaneous recovery).

Of course, these processes also apply to humans. For example, let’s say that every day when you walk to campus, an ice cream truck passes your route. Day after day, you hear the truck’s music (neutral stimulus), so you finally stop and purchase a chocolate ice cream bar. You take a bite (unconditioned stimulus) and then your mouth waters (unconditioned response). This initial period of learning is known as acquisition, when you begin to connect the neutral stimulus (the sound of the truck) and the unconditioned stimulus (the taste of the chocolate ice cream in your mouth). During acquisition, the conditioned response gets stronger and stronger through repeated pairings of the conditioned stimulus and unconditioned stimulus. Several days (and ice cream bars) later, you notice that your mouth begins to water (conditioned response) as soon as you hear the truck’s musical jingle—even before you bite into the ice cream bar. Then one day you head down the street. You hear the truck’s music (conditioned stimulus), and your mouth waters (conditioned response). However, when you get to the truck, you discover that they are all out of ice cream. You leave disappointed. The next few days you pass by the truck and hear the music, but don’t stop to get an ice cream bar because you’re running late for class. You begin to salivate less and less when you hear the music, until by the end of the week, your mouth no longer waters when you hear the tune. This illustrates extinction. The conditioned response weakens when only the conditioned stimulus (the sound of the truck) is presented, without being followed by the unconditioned stimulus (chocolate ice cream in the mouth). Then the weekend comes. You don’t have to go to class, so you don’t pass the truck. Monday morning arrives and you take your usual route to campus. You round the corner and hear the truck again. What do you think happens? Your mouth begins to water again. Why? After a break from conditioning, the conditioned response reappears, which indicates spontaneous recovery.
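To see these phases concretely, the short Python sketch below simulates acquisition followed by extinction using the Rescorla-Wagner update rule described earlier. The parameter values and trial counts are illustrative assumptions chosen for demonstration, not empirical estimates.

# Minimal sketch: acquisition then extinction under the Rescorla-Wagner rule.
# alpha_beta and the trial counts are arbitrary illustrative choices.

alpha_beta = 0.3   # combined salience/learning-rate parameter (assumed)
V = 0.0            # associative strength of the CS; 0 means no learning yet

# Acquisition: the truck's music (CS) is followed by ice cream (UCS), so lambda = 1.
for trial in range(1, 11):
    V += alpha_beta * (1.0 - V)
    print(f"acquisition trial {trial:2d}: CR strength = {V:.3f}")

# Extinction: the music plays but no ice cream follows, so lambda = 0.
for trial in range(1, 11):
    V += alpha_beta * (0.0 - V)
    print(f"extinction trial {trial:2d}: CR strength = {V:.3f}")

Running the sketch prints a response strength that rises quickly and then decays toward zero once the unconditioned stimulus stops arriving, the same rise-and-fall pattern shown in Figure 6.7. Spontaneous recovery, the partial return of the response after a rest period, is not captured by this minimal model.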

Acquisition and extinction involve the strengthening and weakening, respectively, of a learned association. Two other learning processes—stimulus discrimination and stimulus generalization—are involved in determining which stimuli will trigger learned responses. Animals (including humans) need to distinguish between stimuli—for example, between sounds that predict a threatening event and sounds that do not—so that they can respond appropriately (such as running away if the sound is threatening). When an organism learns to respond differently to various stimuli that are similar, it is called stimulus discrimination. In classical conditioning terms, the organism demonstrates the conditioned response only to the conditioned stimulus. Pavlov’s dogs discriminated between the basic tone that sounded before they were fed and other tones (e.g., the doorbell), because the other sounds did not predict the arrival of food. Similarly, Tiger, the cat, discriminated between the sound of the can opener and the sound of the electric mixer. When the electric mixer is going, Tiger is not about to be fed, so she does not come running to the kitchen looking for food. In our other example, Moisha, the cancer patient, discriminated between oncologists and other types of doctors. She learned not to feel ill when visiting doctors for other types of appointments, such as her annual physical.

On the other hand, when an organism demonstrates the conditioned response to stimuli that are similar to the conditioned stimulus, it is called stimulus generalization, the opposite of stimulus discrimination. The more similar a stimulus is to the conditioned stimulus, the more likely the organism is to give the conditioned response. For instance, if the electric mixer sounds very similar to the electric can opener, Tiger may come running after hearing its sound. But if you do not feed her following the electric mixer sound, and you continue to feed her consistently after the electric can opener sound, she will quickly learn to discriminate between the two sounds (provided they are sufficiently dissimilar that she can tell them apart). In our other example, Moisha continued to feel ill whenever visiting other oncologists or other doctors in the same building as her oncologist.

An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://open.maricopa.edu/intropsychme/?p=29#h5p-10

Behaviorism

John B. Watson, shown in Figure 6.8, is considered the founder of behaviorism. Behaviorism is a school of thought that arose during the first part of the 20th century, which incorporates elements of Pavlov’s classical conditioning (Hunt, 2007). In stark contrast with Freud, who considered the reasons for behavior to be hidden in the unconscious, Watson championed the idea that all behavior can be studied as a simple stimulus-response reaction, without regard for internal processes. Watson argued that in order for psychology to become a legitimate science, it must shift its concern away from internal mental processes because mental processes cannot be seen or measured. Instead, he asserted that psychology must focus on outward observable behavior that can be measured.

A photograph shows John B. Watson.
Figure 6.8 John B. Watson used the principles of classical conditioning in the study of human emotion.

Watson’s ideas were influenced by Pavlov’s work. According to Watson, human behavior, just like animal behavior, is primarily the result of conditioned responses. Whereas Pavlov’s work with dogs involved the conditioning of reflexes, Watson believed the same principles could be extended to the conditioning of human emotions (Watson, 1919). Thus began Watson’s work with his graduate student Rosalie Rayner and a baby called Little Albert. Through their experiments with Little Albert, Watson and Rayner (1920) demonstrated how fears can be conditioned.

In 1920, Watson was the chair of the psychology department at Johns Hopkins University. Through his position at the university, he came to meet Little Albert’s mother, Arvilla Merritte, who worked at a campus hospital (DeAngelis, 2010). Watson offered her a dollar to allow her son to be the subject of his experiments in classical conditioning. Through these experiments, Little Albert was exposed to and conditioned to fear certain things. Initially, he was presented with various neutral stimuli, including a rabbit, a dog, a monkey, masks, cotton wool, and a white rat. He was not afraid of any of these things. Then Watson, with the help of Rayner, conditioned Little Albert to associate these stimuli with an emotion—fear. For example, Watson handed Little Albert the white rat, and Little Albert enjoyed playing with it. Then Watson made a loud sound, by striking a hammer against a metal bar hanging behind Little Albert’s head, each time Little Albert touched the rat. Little Albert was frightened by the sound—demonstrating a reflexive fear of sudden loud noises—and began to cry. Watson repeatedly paired the loud sound with the white rat. Soon Little Albert became frightened by the white rat alone. In this case, what are the UCS, CS, UCR, and CR? Days later, Little Albert demonstrated stimulus generalization—he became afraid of other furry things: a rabbit, a furry coat, and even a Santa Claus mask (Figure 6.9). Watson had succeeded in conditioning a fear response in Little Albert, thus demonstrating that emotions could become conditioned responses. It had been Watson’s intention to produce a phobia—a persistent, excessive fear of a specific object or situation—through conditioning alone, thus countering Freud’s view that phobias are caused by deep, hidden conflicts in the mind. However, there is no evidence that Little Albert experienced phobias in later years. Little Albert’s mother moved away, ending the experiment. While Watson’s research provided new insight into conditioning, it would be considered unethical by today’s standards.

A photograph shows a man wearing a mask with a white beard; his face is close to a baby who is crawling away. A caption reads, “Now he fears even Santa Claus.”
Figure 6.9 Through stimulus generalization, Little Albert came to fear furry things, including Watson in a Santa Claus mask.

Learning Objectives

By the end of this section, you will be able to:

  • Define operant conditioning
  • Explain the difference between reinforcement and punishment
  • Distinguish between reinforcement schedules

The previous section of this chapter focused on the type of associative learning known as classical conditioning. Remember that in classical conditioning, something in the environment triggers a reflex automatically, and researchers train the organism to react to a different stimulus. Now we turn to the second type of associative learning, operant conditioning. In operant conditioning, organisms learn to associate a behavior and its consequence (Table 6.1). A pleasant consequence makes that behavior more likely to be repeated in the future. For example, Spirit, a dolphin at the National Aquarium in Baltimore, does a flip in the air when her trainer blows a whistle. The consequence is that she gets a fish.

Table 6.1 Classical and Operant Conditioning Compared

Conditioning approach
  Classical conditioning: An unconditioned stimulus (such as food) is paired with a neutral stimulus (such as a bell). The neutral stimulus eventually becomes the conditioned stimulus, which brings about the conditioned response (salivation).
  Operant conditioning: The target behavior is followed by reinforcement or punishment to either strengthen or weaken it, so that the learner is more likely to exhibit the desired behavior in the future.

Stimulus timing
  Classical conditioning: The stimulus occurs immediately before the response.
  Operant conditioning: The stimulus (either reinforcement or punishment) occurs soon after the response.

Psychologist B. F. Skinner saw that classical conditioning is limited to existing behaviors that are reflexively elicited, and it doesn’t account for new behaviors such as riding a bike. He proposed a theory about how such behaviors come about. Skinner believed that behavior is motivated by the consequences we receive for the behavior: the reinforcements and punishments. His idea that learning is the result of consequences is based on the law of effect, which was first proposed by psychologist Edward Thorndike. According to the law of effect, behaviors that are followed by consequences that are satisfying to the organism are more likely to be repeated, and behaviors that are followed by unpleasant consequences are less likely to be repeated (Thorndike, 1911). Essentially, if an organism does something that brings about a desired result, the organism is more likely to do it again. If an organism does something that does not bring about a desired result, the organism is less likely to do it again. A familiar example of the law of effect is employment. One of the reasons (and often the main reason) we show up for work is that we get paid to do so. If we stop getting paid, we will likely stop showing up—even if we love our job.

Working with Thorndike’s law of effect as his foundation, Skinner began conducting scientific experiments on animals (mainly rats and pigeons) to determine how organisms learn through operant conditioning (Skinner, 1938). He placed these animals inside an operant conditioning chamber, which has come to be known as a “Skinner box” (Figure 6.10). A Skinner box contains a lever (for rats) or disk (for pigeons) that the animal can press or peck for a food reward via the dispenser. Speakers and lights can be associated with certain behaviors. A recorder counts the number of responses made by the animal.

In discussing operant conditioning, we use several everyday words—positive, negative, reinforcement, and punishment—in a specialized manner. In operant conditioning, positive and negative do not mean good and bad. Instead, positive means you are adding something, and negative means you are taking something away. Reinforcement means you are increasing a behavior, and punishment means you are decreasing a behavior. Reinforcement can be positive or negative, and punishment can also be positive or negative. All reinforcers (positive or negative) increase the likelihood of a behavioral response. All punishers (positive or negative) decrease the likelihood of a behavioral response. Now let’s combine these four terms: positive reinforcement, negative reinforcement, positive punishment, and negative punishment (Table 6.2).

Table 6.2 Positive and Negative Reinforcement and Punishment

Positive reinforcement: Something is added to increase the likelihood of a behavior.
Positive punishment: Something is added to decrease the likelihood of a behavior.
Negative reinforcement: Something is removed to increase the likelihood of a behavior.
Negative punishment: Something is removed to decrease the likelihood of a behavior.
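One way to keep these four terms straight is to treat the table as two yes-or-no questions: is a stimulus added or removed, and does the behavior increase or decrease? As a study aid, the short Python sketch below encodes that logic; the function name and the pairing with the chapter’s examples are purely illustrative.

def operant_term(stimulus_added: bool, behavior_increases: bool) -> str:
    # "Positive" means a stimulus is added; "negative" means one is removed.
    # "Reinforcement" increases a behavior; "punishment" decreases it.
    sign = "positive" if stimulus_added else "negative"
    effect = "reinforcement" if behavior_increases else "punishment"
    return f"{sign} {effect}"

print(operant_term(True, True))    # a treat for sitting: positive reinforcement
print(operant_term(False, True))   # seatbelt beeping stops when you buckle: negative reinforcement
print(operant_term(True, False))   # scolding for texting in class: positive punishment
print(operant_term(False, False))  # taking away a favorite toy: negative punishment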

Reinforcement

The most effective way to teach a person or animal a new behavior is with positive reinforcement. In positive reinforcement, a desirable stimulus is added to increase a behavior.

For example, you tell your five-year-old son, Jerome, that if he cleans his room, he will get a toy. Jerome quickly cleans his room because he wants a new art set. Let’s pause for a moment. Some people might say, “Why should I reward my child for doing what is expected?” But in fact we are constantly and consistently rewarded in our lives. Our paychecks are rewards, as are high grades and acceptance into our preferred school. Being praised for doing a good job and for passing a driver’s test is also a reward. Positive reinforcement as a learning tool is extremely effective. Research has found that one of the most effective ways to increase achievement in school districts with below-average reading scores is to pay children to read. Specifically, second-grade students in Dallas were paid $2 each time they read a book and passed a short quiz about the book. The result was a significant increase in reading comprehension (Fryer, 2010). What do you think about this program? If Skinner were alive today, he would probably think this was a great idea. He was a strong proponent of using operant conditioning principles to influence students’ behavior at school. In fact, in addition to the Skinner box, he also invented what he called a teaching machine that was designed to reward small steps in learning (Skinner, 1961)—an early forerunner of computer-assisted learning. His teaching machine tested students’ knowledge as they worked through various school subjects. If students answered questions correctly, they received immediate positive reinforcement and could continue; if they answered incorrectly, they did not receive any reinforcement. The idea was that students would spend additional time studying the material to increase their chance of being reinforced the next time (Skinner, 1961).

In negative reinforcement, an undesirable stimulus is removed to increase a behavior. For example, car manufacturers use the principles of negative reinforcement in their seatbelt systems, which go “beep, beep, beep” until you fasten your seatbelt. The annoying sound stops when you exhibit the desired behavior, increasing the likelihood that you will buckle up in the future. Negative reinforcement is also used frequently in horse training. Riders apply pressure—by pulling the reins or squeezing their legs—and then remove the pressure when the horse performs the desired behavior, such as turning or speeding up. The pressure is the negative stimulus that the horse wants to remove.

Punishment

Many people confuse negative reinforcement with punishment in operant conditioning, but they are two very different mechanisms. Remember that reinforcement, even when it is negative, always increases a behavior. In contrast, punishment always decreases a behavior. In positive punishment, you add an undesirable stimulus to decrease a behavior. An example of positive punishment is scolding a student to get the student to stop texting in class. In this case, a stimulus (the reprimand) is added in order to decrease the behavior (texting in class). In negative punishment, you remove a pleasant stimulus to decrease behavior. For example, when a child misbehaves, a parent can take away a favorite toy. In this case, a stimulus (the toy) is removed in order to decrease the behavior.

Punishment, especially when it is immediate, is one way to decrease undesirable behavior. For example, imagine that your four-year-old son, Brandon, hit his younger brother. You have Brandon write 100 times “I will not hit my brother” (positive punishment). Chances are he won’t repeat this behavior. While strategies like this are common today, in the past children were often subject to physical punishment, such as spanking. It’s important to be aware of some of the drawbacks of using physical punishment on children. First, punishment may teach fear. A child who is spanked may become fearful of the punishment itself, but he also may become fearful of the person who delivered the punishment—the parent. Similarly, children who are punished by teachers may come to fear the teacher and try to avoid school (Gershoff et al., 2010). Consequently, most schools in the United States have banned corporal punishment. Second, punishment may cause children to become more aggressive and prone to antisocial behavior and delinquency (Gershoff, 2002). Children see their parents resort to spanking when the parents become angry and frustrated, so, in turn, the children may act out this same behavior when they become angry and frustrated. For example, because you spank Brenda when you are angry with her for her misbehavior, she might start hitting her friends when they won’t share their toys.

While positive punishment can be effective in some cases, Skinner suggested that the use of punishment should be weighed against the possible negative effects. Today’s psychologists and parenting experts favor reinforcement over punishment—they recommend that you catch your child doing something good and reward her for it.

Shaping

In his operant conditioning experiments, Skinner often used an approach called shaping. Instead of rewarding only the target behavior, in shaping, we reward successive approximations of a target behavior. Why is shaping needed? Remember that in order for reinforcement to work, the organism must first display the behavior. Shaping is needed because it is extremely unlikely that an organism will display anything but the simplest of behaviors spontaneously. In shaping, behaviors are broken down into many small, achievable steps. The specific steps used in the process are the following (a short simulation sketch follows the list):

  1. Reinforce any response that resembles the desired behavior.
  2. Then reinforce the response that more closely resembles the desired behavior. You will no longer reinforce the previously reinforced response.
  3. Next, begin to reinforce the response that even more closely resembles the desired behavior.
  4. Continue to reinforce closer and closer approximations of the desired behavior.
  5. Finally, only reinforce the desired behavior.
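
As a rough illustration of this procedure, here is a minimal Python simulation of a shaping loop. It is a toy model under invented assumptions: the response variability, reinforcement gain, and criterion step are made-up numbers, and a real trainer judges “close enough” by observation rather than by threshold.

    import random

    # Toy shaping model: responses vary around the animal's current behavior.
    # Any response that meets the criterion is reinforced, which strengthens
    # the behavior and raises the criterion toward the target (numbers assumed).
    def shape(target=1.0, trials=300, seed=1):
        rng = random.Random(seed)
        behavior = 0.0    # current typical response strength
        criterion = 0.05  # what currently earns a reward (a rough approximation)
        for _ in range(trials):
            response = behavior + rng.uniform(-0.1, 0.1)  # spontaneous variation
            if response >= criterion:                     # a closer approximation occurred
                behavior = min(target, behavior + 0.05)   # reinforced behavior strengthens
                criterion = min(target, behavior + 0.05)  # now require a still-closer try
        return behavior

    print(f"behavior strength after shaping: {shape():.2f}")  # climbs toward 1.0

Run it a few times with different seeds: the behavior climbs in small reinforced steps, mirroring the numbered list above.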

Shaping is often used in teaching a complex behavior or chain of behaviors. Skinner used shaping to teach pigeons not only such relatively simple behaviors as pecking a disk in a Skinner box, but also many unusual and entertaining behaviors, such as turning in circles, walking in figure eights, and even playing ping pong; the technique is commonly used by animal trainers today. An important part of shaping is stimulus discrimination. Recall Pavlov’s dogs—he trained them to respond to the tone of a bell, and not to similar tones or sounds. This discrimination is also important in operant conditioning and in shaping behavior.

It’s easy to see how shaping is effective in teaching behaviors to animals, but how does shaping work with humans? Let’s consider parents whose goal is to have their child learn to clean his room. They use shaping to help him master steps toward the goal. Instead of performing the entire task, they set up these steps and reinforce each step. First, he cleans up one toy. Second, he cleans up five toys. Third, he chooses whether to pick up ten toys or put his books and clothes away. Fourth, he cleans up everything except two toys. Finally, he cleans his entire room.

Primary and Secondary Reinforcers

Rewards such as stickers, praise, money, toys, and more can be used to reinforce learning. Let’s go back to Skinner’s rats again. How did the rats learn to press the lever in the Skinner box? They were rewarded with food each time they pressed the lever. For animals, food would be an obvious reinforcer.

What would be a good reinforcer for humans? For your son Jerome, it was the promise of a toy when he cleaned his room. How about Sydney, the soccer player? If you gave Sydney a piece of candy every time Sydney scored a goal, you would be using a primary reinforcer. Primary reinforcers are reinforcers that have innate reinforcing qualities. These kinds of reinforcers are not learned. Water, food, sleep, shelter, sex, and touch, among others, are primary reinforcers. Pleasure is also a primary reinforcer. Organisms do not lose their drive for these things. For most people, jumping into a cool lake on a very hot day would be innately reinforcing—the water would cool the person off (a physical need), as well as provide pleasure.

A secondary reinforcer has no inherent value and only has reinforcing qualities when linked with a primary reinforcer. Praise, linked to affection, is one example of a secondary reinforcer, as when you called out “Great shot!” every time Sydney made a goal. Another example, money, is only worth something when you can use it to buy other things—either things that satisfy basic needs (food, water, shelter—all primary reinforcers) or other secondary reinforcers. If you were on a remote island in the middle of the Pacific Ocean and you had stacks of money, the money would not be useful if you could not spend it. What about the stickers on the behavior chart? They also are secondary reinforcers.

Sometimes, instead of stickers on a sticker chart, a token is used. Tokens, which are also secondary reinforcers, can then be traded in for rewards and prizes. Entire behavior management systems, known as token economies, are built around the use of these kinds of token reinforcers. Token economies have been found to be very effective at modifying behavior in a variety of settings such as schools, prisons, and mental hospitals. For example, a study by Cangi and Daly (2013) found that use of a token economy increased appropriate social behaviors and reduced inappropriate behaviors in a group of autistic school children. Autistic children tend to exhibit disruptive behaviors such as pinching and hitting. When the children in the study exhibited appropriate behavior (not hitting or pinching), they received a “quiet hands” token. When they hit or pinched, they lost a token. The children could then exchange specified amounts of tokens for minutes of playtime.
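
To make the mechanics concrete, here is a hypothetical sketch of a “quiet hands” token system like the one described. The class name, token values, and exchange rate are our own assumptions, not details reported by Cangi and Daly (2013).

    # Hypothetical token economy: tokens are earned for appropriate behavior,
    # lost for hitting or pinching, and exchanged for minutes of playtime.
    class TokenEconomy:
        def __init__(self, minutes_per_token=2):  # exchange rate is assumed
            self.tokens = 0
            self.minutes_per_token = minutes_per_token

        def appropriate_behavior(self):    # e.g., quiet hands observed
            self.tokens += 1

        def inappropriate_behavior(self):  # e.g., hitting or pinching
            self.tokens = max(0, self.tokens - 1)

        def exchange(self):                # trade all tokens for playtime
            minutes = self.tokens * self.minutes_per_token
            self.tokens = 0
            return minutes

    chart = TokenEconomy()
    for event in ["ok", "ok", "hit", "ok", "ok"]:
        chart.appropriate_behavior() if event == "ok" else chart.inappropriate_behavior()
    print(chart.exchange(), "minutes of playtime earned")  # 3 tokens -> 6 minutes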

EVERYDAY CONNECTION: Behavior Modification in Children

Parents and teachers often use behavior modification to change a child’s behavior. Behavior modification uses the principles of operant conditioning to accomplish behavior change so that undesirable behaviors are switched for more socially acceptable ones. Some teachers and parents create a sticker chart, in which several behaviors are listed (Figure 6.11). Sticker charts are a form of token economies, as described in the text. Each time children perform the behavior, they get a sticker, and after a certain number of stickers, they get a prize, or reinforcer. The goal is to increase acceptable behaviors and decrease misbehavior. Remember, it is best to reinforce desired behaviors, rather than to use punishment. In the classroom, the teacher can reinforce a wide range of behaviors, from students raising their hands, to walking quietly in the hall, to turning in their homework. At home, parents might create a behavior chart that rewards children for things such as putting away toys, brushing their teeth, and helping with dinner. In order for behavior modification to be effective, the reinforcement needs to be connected with the behavior; the reinforcement must matter to the child and be done consistently.

A photograph shows a child placing stickers on a chart hanging on the wall.
Figure 6.11 Sticker charts are a form of positive reinforcement and a tool for behavior modification. Once this child earns a certain number of stickers for demonstrating a desired behavior, she will be rewarded with a trip to the ice cream parlor. (credit: Abigail Batchelder)

Time-out is another popular technique used in behavior modification with children. It operates on the principle of negative punishment. When a child demonstrates an undesirable behavior, she is removed from the desirable activity at hand (Figure 6.12). For example, say that Sophia and her brother Mario are playing with building blocks. Sophia throws some blocks at her brother, so you give her a warning that she will go to time-out if she does it again. A few minutes later, she throws more blocks at Mario. You remove Sophia from the room for a few minutes. When she comes back, she doesn’t throw blocks.

There are several important points that you should know if you plan to implement time-out as a behavior modification technique. First, make sure the child is being removed from a desirable activity and placed in a less desirable location. If the activity is something undesirable for the child, this technique will backfire because it is more enjoyable for the child to be removed from the activity. Second, the length of the time-out is important. The general rule of thumb is one minute for each year of the child’s age. Sophia is five; therefore, she sits in a time-out for five minutes. Setting a timer helps children know how long they have to sit in time-out. Finally, as a caregiver, keep several guidelines in mind over the course of a time-out: remain calm when directing your child to time-out; ignore your child during time-out (because caregiver attention may reinforce misbehavior); and give the child a hug or a kind word when time-out is over.

Photograph A shows several children climbing on playground equipment. Photograph B shows a child sitting alone on a bench.
Figure 6.12 Time-out is a popular form of negative punishment used by caregivers. When a child misbehaves, he or she is removed from a desirable activity in an effort to decrease the unwanted behavior. For example, (a) a child might be playing on the playground with friends and push another child; (b) the child who misbehaved would then be removed from the activity for a short period of time. (credit a: modification of work by Simone Ramella; credit b: modification of work by “Spring Dew”/Flickr)

Reinforcement Schedules

Remember, the best way to teach a person or animal a behavior is to use positive reinforcement. For example, Skinner used positive reinforcement to teach rats to press a lever in a Skinner box. At first, the rat might randomly hit the lever while exploring the box, and out would come a pellet of food. After eating the pellet, what do you think the hungry rat did next? It hit the lever again, and received another pellet of food. Each time the rat hit the lever, a pellet of food came out. When an organism receives a reinforcer each time it displays a behavior, it is called continuous reinforcement. This reinforcement schedule is the quickest way to teach someone a behavior, and it is especially effective in training a new behavior. Let’s look back at the dog that was learning to sit earlier in the chapter. Now, each time he sits, you give him a treat. Timing is important here: you will be most successful if you present the reinforcer immediately after he sits, so that he can make an association between the target behavior (sitting) and the consequence (getting a treat).

Once a behavior is trained, researchers and trainers often turn to another type of reinforcement schedule—partial reinforcement. In partial reinforcement, also referred to as intermittent reinforcement, the person or animal does not get reinforced every time they perform the desired behavior. There are several different types of partial reinforcement schedules (Table 6.3). These schedules are described as either fixed or variable, and as either interval or ratio. Fixed refers to the number of responses between reinforcements, or the amount of time between reinforcements, which is set and unchanging. Variable refers to the number of responses or amount of time between reinforcements, which varies or changes. Interval means the schedule is based on the time between reinforcements, and ratio means the schedule is based on the number of responses between reinforcements.

Reinforcement Schedules

  • Fixed interval: reinforcement is delivered at predictable time intervals (e.g., after 5, 10, 15, and 20 minutes). Result: moderate response rate with significant pauses after reinforcement. Example: hospital patient using patient-controlled, doctor-timed pain relief.
  • Variable interval: reinforcement is delivered at unpredictable time intervals (e.g., after 5, 7, 10, and 20 minutes). Result: moderate yet steady response rate. Example: checking Facebook.
  • Fixed ratio: reinforcement is delivered after a predictable number of responses (e.g., after 2, 4, 6, and 8 responses). Result: high response rate with pauses after reinforcement. Example: piecework—a factory worker getting paid for every x number of items manufactured.
  • Variable ratio: reinforcement is delivered after an unpredictable number of responses (e.g., after 1, 4, 5, and 9 responses). Result: high and steady response rate. Example: gambling.

Table 6.3

Now let’s combine these four terms. A fixed interval reinforcement schedule is when behavior is rewarded after a set amount of time. For example, June undergoes major surgery in a hospital. During recovery, she is expected to experience pain and will require prescription medications for pain relief. June is given an IV drip with a patient-controlled painkiller. Her doctor sets a limit: one dose per hour. June pushes a button when pain becomes difficult, and she receives a dose of medication. Since the reward (pain relief) only occurs on a fixed interval, there is no point in exhibiting the behavior when it will not be rewarded.

With a variable interval reinforcement schedule, the person or animal gets the reinforcement based on varying amounts of time, which are unpredictable. Say that Manuel is the manager at a fast-food restaurant. Every once in a while someone from the quality control division comes to Manuel’s restaurant. If the restaurant is clean and the service is fast, everyone on that shift earns a $20 bonus. Manuel never knows when the quality control person will show up, so he always tries to keep the restaurant clean and ensures that his employees provide prompt and courteous service. His productivity regarding prompt service and a clean restaurant is steady because he wants his crew to earn the bonus.

With a fixed ratio reinforcement schedule, there are a set number of responses that must occur before the behavior is rewarded. Carla sells glasses at an eyeglass store, and she earns a commission every time she sells a pair of glasses. She always tries to sell people more pairs of glasses, including prescription sunglasses or a backup pair, so she can increase her commission. She does not care if the person really needs the prescription sunglasses; Carla just wants her bonus. The quality of what Carla sells does not matter because her commission is not based on quality; it’s only based on the number of pairs sold. This distinction in the quality of performance can help determine which reinforcement method is most appropriate for a particular situation. Fixed ratios are better suited to optimize the quantity of output, whereas a fixed interval, in which the reward is not quantity based, can lead to a higher quality of output.

In a variable ratio reinforcement schedule, the number of responses needed for a reward varies. This is the most powerful partial reinforcement schedule. An example of the variable ratio reinforcement schedule is gambling. Imagine that Sarah—generally a smart, thrifty woman—visits Las Vegas for the first time. She is not a gambler, but out of curiosity she puts a quarter into the slot machine, and then another, and another. Nothing happens. Two dollars in quarters later, her curiosity is fading, and she is just about to quit. But then, the machine lights up, bells go off, and Sarah gets 50 quarters back. That’s more like it! Sarah gets back to inserting quarters with renewed interest, and a few minutes later she has used up all her gains and is $10 in the hole. Now might be a sensible time to quit. And yet, she keeps putting money into the slot machine because she never knows when the next reinforcement is coming. She keeps thinking that with the next quarter she could win $50, or $100, or even more. Because the reinforcement schedule in most types of gambling has a variable ratio schedule, people keep trying and hoping that the next time they will win big. This is one of the reasons that gambling is so addictive—and so resistant to extinction.

In operant conditioning, extinction of a reinforced behavior occurs at some point after reinforcement stops, and the speed at which this happens depends on the reinforcement schedule. In a variable ratio schedule, the point of extinction comes very slowly, as described above. But in the other reinforcement schedules, extinction may come quickly. For example, if June presses the button for the pain relief medication before the allotted time her doctor has approved, no medication is administered. She is on a fixed interval reinforcement schedule (dosed hourly), so extinction occurs quickly when reinforcement doesn’t come at the expected time. Among the reinforcement schedules, variable ratio is the most productive and the most resistant to extinction. Fixed interval is the least productive and the easiest to extinguish (Figure 6.13).
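
For readers who like to tinker, the sketch below simulates when the two ratio schedules deliver a reinforcer. It is a simplified illustration with arbitrary numbers, not a research tool; interval schedules, which watch the clock rather than count responses, are omitted for brevity.

    import random

    rng = random.Random(0)

    def fixed_ratio(n):
        """Reinforce every n-th response, like piecework pay."""
        count = 0
        def respond():
            nonlocal count
            count += 1
            if count >= n:
                count = 0
                return True   # reinforcer delivered
            return False
        return respond

    def variable_ratio(mean_n):
        """Reinforce after an unpredictable number of responses, like a slot machine."""
        count, required = 0, rng.randint(1, 2 * mean_n - 1)
        def respond():
            nonlocal count, required
            count += 1
            if count >= required:
                count, required = 0, rng.randint(1, 2 * mean_n - 1)
                return True
            return False
        return respond

    piecework = fixed_ratio(10)
    slot_machine = variable_ratio(mean_n=5)
    print(sum(piecework() for _ in range(100)), "paychecks for 100 items")  # exactly 10
    print(sum(slot_machine() for _ in range(100)), "wins in 100 plays")     # about 20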

A graph has an x-axis labeled “Time” and a y-axis labeled “Cumulative number of responses.” Two lines labeled “Variable Ratio” and “Fixed Ratio” have similar, steep slopes. The variable ratio line remains straight and is marked in random points where reinforcement occurs. The fixed ratio line has consistently spaced marks indicating where reinforcement has occurred, but after each reinforcement, there is a small drop in the line before it resumes its overall slope. Two lines labeled “Variable Interval” and “Fixed Interval” have similar slopes at roughly a 45-degree angle. The variable interval line remains straight and is marked in random points where reinforcement occurs. The fixed interval line has consistently spaced marks indicating where reinforcement has occurred, but after each reinforcement, there is a drop in the line.
Figure 6.13 The four reinforcement schedules yield different response patterns. The variable ratio schedule is unpredictable and yields high and steady response rates, with little if any pause after reinforcement (e.g., gambler). A fixed ratio schedule is predictable and produces a high response rate, with a short pause after reinforcement (e.g., eyeglass saleswoman). The variable interval schedule is unpredictable and produces a moderate, steady response rate (e.g., restaurant manager). The fixed interval schedule yields a scallop-shaped response pattern, reflecting a significant pause after reinforcement (e.g., surgery patient).

CONNECT THE CONCEPTS: Gambling and the Brain

Skinner (1953) stated, “If the gambling establishment cannot persuade a patron to turn over money with no return, it may achieve the same effect by returning part of the patron’s money on a variable-ratio schedule” (p. 397).

Skinner uses gambling as an example of the power of the variable-ratio reinforcement schedule for maintaining behavior even during long periods without any reinforcement. In fact, Skinner was so confident in his knowledge of gambling addiction that he even claimed he could turn a pigeon into a pathological gambler (“Skinner’s Utopia,” 1971). It is indeed true that variable-ratio schedules keep behavior quite persistent—just imagine the frequency of a child’s tantrums if a parent gives in even once to the behavior. The occasional reward makes it almost impossible to stop the behavior.

Recent research in rats has failed to support Skinner’s idea that training on variable-ratio schedules alone causes pathological gambling (Laskowski et al., 2019). However, other research suggests that gambling does seem to work on the brain in the same way as most addictive drugs, and so there may be some combination of brain chemistry and reinforcement schedule that could lead to problem gambling (Figure 6.14). Specifically, modern research shows the connection between gambling and the activation of the reward centers of the brain that use the neurotransmitter (brain chemical) dopamine (Murch & Clark, 2016). Interestingly, gamblers don’t even have to win to experience the “rush” of dopamine in the brain. “Near misses,” or almost winning but not actually winning, also have been shown to increase activity in the ventral striatum and other brain reward centers that use dopamine (Chase & Clark, 2010). These brain effects are almost identical to those produced by addictive drugs like cocaine and heroin (Murch & Clark, 2016). Based on the neuroscientific evidence showing these similarities, the DSM-5 now considers gambling an addiction, while earlier versions of the DSM classified gambling as an impulse control disorder.

A photograph shows four digital gaming machines.
Figure 6.14 Some research suggests that pathological gamblers use gambling to compensate for abnormally low levels of the hormone norepinephrine, which is associated with stress and is secreted in moments of arousal and thrill. (credit: Ted Murphy)

In addition to dopamine, gambling also appears to involve other neurotransmitters, including norepinephrine and serotonin (Potenza, 2013). Norepinephrine is secreted when a person feels stress, arousal, or thrill. It may be that pathological gamblers use gambling to increase their levels of this neurotransmitter. Deficiencies in serotonin might also contribute to compulsive behavior, including a gambling addiction (Potenza, 2013).

It may be that pathological gamblers’ brains are different than those of other people, and perhaps this difference may somehow have led to their gambling addiction, as these studies seem to suggest. However, it is very difficult to ascertain the cause because it is impossible to conduct a true experiment (it would be unethical to try to turn randomly assigned participants into problem gamblers). Therefore, it may be that causation actually moves in the opposite direction—perhaps the act of gambling somehow changes neurotransmitter levels in some gamblers’ brains. It also is possible that some overlooked factor, or confounding variable, played a role in both the gambling addiction and the differences in brain chemistry.


Cognition and Latent Learning

Strict behaviorists like Watson and Skinner focused exclusively on studying behavior rather than cognition (such as thoughts and expectations). In fact, Skinner was such a staunch believer that cognition didn’t matter that his ideas were considered radical behaviorism. Skinner considered the mind a “black box”—something completely unknowable—and, therefore, something not to be studied. However, another behaviorist, Edward C. Tolman, had a different opinion. Tolman’s experiments with rats demonstrated that organisms can learn even if they do not receive immediate reinforcement (Tolman & Honzik, 1930; Tolman et al., 1946). This finding was in conflict with the prevailing idea at the time that reinforcement must be immediate in order for learning to occur, thus suggesting a cognitive aspect to learning.

In the experiments, Tolman placed hungry rats in a maze with no reward for finding their way through it. He also studied a comparison group that was rewarded with food at the end of the maze. As the unreinforced rats explored the maze, they developed a cognitive map: a mental picture of the layout of the maze (Figure 6.15). After 10 sessions in the maze without reinforcement, food was placed in a goal box at the end of the maze. As soon as the rats became aware of the food, they were able to find their way through the maze quickly, just as quickly as the comparison group, which had been rewarded with food all along. This is known as latent learning: learning that occurs but is not observable in behavior until there is a reason to demonstrate it.

An illustration shows three rats in a maze, with a starting point and food at the end.
Figure 6.15 Psychologist Edward Tolman found that rats use cognitive maps to navigate through a maze. Have you ever worked your way through various levels on a video game? You learned when to turn left or right, move up or down. In that case you were relying on a cognitive map, just like the rats in a maze. (credit: modification of work by “FutUndBeidl”/Flickr)

Latent learning also occurs in humans. Children may learn by watching the actions of their parents but only demonstrate it at a later date, when the learned material is needed. For example, suppose that Ravi’s dad drives him to school every day. In this way, Ravi learns the route from his house to his school, but he’s never driven there himself, so he has not had a chance to demonstrate that he’s learned the way. One morning Ravi’s dad has to leave early for a meeting, so he can’t drive Ravi to school. Instead, Ravi follows the same route on his bike that his dad would have taken in the car. This demonstrates latent learning. Ravi had learned the route to school, but had no need to demonstrate this knowledge earlier.

EVERYDAY CONNECTION: This Place Is Like a Maze

Have you ever gotten lost in a building and couldn’t find your way back out? While that can be frustrating, you’re not alone. At one time or another we’ve all gotten lost in places like a museum, hospital, or university library. Whenever we go someplace new, we build a mental representation—or cognitive map—of the location, as Tolman’s rats built a cognitive map of their maze. However, some buildings are confusing because they include many areas that look alike or have short lines of sight. Because of this, it’s often difficult to predict what’s around a corner or decide whether to turn left or right to get out of a building. Psychologist Laura Carlson (2010) suggests that what we place in our cognitive map can impact our success in navigating through the environment. She suggests that paying attention to specific features upon entering a building, such as a picture on the wall, a fountain, a statue, or an escalator, adds information to our cognitive map that can be used later to help find our way out of the building.


7

Memory

A photograph shows a camera and a pile of photographs.
Figure 8.1 Photographs can trigger our memories and bring past experiences back to life. (credit: modification of work by Cory Zanker)

We may be top-notch learners, but if we don’t have a way to store what we’ve learned, what good is the knowledge we’ve gained?

Take a few minutes to imagine what your day might be like if you could not remember anything you had learned. You would have to figure out how to get dressed. What clothing should you wear, and how do buttons and zippers work? You would need someone to teach you how to brush your teeth and tie your shoes. Who would you ask for help with these tasks, since you wouldn’t recognize the faces of these people in your house? Wait . . . is this even your house? Uh oh, your stomach begins to rumble and you feel hungry. You’d like something to eat, but you don’t know where the food is kept or even how to prepare it. Oh dear, this is getting confusing. Maybe it would be best just to go back to bed. A bed . . . what is a bed?

We have an amazing capacity for memory, but how, exactly, do we process and store information? Are there different kinds of memory, and if so, what characterizes the different types? How, exactly, do we retrieve our memories? And why do we forget? This chapter will explore these questions as we learn about memory.

MCCCD Course Competencies

  • Describe cognitive processes including those related to learning, language, and intelligence.
  • Critically evaluate information to help make evidence-based decisions.
  • Apply biopsychosocial principles to real-world situations.
  • Use psychological principles to explain the diversity and complexity of the human experience.

Learning Objectives

By the end of this section, you will be able to:

  • Discuss the three basic functions of memory
  • Describe the three stages of memory storage
  • Describe and distinguish between procedural and declarative memory and semantic and episodic memory

Memory is an information processing system; therefore, we often compare it to a computer. Memory is the set of processes used to encode, store, and retrieve information over different periods of time (Figure 8.2).

A diagram shows three boxes, placed in a row from left to right, respectively titled “Encoding,” “Storage,” and “Retrieval.” One right-facing arrow connects “Encoding” to “Storage” and another connects “Storage” to “Retrieval.”
Figure 8.2 Encoding involves the input of information into the memory system. Storage is the retention of the encoded information. Retrieval, or getting the information out of memory and back into awareness, is the third function.

We get information into our brains through a process called encoding, which is the input of information into the memory system. Once we receive sensory information from the environment, our brains label or code it. We organize the information with other similar information and connect new concepts to existing concepts. Encoding information occurs through automatic processing and effortful processing.

If someone asks you what you ate for lunch today, more than likely you could recall this information quite easily. This is known as automatic processing, or the encoding of details like time, space, frequency, and the meaning of words. Automatic processing is usually done without any conscious awareness. Recalling the last time you studied for a test is another example of automatic processing. But what about the actual test material you studied? It probably required a lot of work and attention on your part in order to encode that information. This is known as effortful processing (Figure 8.3).

A photograph shows a person driving a car.
Figure 8.3 When you first learn new skills such as driving a car, you have to put forth effort and attention to encode information about how to start a car, how to brake, how to handle a turn, and so on. Once you know how to drive, you can encode additional information about this skill automatically. (credit: Robert Couse-Baker)

What are the most effective ways to ensure that important memories are well encoded? Even a simple sentence is easier to recall when it is meaningful (Anderson, 1984). Read the following sentences (Bransford & McCarrell, 1974), then look away and count backwards from 30 by threes to zero, and then try to write down the sentences (no peeking back at this page!).

  1. The notes were sour because the seams split.
  2. The voyage wasn’t delayed because the bottle shattered.
  3. The haystack was important because the cloth ripped.

How well did you do? By themselves, the statements that you wrote down were most likely confusing and difficult for you to recall. Now, try writing them again, using the following prompts: bagpipe, ship christening, and parachutist. Next, count backward from 40 by fours, then check yourself to see how well you recalled the sentences this time. You can see that the sentences are now much more memorable because each of the sentences was placed in context. Material is far better encoded when you make it meaningful.

There are three types of encoding. The encoding of words and their meaning is known as semantic encoding. It was first demonstrated by William Bousfield (1935) in an experiment in which he asked people to memorize words. The 60 words were actually divided into 4 categories of meaning, although the participants did not know this because the words were randomly presented. When they were asked to remember the words, they tended to recall them in categories, showing that they paid attention to the meanings of the words as they learned them.

Visual encoding is the encoding of images, and acoustic encoding is the encoding of sounds, words in particular. To see how visual encoding works, read over this list of words: car, level, dog, truth, book, value. If you were asked later to recall the words from this list, which ones do you think you’d most likely remember? You would probably have an easier time recalling the words car, dog, and book, and a more difficult time recalling the words level, truth, and value. Why is this? Because you can recall images (mental pictures) more easily than words alone. When you read the words car, dog, and book you created images of these things in your mind. These are concrete, high-imagery words. On the other hand, abstract words like level, truth, and value are low-imagery words. High-imagery words are encoded both visually and semantically (Paivio, 1986), thus building a stronger memory.

Now let’s turn our attention to acoustic encoding. You are driving in your car and a song comes on the radio that you haven’t heard in at least 10 years, but you sing along, recalling every word. In the United States, children often learn the alphabet through song, and they learn the number of days in each month through rhyme: “Thirty days hath September, / April, June, and November; / All the rest have thirty-one, / Save February, with twenty-eight days clear, / And twenty-nine each leap year.” These lessons are easy to remember because of acoustic encoding. We encode the sounds the words make. This is one of the reasons why much of what we teach young children is done through song, rhyme, and rhythm.

Which of the three types of encoding do you think would give you the best memory of verbal information? Some years ago, psychologists Fergus Craik and Endel Tulving (1975) conducted a series of experiments to find out. Participants were given words along with questions about them. The questions required the participants to process the words at one of the three levels. The visual processing questions included such things as asking the participants about the font of the letters. The acoustic processing questions asked the participants about the sound or rhyming of the words, and the semantic processing questions asked the participants about the meaning of the words. After participants were presented with the words and questions, they were given an unexpected recall or recognition task.

Words that had been encoded semantically were better remembered than those encoded visually or acoustically. Semantic encoding involves a deeper level of processing than the shallower visual or acoustic encoding. Craik and Tulving concluded that we process verbal information best through semantic encoding, especially if we apply what is called the self-reference effect. The self-reference effect is the tendency for an individual to have better memory for information that relates to oneself in comparison to material that has less personal relevance (Rogers, Kuiper, & Kirker, 1977). Could semantic encoding be beneficial to you as you attempt to memorize the concepts in this chapter?

Storage

Once the information has been encoded, we have to somehow retain it. Our brains take the encoded information and place it in storage. Storage is the creation of a permanent record of information.

In order for a memory to go into storage (i.e., long-term memory), it has to pass through three distinct stages: Sensory Memory, Short-Term Memory, and finally Long-Term Memory. These stages were first proposed by Richard Atkinson and Richard Shiffrin (1968). Their model of human memory (Figure 8.4), known as the Atkinson-Shiffrin model, is based on the belief that we process memories in the same way that a computer processes information.

A flow diagram consists of four boxes with connecting arrows. The first box is labeled “sensory input.” An arrow leads to the second box, which is labeled “sensory memory.” An arrow leads to the third box which is labeled “short-term memory (STM).” An arrow points to the fourth box, labeled “long-term memory (LTM),” and an arrow points in the reverse direction from the fourth to the third box. Above the short-term memory box, an arrow leaves the top-right of the box and curves around to point back to the top-left of the box; this arrow is labeled “rehearsal.” Both the “sensory memory” and “short-term memory” boxes have an arrow beneath them pointing to the text “information not transferred is lost.”
Figure 8.4 According to the Atkinson-Shiffrin model of memory, information passes through three distinct stages in order for it to be stored in long-term memory.

Atkinson and Shiffrin’s model is not the only model of memory. Baddeley and Hitch (1974) proposed a working memory model in which short-term memory has different forms. In their model, storing memories in short-term memory is like opening different files on a computer and adding information. The working memory files hold a limited amount of information. The type of short-term memory (or computer file) depends on the type of information received. There are memories in visual-spatial form, as well as memories of spoken or written material, and they are stored in three short-term systems: a visuospatial sketchpad, an episodic buffer (Baddeley, 2000), and a phonological loop. According to Baddeley and Hitch, a central executive part of memory supervises or controls the flow of information to and from the three short-term systems, and the central executive is responsible for moving information into long-term memory.
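
Continuing the computer analogy, here is a toy sketch of the Atkinson-Shiffrin stages. The items, the attention filter, and the “rehearsed” marker are invented for illustration; the model itself claims only that unattended sensory input is lost, short-term memory is limited, and rehearsal moves information into long-term memory.

    # Toy walk-through of the Atkinson-Shiffrin stages (items invented).
    STM_CAPACITY = 7  # the classic "7 plus or minus 2" estimate (Miller, 1956)

    sensory_input = ["phone buzz", "lecture point", "hallway noise", "key formula"]
    attended = {"lecture point", "key formula"}  # attention filters sensory memory
    rehearsed = {"key formula"}                  # only rehearsed items move onward

    short_term = []
    for item in sensory_input:
        if item in attended:  # unattended sensory input is lost
            short_term = (short_term + [item])[-STM_CAPACITY:]  # oldest items displaced

    long_term = [item for item in short_term if item in rehearsed]
    print("STM:", short_term, "| LTM:", long_term)
    # STM: ['lecture point', 'key formula'] | LTM: ['key formula']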

Sensory Memory

In the Atkinson-Shiffrin model, stimuli from the environment are processed first in sensory memory: storage of brief sensory events, such as sights, sounds, and tastes. It is very brief storage—up to a couple of seconds. We are constantly bombarded with sensory information. We cannot absorb all of it, or even most of it. And most of it has no impact on our lives. For example, what was your professor wearing the last class period? As long as the professor was dressed appropriately, it does not really matter what she was wearing. Sensory information about sights, sounds, smells, and even textures, which we do not view as valuable information, we discard. If we view something as valuable, the information will move into our short-term memory system.

Short-Term Memory

Short-term memory (STM) is a temporary storage system that processes incoming sensory memory. The terms short-term and working memory are sometimes used interchangeably, but they are not exactly the same. Short-term memory is more accurately described as a component of working memory. Short-term memory takes information from sensory memory and sometimes connects that memory to something already in long-term memory. Short-term memory storage lasts 15 to 30 seconds. Think of it as the information you have displayed on your computer screen, such as a document, spreadsheet, or website. Then, information in STM goes to long-term memory (you save it to your hard drive), or it is discarded (you delete a document or close a web browser).

Rehearsal moves information from short-term memory to long-term memory. Active rehearsal is a way of attending to information to move it from short-term to long-term memory. During active rehearsal, you repeat (practice) the information to be remembered. If you repeat it enough, it may be moved into long-term memory. For example, this type of active rehearsal is the way many children learn their ABCs by singing the alphabet song. Alternatively, elaborative rehearsal is the act of linking new information you are trying to learn to existing information that you already know. For example, if you meet someone at a party and your phone is dead but you want to remember his phone number, which starts with area code 203, you might remember that your uncle Abdul lives in Connecticut and has a 203 area code. This way, when you try to remember the phone number of your new prospective friend, you will easily remember the area code. Craik and Lockhart (1972) proposed the levels of processing hypothesis that states the deeper you think about something, the better you remember it.

You may find yourself asking, “How much information can our memory handle at once?” To explore the capacity and duration of your short-term memory, have a partner read the strings of random numbers (Figure 8.5) out loud to you, beginning each string by saying, “Ready?” and ending each by saying, “Recall,” at which point you should try to write down the string of numbers from memory.

A series of numbers includes two rows, with six numbers in each row. From left to right, the numbers increase from four digits to five, six, seven, eight, and nine digits. The first row includes “9754,” “68259,” “913825,” “5316842,” “86951372,” and “719384273,” and the second row includes “6419,” “67148,” “648327,” “5963827,” “51739826,” and “163875942.”
Figure 8.5 Work through this series of numbers using the recall exercise explained above to determine the longest string of digits that you can store.

Note the longest string at which you got the series correct. For most people, the capacity will probably be close to 7 plus or minus 2. In 1956, George Miller reviewed most of the research on the capacity of short-term memory and found that people can retain between 5 and 9 items, so he reported the capacity of short-term memory was the “magic number” 7 plus or minus 2. However, more contemporary research has found working memory capacity is 4 plus or minus 1 (Cowan, 2010). Generally, recall is somewhat better for random numbers than for random letters (Jacobs, 1887) and also often slightly better for information we hear (acoustic encoding) rather than information we see (visual encoding) (Anderson, 1969).
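
If you do not have a partner handy, a few lines of Python can administer the same exercise. This is an informal demonstration, not a standardized test; the string lengths simply mirror Figure 8.5.

    import random

    # Informal digit-span demo mirroring Figure 8.5 (string lengths 4 through 9).
    rng = random.Random()
    for length in range(4, 10):
        digits = "".join(str(rng.randint(0, 9)) for _ in range(length))
        input(f"Ready? Memorize: {digits}   (press Enter, then write it down) ")
    print("Recall check: your span is roughly the longest string you wrote correctly;")
    print("most people land near 7 plus or minus 2 digits.")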

Memory trace decay and interference are two factors that affect short-term memory retention. Peterson and Peterson (1959) investigated short-term memory using three-letter sequences called trigrams (e.g., CLS) that had to be recalled after various time intervals between 3 and 18 seconds. Participants remembered about 80% of the trigrams after a 3-second delay, but only 10% after a delay of 18 seconds, which led them to conclude that short-term memory decays in 18 seconds. During decay, the memory trace becomes less activated over time, and the information is forgotten. However, Keppel and Underwood (1962) examined only the first trials of the trigram task and found that proactive interference also affected short-term memory retention. During proactive interference, previously learned information interferes with the ability to learn new information. Both memory trace decay and proactive interference affect short-term memory. Once information reaches long-term memory, it has to be consolidated at both the synaptic level, which takes a few hours, and into the memory system, which can take weeks or longer.

Long-term Memory

Long-term memory (LTM) is the continuous storage of information. Unlike short-term memory, long-term memory storage capacity is believed to be unlimited. It encompasses all the things you can remember that happened more than just a few minutes ago. One cannot really consider long-term memory without thinking about the way it is organized. Really quickly, what is the first word that comes to mind when you hear “peanut butter”? Did you think of jelly? If you did, you probably have associated peanut butter and jelly in your mind. It is generally accepted that memories are organized in semantic (or associative) networks (Collins & Loftus, 1975). A semantic network consists of concepts, and as you may recall from what you’ve learned about memory, concepts are categories or groupings of linguistic information, images, ideas, or memories, such as life experiences. Although individual experiences and expertise can affect concept arrangement, concepts are believed to be arranged hierarchically in the mind (Anderson & Reder, 1999; Johnson & Mervis, 1997, 1998; Palmer et al., 1989; Rosch et al., 1976; Tanaka & Taylor, 1991). Related concepts are linked, and the strength of the link depends on how often two concepts have been associated.

Semantic networks differ depending on personal experiences. Importantly for memory, activating any part of a semantic network also activates the concepts linked to that part to a lesser degree. The process is known as spreading activation (Collins & Loftus, 1975). If one part of a network is activated, it is easier to access the associated concepts because they are already partially activated. When you remember or recall something, you activate a concept, and the related concepts are more easily remembered because they are partially activated. However, the activations do not spread in just one direction. When you remember something, you usually have several routes to get the information you are trying to access, and the more links you have to a concept, the better your chances of remembering.
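
A tiny sketch can make spreading activation concrete. The network below is a made-up fragment, and real models use weighted links and repeated rounds of spreading, but the core idea is the same: activating a concept partially activates its neighbors.

    # Minimal spreading-activation sketch over an invented semantic network.
    network = {
        "peanut butter": ["jelly", "bread", "sandwich"],
        "jelly": ["peanut butter", "jam"],
        "picnic": ["blanket", "plate", "basket"],
    }

    def activate(concept, strength=1.0, decay=0.5):
        activation = {concept: strength}
        for neighbor in network.get(concept, []):
            activation[neighbor] = strength * decay  # neighbors get partial activation
        return activation

    print(activate("peanut butter"))
    # {'peanut butter': 1.0, 'jelly': 0.5, 'bread': 0.5, 'sandwich': 0.5}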

There are two types of long-term memory: explicit and implicit (Figure 8.6). Understanding the difference between explicit memory and implicit memory is important because aging, particular types of brain trauma, and certain disorders can impact explicit and implicit memory in different ways. Explicit memories are those we consciously try to remember, recall, and report. For example, if you are studying for your chemistry exam, the material you are learning will be part of your explicit memory. In keeping with the computer analogy, some information in your long-term memory would be like the information you have saved on the hard drive. It is not there on your desktop (your short-term memory), but most of the time you can pull up this information when you want it. Not all long-term memories are strong memories, and some memories can only be recalled using prompts. For example, you might easily recall a fact, such as the capital of the United States, but you might struggle to recall the name of the restaurant at which you had dinner when you visited a nearby city last summer. A prompt, such as that the restaurant was named after its owner, might help you recall the name of the restaurant. Explicit memory is sometimes referred to as declarative memory because it can be put into words. Explicit memory is divided into episodic memory and semantic memory.

Episodic memory is information about events we have personally experienced (i.e., an episode). For instance, the memory of your last birthday is an episodic memory. Usually, episodic memory is reported as a story. The concept of episodic memory was first proposed in the 1970s (Tulving, 1972). Since then, Tulving and others have reformulated the theory, and currently, scientists believe that episodic memory is memory about happenings in particular places at particular times—the what, where, and when of an event (Tulving, 2002). It involves recollection of visual imagery as well as the feeling of familiarity (Hassabis & Maguire, 2007). Semantic memory is knowledge about words, concepts, and language-based knowledge and facts. Semantic memory is typically reported as facts. Semantic means having to do with language and knowledge about language. For example, answers to questions like “What is the definition of psychology?” and “Who was the first African American president of the United States?” are stored in your semantic memory.

Implicit memories are long-term memories that are not part of our consciousness. Although implicit memories are learned outside of our awareness and cannot be consciously recalled, implicit memory is demonstrated in the performance of some task (Roediger, 1990; Schacter, 1987). Implicit memory has been studied with cognitive demand tasks, such as performance on artificial grammars (Reber, 1976), word memory (Jacoby, 1983; Jacoby & Witherspoon, 1982), and learning unspoken and unwritten contingencies and rules (Greenspoon, 1955; Giddan & Eriksen, 1959; Krieckhaus & Eriksen, 1960). Returning to the computer metaphor, implicit memories are like a program running in the background, and you are not aware of their influence. Implicit memories can influence observable behaviors as well as cognitive tasks. In either case, you usually cannot put the memory into words that adequately describe the task. There are several types of implicit memories, including procedural, priming, and emotional conditioning.

A diagram consists of three rows of boxes. The box in the top row is labeled “long-term memory;” a line from the box separates into two lines leading to two boxes on the second row, labeled “explicit memory” and “implicit memory.” From each of the second row boxes, lines split and lead to additional boxes. From the “explicit memory” box are two boxes labeled “episodic (events and experiences)” and “semantic (concepts and facts).” From the “implicit memory” box are three boxes labeled “procedural (How to do things),” “Priming (stimulus exposure affects responses to a later stimulus),” and “emotional conditioning (Classically conditioned emotional responses).”
Figure 8.6 There are two components of long-term memory: explicit and implicit. Explicit memory includes episodic and semantic memory. Implicit memory includes procedural memory and things learned through conditioning.

Implicit procedural memory is often studied using observable behaviors (Adams, 1957; Lacey & Smith, 1954; Lazarus & McCleary, 1951). Implicit procedural memory stores information about the way to do something, and it is the memory for skilled actions, such as brushing your teeth, riding a bicycle, or driving a car. You were probably not that good at riding a bicycle or driving a car the first time you tried, but you were much better after doing those things for a year. Your improved bicycle riding was due to learning to balance. You likely thought about staying upright in the beginning, but now you just do it. Moreover, you probably are good at staying balanced, but cannot tell someone the exact way you do it. Similarly, when you first learned to drive, you probably thought about a lot of things that you now just do without much thought. When you first learned to do these tasks, someone may have told you how to do them, but everything you have learned since those instructions that you cannot readily explain to someone else is implicit memory.

Implicit priming is another type of implicit memory (Schacter, 1992). During priming, exposure to a stimulus affects the response to a later stimulus. Stimuli can vary and may include words, pictures, and other stimuli to elicit a response or increase recognition. For instance, some people really enjoy picnics. They love going into nature, spreading a blanket on the ground, and eating a delicious meal. Now, unscramble the following letters to make a word.


AETPL

What word did you come up with? Chances are good that it was “plate.”

Had you read, “Some people really enjoy growing flowers. They love going outside to their garden, fertilizing their plants, and watering their flowers,” you probably would have come up with the word “petal” instead of plate.

Do you recall the earlier discussion of semantic networks? The reason people are more likely to come up with “plate” after reading about a picnic is that plate is associated (linked) with picnic. Plate was primed by activating the semantic network. Similarly, “petal” is linked to flower and is primed by flower. Priming is also the reason you probably said jelly in response to peanut butter.

Implicit emotional conditioning is the type of memory involved in classically conditioned emotional responses (Olson & Fazio, 2001). These emotional relationships cannot be reported or recalled but can be associated with different stimuli. For example, specific smells can cause specific emotional responses for some people. If there is a smell that makes you feel positive and nostalgic, and you don’t know where that response comes from, it is an implicit emotional response. Similarly, most people have a song that causes a specific emotional response. That song’s effect could be an implicit emotional memory (Yang et al., 2011).

EVERYDAY CONNECTION: Can You Remember Everything You Ever Did or Said?

Episodic memories are also called autobiographical memories. Let’s quickly test your autobiographical memory. What were you wearing exactly five years ago today? What did you eat for lunch on April 10, 2019? You probably find it difficult, if not impossible, to answer these questions. Can you remember every event you have experienced over the course of your life—meals, conversations, clothing choices, weather conditions, and so on? Most likely none of us could even come close to answering these questions; however, American actress Marilu Henner, best known for the television show Taxi, can remember. She has an amazing and highly superior autobiographical memory known as hyperthymesia.

Very few people can recall events in this way; right now, fewer than 20 have been identified as having this ability, and only a few have been studied (Parker et al., 2006). And although hyperthymesia normally appears in adolescence, two children in the United States appear to have memories from well before their tenth birthdays.

LINK TO LEARNING: Watch this video about superior autobiographical memory from the television news show 60 Minutes to learn more.

Retrieval

So you have worked hard to encode (via effortful processing) and store some important information for your upcoming final exam. How do you get that information back out of storage when you need it? The act of getting information out of memory storage and back into conscious awareness is known as retrieval. This would be similar to finding and opening a paper you had previously saved on your computer’s hard drive. Now it’s back on your desktop, and you can work with it again. Our ability to retrieve information from long-term memory is vital to our everyday functioning. You must be able to retrieve information from memory in order to do everything from knowing how to brush your hair and teeth, to driving to work, to knowing how to perform your job once you get there.

There are three ways you can retrieve information out of your long-term memory storage system: recall, recognition, and relearning. Recall is what we most often think about when we talk about memory retrieval: it means you can access information without cues. For example, you would use recall for an essay test. Recognition happens when you identify information that you have previously learned after encountering it again. It involves a process of comparison. When you take a multiple-choice test, you are relying on recognition to help you choose the correct answer. Here is another example. Let’s say you graduated from high school 10 years ago, and you have returned to your hometown for your 10-year reunion. You may not be able to recall all of your classmates, but you recognize many of them based on their yearbook photos.

The third form of retrieval is relearning, and it’s just what it sounds like. It involves learning information that you previously learned. Whitney took Spanish in high school, but after high school she did not have the opportunity to speak Spanish. Whitney is now 31, and her company has offered her an opportunity to work in their Mexico City office. In order to prepare herself, she enrolls in a Spanish course at the local community center. She’s surprised at how quickly she’s able to pick up the language after not speaking it for 13 years; this is an example of relearning.

Learning Objectives

By the end of this section, you will be able to:

  • Explain the brain functions involved in memory
  • Recognize the roles of the hippocampus, amygdala, and cerebellum

Are memories stored in just one part of the brain, or are they stored in many different parts of the brain? Karl Lashley began exploring this problem, about 100 years ago, by making lesions in the brains of animals such as rats and monkeys. He was searching for evidence of the engram: the group of neurons that serve as the “physical representation of memory” (Josselyn, 2010). First, Lashley (1950) trained rats to find their way through a maze. Then, he used the tools available at the time (in this case, a soldering iron) to create lesions in the rats’ brains, specifically in the cerebral cortex. He did this because he was trying to erase the engram, or the original memory trace that the rats had of the maze.

Lashley did not find evidence of the engram, and the rats were still able to find their way through the maze, regardless of the size or location of the lesion. Based on his creation of lesions and the animals’ reaction, he formulated the equipotentiality hypothesis: if part of one area of the brain involved in memory is damaged, another part of the same area can take over that memory function (Lashley, 1950). Although Lashley’s early work did not confirm the existence of the engram, modern psychologists are making progress locating it. For example, Eric Kandel has spent decades studying the synapse and its role in controlling the flow of information through neural circuits needed to store memories (Mayford et al., 2012).

Many scientists believe that the entire brain is involved with memory. However, since Lashley’s research, other scientists have been able to look more closely at the brain and memory. They have argued that memory is located in specific parts of the brain, and specific neurons can be recognized for their involvement in forming memories. The main parts of the brain involved with memory are the amygdala, the hippocampus, the cerebellum, and the prefrontal cortex (Figure 8.8).

An illustration of a brain shows the location of the amygdala, hippocampus, cerebellum, and prefrontal cortex.
Figure 8.8 The amygdala is involved in fear and fear memories. The hippocampus is associated with declarative and episodic memory as well as recognition memory. The cerebellum plays a role in processing procedural memories, such as how to play the piano. The prefrontal cortex appears to be involved in remembering semantic tasks.

The Amygdala

First, let’s look at the role of the amygdala in memory formation. The main job of the amygdala is to regulate emotions, such as fear and aggression (Figure 8.8). The amygdala plays a part in how memories are stored because storage is influenced by stress hormones. For example, researchers experimented with rats and the fear response (Josselyn, 2010). Using Pavlovian conditioning, a neutral tone was paired with a foot shock to the rats. This produced a fear memory in the rats. After being conditioned, each time they heard the tone, they would freeze (a defense response in rats), indicating a memory for the impending shock. Then the researchers induced cell death in neurons in the lateral amygdala, which is the specific area of the brain responsible for fear memories. They found that the fear memory faded (became extinct). Because of its role in processing emotional information, the amygdala is also involved in memory consolidation: the process of transferring new learning into long-term memory. The amygdala seems to facilitate the encoding of memories at a deeper level when the event is emotionally arousing.

The Hippocampus

Another group of researchers also experimented with rats to learn how the hippocampus functions in memory processing (Figure 8.8). They created lesions in the hippocampi of the rats, and found that the rats demonstrated memory impairment on various tasks, such as object recognition and maze running. They concluded that the hippocampus is involved in memory, specifically normal recognition memory as well as spatial memory (when the memory tasks are like recall tests) (Clark et al., 2000). Another job of the hippocampus is to project information to cortical regions that give memories meaning and connect them with other memories. Like the amygdala, it also plays a part in memory consolidation: the process of transferring new learning into long-term memory.

Injury to this area leaves us unable to process new declarative memories. One famous patient, known for years only as H. M., had both his left and right temporal lobes (hippocampi) removed in an attempt to help control the seizures he had been suffering from for years (Corkin et al., 1997). As a result, his declarative memory was significantly affected, and he could not form new semantic knowledge. He lost the ability to form new memories, yet he could still remember information and events that had occurred prior to the surgery.

The Cerebellum and Prefrontal Cortex

Although the hippocampus seems to be more of a processing area for explicit memories, you could still lose it and be able to create implicit memories (procedural memory, motor learning, and classical conditioning), thanks to your cerebellum (Figure 8.8). For example, one classical conditioning experiment is to accustom subjects to blink when they are given a puff of air to the eyes. When researchers damaged the cerebellums of rabbits, they discovered that the rabbits were not able to learn the conditioned eye-blink response (Steinmetz, 1999; Green & Woodruff-Pak, 2000).

Other researchers have used brain scans, including positron emission tomography (PET) scans, to learn how people process and retain information. From these studies, it seems the prefrontal cortex is involved. In one study, participants had to complete two different tasks: either looking for the letter a in words (considered a perceptual task) or categorizing a noun as either living or non-living (considered a semantic task) (Kapur et al., 1994). Participants were then asked which words they had previously seen. Recall was much better for the semantic task than for the perceptual task. According to PET scans, there was much more activation in the left inferior prefrontal cortex in the semantic task. In another study, encoding was associated with left frontal activity, while retrieval of information was associated with the right frontal region (Craik et al., 1999).

Neurotransmitters

There also appear to be specific neurotransmitters involved with the process of memory, such as epinephrine, dopamine, serotonin, glutamate, and acetylcholine (Myhrer, 2003). There continues to be discussion and debate among researchers as to which neurotransmitter plays which specific role (Blockland, 1996). Although we don’t yet know which role each neurotransmitter plays in memory, we do know that communication among neurons via neurotransmitters is critical for developing new memories. Repeated activity by neurons leads to increased neurotransmitters in the synapses and more efficient and more numerous synaptic connections. This is how memory consolidation occurs.

It is also believed that strong emotions trigger the formation of strong memories, and weaker emotional experiences form weaker memories; this is called arousal theory (Christianson, 1992). For example, strong emotional experiences can trigger the release of neurotransmitters, as well as hormones, which strengthen memory; therefore, our memory for an emotional event is usually better than our memory for a non-emotional event. When humans and animals are stressed, the brain secretes more of the neurotransmitter glutamate, which helps them remember the stressful event (McGaugh, 2003). This is clearly evidenced by what is known as the flashbulb memory phenomenon.

A flashbulb memory is an exceptionally clear recollection of an important event, such as a dramatic piece of national news. Most likely you can remember where you were and what you were doing when you learned of such an event. Whether you were at the event itself or only heard the news, the memory is associated with very strong emotions. Can you think of an example of a flashbulb memory from your own life?

Learning Objectives

By the end of this section, you will be able to:

  • Compare and contrast the two types of amnesia
  • Discuss the unreliability of eyewitness testimony
  • Discuss encoding failure
  • Discuss the various memory errors
  • Compare and contrast the two types of interference

You may pride yourself on your amazing ability to remember the birthdates and ages of all of your friends and family members, or you may be able to recall vivid details of your 5th birthday party at Chuck E. Cheese’s. However, all of us have at times felt frustrated, and even embarrassed, when our memories have failed us. There are several reasons why this happens.

Amnesia

Amnesia is the loss of long-term memory that occurs as the result of disease, physical trauma, or psychological trauma. Endel Tulving (2002) and his colleagues at the University of Toronto studied K. C. for years. K. C. suffered a traumatic head injury in a motorcycle accident and then had severe amnesia. Tulving writes, “the outstanding fact about K.C.’s mental make-up is his utter inability to remember any events, circumstances, or situations from his own life. His episodic amnesia covers his whole life, from birth to the present. The only exception is the experiences that, at any time, he has had in the last minute or two” (Tulving, 2002, p. 14).

Anterograde Amnesia

There are two common types of amnesia: anterograde amnesia and retrograde amnesia (Figure 8.10). Anterograde amnesia is commonly caused by brain trauma, such as a blow to the head. With anterograde amnesia, you cannot remember new information, although you can remember information and events that happened prior to your injury. The hippocampus is usually affected (McLeod, 2011). This suggests that damage to the brain has resulted in the inability to transfer information from short-term to long-term memory; that is, the inability to consolidate memories.

Many people with this form of amnesia are unable to form new episodic or semantic memories, but are still able to form new procedural memories (Bayley & Squire, 2002). This was true of H. M., who was discussed earlier. The brain damage caused by his surgery resulted in anterograde amnesia. H. M. would read the same magazine over and over, having no memory of ever reading it—it was always new to him. He also could not remember people he had met after his surgery. If you were introduced to H. M. and then you left the room for a few minutes, he would not know you upon your return and would introduce himself to you again. However, when presented with the same puzzle several days in a row, although he did not remember having seen the puzzle before, his speed at solving it became faster each day (because of relearning) (Corkin, 1965, 1968).

A single-line flow diagram compares two types of amnesia. In the center is a box labeled “event” with arrows extending from both sides. Extending to the left is an arrow pointing left to the word “past”; the arrow is labeled “retrograde amnesia.” Extending to the right is an arrow pointing right to the word “present”; the arrow is labeled “anterograde amnesia.”
Figure 8.10 This diagram illustrates the timeline of retrograde and anterograde amnesia. Memory problems that extend back in time before the injury and prevent retrieval of information previously stored in long-term memory are known as retrograde amnesia. Conversely, memory problems that extend forward in time from the point of injury and prevent the formation of new memories are called anterograde amnesia.

Retrograde Amnesia

Retrograde amnesia is loss of memory for events that occurred prior to the trauma. People with retrograde amnesia cannot remember some or even all of their past. They have difficulty remembering episodic memories. What if you woke up in the hospital one day and there were people surrounding your bed claiming to be your spouse, your children, and your parents? The trouble is you don’t recognize any of them. You were in a car accident, suffered a head injury, and now have retrograde amnesia. You don’t remember anything about your life prior to waking up in the hospital. This may sound like the stuff of Hollywood movies, and Hollywood has been fascinated with the amnesia plot for nearly a century, going all the way back to the film Garden of Lies from 1915 to more recent movies such as the Jason Bourne spy thrillers. However, for real-life sufferers of retrograde amnesia, like former NFL football player Scott Bolzan, the story is not a Hollywood movie. Bolzan fell, hit his head, and deleted 46 years of his life in an instant. He is now living with one of the most extreme cases of retrograde amnesia on record.

Memory Construction and Reconstruction

The formulation of new memories is sometimes called construction, and the process of bringing up old memories is called reconstruction. Yet as we retrieve our memories, we also tend to alter and modify them. A memory pulled from long-term storage into short-term memory is flexible. New events can be added and we can change what we think we remember about past events, resulting in inaccuracies and distortions. People may not intend to distort facts, but it can happen in the process of retrieving old memories and combining them with new memories (Roediger & DeSoto, 2015).

Suggestibility

When someone witnesses a crime, that person’s memory of the details of the crime is very important in catching the suspect. Because memory is so fragile, witnesses can be easily (and often accidentally) misled due to the problem of suggestibility. Suggestibility describes the effects of misinformation from external sources that lead to the creation of false memories. In the fall of 2002, a sniper in the DC area shot people as they pumped gas, left a Home Depot, and walked down the street. These attacks went on in a variety of places for over three weeks and resulted in the deaths of ten people. During this time, as you can imagine, people were terrified to leave their homes, go shopping, or even walk through their neighborhoods. Police officers and the FBI worked frantically to solve the crimes, and a tip hotline was set up. Law enforcement received over 140,000 tips, which resulted in approximately 35,000 possible suspects (Newseum, n.d.).

Most of the tips were dead ends, until a white van was spotted at the site of one of the shootings. The police chief went on national television with a picture of the white van. After the news conference, several other eyewitnesses called to say that they too had seen a white van fleeing from the scene of the shooting. At the time, there were more than 70,000 white vans in the area. Police officers, as well as the general public, focused almost exclusively on white vans because they believed the eyewitnesses. Other tips were ignored. When the suspects were finally caught, they were driving a blue sedan.

As illustrated by this example, we are vulnerable to the power of suggestion, simply based on something we see on the news. Or we can claim to remember something that in fact is only a suggestion someone made. It is the suggestion that is the cause of the false memory.

Eyewitness Misidentification

Even though memory and the process of reconstruction can be fragile, police officers, prosecutors, and the courts often rely on eyewitness identification and testimony in the prosecution of criminals. However, faulty eyewitness identification and testimony can lead to wrongful convictions (Figure 8.11).

A bar graph is titled “Leading cause of wrongful conviction in DNA exoneration cases (source: Innocence Project).” The x-axis is labeled “leading cause,” and the y-axis is labeled “percentage of wrongful convictions (first 239 DNA exonerations).” Four bars show data: “eyewitness misidentification” is the leading cause in about 75% of cases, “forensic science” in about 49% of cases, “false confession” in about 23% of cases, and “informant” in about 18% of cases.
Figure 8.11 In studying cases where DNA evidence has exonerated people from crimes, the Innocence Project discovered that eyewitness misidentification is the leading cause of wrongful convictions (Benjamin N. Cardozo School of Law, Yeshiva University, 2009).

How does this happen? In 1984, Jennifer Thompson, then a 22-year-old college student in North Carolina, was brutally raped at knifepoint. As she was being raped, she tried to memorize every detail of her rapist’s face and physical characteristics, vowing that if she survived, she would help get him convicted. After the police were contacted, a composite sketch was made of the suspect, and Jennifer was shown six photos. She chose two, one of which was of Ronald Cotton. After looking at the photos for 4–5 minutes, she said, “Yeah. This is the one,” and then she added, “I think this is the guy.” When the detective asked, “You’re sure? Positive?” she said that it was him. Then she asked the detective if she did OK, and he reinforced her choice by telling her she did great. These kinds of unintended cues and suggestions by police officers can lead witnesses to identify the wrong suspect. The district attorney was concerned about her lack of certainty the first time, so she viewed a lineup of seven men. She said she was trying to decide between numbers 4 and 5, finally deciding that Cotton, number 5, “Looks most like him.” He was 22 years old.

By the time the trial began, Jennifer Thompson had absolutely no doubt that she was raped by Ronald Cotton. She testified at the court hearing, and her testimony was compelling enough that it helped convict him. How did she go from, “I think it’s the guy” and it “Looks most like him,” to such certainty? Gary Wells and Deah Quinlivan (2009) assert it’s suggestive police identification procedures, such as stacking lineups to make the defendant stand out, telling the witness which person to identify, and confirming witnesses’ choices by telling them “Good choice,” or “You picked the guy.”

After Cotton was convicted of the rape, he was sent to prison for life plus 50 years. After 4 years in prison, he was able to get a new trial. Jennifer Thompson once again testified against him. This time Ronald Cotton was given two life sentences. After he had served 11 years in prison, DNA evidence finally demonstrated that Ronald Cotton was innocent: he had spent over a decade in prison for a crime he did not commit.

The Misinformation Effect

Cognitive psychologist Elizabeth Loftus has conducted extensive research on memory. She has studied false memories as well as recovered memories of childhood sexual abuse. Loftus also developed the misinformation effect paradigm, which holds that after exposure to additional and possibly inaccurate information, a person may misremember the original event.

According to Loftus, an eyewitness’s memory of an event is very flexible due to the misinformation effect. To test this theory, Loftus and John Palmer (1974) asked 45 U.S. college students to estimate the speed of cars using different forms of questions (Figure 8.12). The participants were shown films of car accidents and were asked to play the role of the eyewitness and describe what happened. They were asked, “About how fast were the cars going when they (smashed, collided, bumped, hit, contacted) each other?” The participants estimated the speed of the cars based on the verb used.

Participants who heard the word “smashed” estimated that the cars were traveling at a much higher speed than participants who heard the word “contacted.” The implied information about speed, based on the verb they heard, had an effect on the participants’ memory of the accident. In a follow-up one week later, participants were asked if they saw any broken glass (none was shown in the accident pictures). Participants who had been in the “smashed” group were more than twice as likely to indicate that they did remember seeing glass. Loftus and Palmer demonstrated that a leading question encouraged them to not only remember the cars were going faster, but to also falsely remember that they saw broken glass.

Photograph A shows two cars that have crashed into each other. Part B is a bar graph titled “perceived speed based on questioner’s verb (source: Loftus and Palmer, 1974).” The x-axis is labeled “questioner’s verb,” and the y-axis is labeled “perceived speed (mph).” Five bars show data: “smashed” was perceived at about 41 mph, “collided” at about 39 mph, “bumped” at about 37 mph, “hit” at about 34 mph, and “contacted” at about 32 mph.
Figure 8.12 When people are asked leading questions about an event, their memory of the event may be altered. (credit a: modification of work by Rob Young)
LINK TO LEARNING: Watch this video that explores the idea of fake memories.

An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://open.maricopa.edu/intropsychme/?p=33#h5p-49

 

Controversies over Repressed and Recovered Memories

Other researchers have described how whole events, not just words, can be falsely recalled, even when they did not happen. The idea that memories of traumatic events could be repressed has been a theme in the field of psychology, beginning with Sigmund Freud, and the controversy surrounding the idea continues today.

Recall of false autobiographical memories is called false memory syndrome. This syndrome has received a lot of publicity, particularly as it relates to memories of events that do not have independent witnesses—often the only witnesses to the abuse are the perpetrator and the victim (e.g., sexual abuse).

On one side of the debate are those who believe that memories of childhood abuse can be repressed and then recovered years after the events occurred. These researchers and clinicians argue that some children’s experiences have been so traumatizing and distressing that the children must lock those memories away in order to lead some semblance of a normal life. They believe that repressed memories can be locked away for decades and later recalled intact through hypnosis and guided imagery techniques (Devilly, 2007).

Research suggests that having no memory of childhood sexual abuse is quite common in adults. For instance, one large-scale study conducted by John Briere and Jon Conte (1993) revealed that 59% of 450 men and women who were receiving treatment for sexual abuse that had occurred before age 18 had forgotten their experiences. Ross Cheit (2007) suggested that repressing these memories created psychological distress in adulthood. The Recovered Memory Project was created so that victims of childhood sexual abuse can recall these memories and allow the healing process to begin (Cheit, 2007; Devilly, 2007).

On the other side, Loftus has challenged the idea that individuals can repress memories of traumatic events from childhood, including sexual abuse, and then recover those memories years later through therapeutic techniques such as hypnosis, guided visualization, and age regression.

Loftus is not saying that childhood sexual abuse doesn’t happen, but she does question whether or not those memories are accurate, and she is skeptical of the questioning process used to access these memories, given that even the slightest suggestion from the therapist can lead to misinformation effects. For example, researchers Stephen Ceci and Maggie Bruck (1993, 1995) asked three-year-old children to use an anatomically correct doll to show where their pediatricians had touched them during an exam. Fifty-five percent of the children pointed to the genital/anal area on the dolls, even when they had not received any form of genital exam.

Ever since Loftus published her first studies on the suggestibility of eyewitness testimony in the 1970s, social scientists, police officers, therapists, and legal practitioners have been aware of the flaws in interview practices. Consequently, steps have been taken to decrease the suggestibility of witnesses. One way is to modify how witnesses are questioned. When interviewers use neutral and less leading language, children more accurately recall what happened and who was involved (Goodman, 2006; Pipe, 1996; Pipe et al., 2004). Another change is in how police lineups are conducted. It’s recommended that a blind photo lineup be used. This way the person administering the lineup doesn’t know which photo belongs to the suspect, minimizing the possibility of giving leading cues. Additionally, judges in some states now inform jurors about the possibility of misidentification. Judges can also suppress eyewitness testimony if they deem it unreliable.

Forgetting

“I’ve a grand memory for forgetting,” quipped Robert Louis Stevenson. Forgetting refers to the loss of information from long-term memory. We all forget things, like a loved one’s birthday, someone’s name, or where we put our car keys. As you’ve come to see, memory is fragile, and forgetting can be frustrating and even embarrassing. But why do we forget? To answer this question, we will look at several perspectives on forgetting.

Encoding Failure

Sometimes memory loss happens before the actual memory process begins, which is encoding failure. We can’t remember something if we never stored it in our memory in the first place. This would be like trying to find a book on your e-reader that you never actually purchased and downloaded. Often, in order to remember something, we must pay attention to the details and actively work to process the information (effortful encoding). Many times we don’t do this. For instance, think of how many times in your life you’ve seen a penny. Can you accurately recall what the front of a U.S. penny looks like? When researchers Raymond Nickerson and Marilyn Adams (1979) asked this question, they found that most Americans could not pick the accurate image out of a set of similar drawings. The reason is most likely encoding failure. Most of us never encode the details of the penny. We only encode enough information to be able to distinguish it from other coins. If we don’t encode the information, then it’s not in our long-term memory, so we will not be able to remember it. Figure 8.13 lets you test the same idea with a different coin, the nickel.

Four illustrations of nickels have minor differences in the placement and orientation of text.
Figure 8.13 Can you tell which coin, (a), (b), (c), or (d) is the accurate depiction of a US nickel? The correct answer is (c).

Memory Errors

Psychologist Daniel Schacter (2001), a well-known memory researcher, offers seven ways our memories fail us. He calls them the seven sins of memory and categorizes them into three groups: forgetting, distortion, and intrusion (Table 8.1).

Schacter’s Seven Sins of Memory
Sin | Type | Description | Example
Transience | Forgetting | Accessibility of memory decreases over time | Forget events that occurred long ago
Absentmindedness | Forgetting | Forgetting caused by lapses in attention | Forget where your phone is
Blocking | Forgetting | Accessibility of information is temporarily blocked | Tip of the tongue
Misattribution | Distortion | Source of memory is confused | Recalling a dream memory as a waking memory
Suggestibility | Distortion | False memories | Result from leading questions
Bias | Distortion | Memories distorted by current belief system | Align memories to current beliefs
Persistence | Intrusion | Inability to forget undesirable memories | Traumatic events
Table 8.1

Let’s look at the first sin of the forgetting errors: transience, which means that memories can fade over time. Here’s an example of how this happens. Nathan’s English teacher has assigned his students to read the novel To Kill a Mockingbird. Nathan comes home from school and tells his mom he has to read this book for class. “Oh, I loved that book!” she says. Nathan asks her what the book is about, and after some hesitation, she says, “Well . . . I know I read the book in high school, and I remember that one of the main characters is named Scout, and her father is an attorney, but I honestly don’t remember anything else.” Nathan wonders if his mother actually read the book, and his mother is surprised she can’t recall the plot. What is going on here is storage decay: unused information tends to fade with the passage of time.

In 1885, German psychologist Hermann Ebbinghaus analyzed the process of memorization. First, he memorized lists of nonsense syllables. Then he measured how much he learned (retained) when he attempted to relearn each list. He tested himself over different periods of time from 20 minutes later to 30 days later. The result is his famous forgetting curve (Figure 8.14). Due to storage decay, an average person will lose 50% of the memorized information after 20 minutes and 70% of the information after 24 hours (Ebbinghaus, 1885/1964). Your memory for new information decays quickly and then eventually levels out.

A line graph has an x-axis labeled “elapsed time since learning” with a scale listing these intervals: 0, 20, and 60 minutes; 9, 24, and 48 hours; and 6 and 31 days. The y-axis is labeled “retention (%)” with a scale of zero to 100. The line reflects these approximate data points: 0 minutes is 100%, 20 minutes is 55%, 60 minutes is 40%, 9 hours is 37%, 24 hours is 30%, 48 hours is 25%, 6 days is 20%, and 31 days is 10%.
Figure 8.14 The Ebbinghaus forgetting curve shows how quickly memory for new information decays.
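
To make the shape of the curve concrete, here is a small Python sketch that simply tabulates the approximate retention values described for Figure 8.14 and prints how much is lost at each interval. The numbers are rough readings from the figure, not Ebbinghaus’s raw data.

```python
# Approximate data points read off the Ebbinghaus forgetting curve in
# Figure 8.14 (rough values from the figure, not Ebbinghaus's raw data):
# (elapsed time since learning, percent retained).
forgetting_curve = [
    ("0 minutes", 100), ("20 minutes", 55), ("60 minutes", 40),
    ("9 hours", 37), ("24 hours", 30), ("48 hours", 25),
    ("6 days", 20), ("31 days", 10),
]

# Print retention and loss at each interval; notice the rapid early
# drop that then levels out, the curve's defining feature.
for interval, retained in forgetting_curve:
    print(f"{interval:>10}: {retained:3d}% retained, {100 - retained}% lost")
```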

Are you constantly losing your cell phone? Have you ever driven back home to make sure you turned off the stove? Have you ever walked into a room for something, but forgotten what it was? You probably answered yes to at least one, if not all, of these examples—but don’t worry, you are not alone. We are all prone to committing the memory error known as absentmindedness, which describes lapses in memory caused by breaks in attention or our focus being somewhere else.

Cynthia, a psychologist, recalls a time when she recently committed the memory error of absentmindedness.

When I was completing court-ordered psychological evaluations, each time I went to the court, I was issued a temporary identification card with a magnetic strip which would open otherwise locked doors. As you can imagine, in a courtroom, this identification is valuable and important and no one wanted it to be lost or be picked up by a criminal. At the end of the day, I would hand in my temporary identification. One day, when I was almost done with an evaluation, my daughter’s day care called and said she was sick and needed to be picked up. It was flu season, I didn’t know how sick she was, and I was concerned. I finished up the evaluation in the next ten minutes, packed up my briefcase, and rushed to drive to my daughter’s day care. After I picked up my daughter, I could not remember if I had handed back my identification or if I had left it sitting out on a table. I immediately called the court to check. It turned out that I had handed back my identification. Why could I not remember that? (personal communication, September 5, 2013)

When have you experienced absentmindedness?

“I just streamed this movie called Oblivion, and it had that famous actor in it. Oh, what’s his name? He’s been in all of those movies, like The Shawshank Redemption and The Dark Knight trilogy. I think he’s even won an Oscar. Oh gosh, I can picture his face in my mind, and hear his distinctive voice, but I just can’t think of his name! This is going to bug me until I can remember it!” This particular error can be so frustrating because you have the information right on the tip of your tongue. Have you ever experienced this? If so, you’ve committed the error known as blocking: you can’t access stored information (Figure 8.15).

A photograph shows Morgan Freeman.
Figure 8.15 Blocking is also known as tip-of-the-tongue (TOT) phenomenon. The memory is right there, but you can’t seem to recall it, just like not being able to remember the name of that very famous actor, Morgan Freeman. (credit: modification of work by D. Miller)

Now let’s take a look at the three errors of distortion: misattribution, suggestibility, and bias. Misattribution happens when you confuse the source of your information. Let’s say Alejandra was dating Lucia and they saw the first Hobbit movie together. Then they broke up and Alejandra saw the second Hobbit movie with someone else. Later that year, Alejandra and Lucia get back together. One day, they are discussing how the Hobbit books and movies are different and Alejandra says to Lucia, “I loved watching the second movie with you and seeing you jump out of your seat during that super scary part.” When Lucia responded with a puzzled and then angry look, Alejandra realized she’d committed the error of misattribution.

What if someone is a victim of rape shortly after watching a television program? Is it possible that the victim could actually blame the rape on the person she saw on television because of misattribution? This is exactly what happened to Donald Thomson.

Australian eyewitness expert Donald Thomson appeared on a live TV discussion about the unreliability of eyewitness memory. He was later arrested, placed in a lineup and identified by a victim as the man who had raped her. The police charged Thomson although the rape had occurred at the time he was on TV. They dismissed his alibi that he was in plain view of a TV audience and in the company of the other discussants, including an assistant commissioner of police. . . . Eventually, the investigators discovered that the rapist had attacked the woman as she was watching TV—the very program on which Thomson had appeared. Authorities eventually cleared Thomson. The woman had confused the rapist’s face with the face that she had seen on TV. (Baddeley, 2004, p. 133)

The second distortion error is suggestibility. Suggestibility is similar to misattribution, since it also involves false memories, but there is a difference. With misattribution you create the false memory entirely on your own, which is what the victim did in the Donald Thomson case above. With suggestibility, the false memory comes from someone else, such as a therapist or police interviewer asking leading questions of a witness during an interview.

Memories can also be affected by bias, which is the final distortion error. Schacter (2001) says that your feelings and view of the world can actually distort your memory of past events. There are several types of bias:

  • Stereotypical bias involves racial and gender biases. For example, when Asian American and European American research participants were presented with a list of names, they more frequently incorrectly remembered typical African American names, such as Jamal and Tyrone, to be associated with the occupation of basketball player, and they more frequently incorrectly remembered typical White names, such as Greg and Howard, to be associated with the occupation of politician (Payne et al., 2004).
  • Egocentric bias involves enhancing our memories of the past (Payne et al., 2004). Did you really score the winning goal in that big soccer match, or did you just assist?
  • Hindsight bias happens when we think an outcome was inevitable after the fact. This is the “I knew it all along” phenomenon. The reconstructive nature of memory contributes to hindsight bias (Carli, 1999). We remember untrue events that seem to confirm that we knew the outcome all along.

Have you ever had a song play over and over in your head? How about a memory of a traumatic event, something you really do not want to think about? When you keep remembering something, to the point where you can’t “get it out of your head” and it interferes with your ability to concentrate on other things, it is called persistence. It’s Schacter’s seventh and last memory error. It’s actually a failure of our memory system because we involuntarily recall unwanted memories, particularly unpleasant ones (Figure 8.16). For instance, you witness a horrific car accident on the way to work one morning, and you can’t concentrate on work because you keep remembering the scene.

A photograph shows two soldiers physically fighting.
Figure 8.16 Many veterans of military conflicts involuntarily recall unwanted, unpleasant memories. (credit: Department of Defense photo by U.S. Air Force Tech. Sgt. Michael R. Holzworth)

Interference

Sometimes information is stored in our memory, but for some reason it is inaccessible. This is known as interference, and there are two types: proactive interference and retroactive interference (Figure 8.17). Have you ever gotten a new phone number or moved to a new address, only to find yourself giving people the old (and wrong) phone number or address? When the new year starts, do you find you accidentally write the previous year? These are examples of proactive interference: when old information hinders the recall of newly learned information. Retroactive interference happens when information learned more recently hinders the recall of older information. For example, this week you are studying memory and learn about the Ebbinghaus forgetting curve. Next week you study lifespan development and learn about Erikson’s theory of psychosocial development, but thereafter have trouble remembering Ebbinghaus’s work because you can only remember Erikson’s theory.

A diagram shows two types of interference. In the first example, a box with the text “learn combination to high school locker, 17–04–32” is connected by a right-pointing arrow, labeled “proactive interference (old information hinders recall of new information),” to a box with the text “memory of old locker combination interferes with recall of new gym locker combination, ??–??–??” In the second example, a box with the text “learn sibling’s new college email address, npatel@siblingcollege.edu” is connected by a left-pointing arrow, labeled “retroactive interference (new information hinders recall of old information),” to a box with the text “knowledge of new email address interferes with recall of old email address, nvayala@???”
Figure 8.17 Sometimes forgetting is caused by a failure to retrieve information. This can be due to interference, either retroactive or proactive.

Learning Objectives

By the end of this section, you will be able to:

  • Recognize and apply memory-enhancing strategies
  • Recognize and apply effective study techniques

Most of us suffer from memory failures of one kind or another, and most of us would like to improve our memories so that we don’t forget where we put the car keys or, more importantly, the material we need to know for an exam. In this section, we’ll look at some ways to help you remember better, and at some strategies for more effective studying.

Memory-Enhancing Strategies

What are some everyday ways we can improve our memory, including recall? To help make sure information goes from short-term memory to long-term memory, you can use memory-enhancing strategies. One strategy is rehearsal, or the conscious repetition of information to be remembered (Craik & Watkins, 1973). Think about how you learned your multiplication tables as a child. You may recall that 6 x 6 = 36, 6 x 7 = 42, and 6 x 8 = 48. Memorizing these facts is rehearsal.

Another strategy is chunking: you organize information into manageable bits or chunks (Bodie et al., 2006). Chunking is useful when trying to remember information like dates and phone numbers. Instead of trying to remember 5205550467, you remember the number as 520-555-0467. So, if you met an interesting person at a party and you wanted to remember his phone number, you would naturally chunk it, and you could repeat the number over and over, which is the rehearsal strategy.
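
As a simple illustration, here is a minimal Python sketch of chunking applied to the phone number above; the helper name chunk_phone_number is an illustrative choice, not anything from the text.

```python
# A minimal sketch of chunking: regroup one unwieldy 10-digit string
# into the familiar 3-3-4 pattern used for US phone numbers, turning
# ten separate digits into three rehearsable chunks.
def chunk_phone_number(digits):
    """Group a 10-digit string into three chunks."""
    return f"{digits[:3]}-{digits[3:6]}-{digits[6:]}"

print(chunk_phone_number("5205550467"))  # prints 520-555-0467
```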

You could also enhance memory by using elaborative rehearsal: a technique in which you think about the meaning of new information and its relation to knowledge already stored in your memory (Tigner, 1999). Elaborative rehearsal involves both linking the information to knowledge already stored and repeating the information. For example, in this case, you could remember that 520 is an area code for Arizona and the person you met is from Arizona. This would help you better remember the 520 prefix. If the information is retained, it goes into long-term memory.

Mnemonic devices are memory aids that help us organize information for encoding (Figure 8.18). They are especially useful when we want to recall larger bits of information such as steps, stages, phases, and parts of a system (Bellezza, 1981). Brian needs to learn the order of the planets in the solar system, but he’s having a hard time remembering the correct order. His friend Kelly suggests a mnemonic device that can help him remember. Kelly tells Brian to simply remember the name Mr. VEM J. SUN, and he can easily recall the correct order of the planets: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune. You might use a mnemonic device to help you remember someone’s name, a mathematical formula, or the order of mathematical operations.

A photograph shows a person’s two hands clenched into fists so the knuckles show. The knuckles are labeled with the months and the number of days in each month, with the knuckle protrusions corresponding to the months with 31 days, and the indentations between knuckles corresponding to February and the months with 30 days.
Figure 8.18 This is a knuckle mnemonic to help you remember the number of days in each month. Months with 31 days are represented by the protruding knuckles and shorter months fall in the spots between knuckles. (credit: modification of work by Cory Zanker)

If you have ever watched the television show Modern Family, you might have seen Phil Dunphy explain how he remembers names:

The other day I met this guy named Carl. Now, I might forget that name, but he was wearing a Grateful Dead t-shirt. What’s a band like the Grateful Dead? Phish. Where do fish live? The ocean. What else lives in the ocean? Coral. Hello, Co-arl. (Wrubel & Spiller, 2010)

It seems the more vivid or unusual the mnemonic, the easier it is to remember. The key to using any mnemonic successfully is to find a strategy that works for you.

What if you want to remember items you need to pick up at the store? Simply say them out loud to yourself. A series of studies (MacLeod et al., 2010) found that saying a word out loud improves your memory for the word because it increases the word’s distinctiveness. Feel silly saying random grocery items aloud? This technique works equally well if you just mouth the words. Using these techniques increased participants’ memory for the words by more than 10%. These techniques can also be used to help you study.

How to Study Effectively

Based on the information presented in this chapter, here are some strategies and suggestions to help you hone your study techniques (Figure 8.19). The key to any of these strategies is to figure out what works best for you.

A photograph shows students studying.
Figure 8.19 Memory techniques can be useful when studying for class. (credit: Barry Pousman)
  • Use elaborative rehearsal: In a famous article, Fergus Craik and Robert Lockhart (1972) discussed their belief that information we process more deeply goes into long-term memory. Their theory is called levels of processing. If we want to remember a piece of information, we should think about it more deeply and link it to other information and memories to make it more meaningful. For example, if we are trying to remember that the hippocampus is involved with memory processing, we might envision a hippopotamus with an excellent memory and then we could better remember the hippocampus.
  • Apply the self-reference effect: As you go through the process of elaborative rehearsal, it would be even more beneficial to make the material you are trying to memorize personally meaningful to you. In other words, make use of the self-reference effect. Write notes in your own words. Write definitions from the text, and then rewrite them in your own words. Relate the material to something you have already learned for another class, or think about how you can apply the concepts to your own life. When you do this, you are building a web of retrieval cues that will help you access the material when you want to remember it.
  • Use distributed practice: Study across time in short durations rather than trying to cram it all in at once. Memory consolidation takes time, and studying across time allows time for memories to consolidate. In addition, cramming can cause the links between concepts to become so active that you get stuck in a link, and it prevents you from accessing the rest of the information that you learned.
  • Rehearse, rehearse, rehearse: Review the material over time, in spaced and organized study sessions. Organize and study your notes, and take practice quizzes/exams. Link the new information to other information you already know well.
  • Study efficiently: Students are great highlighters, but highlighting is not very efficient because you end up spending too much time reviewing material you already know. Instead of highlighting, use index cards. Write the question on one side and the answer on the other side. When you study, separate your cards into those you got right and those you got wrong. Study the ones you got wrong and keep sorting. Eventually, all your cards will be in the pile you answered correctly. (A short sketch of this sorting routine appears after this list.)
  • Be aware of interference: To reduce the likelihood of interference, study during a quiet time without interruptions or distractions (like television or music).
  • Keep moving: Of course, you already know that exercise is good for your body, but did you also know it’s also good for your mind? Research suggests that regular aerobic exercise (anything that gets your heart rate elevated) is beneficial for memory (van Praag, 2008). Aerobic exercise promotes neurogenesis: the growth of new brain cells in the hippocampus, an area of the brain known to play a role in memory and learning.
  • Get enough sleep: While you are sleeping, your brain is still at work. During sleep, the brain organizes and consolidates information to be stored in long-term memory (Abel & Bäuml, 2013).
  • Make use of mnemonic devices: As you learned earlier in this chapter, mnemonic devices often help us to remember and recall information. There are different types of mnemonic devices, such as the acronym. An acronym is a word formed by the first letter of each of the words you want to remember. For example, even if you live near one, you might have difficulty recalling the names of all five Great Lakes. What if I told you to think of the word Homes? HOMES is an acronym that represents Huron, Ontario, Michigan, Erie, and Superior: the five Great Lakes. Another type of mnemonic device is an acrostic: you make a phrase of all the first letters of the words. For example, if you are taking a math test and you are having difficulty remembering the order of operations, recalling the following sentence will help you: “Please Excuse My Dear Aunt Sally,” because the order of mathematical operations is Parentheses, Exponents, Multiplication, Division, Addition, Subtraction. There also are jingles, which are rhyming tunes that contain keywords related to the concept, such as i before e, except after c.
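
For readers who like to see a procedure spelled out, here is a minimal Python sketch of the index-card routine from the “Study efficiently” item above. The function names, the sample deck, and the simulated 50% recall rate are illustrative assumptions, not part of the original text.

```python
import random

def study_cards(cards, answered_correctly):
    """Drill `cards` repeatedly, dropping each card once it is answered
    correctly, until no cards remain (the routine described above)."""
    remaining = list(cards)
    round_number = 1
    while remaining:
        print(f"Round {round_number}: {len(remaining)} card(s) to study")
        # Keep only the cards answered incorrectly this round.
        remaining = [c for c in remaining if not answered_correctly(c)]
        round_number += 1
    print("Done: every card has been answered correctly.")

# Hypothetical deck drawn from this chapter's content.
deck = [("hippocampus", "declarative memory and consolidation"),
        ("amygdala", "fear and emotional memories"),
        ("cerebellum", "procedural memories")]

# Simulated recall: assume a 50% chance of answering any card correctly.
study_cards(deck, lambda card: random.random() < 0.5)
```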

Review of MCCCD Course Competencies

After reading this chapter, are you better able to do the following?

  • Describe cognitive processes including those related to learning, language, and intelligence.
  • Critically evaluate information to help make evidence-based decisions.
  • Apply biopsychosocial principles to real-world situations.
  • Use psychological principles to explain the diversity and complexity of the human experience.

Chapter Review Quiz

An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://open.maricopa.edu/intropsychme/?p=33#h5p-29

8

Thinking and Intelligence

Three side by side images are shown. On the left is a person lying in the grass with a book, looking off into the distance. In the middle is a sculpture of a person sitting on rock, with chin rested on hand, and the elbow of that hand rested on knee. The third is a drawing of a person sitting cross-legged with his head resting on his hand, elbow on knee.
Figure 7.1 Thinking is an important part of our human experience, and one that has captivated people for centuries. Today, it is one area of psychological study. The 19th-century Girl with a Book by José Ferraz de Almeida Júnior, the 20th-century sculpture The Thinker by Auguste Rodin, and Shi Ke’s 10th-century painting Huike Thinking all reflect the fascination with the process of human thought.

What is the best way to solve a problem? How does a person who has never seen or touched snow in real life develop an understanding of the concept of snow? How do young children acquire the ability to learn language with no formal instruction? Psychologists who study thinking explore questions like these and are called cognitive psychologists.

Cognitive psychologists also study intelligence. What is intelligence, and how does it vary from person to person? Are “street smarts” a kind of intelligence, and if so, how do they relate to other types of intelligence? What does an IQ test really measure? These questions and more will be explored in this chapter as you study thinking and intelligence.

In other chapters, we discussed the cognitive processes of perception, learning, and memory. In this chapter, we will focus on high-level cognitive processes. As a part of this discussion, we will consider thinking and briefly explore the development and use of language. We will also discuss problem solving and creativity before ending with a discussion of how intelligence is measured and how our biology and environments interact to affect intelligence. After finishing this chapter, you will have a greater appreciation of the higher-level cognitive processes that contribute to our distinctiveness as a species.

MCCCD Course Competencies

  • Describe cognitive processes including those related to learning, language, and intelligence.
  • Critically evaluate information to help make evidence-based decisions.
  • Apply biopsychosocial principles to real-world situations.
  • Use psychological principles to explain the diversity and complexity of the human experience.

Learning Objectives

By the end of this section, you will be able to:

  • Describe cognition
  • Distinguish concepts and prototypes
  • Explain the difference between natural and artificial concepts
  • Describe how schemata are organized and constructed

Imagine all of your thoughts as if they were physical entities, swirling rapidly inside your mind. How is it possible that the brain is able to move from one thought to the next in an organized, orderly fashion? The brain is endlessly perceiving, processing, planning, organizing, and remembering—it is always active. Yet, you don’t notice most of your brain’s activity as you move throughout your daily routine. This is only one facet of the complex processes involved in cognition. Simply put, cognition is thinking, and it encompasses the processes associated with perception, knowledge, problem solving, judgment, language, and memory. Scientists who study cognition are searching for ways to understand how we integrate, organize, and utilize our conscious cognitive experiences without being aware of all of the unconscious work that our brains are doing (for example, Kahneman, 2011).

Cognition

Upon waking each morning, you begin thinking—contemplating the tasks that you must complete that day. In what order should you run your errands? Should you go to the bank, the cleaners, or the grocery store first? Can you get these things done before you head to class or will they need to wait until school is done? These thoughts are one example of cognition at work. Exceptionally complex, cognition is an essential feature of human consciousness, yet not all aspects of cognition are consciously experienced.

Cognitive psychology is the field of psychology dedicated to examining how people think. It attempts to explain how and why we think the way we do by studying the interactions among human thinking, emotion, creativity, language, and problem solving, in addition to other cognitive processes. Cognitive psychologists strive to determine and measure different types of intelligence, why some people are better at problem solving than others, and how emotional intelligence affects success in the workplace, among countless other topics. They also sometimes focus on how we organize thoughts and information gathered from our environments into meaningful categories of thought, which will be discussed later.

Concepts and Prototypes

The human nervous system is capable of handling endless streams of information. The senses serve as the interface between the mind and the external environment, receiving stimuli and translating them into nerve impulses that are transmitted to the brain. The brain then processes this information and uses the relevant pieces to create thoughts, which can then be expressed through language or stored in memory for future use. To make this process more complex, the brain does not gather information from external environments only. When thoughts are formed, the mind synthesizes information from emotions and memories (Figure 7.2). Emotion and memory are powerful influences on both our thoughts and behaviors.

The outline of a human head is shown. There is a box containing “Information, sensations” in front of the head. An arrow from this box points to another box containing “Emotions, memories” located where the front of the person's brain would be. An arrow from this second box points to a third box containing “Thoughts” located where the back of the person's brain would be. There are two arrows coming from “Thoughts.” One arrow points back to the second box, “Emotions, memories,” and the other arrow points to a fourth box, “Behavior.”
Figure 7.2 Sensations and information are received by our brains, filtered through emotions and memories, and processed to become thoughts.

In order to organize this staggering amount of information, the mind has developed a “file cabinet” of sorts. The different files stored in the file cabinet are called concepts. Concepts are categories or groupings of linguistic information, images, ideas, or memories, such as life experiences. Concepts are, in many ways, big ideas that are generated by observing details, and categorizing and combining these details into cognitive structures. You use concepts to see the relationships among the different elements of your experiences and to keep the information in your mind organized and accessible.

Concepts are informed by our semantic memory (you will learn more about semantic memory in a later chapter) and are present in every aspect of our lives; however, one of the easiest places to notice concepts is inside a classroom, where they are discussed explicitly. When you study United States history, for example, you learn about more than just individual events that have happened in America’s past. You absorb a large quantity of information by listening to and participating in discussions, examining maps, and reading first-hand accounts of people’s lives. Your brain analyzes these details and develops an overall understanding of American history. In the process, your brain gathers details that inform and refine your understanding of related concepts like democracy, power, and freedom.

Concepts can be complex and abstract, like justice, or more concrete, like types of birds. In psychology, for example, Piaget’s stages of development are abstract concepts. Some concepts, like tolerance, are agreed upon by many people, because they have been used in various ways over many years. Other concepts, like the characteristics of your ideal friend or your family’s birthday traditions, are personal and individualized. In this way, concepts touch every aspect of our lives, from our many daily routines to the guiding principles behind the way governments function.

Another technique used by your brain to organize information is the identification of prototypes for the concepts you have developed. A prototype is the best example or representation of a concept. For example, what comes to your mind when you think of a dog? Most likely your early experiences with dogs will shape what you imagine. If your first pet was a Golden Retriever, there is a good chance that this would be your prototype for the category of dogs.

Natural and Artificial Concepts

In psychology, concepts can be divided into two categories, natural and artificial. Natural concepts are created “naturally” through your experiences and can be developed from either direct or indirect experiences. For example, if you live in Essex Junction, Vermont, you have probably had a lot of direct experience with snow. You’ve watched it fall from the sky, you’ve seen lightly falling snow that barely covers the windshield of your car, and you’ve shoveled out 18 inches of fluffy white snow as you’ve thought, “This is perfect for skiing.” You’ve thrown snowballs at your best friend and gone sledding down the steepest hill in town. In short, you know snow. You know what it looks like, smells like, tastes like, and feels like. If, however, you’ve lived your whole life on the island of Saint Vincent in the Caribbean, you may never have actually seen snow, much less tasted, smelled, or touched it. You know snow from the indirect experience of seeing pictures of falling snow—or from watching films that feature snow as part of the setting. Either way, snow is a natural concept because you can construct an understanding of it through direct observations, experiences with snow, or indirect knowledge (such as from films or books) (Figure 7.3).

Photograph A shows a snow covered landscape with the sun shining over it. Photograph B shows a sphere shaped object perched atop the corner of a cube shaped object. There is also a triangular object shown.
Figure 7.3 (a) Our concept of snow is an example of a natural concept—one that we understand through direct observation and experience. (b) In contrast, artificial concepts are ones that we know by a specific set of characteristics that they always exhibit, such as what defines different basic shapes. (credit a: modification of work by Maarten Takens; credit b: modification of work by “Shayan (USA)”/Flickr)

An artificial concept, on the other hand, is a concept that is defined by a specific set of characteristics. Various properties of geometric shapes, like squares and triangles, serve as useful examples of artificial concepts. A triangle always has three angles and three sides. A square always has four equal sides and four right angles. Mathematical formulas, like the equation for area (length × width), are artificial concepts defined by specific sets of characteristics that are always the same. Artificial concepts can enhance the understanding of a topic by building on one another. For example, before learning the concept of “area of a square” (and the formula to find it), you must understand what a square is. Once the concept of “area of a square” is understood, an understanding of area for other geometric shapes can be built upon the original understanding of area. The use of artificial concepts to define an idea is crucial to communicating with others and engaging in complex thought. According to Goldstone and Kersten (2003), concepts act as building blocks and can be connected in countless combinations to create complex thoughts.
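
Because an artificial concept is fixed by its defining characteristics, those characteristics can even be written out as explicit rules. Here is a minimal sketch in Python (the function names and values are ours, purely for illustration) that encodes the square concept and the area formula that builds on it:

```python
# A sketch of an artificial concept: a square is defined by a fixed set of
# characteristics, and the concept of "area of a square" builds on it.

def is_square(sides, angles):
    """A square always has four equal sides and four right angles."""
    return len(sides) == 4 and len(set(sides)) == 1 and all(a == 90 for a in angles)

def area_of_square(side):
    """Area = length x width; for a square, both are the same side."""
    return side * side

print(is_square([3, 3, 3, 3], [90, 90, 90, 90]))  # True
print(area_of_square(3))                          # 9
```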

Schemata

A schema is a mental construct consisting of a cluster or collection of related concepts (Bartlett, 1932). There are many different types of schemata, and they all have one thing in common: schemata are a method of organizing information that allows the brain to work more efficiently. When a schema is activated, the brain makes immediate assumptions about the person or object being observed.

There are several types of schemata. A role schema makes assumptions about how individuals in certain roles will behave (Callero, 1994). For example, imagine you meet someone who introduces himself as a firefighter. When this happens, your brain automatically activates the “firefighter schema” and begins making assumptions that this person is brave, selfless, and community-oriented. Despite not knowing this person, you have already made judgments about him without realizing it. Schemata also help you fill in gaps in the information you receive from the world around you. While schemata allow for more efficient information processing, there can be problems with schemata, regardless of whether they are accurate: perhaps this particular firefighter is not brave; he just works as a firefighter to pay the bills while studying to become a children’s librarian.

An event schema, also known as a cognitive script, is a set of behaviors that can feel like a routine. Think about what you do when you walk into an elevator (Figure 7.4). First, the doors open and you wait to let exiting passengers leave the elevator car. Then, you step into the elevator and turn around to face the doors, looking for the correct button to push. You never face the back of the elevator, do you? And when you’re riding in a crowded elevator and you can’t face the front, it feels uncomfortable, doesn’t it? Interestingly, event schemata can vary widely among different cultures and countries. For example, while it is quite common for people to greet one another with a handshake in the United States, in Tibet, you greet someone by sticking your tongue out at them, and in Belize, you bump fists (Cairns Regional Council, n.d.).

A crowded elevator is shown. There are many people standing close to one another.
Figure 7.4 What event schema do you perform when riding in an elevator? (credit: “Gideon”/Flickr)

Because event schemata are automatic, they can be difficult to change. Imagine that you are driving home from work or school. This event schema involves getting in the car, shutting the door, and buckling your seatbelt before putting the key in the ignition. You might perform this script two or three times each day. As you drive home, you hear your phone’s ring tone. Typically, the event schema that occurs when you hear your phone ringing involves locating the phone and answering it or responding to your latest text message. So without thinking, you reach for your phone, which could be in your pocket, in your bag, or on the passenger seat of the car. This powerful event schema is informed by your pattern of behavior and the pleasurable stimulation that a phone call or text message gives your brain. Because it is a schema, it is extremely challenging for us to stop reaching for the phone, even though we know that we endanger our own lives and the lives of others while we do it (Neyfakh, 2013) (Figure 7.5).

A person’s right hand is holding a cellular phone. The person is in the driver’s seat of an automobile while on the road.
Figure 7.5 Texting while driving is dangerous, but it is a difficult event schema for some people to resist.

Remember the elevator? It feels almost impossible to walk in and not face the door. Our powerful event schema dictates our behavior in the elevator, and it is no different with our phones. Current research suggests that it is the habit, or event schema, of checking our phones in many different situations that makes refraining from checking them while driving especially difficult (Bayer & Campbell, 2012). Because texting and driving has become a dangerous epidemic in recent years, psychologists are looking at ways to help people interrupt the “phone schema” while driving. Event schemata like these are the reason why many habits are difficult to break once they have been acquired. As we continue to examine thinking, keep in mind how powerful the forces of concepts and schemata are to our understanding of the world.

Learning Objectives

By the end of this section, you will be able to:

  • Define language and demonstrate familiarity with the components of language
  • Understand the development of language
  • Explain the relationship between language and thinking

Language is a communication system that involves using words and systematic rules to organize those words to transmit information from one individual to another. While language is a form of communication, not all communication is language. Many species communicate with one another through their postures, movements, odors, or vocalizations. This communication is crucial for species that need to interact and develop social relationships with their conspecifics. However, many people have asserted that it is language that makes humans unique among all of the animal species (Corballis & Suddendorf, 2007; Tomasello & Rakoczy, 2003). This section will focus on what distinguishes language as a special form of communication, how the use of language develops, and how language affects the way we think.

Components of Language

Language, be it spoken, signed, or written, has specific components: a lexicon and grammar. Lexicon refers to the words of a given language. Thus, lexicon is a language’s vocabulary. Grammar refers to the set of rules that are used to convey meaning through the use of the lexicon (Fernández & Cairns, 2011). For instance, English grammar dictates that most verbs receive an “-ed” at the end to indicate past tense.

Words are formed by combining the various phonemes that make up the language. A phoneme (e.g., the sounds “ah” vs. “eh”) is a basic sound unit of a given language, and different languages have different sets of phonemes. Phonemes are combined to form morphemes, which are the smallest units of language that convey some type of meaning (e.g., “I” is both a phoneme and a morpheme). We use semantics and syntax to construct language. Semantics and syntax are part of a language’s grammar. Semantics refers to the process by which we derive meaning from morphemes and words. Syntax refers to the way words are organized into sentences (Chomsky, 1965; Fernández & Cairns, 2011).

We apply the rules of grammar to organize the lexicon in novel and creative ways, which allow us to communicate information about both concrete and abstract concepts. We can talk about our immediate and observable surroundings as well as the surface of unseen planets. We can share our innermost thoughts and our plans for the future, and debate the value of a college education. We can provide detailed instructions for cooking a meal, fixing a car, or building a fire. Through our use of words and language, we are able to form, organize, and express ideas, schemata, and artificial concepts.

Language Development

Given the remarkable complexity of a language, one might expect that mastering a language would be an especially arduous task; indeed, for those of us trying to learn a second language as adults, this might seem to be true. However, young children master language very quickly with relative ease. B. F. Skinner (1957) proposed that language is learned through reinforcement. Noam Chomsky (1965) criticized this behaviorist approach, asserting instead that the mechanisms underlying language acquisition are biologically determined. The use of language develops in the absence of formal instruction and appears to follow a very similar pattern in children from vastly different cultures and backgrounds. It would seem, therefore, that we are born with a biological predisposition to acquire a language (Chomsky, 1965; Fernández & Cairns, 2011). Moreover, it appears that there is a critical period for language acquisition, such that this proficiency at acquiring language is maximal early in life; generally, as people age, the ease with which they acquire and master new languages diminishes (Johnson & Newport, 1989; Lenneberg, 1967; Singleton, 1995).

Children begin to learn about language from a very early age (Table 7.1). In fact, it appears that this is occurring even before we are born. Newborns show a preference for their mother’s voice and appear to be able to discriminate between the language spoken by their mother and other languages. Babies are also attuned to the languages being used around them and show preferences for videos of faces that are moving in synchrony with the audio of spoken language versus videos that do not synchronize with the audio (Blossom & Morgan, 2006; Pickens, 1994; Spelke & Cortelyou, 1981).

Stages of Language and Communication Development
Stage Age Developmental Language and Communication
1 0–3 months Reflexive communication
2 3–8 months Reflexive communication; interest in others
3 8–13 months Intentional communication; sociability
4 12–18 months First words
5 18–24 months Simple sentences of two words
6 2–3 years Sentences of three or more words
7 3–5 years Complex sentences; has conversations
Table 7.1

DIG DEEPER: The Case of Genie

In the fall of 1970, a social worker in the Los Angeles area found a 13-year-old girl who was being raised in extremely neglectful and abusive conditions. The girl, who came to be known as Genie, had lived most of her life tied to a potty chair or confined to a crib in a small room that was kept closed with the curtains drawn. For a little over a decade, Genie had virtually no social interaction and no access to the outside world. As a result of these conditions, Genie was unable to stand up, chew solid food, or speak (Fromkin et al., 1974; Rymer, 1993). The police took Genie into protective custody.

Genie’s abilities improved dramatically following her removal from her abusive environment, and early on, it appeared she was acquiring language—much later than would be predicted by critical period hypotheses that had been posited at the time (Fromkin et al., 1974). Genie managed to amass an impressive vocabulary in a relatively short amount of time. However, she never developed a mastery of the grammatical aspects of language (Curtiss, 1981). Perhaps being deprived of the opportunity to learn language during a critical period impeded Genie’s ability to fully acquire and use language.

 

You may recall that each language has its own set of phonemes that are used to generate morphemes, words, and so on. Babies can discriminate among the sounds that make up a language (for example, they can tell the difference between the “s” in vision and the “ss” in fission); early on, they can differentiate between the sounds of all human languages, even those that do not occur in the languages that are used in their environments. However, by the time that they are about 1 year old, they can only discriminate among those phonemes that are used in the language or languages in their environments (Jensen, 2011; Werker & Lalonde, 1988; Werker & Tees, 1984).

After the first few months of life, babies enter what is known as the babbling stage, during which time they tend to produce single syllables that are repeated over and over. As time passes, more variations appear in the syllables that they produce. During this time, it is unlikely that the babies are trying to communicate; they are just as likely to babble when they are alone as when they are with their caregivers (Fernández & Cairns, 2011). Interestingly, babies who are raised in environments in which sign language is used will also begin to show babbling in the gestures of their hands during this stage (Petitto et al., 2004).

Generally, a child’s first word is uttered sometime between the ages of 1 year and 18 months, and for the next few months, the child will remain in the “one word” stage of language development. During this time, children know a number of words, but they only produce one-word utterances. The child’s early vocabulary is limited to familiar objects or events, often nouns. Although children in this stage only make one-word utterances, these words often carry larger meaning (Fernández & Cairns, 2011). So, for example, a child saying “cookie” could be identifying a cookie or asking for a cookie.

As a child’s lexicon grows, she begins to utter simple sentences and to acquire new vocabulary at a very rapid pace. In addition, children begin to demonstrate a clear understanding of the specific rules that apply to their language(s). Even the mistakes that children sometimes make provide evidence of just how much they understand about those rules. This is sometimes seen in the form of overgeneralization. In this context, overgeneralization refers to an extension of a language rule to an exception to the rule. For example, in English, it is usually the case that an “s” is added to the end of a word to indicate plurality. For example, we speak of one dog versus two dogs. Young children will overgeneralize this rule to cases that are exceptions to the “add an s to the end of the word” rule and say things like “those two gooses” or “three mouses.” Clearly, the rules of the language are understood, even if the exceptions to the rules are still being learned (Moskowitz, 1978).

Language and Thought

When we speak one language, we agree that words are representations of ideas, people, places, and events. The given language that children learn is connected to their culture and surroundings. But can words themselves shape the way we think about things? Psychologists have long investigated the question of whether language shapes thoughts and actions, or whether our thoughts and beliefs shape our language. Two researchers, Edward Sapir and Benjamin Lee Whorf, began this investigation in the 1940s. They wanted to understand how the language habits of a community encourage members of that community to interpret language in a particular manner (Sapir, 1941/1964). Sapir and Whorf proposed that language determines thought. For example, in some languages there are many different words for love. However, in English we use the word love for all types of love. Does this affect how we think about love depending on the language that we speak (Whorf, 1956)? Researchers have since identified this view as too absolute, pointing out a lack of empirical support for what Sapir and Whorf proposed (Abler, 2013; Boroditsky, 2011; van Troyer, 1994). Today, psychologists continue to study and debate the relationship between language and thought.

WHAT DO YOU THINK? The Meaning of Language

Think about what you know of other languages; perhaps you even speak multiple languages. Imagine for a moment that your closest friend fluently speaks more than one language. Do you think that friend thinks differently, depending on which language is being spoken? You may know a few words that are not translatable from their original language into English. For example, the Portuguese word saudade originated during the 15th century, when Portuguese sailors left home to explore the seas and travel to Africa or Asia. Those left behind described the emptiness and fondness they felt as saudade (Figure 7.6). The word came to express many meanings, including loss, nostalgia, yearning, warm memories, and hope. There is no single word in English that includes all of those emotions in a single description. Do words such as saudade indicate that different languages produce different patterns of thought in people? What do you think?

Photograph A shows a painting of a person leaning against a ledge, slumped sideways over a box. Photograph B shows a painting of a person reading by a window.
Figure 7.6 These two works of art depict saudade. (a) Saudade de Nápoles, which is translated into “missing Naples,” was painted by Bertha Worms in 1895. (b) Almeida Júnior painted Saudade in 1899.

Language may indeed influence the way that we think, an idea known as linguistic determinism. One recent demonstration of this phenomenon involved differences in the way that English and Mandarin Chinese speakers talk and think about time. English speakers tend to talk about time using terms that describe changes along a horizontal dimension, for example, saying something like “I’m running behind schedule” or “Don’t get ahead of yourself.” While Mandarin Chinese speakers also describe time in horizontal terms, it is not uncommon for them to also use terms associated with a vertical arrangement. For example, the past might be described as being “up” and the future as being “down.” It turns out that these differences in language translate into differences in performance on cognitive tests designed to measure how quickly an individual can recognize temporal relationships. Specifically, when given a series of tasks with vertical priming, Mandarin Chinese speakers were faster at recognizing temporal relationships between months. Indeed, Boroditsky (2001) sees these results as suggesting that “habits in language encourage habits in thought” (p. 12).

One group of researchers who wanted to investigate how language influences thought compared how English speakers and the Dani people of Papua New Guinea think and speak about color. The Dani have two words for color: one word for light and one word for dark. In contrast, the English language has 11 color words. Researchers hypothesized that the number of color terms could limit the ways that the Dani people conceptualized color. However, the Dani were able to distinguish colors with the same ability as English speakers, despite having fewer words at their disposal (Berlin & Kay, 1969). A recent review of research aimed at determining how language might affect something like color perception suggests that language can influence perceptual phenomena, especially in the left hemisphere of the brain. You may recall from earlier chapters that the left hemisphere is associated with language for most people. However, the right (less linguistic) hemisphere of the brain is less affected by linguistic influences on perception (Regier & Kay, 2009).

Learning Objectives

By the end of this section, you will be able to:

  • Describe problem solving strategies
  • Define algorithm and heuristic
  • Explain some common roadblocks to effective problem solving and decision making

People face problems every day—usually, multiple problems throughout the day. Sometimes these problems are straightforward: To double a recipe for pizza dough, for example, all that is required is that each ingredient in the recipe be doubled. Sometimes, however, the problems we encounter are more complex. For example, say you have a work deadline, and you must mail a printed copy of a report to your supervisor by the end of the business day. The report is time-sensitive and must be sent overnight. You finished the report last night, but your printer will not work today. What should you do? First, you need to identify the problem and then apply a strategy for solving the problem.

Problem-Solving Strategies

When you are presented with a problem, whether it is a complex mathematical problem or a broken printer, how do you solve it? Before finding a solution to the problem, the problem must first be clearly identified. After that, one of many problem-solving strategies can be applied, hopefully resulting in a solution.

A problem-solving strategy is a plan of action used to find a solution. Different strategies have different action plans associated with them (Table 7.2). For example, a well-known strategy is trial and error. The old adage, “If at first you don’t succeed, try, try again” describes trial and error. In terms of your broken printer, you could try checking the ink levels, and if that doesn’t work, you could check to make sure the paper tray isn’t jammed. Or maybe the printer isn’t actually connected to your laptop. When using trial and error, you would continue to try different solutions until you solved your problem. Although trial and error is not typically one of the most time-efficient strategies, it is a commonly used one.

Problem-Solving Strategies
Method Description Example
Trial and error Continue trying different solutions until problem is solved Restarting phone, turning off WiFi, turning off Bluetooth in order to determine why your phone is malfunctioning
Algorithm Step-by-step problem-solving formula Instruction manual for installing new software on your computer
Heuristic General problem-solving framework Working backwards; breaking a task into steps
Table 7.2

Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed. Algorithms are used frequently in our everyday lives, especially in computer science. When you run a search on the Internet, search engines like Google use algorithms to decide which entries will appear first in your list of results. Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used?
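
Since an algorithm behaves like a recipe, a few lines of code can make the idea concrete. The following minimal Python sketch (the dish and quantities are invented for illustration) doubles every ingredient, echoing the pizza-dough example earlier; it follows the same fixed steps and yields the same result every time:

```python
# An algorithm is like a recipe: fixed steps, same result every time.
# The dish and quantities below are invented for illustration.

def double_recipe(ingredients):
    """Return a new recipe with every quantity doubled."""
    return {name: amount * 2 for name, amount in ingredients.items()}

pizza_dough = {"flour_cups": 2, "water_cups": 0.75, "yeast_tsp": 1}
print(double_recipe(pizza_dough))
# {'flour_cups': 4, 'water_cups': 1.5, 'yeast_tsp': 2}
```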

A heuristic is another type of problem-solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite its time-saving characteristics, it is not always the best method for making a rational decision. (A short sketch contrasting algorithms and heuristics follows the list below.) Different types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when one of five conditions is met (Pratkanis, 1989):

  • When one is faced with too much information
  • When the time to make a decision is limited
  • When the decision to be made is unimportant
  • When there is access to very little information to use in making the decision
  • When an appropriate heuristic happens to come to mind in the same moment
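
As promised above, here is a minimal Python sketch contrasting the two strategies; all distances and speeds are made-up numbers. The algorithm computes travel time exactly from its inputs, while the heuristic applies a “mile a minute” rule of thumb that is quick but only approximately right:

```python
# Contrast: the algorithm computes travel time exactly; the heuristic uses
# a "mile a minute" rule of thumb. All numbers are illustrative.

def travel_time_exact(miles, mph):
    """Algorithm: exact computation, always correct given its inputs."""
    return miles / mph * 60  # minutes

def travel_time_rule_of_thumb(miles):
    """Heuristic: quick estimate that assumes roughly 60 mph."""
    return miles  # one minute per mile

print(travel_time_exact(135, 54))      # 150.0 minutes
print(travel_time_rule_of_thumb(135))  # 135 minutes: fast, but off by 15
```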

Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end result. Consider this example: You live in Washington, D.C. and have been invited to a wedding at 4 PM on Saturday in Philadelphia. Knowing that Interstate 95 tends to back up any day of the week, you need to plan your route and time your departure accordingly. If you want to be at the wedding service by 3:30 PM, and it takes 2.5 hours to get to Philadelphia without traffic, what time should you leave your house? You use the working backwards heuristic to plan the events of your day on a regular basis, probably without even thinking about it.
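
The working backwards heuristic also lends itself to a quick worked example. In the Python sketch below, the 3:30 PM arrival and 2.5-hour drive come from the scenario above, while the date and the traffic buffer are assumptions added for illustration:

```python
# Working backwards from the goal. The 3:30 PM arrival and 2.5-hour drive
# come from the example above; the date and traffic buffer are assumptions.
from datetime import datetime, timedelta

arrival = datetime(2021, 6, 5, 15, 30)   # be seated by 3:30 PM
drive = timedelta(hours=2, minutes=30)   # Washington, D.C. to Philadelphia
buffer = timedelta(minutes=45)           # assumed cushion for I-95 backups

departure = arrival - drive - buffer
print(departure.strftime("%I:%M %p"))    # 12:15 PM
```

Subtracting the 2.5-hour drive from 3:30 PM gives a 1:00 PM departure, and the assumed 45-minute cushion moves it up to 12:15 PM.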

Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps. Students often use this method to complete a large research project or long essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough draft, develop a final draft, organize the references list, and proofread their work before turning in the project. The large task becomes less overwhelming when it is broken down into a series of small steps.

EVERYDAY CONNECTION: Solving Puzzles

Problem-solving abilities can improve with practice. Many people challenge themselves every day with puzzles and other mental exercises to sharpen their problem-solving skills. Sudoku puzzles appear daily in most newspapers. Typically, a sudoku puzzle is a 9×9 grid. The simple sudoku below (Figure 7.7) is a 4×4 grid. To solve the puzzle, fill in the empty boxes with a single digit: 1, 2, 3, or 4. Here are the rules: The numbers must total 10 in each bolded box, each row, and each column; however, each digit can only appear once in a bolded box, row, and column. Time yourself as you solve this puzzle and compare your time with a classmate.

A four column by four row Sudoku puzzle is shown. The top left cell contains the number 3. The top right cell contains the number 2. The bottom right cell contains the number 1. The bottom left cell contains the number 4. The cell at the intersection of the second row and the second column contains the number 4. The cell to the right of that contains the number 1. The cell below the cell containing the number 1 contains the number 2. The cell to the left of the cell containing the number 2 contains the number 3.
Figure 7.7 How long did it take you to solve this sudoku puzzle? (You can see the answer at the end of this section.)
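
If you are curious how trial and error looks when it is made fully mechanical, below is a minimal Python sketch of a backtracking solver for this 4×4 puzzle. The grid is transcribed from Figure 7.7 (0 marks an empty cell), and the program simply tries each digit and undoes any choice that leads to a dead end, an algorithmic version of “try, try again”:

```python
# A trial-and-error algorithm (backtracking) for the 4x4 sudoku in Figure 7.7.
# Digits 1-4 must appear once per row, column, and 2x2 box, which
# automatically makes each of them sum to 10.

grid = [[3, 0, 0, 2],
        [0, 4, 1, 0],
        [0, 3, 2, 0],
        [4, 0, 0, 1]]

def valid(g, r, c, d):
    """Check whether digit d can legally go in row r, column c."""
    if d in g[r] or d in (g[i][c] for i in range(4)):
        return False
    br, bc = 2 * (r // 2), 2 * (c // 2)  # top-left corner of the 2x2 box
    return all(g[br + i][bc + j] != d for i in range(2) for j in range(2))

def solve(g):
    """Fill the first empty cell, trying each digit and backtracking on failure."""
    for r in range(4):
        for c in range(4):
            if g[r][c] == 0:
                for d in range(1, 5):
                    if valid(g, r, c, d):
                        g[r][c] = d
                        if solve(g):
                            return True
                        g[r][c] = 0  # dead end: undo and try the next digit
                return False
    return True  # no empty cells left: solved

solve(grid)
for row in grid:
    print(row)
```

Running the sketch prints the completed grid, so save it until after you have timed yourself.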

Here is another popular type of puzzle (Figure 7.8) that challenges your spatial reasoning skills. Connect all nine dots with four connecting straight lines without lifting your pencil from the paper:

A square shaped outline contains three rows and three columns of dots with equal space between them.
Figure 7.8 Did you figure it out? (The answer is at the end of this section.) Once you understand how to crack this puzzle, you won’t forget it.

Take a look at the “Puzzling Scales” logic puzzle below (Figure 7.9). Sam Loyd, a well-known puzzle master, created and refined countless puzzles throughout his lifetime (Cyclopedia of Puzzles, n.d.).

A puzzle involving a scale is shown. At the top of the figure it reads: “Sam Loyd’s Puzzling Scales.” The first row of the puzzle shows a balanced scale with 3 blocks and a top on the left and 12 marbles on the right. Below this row it reads: “Since the scales now balance.” The next row of the puzzle shows a balanced scale with just the top on the left, and 1 block and 8 marbles on the right. Below this row it reads: “And balance when arranged this way.” The third row shows an unbalanced scale with the top on the left side, which is much lower than the right side. The right side is empty. Below this row it reads: “Then how many marbles will it require to balance with that top?”
Figure 7.9 What steps did you take to solve this puzzle? You can read the solution at the end of this section.

 

Pitfalls to Problem Solving

Not all problems are successfully solved, however. What challenges stop us from successfully solving a problem? Albert Einstein once said, “Insanity is doing the same thing over and over again and expecting a different result.” Imagine a person in a room that has four doorways. One doorway that has always been open in the past is now locked. The person, accustomed to exiting the room by that particular doorway, keeps trying to get out through the same doorway even though the other three doorways are open. The person is stuck—but she just needs to go to another doorway, instead of trying to get out through the locked doorway. A mental set is the tendency to persist in approaching a problem in a way that has worked in the past but is clearly not working now.

Functional fixedness is a type of mental set where you cannot perceive an object being used for something other than what it was designed for. Duncker (1945) conducted foundational research on functional fixedness. He created an experiment in which participants were given a candle, a book of matches, and a box of thumbtacks. They were instructed to use those items to attach the candle to the wall so that it did not drip wax onto the table below. Participants had to overcome functional fixedness to solve the problem (Figure 7.10). During the Apollo 13 mission to the moon, NASA engineers at Mission Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft. An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift air filter, which saved the lives of the astronauts.

Figure a shows a book of matches, a box of thumbtacks, and a candle. Figure b shows the candle standing in the box that held the thumbtacks. A thumbtack attaches the box holding the candle to the wall.
Figure 7.10 In Duncker’s classic study, participants were provided the three objects in the top panel and asked to solve the problem. The solution is shown in the bottom portion.

In order to make good decisions, we use our knowledge and our reasoning. Often, this knowledge and reasoning is sound and solid. Sometimes, however, we are swayed by biases or by others manipulating a situation. For example, let’s say you and three friends wanted to rent a house and had a combined target budget of $1,600. The realtor shows you only very run-down houses for $1,600 and then shows you a very nice house for $2,000. Might you ask each person to pay more in rent to get the $2,000 home? Split four ways, the target budget comes to $400 each, while the nicer house would cost $500 apiece. Why would the realtor show you the run-down houses and the nice house? The realtor may be exploiting your anchoring bias. An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem. In this case, you’re so focused on the amount of money you are willing to spend that you may not recognize what kinds of houses are available at that price point.

The confirmation bias is the tendency to focus on information that confirms your existing beliefs. For example, if you think that your professor is not very nice, you notice all of the instances of rude behavior exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily basis. Hindsight bias leads you to believe that the event you just experienced was predictable, even though it really wasn’t. In other words, you knew all along that things would turn out the way they did. Representative bias describes a faulty way of thinking, in which you unintentionally stereotype someone or something; for example, you may assume that your professors spend their free time reading books and engaging in intellectual conversation, because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors.

Finally, the availability heuristic is a heuristic in which you make a decision based on an example, information, or recent experience that is readily available to you, even though it may not be the best example to inform your decision. Biases tend to “preserve that which is already established—to maintain our preexisting knowledge, beliefs, attitudes, and hypotheses” (Aronson, 1995; Kahneman, 2011). These biases are summarized in Table 7.3.

Summary of Decision Biases
Bias Description
Anchoring Tendency to focus on one particular piece of information when making decisions or problem-solving
Confirmation Focuses on information that confirms existing beliefs
Hindsight Belief that the event just experienced was predictable
Representative Unintentional stereotyping of someone or something
Availability Decision is based upon either an available precedent or an example that may be faulty
Table 7.3

Learning Objectives

By the end of this section, you will be able to:

  • Define intelligence
  • Explain the triarchic theory of intelligence
  • Identify the difference between intelligence theories
  • Explain emotional intelligence
  • Define creativity

A four-and-a-half-year-old boy sits at the kitchen table with his father, who is reading a new story aloud to him. The father turns the page to continue reading, but before he can begin, the boy says, “Wait, Daddy!” He points to the words on the new page and reads aloud, “Go, Pig! Go!” The father stops and looks at his son. “Can you read that?” he asks. “Yes, Daddy!” And he points to the words and reads again, “Go, Pig! Go!”

This father was not actively teaching his son to read, even though the child constantly asked questions about letters, words, and symbols that they saw everywhere: in the car, in the store, on the television. The dad wondered about what else his son might understand and decided to try an experiment. Grabbing a sheet of blank paper, he wrote several simple words in a list: mom, dad, dog, bird, bed, truck, car, tree. He put the list down in front of the boy and asked him to read the words. “Mom, dad, dog, bird, bed, truck, car, tree,” he read, slowing down to carefully pronounce bird and truck. Then, “Did I do it, Daddy?” “You sure did! That is very good.” The father gave his little boy a warm hug and continued reading the story about the pig, all the while wondering if his son’s abilities were an indication of exceptional intelligence or simply a normal pattern of linguistic development. Like the father in this example, psychologists have wondered what constitutes intelligence and how it can be measured.

Classifying Intelligence

What exactly is intelligence? The way that researchers have defined the concept of intelligence has been modified many times since the birth of psychology. British psychologist Charles Spearman believed intelligence consisted of one general factor, called g, which could be measured and compared among individuals. Spearman focused on the commonalities among various intellectual abilities and de-emphasized what made each unique. Long before modern psychology developed, however, ancient philosophers, such as Aristotle, held a similar view (Cianciolo & Sternberg, 2004).

Other psychologists believe that instead of a single factor, intelligence is a collection of distinct abilities. In the 1940s, Raymond Cattell proposed a theory of intelligence that divided general intelligence into two components: crystallized intelligence and fluid intelligence (Cattell, 1963). Crystallized intelligence is characterized as acquired knowledge and the ability to retrieve it. When you learn, remember, and recall information, you are using crystallized intelligence. You use crystallized intelligence all the time in your coursework by demonstrating that you have mastered the information covered in the course. Fluid intelligence encompasses the ability to see complex relationships and solve problems. Navigating your way home after being detoured onto an unfamiliar route because of road construction would draw upon your fluid intelligence. Fluid intelligence helps you tackle complex, abstract challenges in your daily life, whereas crystallized intelligence helps you overcome concrete, straightforward problems (Cattell, 1963).

Other theorists and psychologists believe that intelligence should be defined in more practical terms. For example, what types of behaviors help you get ahead in life? Which skills promote success? Think about this for a moment. Being able to recite all 45 presidents of the United States in order is an excellent party trick, but will knowing this make you a better person?

Robert Sternberg developed another theory of intelligence, which he titled the triarchic theory of intelligence because it sees intelligence as composed of three parts (Sternberg, 1988): practical, creative, and analytical intelligence (Figure 7.12).

Three boxes are arranged in a triangle. The top box contains “Analytical intelligence; academic problem solving and computation.” There is a line with arrows on both ends connecting this box to another box containing “Practical intelligence; street smarts and common sense.” Another line with arrows on both ends connects this box to another box containing “Creative intelligence; imaginative and innovative problem solving.” Another line with arrows on both ends connects this box to the first box described, completing the triangle.
Figure 7.12 Sternberg’s theory identifies three types of intelligence: practical, creative, and analytical.

Practical intelligence, as proposed by Sternberg, is sometimes compared to “street smarts.” Being practical means you find solutions that work in your everyday life by applying knowledge based on your experiences. This type of intelligence appears to be separate from the traditional understanding of IQ; individuals who score high in practical intelligence may or may not have comparable scores in creative and analytical intelligence (Sternberg, 1988).

This story about the 2007 Virginia Tech shootings illustrates both high and low practical intelligence. During the incident, one student left her class to go get a soda in an adjacent building. She planned to return to class, but when she returned to her building after getting her soda, she saw that the door she used to leave was now chained shut from the inside. Instead of thinking about why there was a chain around the door handles, she went to her class’s window and crawled back into the room. She thus potentially exposed herself to the gunman. Thankfully, she was not shot. On the other hand, a pair of students were walking on campus when they heard gunshots nearby. One friend said, “Let’s go check it out and see what is going on.” The other student said, “No way, we need to run away from the gunshots.” They did just that. As a result, both avoided harm. The student who crawled through the window demonstrated some creative intelligence but did not use common sense. She would have low practical intelligence. The student who encouraged his friend to run away from the sound of gunshots would have much higher practical intelligence.

Analytical intelligence is closely aligned with academic problem solving and computations. Sternberg says that analytical intelligence is demonstrated by an ability to analyze, evaluate, judge, compare, and contrast. When reading a classic novel for literature class, for example, it is usually necessary to compare the motives of the main characters of the book or analyze the historical context of the story. In a science course such as anatomy, you must study the processes by which the body uses various minerals in different human systems. In developing an understanding of this topic, you are using analytical intelligence. When solving a challenging math problem, you would apply analytical intelligence to analyze different aspects of the problem and then solve it section by section.

Creative intelligence is marked by inventing or imagining a solution to a problem or situation. Creativity in this realm can include finding a novel solution to an unexpected problem or producing a beautiful work of art or a well-developed short story. Imagine for a moment that you are camping in the woods with some friends and realize that you’ve forgotten your camp coffee pot. The person in your group who figures out a way to successfully brew coffee for everyone would be credited as having higher creative intelligence.

Multiple Intelligences Theory was developed by Howard Gardner, a Harvard psychologist and former student of Erik Erikson. Gardner’s theory, which has been refined for more than 30 years, is a more recent development among theories of intelligence. In Gardner’s theory, each person possesses at least eight intelligences. Among these eight intelligences, a person typically excels in some and falters in others (Gardner, 1983). Table 7.4 describes each type of intelligence.

Multiple Intelligences
Intelligence Type Characteristics Representative Career
Linguistic intelligence Perceives different functions of language, different sounds and meanings of words, may easily learn multiple languages Journalist, novelist, poet, teacher
Logical-mathematical intelligence Capable of seeing numerical patterns, strong ability to use reason and logic Scientist, mathematician
Musical intelligence Understands and appreciates rhythm, pitch, and tone; may play multiple instruments or perform as a vocalist Composer, performer
Bodily kinesthetic intelligence High ability to control the movements of the body and use the body to perform various physical tasks Dancer, athlete, athletic coach, yoga instructor
Spatial intelligence Ability to perceive the relationship between objects and how they move in space Choreographer, sculptor, architect, aviator, sailor
Interpersonal intelligence Ability to understand and be sensitive to the various emotional states of others Counselor, social worker, salesperson
Intrapersonal intelligence Ability to access personal feelings and motivations, and use them to direct behavior and reach personal goals Key component of personal success over time
Naturalist intelligence High capacity to appreciate the natural world and interact with the species within it Biologist, ecologist, environmentalist
Table 7.4

Gardner’s theory is relatively new and needs additional research to better establish empirical support. At the same time, his ideas challenge the traditional idea of intelligence to include a wider variety of abilities, although it has been suggested that Gardner simply relabeled what other theorists called “cognitive styles” as “intelligences” (Morgan, 1996). Furthermore, developing traditional measures of Gardner’s intelligences is extremely difficult (Furnham, 2009; Gardner & Moran, 2006; Klein, 1997).

Gardner’s inter- and intrapersonal intelligences are often combined into a single type: emotional intelligence. Emotional intelligence encompasses the ability to understand the emotions of yourself and others, show empathy, understand social relationships and cues, and regulate your own emotions and respond in culturally appropriate ways (Parker et al., 2009). People with high emotional intelligence typically have well-developed social skills. Some researchers, including Daniel Goleman, the author of Emotional Intelligence: Why It Can Matter More than IQ, argue that emotional intelligence is a better predictor of success than traditional intelligence (Goleman, 1995). However, emotional intelligence has been widely debated, with researchers pointing out inconsistencies in how it is defined and described, as well as questioning results of studies on a subject that is difficult to measure and study empirically (Locke, 2005; Mayer et al., 2004).


The most comprehensive theory of intelligence to date is the Cattell-Horn-Carroll (CHC) theory of cognitive abilities (Schneider & McGrew, 2018). In this theory, abilities are related and arranged in a hierarchy with general abilities at the top, broad abilities in the middle, and narrow (specific) abilities at the bottom. The narrow abilities are the only ones that can be directly measured; however, they are integrated within the other abilities. At the general level is general intelligence. Next, the broad level consists of general abilities such as fluid reasoning, short-term memory, and processing speed. Finally, as the hierarchy continues, the narrow level includes specific forms of cognitive abilities. For example, short-term memory would further break down into memory span and working memory capacity.
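
The CHC hierarchy is easy to picture as a nested data structure. The sketch below arranges only the abilities named above into a Python dictionary; the real taxonomy includes many more broad and narrow abilities:

```python
# The CHC hierarchy as a nested dictionary, using only the abilities named
# in the text; the full taxonomy lists many more broad and narrow abilities.
chc = {
    "general intelligence": {          # general level (top)
        "fluid reasoning": [],         # broad level
        "processing speed": [],
        "short-term memory": [         # broad level, with narrow,
            "memory span",             # directly measurable abilities
            "working memory capacity", # underneath it
        ],
    }
}
print(chc["general intelligence"]["short-term memory"])
```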

Intelligence can also have different meanings and values in different cultures. If you live on a small island, where most people get their food by fishing from boats, it would be important to know how to fish and how to repair a boat. If you were an exceptional angler, your peers would probably consider you intelligent. If you were also skilled at repairing boats, your intelligence might be known across the whole island. Think about your own family’s culture. What values are important for Latinx families? Italian families? In Irish families, hospitality and telling an entertaining story are marks of the culture. If you are a skilled storyteller, other members of Irish culture are likely to consider you intelligent.

Some cultures place a high value on working together as a collective. In these cultures, the importance of the group supersedes the importance of individual achievement. When you visit such a culture, how well you relate to the values of that culture exemplifies your cultural intelligence, sometimes referred to as cultural competence.

Creativity

Creativity is the ability to generate, create, or discover new ideas, solutions, and possibilities. Very creative people often have intense knowledge about something, work on it for years, look at novel solutions, seek out the advice and help of other experts, and take risks. Although creativity is often associated with the arts, it is actually a vital form of intelligence that drives people in many disciplines to discover something new. Creativity can be found in every area of life, from the way you decorate your residence to a new way of understanding how a cell works.

Creativity is often assessed as a function of one’s ability to engage in divergent thinking. Divergent thinking can be described as thinking “outside the box”; it allows an individual to arrive at unique, multiple solutions to a given problem. In contrast, convergent thinking describes the ability to provide a correct or well-established answer or solution to a problem (Cropley, 2006; Guilford, 1967).

EVERYDAY CONNECTION: Creativity

Dr. Tom Steitz, former Sterling Professor of Biochemistry and Biophysics at Yale University, spent his career looking at the structure and specific aspects of RNA molecules and how their interactions could help produce antibiotics and ward off diseases. As a result of his lifetime of work, he won the Nobel Prize in Chemistry in 2009. He wrote, “Looking back over the development and progress of my career in science, I am reminded how vitally important good mentorship is in the early stages of one’s career development and constant face-to-face conversations, debate and discussions with colleagues at all stages of research. Outstanding discoveries, insights and developments do not happen in a vacuum” (Steitz, 2010, para. 39). Based on Steitz’s comment, it becomes clear that someone’s creativity, although an individual strength, benefits from interactions with others. Think of a time when your creativity was sparked by a conversation with a friend or classmate. How did that person influence you and what problem did you solve using creativity?

Learning Objectives

By the end of this section, you will be able to:

  • Explain how intelligence tests are developed
  • Describe the history of the use of IQ tests
  • Describe the purposes and benefits of intelligence testing

While you’re likely familiar with the term “IQ” and associate it with the idea of intelligence, what does IQ really mean? IQ stands for intelligence quotient and describes a score earned on a test designed to measure intelligence. You’ve already learned that there are many ways psychologists describe intelligence (or more aptly, intelligences). Similarly, IQ tests—the tools designed to measure intelligence—have been the subject of debate throughout their development and use.

When might an IQ test be used? What do we learn from the results, and how might people use this information? While there are certainly many benefits to intelligence testing, it is important to also note the limitations and controversies surrounding these tests. For example, IQ tests have sometimes been used as arguments in support of insidious purposes, such as the eugenics movement (Severson, 2011). The infamous Supreme Court case Buck v. Bell legalized the forced sterilization of some people deemed “feeble-minded” through this type of testing, resulting in about 65,000 sterilizations (Buck v. Bell, 274 U.S. 200; Ko, 2016). Today, only professionals trained in psychology can administer IQ tests, and the purchase of most tests requires an advanced degree in psychology. Other professionals in the field, such as social workers and psychiatrists, cannot administer IQ tests. In this section, we will explore what intelligence tests measure, how they are scored, and how they were developed.

Measuring Intelligence

It seems that the human understanding of intelligence is somewhat limited when we focus on traditional or academic-type intelligence. How then, can intelligence be measured? And when we measure intelligence, how do we ensure that we capture what we’re really trying to measure (in other words, that IQ tests function as valid measures of intelligence)? In the following paragraphs, we will explore how intelligence tests were developed and the history of their use.

The IQ test has been synonymous with intelligence for over a century. In the late 1800s, Sir Francis Galton developed the first broad test of intelligence (Flanagan & Kaufman, 2004). Although he was not a psychologist, his contributions to the concepts of intelligence testing are still felt today (Gordon, 1995). Reliable intelligence testing (you may recall from earlier chapters that reliability refers to a test’s ability to produce consistent results) began in earnest during the early 1900s with a researcher named Alfred Binet (Figure 7.13). Binet was asked by the French government to develop an intelligence test to use on children to determine which ones might have difficulty in school; it included many verbally based tasks. American researchers soon realized the value of such testing. Lewis Terman, a Stanford professor, modified Binet’s work by standardizing the administration of the test and tested thousands of different-aged children to establish an average score for each age. As a result, the test was normed and standardized, which means that the test was administered consistently to a large enough representative sample of the population that the range of scores resulted in a bell curve (bell curves will be discussed later). Standardization means that the manner of administration, scoring, and interpretation of results is consistent. Norming involves giving a test to a large population so data can be collected comparing groups, such as age groups. The resulting data provide norms, or referential scores, by which to interpret future scores. Norms are not expectations of what a given group should know but a demonstration of what that group does know. Norming and standardizing the test ensures that new scores are reliable. This new version of the test was called the Stanford-Binet Intelligence Scale (Terman, 1916). Remarkably, an updated version of this test is still widely used today.
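
Norming can be illustrated with a toy computation. In the Python sketch below, every score is invented; the point is only that a sufficiently large sample, summarized by group (here, age), yields reference values against which a new test-taker’s score can be compared:

```python
# Toy illustration of norming: give a test to a large sample, then summarize
# scores by age group to get reference values. All scores are invented.
from statistics import mean, stdev

scores_by_age = {
    8:  [21, 25, 19, 28, 24, 22, 26, 23],
    9:  [27, 31, 25, 33, 29, 28, 32, 30],
    10: [34, 30, 37, 33, 36, 32, 35, 38],
}

for age, scores in scores_by_age.items():
    print(f"age {age}: mean={mean(scores):.1f}, sd={stdev(scores):.1f}")
```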

Photograph A shows a portrait of Alfred Binet. Photograph B shows six sketches of human faces. Above these faces is the label “Guide for Binet-Simon Scale. 223” The faces are arranged in three rows of two, and these rows are labeled “1, 2, and 3.” At the bottom it reads: “The psychological clinic is indebted for the loan of these cuts and those on p. 225 to the courtesy of Dr. Oliver P. Cornman, Associate Superintendent of Schools of Philadelphia, and Chairman of Committee on Backward Children Investigation. See Report of Committee, Dec. 31, 1910, appendix.”
Figure 7.13 (a) French psychologist Alfred Binet helped to develop intelligence testing. (b) This page is from a 1908 version of the Binet-Simon Intelligence Scale. Children being tested were asked which face, of each pair, was prettier.

In 1939, David Wechsler, a psychologist who spent part of his career working with World War I veterans, developed a new IQ test in the United States. Wechsler combined several subtests from other intelligence tests used between 1880 and World War I. These subtests tapped into a variety of verbal and nonverbal skills, because Wechsler believed that intelligence encompassed “the global capacity of a person to act purposefully, to think rationally, and to deal effectively with his environment” (Wechsler, 1958, p. 7). He named the test the Wechsler-Bellevue Intelligence Scale (Wechsler, 1981). This combination of subtests became one of the most extensively used intelligence tests in the history of psychology. Although its name was later changed to the Wechsler Adult Intelligence Scale (WAIS) and has been revised several times, the aims of the test remain virtually unchanged since its inception (Boake, 2002). Today, there are three intelligence tests credited to Wechsler: the Wechsler Adult Intelligence Scale, fourth edition (WAIS-IV), the Wechsler Intelligence Scale for Children, fifth edition (WISC-V), and the Wechsler Preschool and Primary Scale of Intelligence, fourth edition (WPPSI-IV) (Wechsler, 2012). These tests are used widely in schools and communities throughout the United States, and they are periodically normed and standardized as a means of recalibration. As a part of the recalibration process, the WISC-V was given to thousands of children across the country, and children taking the test today are compared with their same-age peers.

The WISC-V is composed of 14 subtests, which comprise five indices, which then render an IQ score. The five indices are Verbal Comprehension, Visual Spatial, Fluid Reasoning, Working Memory, and Processing Speed. When the test is complete, individuals receive a score for each of the five indices and a Full Scale IQ score. The method of scoring reflects the understanding that intelligence comprises multiple abilities in several cognitive realms and focuses on the mental processes that the child used to arrive at his or her answers to each test item.

The periodic recalibrations have led to an interesting observation known as the Flynn effect. Named after James Flynn, who was among the first to describe this trend, the Flynn effect refers to the observation that each generation has a significantly higher IQ than the last. Flynn himself argues, however, that increased IQ scores do not necessarily mean that younger generations are more intelligent per se (Flynn et al., 2012).

Ultimately, we are still left with the question of how valid intelligence tests are. Certainly, the most modern versions of these tests tap into more than verbal competencies, yet the specific skills that should be assessed in IQ testing, the degree to which any test can truly measure an individual’s intelligence, and the use of the results of IQ tests are still issues of debate (Gresham & Witt, 1997; Flynn et al., 2012; Richardson, 2002; Schlinger, 2003).

WHAT DO YOU THINK? Capital Punishment and Criminals with Intellectual Disabilities

The case of Atkins v. Virginia was a landmark case in the United States Supreme Court. On August 16, 1996, two men, Daryl Atkins and William Jones, robbed, kidnapped, and then shot and killed Eric Nesbitt, a local airman from the U.S. Air Force. A clinical psychologist evaluated Atkins and testified at the trial that Atkins had an IQ of 59. The mean IQ score is 100. The psychologist concluded that Atkins had an intellectual disability.

The jury found Atkins guilty, and he was sentenced to death. Atkins and his attorneys appealed to the Supreme Court. In June 2002, the Supreme Court reversed a previous decision and ruled that executions of people with intellectual disabilities are “cruel and unusual punishments” prohibited by the Eighth Amendment.

The court also decided that there was a consensus among state legislatures against executing people with intellectual disabilities and that this consensus should stand for all of the states. The Supreme Court ruling left it up to the states to determine their own definitions of intellectual disability, and those definitions vary as to who can be executed. In the Atkins case, a jury decided that because he had had many contacts with his lawyers and thus had received intellectual stimulation, his IQ had reportedly increased to a level at which the state could execute him. He was given an execution date and then received a stay of execution after it was revealed that lawyers for his co-defendant, William Jones, had coached Jones to “produce a testimony against Mr. Atkins that did match the evidence” (Liptak, 2008). After the revelation of this misconduct, Atkins was re-sentenced to life imprisonment.

Atkins v. Virginia (2002) highlights several issues regarding society’s beliefs around intelligence. In the Atkins case, the Supreme Court decided that intellectual disability does affect decision making and therefore should affect the nature of the punishment such criminals receive. Where, however, should the lines of intellectual disability be drawn? In May 2014, the Supreme Court ruled in a related case (Hall v. Florida) that IQ scores cannot be used as a final determination of a prisoner’s eligibility for the death penalty (Roberts, 2014).


The Bell Curve

The results of intelligence tests follow the bell curve, a graph in the general shape of a bell. When the bell curve is used in psychological testing, the graph demonstrates a normal distribution of a trait, in this case, intelligence, in the human population. Many human traits naturally follow the bell curve. For example, if you lined up all your female schoolmates according to height, it is likely that a large cluster of them would be the average height for an American woman: 5’4”–5’6”. This cluster would fall in the center of the bell curve, representing the average height for American women (Figure 7.14). There would be fewer women who stand closer to 4’11”. The same would be true for women of above-average height: those who stand closer to 5’11”. The trick to finding a bell curve in nature is to use a large sample size. Without a large sample size, it is less likely that the bell curve will represent the wider population. A representative sample is a subset of the population that accurately represents the general population. If, for example, you measured the height of the women in your classroom only, you might not actually have a representative sample. Perhaps the women’s basketball team wanted to take this course together, and they are all in your class. Because basketball players tend to be taller than average, the women in your class may not be a good representative sample of the population of American women. But if your sample included all the women at your school, it is likely that their heights would form a natural bell curve.

A graph of a bell curve is labeled “Height of U.S. Women.” The x axis is labeled “Height” and the y axis is labeled “Frequency.” Between the heights of five feet tall and five feet and five inches tall, the frequency rises to a curved peak, then begins dropping off at the same rate until it hits five feet ten inches tall.
Figure 7.14 Are you of below-average, average, or above-average height?
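The sample-size point can be seen in a quick simulation. The sketch below assumes, purely for illustration, that women’s heights follow a normal distribution with a mean of 65 inches and a standard deviation of 2.5 inches; a small or biased sample can miss the bell shape that a large representative sample reveals.

```python
import random

random.seed(1)  # reproducible illustration

def sample_heights(n, mu=65.0, sigma=2.5):
    """Draw n heights (inches) from an assumed normal distribution."""
    return [random.gauss(mu, sigma) for _ in range(n)]

def summarize(label, heights):
    n = len(heights)
    avg = sum(heights) / n
    near_avg = sum(1 for h in heights if 64 <= h <= 66) / n
    print(f"{label}: n={n}, mean={avg:.1f} in, {near_avg:.0%} near average")

summarize("Small class sample", sample_heights(12))
summarize("Whole school", sample_heights(100_000))

# A biased sample: the same small class, plus the basketball team.
biased = sample_heights(12) + sample_heights(10, mu=71.0, sigma=2.0)
summarize("Class with the basketball team", biased)
```

With 100,000 simulated women, the mean and the cluster around it settle close to the assumed values; the twelve-person class bounces around, and adding the taller basketball players pulls the summary away from the population it is supposed to represent.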

The same principles apply to intelligence test scores. Individuals earn a score called an intelligence quotient (IQ). Over the years, different types of IQ tests have evolved, but the way scores are interpreted remains the same. The average IQ score on an IQ test is 100. Standard deviations describe how data are dispersed in a population and give context to large data sets. The bell curve uses the standard deviation to show how all scores are dispersed from the average score (Figure 7.15). In modern IQ testing, one standard deviation is 15 points. So a score of 85 would be described as “one standard deviation below the mean.” How would you describe a score of 115 and a score of 70? Any IQ score that falls within one standard deviation above and below the mean (between 85 and 115) is considered average, and 68% of the population has IQ scores in this range. An IQ score of 130 or above is considered a superior level.

A graph of a bell curve is labeled “Intelligence Quotient Score.” The x axis is labeled “IQ,” and the y axis is labeled “Population.” Beginning at an IQ of 60, the population rises to a curved peak at an IQ of 100 and then drops off at the same rate ending near zero at an IQ of 140.
Figure 7.15 The majority of people have an IQ score between 85 and 115.
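Where do the 68% figure above, and the 2.2% figure in the next paragraph, come from? They fall directly out of the normal distribution. A short sketch using Python’s standard-library NormalDist, with the mean of 100 and standard deviation of 15 stated above, reproduces both:

```python
from statistics import NormalDist

# IQ scores are scaled to a mean of 100 and a standard deviation of 15.
iq = NormalDist(mu=100, sigma=15)

def z_score(score, mean=100, sd=15):
    """How many standard deviations a score sits from the mean."""
    return (score - mean) / sd

for score in (70, 85, 115, 130):
    print(f"IQ {score}: {z_score(score):+.0f} SD from the mean")

# Share of the population within one SD of the mean (85-115): about 68%.
print(f"Within 85-115: {iq.cdf(115) - iq.cdf(85):.1%}")
# Share more than two SDs below the mean (below 70): about 2.3%,
# in line with the ~2.2% cited in the text.
print(f"Below 70: {iq.cdf(70):.1%}")
```

A score of 115 is one standard deviation above the mean, and a score of 70 is two standard deviations below it, which answers the question posed above.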

Only 2.2% of the population has an IQ score below 70 (American Psychological Association [APA], 2013). A score of 70 or below indicates significant cognitive delays. When these delays are combined with major deficits in adaptive functioning, a person is diagnosed with an intellectual disability (American Association on Intellectual and Developmental Disabilities, 2013). Formerly known as mental retardation, this condition is now referred to as intellectual disability, and it has four subtypes: mild, moderate, severe, and profound (Table 7.5). The Diagnostic and Statistical Manual of Mental Disorders lists criteria for each subgroup (APA, 2013).

Characteristics of Cognitive Disorders
Intellectual Disability Subtype | Percentage of Population with Intellectual Disabilities | Description
Mild | 85% | 3rd- to 6th-grade skill level in reading, writing, and math; may be employed and live independently
Moderate | 10% | Basic reading and writing skills; functional self-care skills; requires some oversight
Severe | 5% | Functional self-care skills; requires oversight of daily environment and activities
Profound | <1% | May be able to communicate verbally or nonverbally; requires intensive oversight
Table 7.5

On the other end of the intelligence spectrum are those individuals whose IQs fall into the highest ranges. Consistent with the bell curve, about 2% of the population falls into this category. People are considered gifted if they have an IQ score of 130 or higher, or superior intelligence in a particular area. Long ago, popular belief suggested that people of high intelligence were maladjusted. This idea was disproven through a groundbreaking study of gifted children. In 1921, Lewis Terman began a longitudinal study of over 1500 children with IQs over 135 (Terman, 1925). His findings showed that these children became well-educated, successful adults who were, in fact, well-adjusted (Terman & Oden, 1947). Additionally, Terman’s study showed that the subjects were above average in physical build and attractiveness, dispelling an earlier popular notion that highly intelligent people were “weaklings.” Some people with very high IQs elect to join Mensa, an organization dedicated to identifying, researching, and fostering intelligence. Members must have an IQ score in the top 2% of the population, and they may be required to pass other exams in their application to join the group.

DIG DEEPER: What’s in a Name? Intellectual Disability

In the past, individuals with IQ scores below 70 and significant adaptive and social functioning delays were diagnosed with mental retardation. When this diagnosis was first named, it replaced earlier terms that were more negative and insensitive, and the title held no social stigma; several prominent research and support organizations even used the word in their names and mission statements. However, members of those populations as well as their families and supporting professionals found that the term was not only inaccurate, but demeaning and insulting. As such, the DSM-5 now labels this diagnosis as “intellectual disability.” Many states once had a Department of Mental Retardation to serve those diagnosed with such cognitive delays, but most have changed the name to Department of Developmental Disabilities or something similar. Due to the passage of “Rosa’s Law” in 2010 and to the growing support for changing the terminology, most U.S. federal agencies formally adopted the term “intellectual disability.” While the change was widely supported, you can view in the Federal Register several counterpoints from parents of people with intellectual disabilities, who felt that the new term was imprecise and less applicable to their children. Earlier in the chapter, we discussed how language affects how we think. Do you think renaming these departments has any impact on how people regard those with developmental disabilities? Does a different name give people more dignity, and if so, how? Do you think the terminology is likely to change again? Why or why not?

Why Measure Intelligence?

The value of IQ testing is most evident in educational or clinical settings. Children who seem to be experiencing learning difficulties or severe behavioral problems can be tested to ascertain whether the child’s difficulties can be partly attributed to an IQ score that is significantly different from the mean for her age group. Without IQ testing—or another measure of intelligence—children and adults needing extra support might not be identified effectively. In addition, IQ testing is used in courts to determine whether a defendant has special or extenuating circumstances that preclude him from participating in some way in a trial. People also use IQ testing results to seek disability benefits from the Social Security Administration.

The following case study demonstrates the usefulness and benefits of IQ testing. Candace, a 14-year-old girl experiencing problems at school in Connecticut, was referred for a court-ordered psychological evaluation. She was in regular education classes in ninth grade and was failing every subject. Candace had never been a stellar student but had always been passed to the next grade. Frequently, she would curse at any of her teachers who called on her in class. She also got into fights with other students and occasionally shoplifted. When she arrived for the evaluation, Candace immediately said that she hated everything about school, including the teachers, the rest of the staff, the building, and the homework. Her parents stated that they felt their daughter was picked on, because she was of a different race than the teachers and most of the other students. When asked why she cursed at her teachers, Candace replied, “They only call on me when I don’t know the answer. I don’t want to say, ‘I don’t know’ all of the time and look like an idiot in front of my friends. The teachers embarrass me.” She was given a battery of tests, including an IQ test. Her score on the IQ test was 68. What does Candace’s score say about her ability to excel or even succeed in regular education classes without assistance? Why were her difficulties never noticed or addressed?

Learning Objectives

By the end of this section, you will be able to:

  • Describe how genetics and environment affect intelligence
  • Explain the relationship between IQ scores and socioeconomic status
  • Describe the difference between a learning disability and a developmental disorder

A young girl, born of teenage parents, lives with her grandmother in rural Mississippi. They are poor—in serious poverty—but they do their best to get by with what they have. She learns to read when she is just 3 years old. As she grows older, she longs to live with her mother, who now resides in Wisconsin. She moves there at the age of 6 years. At 9 years of age, she is raped. During the next several years, several different male relatives repeatedly molest her. Her life unravels. She turns to drugs and sex to fill the deep, lonely void inside her. Her mother then sends her to Nashville to live with her father, who imposes strict behavioral expectations upon her, and over time, her wild life settles once again. She begins to experience success in school, and at 19 years old, becomes the youngest and first African-American female news anchor (“Dates and Events,” n.d.). The woman—Oprah Winfrey—goes on to become a media giant known for both her intelligence and her empathy.

High Intelligence: Nature or Nurture?

Where does high intelligence come from? Some researchers believe that intelligence is a trait inherited from a person’s parents. Scientists who research this topic typically use twin studies to determine the heritability of intelligence. The Minnesota Study of Twins Reared Apart is one of the most well-known twin studies. In this investigation, researchers found that identical twins raised together and identical twins raised apart exhibit a higher correlation between their IQ scores than siblings or fraternal twins raised together (Bouchard et al., 1990). The findings from this study reveal a genetic component to intelligence (Figure 7.16). At the same time, other psychologists believe that intelligence is shaped by a child’s developmental environment. On this view, if parents provide their children with intellectual stimulation, even before birth, the children are likely to absorb the benefits of that stimulation, and those benefits will be reflected in their intelligence levels.

A chart shows correlations of IQs for people of varying relationships. The bottom is labeled “Percent IQ Correlation” and the left side is labeled “Relationship.” The percent IQ Correlation for relationships where no genes are shared, including adoptive parent-child pairs, similarly aged unrelated children raised together, and adoptive siblings are around 21 percent, 30 percent, and 32 percent, respectively. The percent IQ Correlation for relationships where 25 percent of genes are shared, as in half-siblings, is around 33 percent. The percent IQ Correlation for relationships where 50 percent of genes are shared, including parent-children pairs, and fraternal twins raised together, are roughly 44 percent and 62 percent, respectively. A relationship where 100 percent of genes are shared, as in identical twins raised apart, results in a nearly 80 percent IQ correlation.
Figure 7.16 The correlations of IQs of unrelated versus related persons reared apart or together suggest a genetic component to intelligence.
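To see what “a higher correlation between their IQ scores” means computationally, the sketch below computes a Pearson correlation coefficient over paired scores. The score pairs are invented for illustration; they are not the Minnesota study’s data.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between paired scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented IQ pairs: identical twins' scores track each other closely;
# unrelated (adoptive) siblings' scores track only loosely.
twin_a     = [112, 95, 103, 88, 120, 99]
twin_b     = [109, 97, 101, 91, 117, 102]
adoptive_a = [112, 95, 103, 88, 120, 99]
adoptive_b = [104, 99, 91, 107, 103, 94]

print(f"Identical twins:   r = {pearson_r(twin_a, twin_b):.2f}")
print(f"Adoptive siblings: r = {pearson_r(adoptive_a, adoptive_b):.2f}")
```

In the figure above, that coefficient rises with genetic relatedness, which is exactly the pattern the paragraph describes.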

The reality is that aspects of each idea are probably correct. In fact, one study suggests that although genetics seem to be in control of the level of intelligence, the environmental influences provide both stability and change to trigger the manifestation of cognitive abilities (Bartels et al., 2002). Certainly, there are behaviors that support the development of intelligence, but the genetic component of high intelligence should not be ignored. As with all heritable traits, however, it is not always possible to isolate how and when high intelligence is passed on to the next generation.

Range of reaction is the theory that each person responds to the environment in a unique way based on his or her genetic makeup. According to this idea, your genetic potential is a fixed quantity, but whether you reach your full intellectual potential depends upon the environmental stimulation you experience, especially in childhood. Think about this scenario: A couple adopts a child who has average genetic intellectual potential. They raise her in an extremely stimulating environment. What will happen to the couple’s new daughter? It is likely that the stimulating environment will improve her intellectual outcomes over the course of her life. But what happens if this experiment is reversed? If a child with an extremely strong genetic background is placed in an environment that does not stimulate him, what happens? Interestingly, according to a longitudinal study of highly gifted individuals, “the two extremes of optimal and pathological experience are both represented disproportionately in the backgrounds of creative individuals”; however, those who experienced supportive family environments were more likely to report being happy (Csikszentmihalyi & Csikszentmihalyi, 1993, p. 187).

Another challenge in determining the origins of high intelligence is the confounding nature of our human social structures. It is troubling to note that some ethnic groups perform better on IQ tests than others, and it is likely that the results do not have much to do with the quality of each ethnic group’s intellect. The same is true for socioeconomic status. Children who live in poverty experience more pervasive, daily stress than children who do not worry about the basic needs of safety, shelter, and food. These worries can negatively affect how the brain functions and develops, causing a dip in IQ scores. Mark Kishiyama and his colleagues determined that children living in poverty demonstrated reduced prefrontal brain functioning comparable to that of children with damage to the lateral prefrontal cortex (Kishiyama et al., 2009).

The debate around the foundations of and influences on intelligence exploded in 1969, when an educational psychologist named Arthur Jensen published the article “How Much Can We Boost IQ and Scholastic Achievement?” in the Harvard Educational Review. Jensen had administered IQ tests to diverse groups of students, and his results led him to the conclusion that IQ is determined by genetics. He also posited that intelligence was made up of two types of abilities: Level I and Level II. In his theory, Level I is responsible for rote memorization, whereas Level II is responsible for conceptual and analytical abilities. According to his findings, Level I remained consistent across the human race, while Level II exhibited differences among ethnic groups (Modgil & Routledge, 1987). Jensen’s most controversial conclusion was that Level II intelligence is more prevalent among Asians, then Caucasians, then African Americans. Robert Williams was among those who called out racial bias in Jensen’s results (Williams, 1970).

Obviously, Jensen’s interpretation of his own data caused an intense response in a nation that continued to grapple with the effects of racism (Fox, 2012). However, Jensen’s ideas were not solitary or unique; rather, they represented one of many examples of psychologists asserting racial differences in IQ and cognitive ability. In fact, Rushton and Jensen (2005) reviewed three decades’ worth of research on the relationship between race and cognitive ability. Jensen’s belief in the inherited nature of intelligence and in the validity of the IQ test as the truest measure of intelligence is at the core of his conclusions. If, however, you believe that intelligence is more than Levels I and II, or that IQ tests do not control for socioeconomic and cultural differences among people, then perhaps you can dismiss Jensen’s conclusions as a single window that looks out on the complicated and varied landscape of human intelligence.

In a related story, parents of African American students filed a case against the State of California in 1979, because they believed that the testing method used to identify students with learning disabilities was culturally unfair, as the tests were normed and standardized using white children (Larry P. v. Riles). The state’s testing method disproportionately identified African American children, resulting in many students being incorrectly classified as “mentally retarded.” According to a summary of the case, Larry P. v. Riles:

In violation of Title VI of the Civil Rights Act of 1964, the Rehabilitation Act of 1973, and the Education for All Handicapped Children Act of 1975, defendants have utilized standardized intelligence tests that are racially and culturally biased, have a discriminatory impact against black children, and have not been validated for the purpose of essentially permanent placements of black children into educationally dead-end, isolated, and stigmatizing classes for the so-called educable mentally retarded. Further, these federal laws have been violated by defendants’ general use of placement mechanisms that, taken together, have not been validated and result in a large over-representation of black children in the special E.M.R. classes. (Larry P. v. Riles, par. 6)

Once again, the limitations of intelligence testing were revealed.

What Are Learning Disabilities?

Learning disabilities are cognitive disorders that affect different areas of cognition, particularly language or reading. It should be pointed out that learning disabilities are not the same thing as intellectual disabilities. Learning disabilities are considered specific neurological impairments rather than global intellectual or developmental disabilities. A person with a language disability has difficulty understanding or using spoken language, whereas someone with a reading disability, such as dyslexia, has difficulty processing what he or she is reading.

Often, learning disabilities are not recognized until a child reaches school age. One confounding aspect of learning disabilities is that they most often affect children with average to above-average intelligence. In other words, the disability is specific to a particular area and not a measure of overall intellectual ability. At the same time, learning disabilities tend to exhibit comorbidity with other disorders, like attention-deficit hyperactivity disorder (ADHD). Between 30% and 70% of individuals with diagnosed cases of ADHD also have some sort of learning disability (Riccio et al., 1994). Let’s take a look at three examples of common learning disabilities: dysgraphia, dyslexia, and dyscalculia.

Dysgraphia

Children with dysgraphia have a learning disability that results in a struggle to write legibly. The physical task of writing with pen and paper is extremely challenging for them. These children often have extreme difficulty putting their thoughts down on paper (Smits-Engelsman & Van Galen, 1997). This difficulty is inconsistent with the child’s IQ: that is, based on the child’s IQ and/or abilities in other areas, a child with dysgraphia should be able to write, but can’t. Children with dysgraphia may also have problems with spatial abilities.

Students with dysgraphia need academic accommodations to help them succeed in school. These accommodations can provide students with alternative assessment opportunities to demonstrate what they know (Barton, 2003). For example, a student with dysgraphia might be permitted to take an oral exam rather than a traditional paper-and-pencil test. Treatment is usually provided by an occupational therapist, although there is some question as to how effective such treatment is (Zwicker, 2005).

Dyslexia

Dyslexia is the most common learning disability in children. An individual with dyslexia exhibits an inability to correctly process letters. The neurological mechanism for sound processing does not work properly in someone with dyslexia. As a result, dyslexic children may not understand sound-letter correspondence. A child with dyslexia may mix up letters within words and sentences—letter reversals, such as those shown in Figure 7.17, are a hallmark of this learning disability—or skip whole words while reading. A dyslexic child may have difficulty spelling words correctly while writing. Because of the disordered way that the brain processes letters and sound, learning to read is a frustrating experience. Some dyslexic individuals cope by memorizing the shapes of most words, but they never actually learn to read (Berninger, 2008).

Two columns and five rows all containing the word “teapot” are shown. “Teapot” is written ten times with the letters jumbled, sometimes appearing backwards and upside down.
Figure 7.17 These written words show variations of the word “teapot” as written by individuals with dyslexia.

Dyscalculia