Introduction to Psychology

PSY 101

Julie Lazzara

Maricopa Community Colleges

Introduction to Psychology by Julie Lazzara is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.

This text is a derivative of Psychology 2e, Noba Project, and Lumen Learning. Modification, adaptation, and original content authored by: Prof. Julie Lazzara

Acknowledgements

This text, based on Psychology 2e, an OpenStax resource, includes additional material from the Noba Project and Lumen Learning. Modification, adaptation, and original content authored by: Prof. Julie Lazzara

A print version of this text can be purchased at cost (charges for printing and shipping). However, this is not required.

The textbook can be downloaded in different formats on the homepage of the book at no cost.

About OpenStax

OpenStax is a nonprofit based at Rice University, and it’s our mission to improve student access to education. Our first openly licensed college textbook was published in 2012, and our library has since scaled to over 35 books for college and AP® courses used by hundreds of thousands of students. OpenStax Tutor, our low-cost personalized learning tool, is being piloted in college courses throughout the country. Through our partnerships with philanthropic foundations and our alliance with other educational resource organizations, OpenStax is breaking down the most common barriers to learning and empowering students and instructors to succeed.  This textbook was written to increase student access to high-quality learning materials, maintaining the highest standards of academic rigor at little to no cost.

About Psychology 2e

Psychology 2e is designed to meet scope and sequence requirements for the single-semester introduction to psychology course. The book offers a comprehensive treatment of core concepts, grounded in both classic studies and current and emerging research. The text also includes coverage of the DSM-5 in examinations of psychological disorders. Psychology 2e incorporates discussions that reflect the diversity within the discipline, as well as the diversity of cultures and communities across the globe.  Psychology 2e is licensed under a Creative Commons Attribution 4.0 International (CC BY) license, which means that you can distribute, remix, and build upon the content, as long as you provide attribution to OpenStax and its content contributors.

The first edition of Psychology has been used by thousands of faculty and hundreds of thousands of students since its publication in 2015. OpenStax mined our adopters’ extensive and helpful feedback to identify the most significant revision needs while maintaining the organization that many instructors had incorporated into their courses. Specific surveys, pre-revision reviews, and customization analysis, as well as analytical data from OpenStax partners and online learning environments, all aided in planning the revision.

The result is a book that thoroughly treats psychology’s foundational concepts while adding current and meaningful coverage in specific areas. Psychology 2e retains its manageable scope and contains ample features to draw learners into the discipline.  Structurally, the textbook remains similar to the first edition, with no chapter reorganization and very targeted changes at the section level.

About the Authors

Senior Contributing Authors

Rose M. Spielman (Content Lead)
Dr. Rose Spielman has been teaching psychology and working as a licensed clinical psychologist for 20 years. Her academic career has included positions at Quinnipiac University, Housatonic Community College, and Goodwin College. As a licensed clinical psychologist, educator, and volunteer director, Rose is able to connect with people from diverse backgrounds and facilitate treatment, advocacy, and education. In her years of work as a teacher, therapist, and administrator, she has helped thousands of students and clients and taught them to advocate for themselves and move their lives forward to become more productive citizens and family members.

William J. Jenkins, Mercer University
Marilyn D. Lovett, Spelman College

Contributing Authors

Mara Aruguete, Lincoln University
Laura Bryant, Eastern Gateway Community College
Barbara Chappell, Walden University
Kathryn Dumper, Bainbridge State College
Arlene Lacombe, Saint Joseph’s University
Julie Lazzara, Paradise Valley Community College
Tammy McClain, West Liberty University
Barbara B. Oswald, Miami University
Marion Perlmutter, University of Michigan
Mark D. Thomas, Albany State University

The Science of Psychology

WHY STUDY PSYCHOLOGY?

Often, students take their first psychology course because they are interested in helping others and want to learn more about themselves and why they act the way they do. Sometimes, students take a psychology course because it either satisfies a general education requirement or is required for a program of study such as nursing or pre-med. Many of these students develop such an interest in the area that they go on to declare psychology as their major.

An education in psychology is valuable for a number of reasons. Psychology students hone critical thinking skills and are trained in the use of the scientific method. Critical thinking is the active application of a set of skills to information for the understanding and evaluation of that information. The evaluation of information—assessing its reliability and usefulness—is an important skill in a world full of competing “facts,” many of which are designed to be misleading. For example, critical thinking involves maintaining an attitude of skepticism, recognizing internal biases, making use of logical thinking, asking appropriate questions, and making observations. Psychology students also can develop better communication skills during the course of their undergraduate coursework (American Psychological Association, 2011). Together, these factors increase students’ scientific literacy and prepare students to critically evaluate the various sources of information they encounter.

In addition to these broad-based skills, psychology students come to understand the complex factors that shape one’s behavior. They appreciate the interaction of our biology, our environment, and our experiences in determining who we are and how we will behave. They learn about basic principles that guide how we think and behave, and they come to recognize the tremendous diversity that exists across individuals and across cultural boundaries (American Psychological Association, 2011).

What is creativity? Why do some people become homeless? What are prejudice and discrimination? What is consciousness? The field of psychology explores questions like these. Psychology refers to the scientific study of the mind and behavior. The primary goals of psychology are to describe, explain, predict, and control behavior.

Relief of Psamtik I making an offering to Ra-Horakhty. The earliest records of a psychological experiment go all the way back to the Pharaoh Psamtik I of Egypt in the 7th century B.C. [Image: Neithsabes, CC0 Public Domain, https://goo.gl/m25gce]

Learning Objectives

  • Describe the precursors to the establishment of the science of psychology.
  • Identify key individuals and events in the history of American psychology.
  • Describe the rise of professional psychology in America.
  • Develop a basic understanding of the processes of scientific development and change.

Deciding where to begin the story of the history of psychology is always difficult. Some would start with ancient Greece; others would look to a demarcation in the late 19th century, when the science of psychology was formally proposed and instituted. These two perspectives, and all that is in between, are appropriate for describing a history of psychology. For the purposes of this chapter, we will examine the development of psychology in America and use the mid-19th century as our starting point for what we refer to as a history of modern psychology.

Psychology is an exciting field and the history of psychology offers the opportunity to make sense of how it has grown and developed. The history of psychology also provides perspective. Rather than a dry collection of names and dates, the history of psychology tells us about the important intersection of time and place that defines who we are. Consider what happens when you meet someone for the first time. The conversation usually begins with a series of questions such as, “Where did you grow up?” “How long have you lived here?” “Where did you go to school?” The importance of history in defining who we are cannot be overstated. Whether you are seeing a physician, talking with a counselor, or applying for a job, everything begins with a history. The same is true for studying the history of psychology; getting a history of the field helps to make sense of where we are and how we got here.

A Prehistory of Psychology

Precursors to American psychology can be found in philosophy and physiology. Philosophers such as John Locke (1632–1704) and Thomas Reid (1710–1796) promoted empiricism, the idea that all knowledge comes from experience. The work of Locke, Reid, and others emphasized the role of the human observer and the primacy of the senses in defining how the mind comes to acquire knowledge. In American colleges and universities in the early 1800s, these principles were taught as courses on mental and moral philosophy. Most often these courses taught about the mind based on the faculties of intellect, will, and the senses (Fuchs, 2000).

Physiology and Psychophysics

Philosophical questions about the nature of mind and knowledge were matched in the 19th century by physiological investigations of the sensory systems of the human observer. German physiologist Hermann von Helmholtz (1821–1894) measured the speed of the neural impulse and explored the physiology of hearing and vision. His work indicated that our senses can deceive us and are not a mirror of the external world. Such work showed that even though the human senses were fallible, the mind could be measured using the methods of science. In all, it suggested that a science of psychology was feasible.

An important implication of Helmholtz’s work was that there is a psychological reality and a physical reality and that the two are not identical. This was not a new idea; philosophers like John Locke had written extensively on the topic, and in the 19th century, philosophical speculation about the nature of mind became subject to the rigors of science.

The question of the relationship between the mental (experiences of the senses) and the material (external reality) was investigated by a number of German researchers including Ernst Weber and Gustav Fechner. Their work was called psychophysics, and it introduced methods for measuring the relationship between physical stimuli and human perception that would serve as the basis for the new science of psychology (Fancher & Rutherford, 2011).

Wilhelm Wundt
Wilhelm Wundt is considered one of the founding figures of modern psychology. [CC0 Public Domain, https://goo.gl/m25gce]

The formal development of modern psychology is usually credited to the work of German physician, physiologist, and philosopher Wilhelm Wundt (1832–1920). Wundt helped to establish the field of experimental psychology by serving as a strong promoter of the idea that psychology could be an experimental field and by providing classes, textbooks, and a laboratory for training students. In 1875, he joined the faculty at the University of Leipzig and quickly began to make plans for the creation of a program of experimental psychology. In 1879, he complemented his lectures on experimental psychology with a laboratory experience: an event that has served as the popular date for the establishment of the science of psychology.

The response to the new science was immediate and global. Wundt attracted students from around the world to study the new experimental psychology and work in his lab. Students were trained to offer detailed self-reports of their reactions to various stimuli, a procedure known as introspection. The goal was to identify the elements of consciousness. In addition to the study of sensation and perception, research was done on mental chronometry, more commonly known as reaction time. The work of Wundt and his students demonstrated that the mind could be measured and the nature of consciousness could be revealed through scientific means. It was an exciting proposition and one that found great interest in America. After the opening of Wundt’s lab in 1879, it took just four years for the first psychology laboratory to open in the United States (Benjamin, 2007).

Scientific Psychology Comes to the United States

Wundt’s version of psychology arrived in America most visibly through the work of Edward Bradford Titchener (1867–1927). A student of Wundt’s, Titchener brought to America a brand of experimental psychology referred to as “structuralism.” Structuralists were interested in the contents of the mind—what the mind is. For Titchener, the general adult mind was the proper focus for the new psychology, and he excluded from study those with mental deficiencies, children, and animals (Evans, 1972; Titchener, 1909).

Experimental psychology spread rather rapidly throughout North America. By 1900, there were more than 40 laboratories in the United States and Canada (Benjamin, 2000). Psychology in America also organized early with the establishment of the American Psychological Association (APA) in 1892. Titchener felt that this new organization did not adequately represent the interests of experimental psychology, so, in 1904, he organized a group of colleagues to create what is now known as the Society of Experimental Psychologists (Goodwin, 1985). The group met annually to discuss research in experimental psychology. Reflecting the times, women researchers were not invited (or welcome). It is interesting to note that Titchener’s first doctoral student was a woman, Margaret Floy Washburn (1871–1939).

Striking a balance between the science and practice of psychology continues to this day. In 1988, the American Psychological Society (now known as the Association for Psychological Science) was founded with the central mission of advancing psychological science.

Toward a Functional Psychology

Photograph of William James from 1902.
William James was one of the leading figures in a new perspective on psychology called functionalism. [Image: Notman Studios, CC0 Public Domain, https://goo.gl/m25gce]

While Titchener and his followers adhered to a structural psychology, others in America were pursuing different approaches. William James, G. Stanley Hall, and James McKeen Cattell were among a group that became identified with “functionalism.” Influenced by Darwin’s evolutionary theory, functionalists were interested in the activities of the mind—what the mind does. An interest in functionalism opened the way for the study of a wide range of approaches, including animal and comparative psychology (Benjamin, 2007).

William James (1842–1910) wrote what is regarded as perhaps the most influential and important book in the field of psychology, Principles of Psychology, published in 1890. Opposed to the reductionist ideas of Titchener, James proposed that consciousness is ongoing and continuous; it cannot be isolated and reduced to elements. For James, consciousness helped us adapt to our environment in such ways as allowing us to make choices and have personal responsibility over those choices.

At Harvard, James occupied a position of authority and respect in psychology and philosophy. Through his teaching and writing, he influenced psychology for generations. One of his students, Mary Whiton Calkins (1863–1930), faced many of the challenges that confronted Margaret Floy Washburn and other women interested in pursuing graduate education in psychology. With much persistence, Calkins was able to study with James at Harvard. She eventually completed all the requirements for the doctoral degree, but Harvard refused to grant her a diploma because she was a woman. Despite these challenges, Calkins went on to become an accomplished researcher and the first woman elected president of the American Psychological Association in 1905 (Scarborough & Furumoto, 1987).

G. Stanley Hall (1844–1924) made substantial and lasting contributions to the establishment of psychology in the United States. At Johns Hopkins University, he founded the first psychological laboratory in America in 1883. In 1887, he created the first journal of psychology in America, American Journal of Psychology. In 1892, he founded the American Psychological Association (APA); in 1909, he invited and hosted Freud at Clark University (the only time Freud visited America). Influenced by evolutionary theory, Hall was interested in the process of adaptation and human development. Using surveys and questionnaires to study children, Hall wrote extensively on child development and education. While graduate education in psychology was restricted for women in Hall’s time, it was all but non-existent for African Americans. In another first, Hall mentored Francis Cecil Sumner (1895–1954) who, in 1920, became the first African American to earn a Ph.D. in psychology in America (Guthrie, 2003).

James McKeen Cattell (1860–1944) received his Ph.D. with Wundt but quickly turned his interests to the assessment of individual differences. Influenced by the work of Darwin’s cousin, Francis Galton, Cattell believed that mental abilities such as intelligence were inherited and could be measured using mental tests. Like Galton, he believed society was better served by identifying those with superior intelligence and supported efforts to encourage them to reproduce. Such beliefs were associated with eugenics (the promotion of selective breeding) and fueled early debates about the contributions of heredity and environment in defining who we are. At Columbia University, Cattell developed a department of psychology that became world-famous, and he also promoted psychological science through advocacy and as a publisher of scientific journals and reference works (Fancher, 1987; Sokal, 1980).

The Growth of Psychology

Throughout the first half of the 20th century, psychology continued to grow and flourish in America. It was large enough to accommodate varying points of view on the nature of mind and behavior. Gestalt psychology is a good example. The Gestalt movement began in Germany with the work of Max Wertheimer (1880–1943). Opposed to the reductionist approach of Wundt’s laboratory psychology, Wertheimer and his colleagues believed that studying the whole of any experience was richer than studying individual aspects of that experience. The saying “the whole is greater than the sum of its parts” is a Gestalt perspective. Consider that a melody is an additional element beyond the collection of notes that comprise it. The Gestalt psychologists proposed that the mind often processes information simultaneously rather than sequentially. For instance, when you look at a photograph, you see a whole image, not just a collection of pixels of color. Using Gestalt principles, Wertheimer and his colleagues also explored the nature of learning and thinking. Most of the German Gestalt psychologists were Jewish and were forced to flee the Nazi regime due to threats to both academic and personal freedom. In America, they were able to introduce a new audience to the Gestalt perspective, demonstrating how it could be applied to perception and learning (Wertheimer, 1938). In many ways, the work of the Gestalt psychologists served as a precursor to the rise of cognitive psychology in America (Benjamin, 2007).

Behaviorism emerged early in the 20th century and became a major force in American psychology. Championed by psychologists such as John B. Watson (1878–1958) and B. F. Skinner (1904–1990), behaviorism rejected any reference to mind and viewed overt and observable behavior as the proper subject matter of psychology. Through the scientific study of behavior, it was hoped that laws of learning could be derived that would promote the prediction and control of behavior. Russian physiologist Ivan Pavlov (1849–1936) influenced early behaviorism in America. His work on conditioned learning, popularly referred to as classical conditioning, provided support for the notion that learning and behavior were controlled by events in the environment and could be explained with no reference to mind or consciousness (Fancher, 1987).

For decades, behaviorism dominated American psychology. By the 1960s, psychologists began to recognize that behaviorism was unable to fully explain human behavior because it neglected mental processes. The turn toward a cognitive psychology was not new. In the 1930s, British psychologist Frederic C. Bartlett (1886–1969) explored the idea of the constructive mind, recognizing that people use their past experiences to construct frameworks in which to understand new experiences. In the 1950s, Jerome Bruner conducted pioneering studies on cognitive aspects of sensation and perception. Roger Brown conducted original research on language and memory, coined the term “flashbulb memory,” and figured out how to study the tip-of-the-tongue phenomenon (Benjamin, 2007). George Miller’s research on working memory is legendary. His 1956 paper “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information” is one of the most highly cited papers in psychology. A popular interpretation of Miller’s research was that the number of bits of information an average human can hold in working memory is 7 ± 2. Around the same time, the study of computer science was growing and was used as an analogy to explore and understand how the mind works. The work of Miller and others in the 1950s and 1960s has inspired tremendous interest in cognition and neuroscience, both of which dominate much of contemporary American psychology.

Applied Psychology in America

In America, there has always been an interest in the application of psychology to everyday life. Mental testing is an important example. Modern intelligence tests were developed by the French psychologist Alfred Binet (1857–1911). His goal was to develop a test that would identify schoolchildren in need of educational support. His test, which included tasks of reasoning and problem solving, was introduced in the United States by Henry Goddard (1866–1957) and later standardized by Lewis Terman (1877–1956) at Stanford University. The assessment and meaning of intelligence has fueled debates in American psychology and society for nearly 100 years. Much of this is captured in the nature-nurture debate that raises questions about the relative contributions of heredity and environment in determining intelligence (Fancher, 1987).

Applied psychology was not limited to mental testing. What psychologists were learning in their laboratories was applied in many settings including the military, business, industry, and education. The early 20th century was witness to rapid advances in applied psychology.

Clinical psychology was also an early application of experimental psychology in America. Lightner Witmer (1867–1956) received his Ph.D. in experimental psychology with Wilhelm Wundt and returned to the University of Pennsylvania, where he opened a psychological clinic in 1896. Witmer believed that because psychology dealt with the study of sensation and perception, it should be of value in treating children with learning and behavioral problems. He is credited as the founder of both clinical and school psychology (Benjamin & Baker, 2004).

Psychology as a Profession

Careers in Psychology

Learning Objectives

By the end of this section, you will be able to:

  • Understand educational requirements for careers in academic settings
  • Understand the demands of a career in an academic setting
  • Understand career options outside of academic settings

Psychologists can work in many different places doing many different things. In general, anyone wishing to continue a career in psychology at a 4-year institution of higher education will have to earn a doctoral degree in psychology for some specialties and at least a master’s degree for others. In most areas of psychology, this means earning a Ph.D. in a relevant area of psychology. Literally, Ph.D. refers to a doctor of philosophy degree, but here, philosophy does not refer to the field of philosophy per se. Rather, philosophy in this context refers to many different disciplinary perspectives that would be housed in a traditional college of liberal arts and sciences.

The requirements to earn a Ph.D. vary from country to country and even from school to school, but usually, individuals earning this degree must complete a dissertation. A dissertation is essentially a long research paper or bundled published articles describing research that was conducted as a part of the candidate’s doctoral training. In the United States, a dissertation generally has to be defended before a committee of expert reviewers before the degree is conferred (Figure 1.17).

A photograph shows several people gathered outdoors, wearing caps and gowns, at a graduation ceremony.
Figure 1.17 Doctoral degrees are generally conferred in formal ceremonies involving special attire and rites. (credit: Public Affairs Office Fort Wainwright)

Once someone earns a Ph.D., they may seek a faculty appointment at a college or university. Being on the faculty of a college or university often involves dividing time between teaching, research, and service to the institution and profession. The amount of time spent on each of these primary responsibilities varies dramatically from school to school, and it is not uncommon for faculty to move from place to place in search of the best personal fit among various academic environments. The previous section detailed some of the major areas that are commonly represented in psychology departments around the country; thus, depending on the training received, an individual could be anything from a biological psychologist to a clinical psychologist in an academic setting (Figure 1.18).

A pie chart is labeled “Percent of 2009 Psychology Doctorates Employed in Different Sectors.” The percentage breakdown is University: 26%, Hospital or health service: 25%, Government/VA medical center: 16%, Business or nonprofit: 10%, Other educational institutions: 8%, Medical school: 6%, and Independent practice: 6%. Beneath the pie chart, the label reads: “Source: Michalski, Kohout, Wicherski, & Hart, 2011.”
Figure 1.18 Individuals earning a Ph.D. in psychology have a range of employment options.

Oftentimes, schools offer more courses in psychology than their full-time faculty can teach. In these cases, it is not uncommon to bring in an adjunct faculty member or instructor. Adjunct faculty members and instructors usually have an advanced degree in psychology, but they often have primary careers outside of academia and serve in this role as a secondary job. Alternatively, they may not hold the doctoral degree required by most 4-year institutions and use these opportunities to gain experience in teaching. Furthermore, many 2-year colleges and schools need faculty to teach their courses in psychology. In general, many of the people who pursue careers at these institutions have master’s degrees in psychology, although some PhDs make careers at these institutions as well.

Some people earning PhDs may enjoy research in an academic setting. However, they may not be interested in teaching. These individuals might take on faculty positions that are exclusively devoted to conducting research. This type of position would be more likely an option at large, research-focused universities.

In some areas in psychology, it is common for individuals who have recently earned their Ph.D. to seek out positions in postdoctoral training programs that are available before going on to serve as faculty. In most cases, young scientists will complete one or two postdoctoral programs before applying for a full-time faculty position. Postdoctoral training programs allow young scientists to further develop their research programs and broaden their research skills under the supervision of other professionals in the field.

Career Options Outside of Academic Settings

Individuals who wish to become practicing clinical psychologists have another option for earning a doctoral degree, which is known as a PsyD. A PsyD is a doctor of psychology degree that is increasingly popular among individuals interested in pursuing careers in clinical psychology. PsyD programs generally place less emphasis on research-oriented skills and focus more on the application of psychological principles in the clinical context (Norcross & Castle, 2002).

Regardless of whether earning a Ph.D. or PsyD, in most states, an individual wishing to practice as a licensed clinical or counseling psychologist must complete postdoctoral work under the supervision of a licensed psychologist. Within the last few years, however, several states have begun to remove this requirement, which would allow people to get an earlier start in their careers (Munsey, 2009). After an individual has met the state requirements, their credentials are evaluated to determine whether they can sit for the licensure exam. Only individuals who pass this exam can call themselves licensed clinical or counseling psychologists (Norcross, n.d.). Licensed clinical or counseling psychologists can then work in a number of settings, ranging from private clinical practice to hospital settings. It should be noted that clinical psychologists and psychiatrists do different things and receive different types of education. While both can conduct therapy and counseling, clinical psychologists have a Ph.D. or a PsyD, whereas psychiatrists have a doctor of medicine degree (MD). As such, licensed clinical psychologists can administer and interpret psychological tests, while psychiatrists can prescribe medications.

Individuals earning a Ph.D. can work in a variety of settings, depending on their areas of specialization. For example, someone trained as a biopsychologist might work in a pharmaceutical company to help test the efficacy of a new drug. Someone with a clinical background might become a forensic psychologist and work within the legal system to make recommendations during criminal trials and parole hearings, or serve as an expert in a court case.

While earning a doctoral degree in psychology is a lengthy process, usually taking 5–6 years of graduate study (DeAngelis, 2010), there are a number of careers that can be attained with a master’s degree in psychology. People who wish to provide psychotherapy can become licensed to serve as various types of professional counselors (Hoffman, 2012). Relevant master’s degrees are also sufficient for individuals seeking careers as school psychologists (National Association of School Psychologists, n.d.), in some capacities related to sport psychology (American Psychological Association, 2014), or as consultants in various industrial settings (Landers, 2011, June 14). Undergraduate coursework in psychology may be applicable to other careers such as psychiatric social work or psychiatric nursing, where assessments and therapy may be a part of the job.

As mentioned in the opening section of this chapter, an undergraduate education in psychology is associated with a knowledge base and skill set that many employers find quite attractive. It should come as no surprise, then, that individuals earning bachelor’s degrees in psychology find themselves in a number of different careers, as shown in Table 1.1. Examples of a few such careers can involve serving as case managers, working in sales, working in human resource departments, and teaching in high schools. The rapidly growing realm of healthcare professions is another field in which an education in psychology is helpful and sometimes required. For example, the Medical College Admission Test (MCAT) exam that people must take to be admitted to medical school now includes a section on the psychological foundations of behavior.

Table 1.1 Top Occupations Employing Graduates with a BA in Psychology (Fogg, Harrington, Harrington, & Shatkin, 2012)

Rank  Occupation
1     Mid- and top-level management (executive, administrator)
2     Sales
3     Social work
4     Other management positions
5     Human resources (personnel, training)
6     Other administrative positions
7     Insurance, real estate, business
8     Marketing and sales
9     Healthcare (nurse, pharmacist, therapist)
10    Finance (accountant, auditor)
A clinical psychologist meets with a client during an office visit.
Although this is what most people see in their mind’s eye when asked to envision a “psychologist,” the APA recognizes as many as 58 different divisions of psychology. [Image: Bliusa, https://goo.gl/yrSUCr, CC BY-SA 4.0, https://goo.gl/6pvNbx]

Psychology and Society

Given that psychology deals with the human condition, it is not surprising that psychologists would involve themselves in social issues. For more than a century, psychology and psychologists have been agents of social action and change. Using the methods and tools of science, psychologists have challenged assumptions, stereotypes, and stigma. Founded in 1936, the Society for the Psychological Study of Social Issues (SPSSI) has supported research and action on a wide range of social issues. Individually, there have been many psychologists whose efforts have promoted social change. Helen Thompson Woolley (1874–1947) and Leta S. Hollingworth (1886–1939) were pioneers in research on the psychology of sex differences. Working in the early 20th century, when women’s rights were marginalized, Thompson examined the assumption that women were overemotional compared to men and found that emotion did not influence women’s decisions any more than it did men’s. Hollingworth found that menstruation did not negatively impact women’s cognitive or motor abilities. Such work combatted harmful stereotypes and showed that psychological research could contribute to social change (Scarborough & Furumoto, 1987).

A group of African-Americans demonstrate for integrated education in New York City circa 1964.
Mamie Phipps Clark and Kenneth Clark studied the negative impacts of segregated education on African-American children. [Image: Penn State Special Collection, https://goo.gl/WP7Dgc, CC BY-NC-SA 2.0, https://goo.gl/Toc0ZF]

Growth and expansion have been a constant in American psychology. In the latter part of the 20th century, areas such as social, developmental, and personality psychology made major contributions to our understanding of what it means to be human. Today, neuroscience is enjoying tremendous interest and growth.

As mentioned at the beginning of the chapter, it is a challenge to cover all the history of psychology in such a short space. Errors of omission and commission are likely in such a selective review. The history of psychology helps to set a stage upon which the story of psychology can be told. This brief summary provides some glimpse into the depth and rich content offered by the history of psychology. It is hoped that you will be able to see these connections and have a greater understanding and appreciation for both the unity and diversity of the field of psychology.

Learning Objectives

  • Describe the key characteristics of the scientific approach.
  • Discuss a few of the benefits, as well as problems that have been created by science.
  • Describe several ways that psychological science has improved the world.
  • Describe a number of the ethical guidelines that psychologists follow.
  • Describe the different research methods used by psychologists.

Psychology as a Science

Even in modern times, many people are skeptical that psychology is really a science. To some degree, this doubt stems from the fact that many psychological phenomena such as depression, intelligence, and prejudice do not seem to be directly observable in the same way that we can observe the changes in ocean tides or the speed of light. Because thoughts and feelings are invisible, many early psychological researchers chose to focus on behavior. You might have noticed that some people act in a friendly and outgoing way while others appear to be shy and withdrawn. If you have made these types of observations, then you are acting just like early psychologists who used behavior to draw inferences about various types of personality. By using behavioral measures and rating scales, it is possible to measure thoughts and feelings. This is similar to how other researchers explore “invisible” phenomena, such as the way that educators measure academic performance or economists measure quality of life.

One important pioneering researcher was Francis Galton, a cousin of Charles Darwin who lived in England during the late 1800s. Galton used patches of color to test people’s ability to distinguish between them. He also invented the self-report questionnaire, in which people offered their own expressed judgments or opinions on various matters. Galton was able to use self-reports to examine—among other things—people’s differing ability to accurately judge distances.

Two young twin brothers sit together and smile.
In 1875, Francis Galton did pioneering studies of twins to determine how much the similarities and differences in twins were affected by their life experiences. In the course of this work he coined the phrase “Nature versus Nurture.” [Image: XT Inc., https://goo.gl/F1Wvu7, CC BY-NC-SA 2.0, https://goo.gl/Toc0ZF]

Although he lacked a modern understanding of genetics, Galton also had the idea that scientists could look at the behaviors of identical and fraternal twins to estimate the degree to which genetic and social factors contribute to personality, a puzzling issue we currently refer to as the “nature-nurture question.”

In modern times, psychology has become more sophisticated. Researchers now use better measures, more sophisticated study designs, and better statistical analyses to explore human nature. Simply take the example of studying the emotion of happiness. How would you go about studying happiness? One straightforward method is to simply ask people about their happiness and to have them use a numbered scale to indicate their feelings. There are, of course, several problems with this. People might lie about their happiness, might not be able to accurately report on their own happiness, or might not use the numerical scale in the same way. With these limitations in mind, modern psychologists employ a wide range of methods to assess happiness. They use, for instance, “peer report measures” in which they ask close friends and family members about the happiness of a target individual. Researchers can then compare these ratings to the self-report ratings and check for discrepancies. Researchers also use memory measures, with the idea that dispositionally positive people have an easier time recalling pleasant events and negative people have an easier time recalling unpleasant events. Modern psychologists even use biological measures such as saliva cortisol samples (cortisol is a stress-related hormone) or fMRI images of brain activation (the left prefrontal cortex is one area of brain activity associated with good moods).
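
To make the comparison of self-reports and peer reports concrete, here is a minimal sketch in Python (3.10+); the ratings and the two-point discrepancy threshold are hypothetical, invented purely for illustration:

    # Compare hypothetical self-reported happiness with peer reports (1-10 scale).
    from statistics import correlation  # available in Python 3.10+

    self_reports = [8, 3, 7, 9, 5, 6, 2, 8]   # each participant's own rating
    peer_reports = [7, 4, 6, 9, 4, 7, 3, 5]   # average rating from friends/family

    # Agreement between the two methods: a high Pearson correlation suggests
    # both measures are tracking the same underlying construct.
    r = correlation(self_reports, peer_reports)
    print(f"Self/peer correlation: r = {r:.2f}")

    # Flag participants whose self-view and reputation diverge sharply
    # (the two-point threshold is arbitrary).
    for i, (s, p) in enumerate(zip(self_reports, peer_reports), start=1):
        if abs(s - p) >= 2:
            print(f"Participant {i}: self={s}, peer={p}")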

Despite our various methodological advances, it is true that psychology is still a very young science. While physics and chemistry are hundreds of years old, psychology is barely a hundred and fifty years old, and most of our major findings have occurred only in the last 60 years. There are legitimate limits to psychological science, but it is a science nonetheless.

What Is Science?

What is this process we call “science,” which has so dramatically changed the world? Ancient people were more likely to believe in magical and supernatural explanations for natural phenomena such as solar eclipses or thunderstorms. By contrast, scientifically minded people try to figure out the natural world through testing and observation. Specifically, science is the use of systematic observation in order to acquire knowledge. For example, children in a science class might combine vinegar and baking soda to observe the bubbly chemical reaction. These empirical methods are wonderful ways to learn about the physical and biological world. Science is not magic—it will not solve all human problems, and might not answer all our questions about behavior. Nevertheless, it appears to be the most powerful method we have for acquiring knowledge about the observable world. The essential elements of science are as follows:

  1. Systematic observation is the core of science. Scientists observe the world in a very organized way. We often measure the phenomenon we are observing. We record our observations so that memory biases are less likely to enter into our conclusions. We are systematic in that we try to observe under controlled conditions, and also systematically vary the conditions of our observations so that we can see variations in the phenomena and understand when they occur and do not occur.
    A study participant wears an EEG cap and uses a touch pad to react to images on a computer screen. An experimenter stands by observing.
    Systematic observation is the core of science. [Image: Cvl Neuro, https://goo.gl/Avbju7, CC BY-SA 3.0, https://goo.gl/uhHola]
  2. Observation leads to hypotheses we can test. When we develop hypotheses and theories, we state them in a way that can be tested. For example, you might make the claim that candles made of paraffin wax burn more slowly than do candles of the exact same size and shape made from beeswax. This claim can be readily tested by timing the burning speed of candles made from these materials.
  3. Science is democratic. People in ancient times may have been willing to accept the views of their kings or pharaohs as absolute truth. These days, however, people are more likely to want to be able to form their own opinions and debate conclusions. Scientists are skeptical and have open discussions about their observations and theories. These debates often occur as scientists publish competing findings with the idea that the best data will win the argument.
  4. Science is cumulative. We can learn the important truths discovered by earlier scientists and build on them. Any physics student today knows more about physics than Sir Isaac Newton did, even though Newton was possibly the most brilliant physicist of all time. A crucial aspect of scientific progress is that after we learn of earlier advances, we can build upon them and move farther along the path of knowledge.

Psychological Science is Useful

Psychological science is useful for creating interventions that help people live better lives. A growing body of research is concerned with determining which therapies are the most and least effective for the treatment of psychological disorders.

An older woman rests her head on her hand with a sad look on her face.
Cognitive behavioral therapy has been shown to be effective in treating a variety of conditions, including depression. [Image: SalFalco, https://goo.gl/3knLoJ, CC BY-NC 2.0, https://goo.gl/HEXbAA]

For example, many studies have shown that cognitive behavioral therapy can help many people suffering from depression and anxiety disorders (Butler, Chapman, Forman, & Beck, 2006; Hofmann & Smits, 2008). In contrast, research reveals that some types of therapies actually might be harmful on average (Lilienfeld, 2007).

In organizational psychology, a number of psychological interventions have been found by researchers to produce greater productivity and satisfaction in the workplace (e.g., Guzzo, Jette, & Katzell, 1985). Human factors engineers have greatly increased the safety and utility of the products we use. For example, the human factors psychologist Alphonse Chapanis and other researchers redesigned the cockpit controls of aircraft to make them less confusing and easier to respond to, and this led to a decrease in pilot errors and crashes.

Forensic sciences have made courtroom decisions more valid. We all know of the famous cases of imprisoned persons who have been exonerated because of DNA evidence. Equally dramatic cases hinge on psychological findings. For instance, psychologist Elizabeth Loftus has conducted research demonstrating the limits and unreliability of eyewitness testimony and memory. Thus, psychological findings are having practical importance in the world outside the laboratory. Psychological science has experienced enough success to demonstrate that it works, but there remains a huge amount yet to be learned.

Ethics of Scientific Psychology

Diagram of the positions of the experimenter, teacher, and learner in the Milgram experiment. The experimenter and teacher sit at separate desks in one room, while the learner sits at a desk in another room. The learner is connected by a wire to the shock machine which sits on the teacher's desk.
Diagram of the Milgram Experiment in which the “teacher” (T) was asked to deliver a (supposedly) painful electric shock to the “learner”(L). Would this experiment be approved by a review board today? [Image: Fred the Oyster, https://goo.gl/ZIbQz1, CC BY-SA 4.0, https://goo.gl/X3i0tq]

Psychology differs somewhat from the natural sciences, such as chemistry, in that researchers conduct studies with human research participants. Because of this, there is a natural tendency to want to guard research participants against potential psychological harm. For example, it might be interesting to see how people handle ridicule, but it might not be advisable to ridicule research participants.

Scientific psychologists follow a specific set of guidelines for research known as a code of ethics. There are extensive ethical guidelines for how human participants should be treated in psychological research (Diener & Crandall, 1978; Sales & Folkman, 2000). Following are a few highlights:

  1. Informed consent. In general, people should know when they are involved in research, and understand what will happen to them during the study. They should then be given a free choice as to whether to participate.
  2. Confidentiality. Information that researchers learn about individual participants should not be made public without the consent of the individual.
  3. Privacy. Researchers should not make observations of people in private places such as their bedrooms without their knowledge and consent. Researchers should not seek confidential information from others, such as school authorities, without consent of the participant or his or her guardian.
  4. Benefits. Researchers should consider the benefits of their proposed research and weigh these against potential risks to the participants. People who participate in psychological studies should be exposed to risk only if they fully understand these risks and only if the likely benefits clearly outweigh the risks.
  5. Deception. Some researchers need to deceive participants in order to hide the true nature of the study. This is typically done to prevent participants from modifying their behavior in unnatural ways. Researchers are required to “debrief” their participants after they have completed the study. Debriefing is an opportunity to educate participants about the true nature of the study.

Why Learn About Scientific Psychology?

As noted at the start of this chapter, students often take their first psychology course out of an interest in helping others and understanding themselves, and many develop enough interest to declare psychology as their major. As a result, psychology is one of the most popular majors on college campuses across the United States (Johnson & Lubin, 2011). A number of well-known individuals were psychology majors. Just a few famous names on this list are Facebook’s creator Mark Zuckerberg, television personality and political satirist Jon Stewart, actress Natalie Portman, and filmmaker Wes Craven (Halonen, 2011). About 6 percent of all bachelor’s degrees granted in the United States are in the discipline of psychology (U.S. Department of Education, 2016).

Scientific Versus Everyday Reasoning

Each day, people offer statements as if they are facts, such as, “It looks like rain today,” or, “Dogs are very loyal.” These conclusions represent hypotheses about the world: best guesses as to how the world works. Scientists also draw conclusions, claiming things like, “There is an 80% chance of rain today,” or, “Dogs tend to protect their human companions.” You’ll notice that the two examples of scientific claims use less certain language and are more likely to be associated with probabilities. Understanding the similarities and differences between scientific and everyday (non-scientific) statements is essential to our ability to accurately evaluate the trustworthiness of various claims.

Scientific and everyday reasoning both employ induction: drawing general conclusions from specific observations. For example, a person’s opinion that cramming for a test increases performance may be based on her memory of passing an exam after pulling an all-night study session. Similarly, a researcher’s conclusion against cramming might be based on studies comparing the test performances of people who studied the material in different ways (e.g., cramming versus study sessions spaced out over time). In these scenarios, both scientific and everyday conclusions are drawn from a limited sample of potential observations.

The process of induction alone does not seem sufficient to provide trustworthy information, given the contradictory results. What should a student who wants to perform well on exams do? One source of information encourages her to cram, while another suggests that spacing out her studying time is the best strategy. To make the best decision with the information at hand, we need to appreciate the differences between personal opinions and scientific statements, which requires an understanding of science and the nature of scientific reasoning.

There are generally agreed-upon features that distinguish scientific thinking—and the theories and data generated by it—from everyday thinking. A short list of some of the commonly cited features of scientific theories and data is shown in Table 1.

Table 1. Features of good scientific theories (Kuhn, 2011)

One additional feature of modern science not included in this list but prevalent in scientists’ thinking and theorizing is falsifiability, a feature that has so permeated scientific practice that it warrants additional clarification. In the early 20th century, Karl Popper (1902–1994) suggested that science can be distinguished from pseudoscience (or just everyday reasoning) because scientific claims are capable of being falsified. That is, a claim can be conceivably demonstrated to be untrue. For example, a person might claim that “all people are right-handed.” This claim can be tested and—ultimately—thrown out because it can be shown to be false: There are people who are left-handed. An easy rule of thumb is to not get confused by the term “falsifiable” but to understand that—more or less—it means testable.
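
In miniature, the logic of falsification can be sketched as a check against observations; the handedness data in this Python sketch are hypothetical:

    # A universal claim is falsifiable: one counterexample overturns it.
    claim = "all people are right-handed"
    handedness = ["right", "right", "left", "right"]  # hypothetical observations

    claim_survives = all(h == "right" for h in handedness)
    print(claim, "->", claim_survives)  # False: one left-hander falsifies the claim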

On the other hand, some claims cannot be tested and falsified. Imagine, for instance, that a magician claims that he can teach people to move objects with their minds. The trick, he explains, is to truly believe in one’s ability for it to work. When his students fail to budge chairs with their minds, the magician scolds, “Obviously, you don’t truly believe.” The magician’s claim does not qualify as falsifiable because there is no way to disprove it. It is unscientific.

Popper was particularly irritated by nonscientific claims because he believed they were a threat to the science of psychology. Specifically, he was dissatisfied with Freud’s explanations for mental illness. Freud believed that when a person suffers a mental illness it is often due to problems stemming from childhood. For instance, imagine a person who grows up to be an obsessive perfectionist. If she were raised by messy, relaxed parents, Freud might argue that her adult perfectionism is a reaction to her early family experiences—an effort to maintain order and routine instead of chaos. Alternatively, imagine the same person being raised by harsh, orderly parents. In this case, Freud might argue that her adult tidiness is simply her internalizing her parents’ way of being. As you can see, according to Freud’s rationale, both opposing scenarios are possible; no matter what the disorder, Freud’s theory could explain its childhood origin—thus failing to meet the principle of falsifiability.

Popper argued against statements that could not be falsified. He claimed that they blocked scientific progress: There was no way to advance, refine, or refute knowledge based on such claims. Popper’s solution was a powerful one: If science showed all the possibilities that were not true, we would be left only with what is true. That is, we need to be able to articulate—beforehand—the kinds of evidence that will disprove our hypothesis and cause us to abandon it.

This may seem counterintuitive. For example, if a scientist wanted to establish a comprehensive understanding of why car accidents happen, she would systematically test all potential causes: alcohol consumption, speeding, using a cell phone, fiddling with the radio, wearing sandals, eating, chatting with a passenger, etc. A complete understanding could only be achieved once all possible explanations were explored and either falsified or not. After all the testing was concluded, the evidence would be evaluated against the criteria for falsification, and only the real causes of accidents would remain. The scientist could dismiss certain claims (e.g., sandals lead to car accidents) and keep only those supported by research (e.g., using a mobile phone while driving increases risk). It might seem absurd that a scientist would need to investigate so many alternative explanations, but it is exactly how we rule out bad claims. Of course, many explanations are complicated and involve multiple causes—as with car accidents, as well as psychological phenomena.

Can It Be Falsified?

Although the idea of falsification remains central to scientific data and theory development, these days it’s not used strictly the way Popper originally envisioned it. To begin with, scientists aren’t solely interested in demonstrating what isn’t. Scientists are also interested in providing descriptions and explanations for the way things are. We want to describe different causes and the various conditions under which they occur. We want to discover when young children start speaking in complete sentences, for example, or whether people are happier on the weekend, or how exercise impacts depression. These explorations require us to draw conclusions from limited samples of data. In some cases, these data seem to fit with our hypotheses and in others, they do not. This is where interpretation and probability come in.

The Interpretation of Research Results

Imagine a researcher wanting to examine the hypothesis—a specific prediction based on previous research or scientific theory—that caffeine enhances memory. She knows there are several published studies that suggest this might be the case, and she wants to further explore the possibility. She designs an experiment to test this hypothesis. She randomly assigns some participants a cup of fully caffeinated tea and some a cup of herbal tea. All the participants are instructed to drink up, study a list of words, then complete a memory test. There are three possible outcomes of this proposed study:

A diagram showing that two groups, one caffeinated and one decaffeinated, are asked to study and then given a memory test.
  1. The caffeine group performs better (support for the hypothesis).
  2. The no-caffeine group performs better (evidence against the hypothesis).
  3. There is no difference in the performance between the two groups (also evidence against the hypothesis).

Let’s look, from a scientific point of view, at how the researcher should interpret each of these three possibilities.

First, if the results of the memory test reveal that the caffeine group performs better, this is a piece of evidence in favor of the hypothesis: It appears, at least in this case, that caffeine is associated with better memory. It does not, however, prove that caffeine is associated with better memory. There are still many questions left unanswered. How long does the memory boost last? Does caffeine work the same way with people of all ages? Is there a difference in memory performance between people who drink caffeine regularly and those who never drink it? Could the results be a freak occurrence? Because of these uncertainties, we do not say that a study—especially a single study—proves a hypothesis. Instead, we say the results of the study offer evidence in support of the hypothesis. Even if we tested this across 10,000 or 100,000 people we still could not use the word “proven” to describe this phenomenon. This is because inductive reasoning is based on probabilities. Probabilities are always a matter of degree; they may be extremely likely or unlikely. Science is better at shedding light on the likelihood—or probability—of something than at proving it. In this way, data is still highly useful even if it doesn’t fit Popper’s absolute standards.
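To make the role of chance concrete, here is a minimal simulation with entirely invented numbers (nothing below comes from a real study): we assume caffeine truly adds 2 points, on average, to a memory score that varies from person to person. Even with a genuine effect built in, a noticeable fraction of small individual studies fail to show it, which is why any single result is treated as evidence rather than proof.

```python
import random
import statistics

# A minimal sketch with invented numbers (not data from any real study):
# caffeine truly adds 2 points, on average, to a memory score whose
# person-to-person spread has a standard deviation of 10.
random.seed(1)

def run_study(n_per_group=30, true_boost=2.0):
    """Simulate one small study; return caffeine-group mean minus control mean."""
    caffeine = [random.gauss(50 + true_boost, 10) for _ in range(n_per_group)]
    no_caffeine = [random.gauss(50, 10) for _ in range(n_per_group)]
    return statistics.mean(caffeine) - statistics.mean(no_caffeine)

differences = [run_study() for _ in range(1000)]
supportive = sum(d > 0 for d in differences)
print(f"Studies where the caffeine group scored higher: {supportive} of 1000")
```

Running this sketch, roughly a fifth of the simulated studies come out the “wrong” way even though the effect is real, illustrating why scientists reason in probabilities.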

The science of meteorology helps illustrate this point. You might look at your local weather forecast and see a high likelihood of rain. This is because the meteorologist has used inductive reasoning to create her forecast. She has taken current observations—lots of dense clouds coming toward your city—and compared them to historical weather patterns associated with rain, making a reasonable prediction of a high probability of rain. The meteorologist has not proven it will rain, however, by pointing out the oncoming clouds.

Proof is more associated with deductive reasoning. Deductive reasoning starts with general principles that are applied to specific instances (the reverse of inductive reasoning). When the general principles, or premises, are true, and the structure of the argument is valid, the conclusion is, by definition, proven; it must be so. A deductive truth must apply in all relevant circumstances. For example, all living cells contain DNA. From this, you can reason—deductively—that any specific living cell (of an elephant, or a person, or a snake) will therefore contain DNA. Given the complexity of psychological phenomena, which involve many contributing factors, it is nearly impossible to make these types of broad statements with certainty.

The second possible result from the caffeine-memory study is that the group who had no caffeine demonstrates better memory. This result is the opposite of what the researcher expects to find (her hypothesis). Here, the researcher must admit the evidence does not support her hypothesis. She must be careful, however, not to extend that interpretation to other claims. For example, finding increased memory in the no-caffeine group would not be evidence that caffeine harms memory. Again, there are too many unknowns. Is this finding a freak occurrence, perhaps based on an unusual sample? Is there a problem with the design of the study? The researcher doesn’t know. She simply knows that she was not able to observe support for her hypothesis.

There is at least one additional consideration: The researcher originally developed her caffeine-benefits-memory hypothesis based on conclusions drawn from previous research. That is, previous studies found results that suggested caffeine boosts memory. The researcher’s single study should not outweigh the conclusions of many studies. Perhaps the earlier research employed participants of different ages or who had different baseline levels of caffeine intake. This new study simply becomes a piece of fabric in the overall quilt of studies of the caffeine-memory relationship. It does not, on its own, definitively falsify the hypothesis.

Finally, it’s possible that the results show no difference in memory between the two groups. How should the researcher interpret this? How would you? In this case, the researcher once again has to admit that she has not found support for her hypothesis.

Interpreting the results of a study—regardless of outcome—rests on the quality of the observations from which those results are drawn. If you learn, say, that each group in a study included only four participants, or that they were all over 90 years old, you might have concerns. Specifically, you should be concerned that the observations, even if accurate, aren’t representative of the general population. This is one of the defining differences between conclusions drawn from personal anecdotes and those drawn from scientific observations. Anecdotal evidence—derived from personal experience and unsystematic observations (e.g., “common sense”)—is limited by the quality and representativeness of observations, and by memory shortcomings. Well-designed research, on the other hand, relies on observations that are systematically recorded, of high quality, and representative of the population it claims to describe.

One of the important steps in scientific inquiry is to test our research questions, otherwise known as hypotheses. However, there are many ways to test hypotheses in psychological research. Which method you choose will depend on the type of questions you are asking, as well as what resources are available to you. All methods have limitations, which is why the best research uses a variety of methods. Most psychological research can be divided into two types: experimental and correlational research.

Experimental Research

If somebody gave you $20 that absolutely had to be spent today, how would you choose to spend it? Would you spend it on an item you’ve been eyeing for weeks, or would you donate the money to charity? Which option do you think would bring you the most happiness? If you’re like most people, you’d choose to spend the money on yourself (duh, right?). Our intuition is that we’d be happier if we spent the money on ourselves.

Coffee shop owner Josh Cooks shows 100 dollars that were donated by a generous customer to buy drinks for strangers.
At the Corner Perk Cafe customers routinely pay for the drinks of strangers. Is this the way to get the most happiness out of a cup of coffee? Elizabeth Dunn’s research shows that spending money on others may affect our happiness differently than spending money on ourselves. [Image: The Island Packet, https://goo.gl/DMxA5n]

Knowing that our intuition can sometimes be wrong, Professor Elizabeth Dunn (2008) at the University of British Columbia set out to conduct an experiment on spending and happiness. She gave each of the participants in her experiment $20 and then told them they had to spend the money by the end of the day. Some of the participants were told they must spend the money on themselves, and some were told they must spend the money on others (either charity or a gift for someone). At the end of the day she measured participants’ levels of happiness using a self-report questionnaire. (But wait, how do you measure something like happiness when you can’t really see it? Psychologists measure many abstract concepts, such as happiness and intelligence, by beginning with operational definitions of the concepts.)

In an experiment, researchers manipulate, or cause changes, in the independent variable, and observe or measure any impact of those changes in the dependent variable. The independent variable is the one under the experimenter’s control, or the variable that is intentionally altered between groups. In the case of Dunn’s experiment, the independent variable was whether participants spent the money on themselves or on others. The dependent variable is the variable the researcher measures rather than manipulates; it is where any effect of the manipulation shows up. One way to help remember this is that the dependent variable “depends” on what happens to the independent variable. In our example, the participants’ happiness (the dependent variable in this experiment) depends on how the participants spend their money (the independent variable). Thus, any observed changes or group differences in happiness can be attributed to whom the money was spent on. What Dunn and her colleagues found was that, after all the spending had been done, the people who had spent the money on others were happier than those who had spent the money on themselves. In other words, spending on others causes us to be happier than spending on ourselves. Do you find this surprising?

But wait! Doesn’t happiness depend on a lot of different factors—for instance, a person’s upbringing or life circumstances? What if some people had happy childhoods and that’s why they’re happier? Or what if some people dropped their toast that morning and it fell jam-side down and ruined their whole day? It is correct to recognize that these factors and many more can easily affect a person’s level of happiness. So how can we accurately conclude that spending money on others causes happiness, as in the case of Dunn’s experiment?

The most important thing about experiments is random assignment. Participants don’t get to pick which condition they are in (e.g., participants didn’t choose whether they were supposed to spend the money on themselves versus others). The experimenter assigns them to a particular condition based on the flip of a coin or the roll of a die or any other random method. Why do researchers do this? With Dunn’s study, there is the obvious reason: you can imagine which condition most people would choose to be in, if given the choice. But another equally important reason is that random assignment makes it so the groups, on average, are similar on all characteristics except what the experimenter manipulates.

By randomly assigning people to conditions (self-spending versus other-spending), some people with happy childhoods should end up in each condition. Likewise, some people who had dropped their toast that morning (or experienced some other disappointment) should end up in each condition. As a result, the distribution of all these factors will generally be consistent across the two groups, and this means that on average the two groups will be relatively equivalent on all these factors. Random assignment is critical to experimentation because if the only difference between the two groups is the independent variable, we can infer that the independent variable is the cause of any observable difference (e.g., in the amount of happiness they feel at the end of the day).
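A toy simulation can make this balancing effect concrete; all numbers below are hypothetical. Each simulated participant carries a pre-existing trait (a “happy childhood”) before being randomly split into the two spending conditions.

```python
import random

# A toy illustration of random assignment; all numbers are hypothetical.
random.seed(42)

# Each simulated participant carries a pre-existing trait that could
# independently affect end-of-day happiness.
participants = [{"happy_childhood": random.random() < 0.5} for _ in range(200)]

# Random assignment: shuffle the pool, then split it into two conditions.
random.shuffle(participants)
self_spending, other_spending = participants[:100], participants[100:]

for label, group in (("self-spending", self_spending),
                     ("other-spending", other_spending)):
    share = sum(p["happy_childhood"] for p in group) / len(group)
    print(f"{label}: {share:.0%} had happy childhoods")
```

Both conditions end up with roughly half happy-childhood participants, so that trait cannot account for any difference in happiness between the groups.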

Correlational Designs

When scientists passively observe and measure phenomena it is called correlational research. Here, we do not intervene and change behavior, as we do in experiments. In correlational research, we identify patterns of relationships, but we usually cannot infer what causes what. Importantly, a simple correlation describes the relationship between exactly two measured variables at a time.

So, what if you wanted to test whether spending on others is related to happiness, but you don’t have $20 to give to each participant? You could use a correlational design—which is exactly what Professor Dunn did, too. She asked people how much of their income they spent on others or donated to charity, and later she asked them how happy they were. Do you think these two variables were related? Yes, they were! The more money people reported spending on others, the happier they were.

If generosity and happiness are positively correlated, should we conclude that being generous causes happiness? Similarly, if height and pathogen prevalence are negatively correlated, should we conclude that disease causes shortness? From a correlation alone, we can’t be certain. For example, in the first case it may be that happiness causes generosity, or that generosity causes happiness. Or, a third variable might cause both happiness and generosity, creating the illusion of a direct link between the two. For example, wealth could be the third variable that causes both greater happiness and greater generosity. This is why correlation does not mean causation—an often-repeated phrase among psychologists.
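A short sketch with fabricated data shows how a third variable can manufacture a correlation. Here wealth independently drives both generosity and happiness, and generosity has no direct effect on happiness at all, yet the two still correlate.

```python
import random
import statistics

# Fabricated data: wealth independently drives both generosity and
# happiness; generosity has no direct effect on happiness here.
random.seed(0)

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

wealth = [random.gauss(0, 1) for _ in range(500)]
generosity = [w + random.gauss(0, 1) for w in wealth]  # caused by wealth
happiness = [w + random.gauss(0, 1) for w in wealth]   # also caused by wealth

print(f"r(generosity, happiness) = {pearson_r(generosity, happiness):.2f}")
```

The printed correlation is clearly positive even though neither variable causes the other: wealth, the lurking third variable, links them.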

Qualitative Designs

Just as correlational research allows us to study topics we can’t experimentally manipulate (e.g., whether you have a large or small income), there are other types of research designs that allow us to investigate these harder-to-study topics. Qualitative designs, including participant observation, case studies, and narrative analysis are examples of such methodologies. Although something as simple as “observation” may seem like it would be a part of all research methods, participant observation is a distinct methodology that involves the researcher embedding him- or herself into a group in order to study its dynamics. For example, Festinger, Riecken, and Schachter (1956) were very interested in the psychology of a particular cult. However, this cult was very secretive and wouldn’t grant interviews to outsiders. So, in order to study these people, Festinger and his colleagues pretended to be cult members, allowing them access to the behavior and psychology of the cult. Despite this example, it should be noted that the people being observed in a participant observation study usually know that the researcher is there to study them.

Another qualitative method for research is the case study, which involves an intensive examination of specific individuals or specific contexts. Sigmund Freud, the father of psychoanalysis, was famous for using this type of methodology; however, more current examples of case studies usually involve brain injuries. For instance, imagine that researchers want to know how a very specific brain injury affects people’s experience of happiness. Obviously, the researchers can’t conduct experimental research that involves inflicting this type of injury on people. At the same time, there are too few people who have this type of injury to conduct correlational research. In such an instance, the researcher may examine only one person with this brain injury, but in doing so, the researcher will put the participant through a very extensive round of tests. Hopefully what is learned from this one person can be applied to others; however, even with thorough tests, there is the chance that something unique about this individual (other than the brain injury) will affect his or her happiness. But with such a limited number of possible participants, a case study is really the only type of methodology suitable for researching this brain injury.

Why Should I Trust Science If It Can’t Prove Anything?

Why should we trust the scientific inductive process, even when it relies on limited samples that don’t offer absolute “proof”? It’s because the methodologies in science are generally trustworthy. Not all claims and explanations are equal; some conclusions are better bets, so to speak. Scientific claims are more likely to be correct and predict real outcomes than “common sense” opinions and personal anecdotes. This is because researchers consider how to best prepare and measure their subjects, systematically collect data from large and—ideally—representative samples, and test their findings against probability.

A male and female student work together at a table and focus on details in a notebook in front of them.
Is there a relationship between student age and academic performance? How could we research this question? How confident can we be that our observations reflect reality? [Image: Jeremy Wilburn, https://goo.gl/i9MoJb, CC BY-NC-ND 2.0, https://goo.gl/SjTsDg]

Scientific Theories

The knowledge generated from research is organized according to scientific theories. A scientific theory is a comprehensive framework for making sense of evidence regarding a particular phenomenon. When scientists talk about a theory, they mean something different from how the term is used in everyday conversation. In common usage, a theory is an educated guess—as in, “I have a theory about which team will make the playoffs,” or, “I have a theory about why my sister is always running late for appointments.” Both of these beliefs are liable to be heavily influenced by many untrustworthy factors, such as personal opinions and memory biases. A scientific theory, however, enjoys support from many research studies, collectively providing evidence, including, but not limited to, that which has falsified competing explanations. A key component of good theories is that they describe, explain, and predict in a way that can be empirically tested and potentially falsified.

An illustration depicting the understanding of the ancient Greeks that the Sun, Moon, and planets all orbited around the earth in perfect circles.
Early theories placed the Earth at the center of the solar system. We now know that the Earth revolves around the sun. [Image: Pearson Scott Foresman, https://goo.gl/W3izMR, Public Domain]

Theories are open to revision if new evidence comes to light that compels reexamination of the accumulated, relevant data. In ancient times, for instance, people thought the Sun traveled around the Earth. This seemed to make sense and fit with many observations. In the 16th century, however, astronomers began systematically charting visible objects in the sky, and, over a 50-year period, with repeated testing, critique, and refinement, they provided evidence for a revised theory: The Earth and other cosmic objects revolve around the Sun. In science, we believe what the best and most complete data tell us. If better data come along, we must be willing to change our views in accordance with the new evidence.

Is Science Objective?

Thomas Kuhn (2012), a historian of science, argued that science, as an activity conducted by humans, is a social activity. As such, it is—according to Kuhn—subject to the same psychological influences as all human activities. Specifically, Kuhn suggested that there is no such thing as objective theory or data; all of science is informed by values. Scientists cannot help but let personal/cultural values, experiences, and opinions influence the types of questions they ask and how they make sense of what they find in their research. Kuhn’s argument highlights a distinction between facts (information about the world), and values (beliefs about the way the world is or ought to be). This distinction is an important one, even if it is not always clear.

The primary point of this illustration is that (contrary to the image of scientists as outside observers to the facts, gathering them neutrally and without bias from the natural world) all science—especially social sciences like psychology—involves values and interpretation. As a result, science functions best when people with diverse values and backgrounds work collectively to understand complex natural phenomena.

Four levels of analysis - biological, cognitive, behavioral, social/cultural.

Indeed, science can benefit from multiple perspectives. One approach to achieving this is through levels of analysis. Levels of analysis is the idea that a single phenomenon may be explained at different levels simultaneously. Remember the question concerning cramming for a test versus studying over time? It can be answered at a number of different levels of analysis. At a low level, we might use brain scanning technologies to investigate whether biochemical processes differ between the two study strategies. At a higher level—the level of thinking—we might investigate processes of decision making (what to study) and ability to focus, as they relate to cramming versus spaced practice. At even higher levels, we might be interested in real world behaviors, such as how long people study using each of the strategies. Similarly, we might be interested in how the presence of others influences learning across these two strategies. The levels-of-analysis perspective holds that no single level is more correct, or truer, than another; the appropriate level depends on the specifics of the question asked. Ultimately, this perspective suggests that we cannot understand the world around us, including human psychology, by reducing the phenomenon to only the biochemistry of genes and dynamics of neural networks. But, neither can we understand humanity without considering the functions of the human nervous system.

Science in Context

There are many ways to interpret the world around us. People rely on common sense, personal experience, and faith, in combination and to varying degrees. All of these offer legitimate benefits to navigating one’s culture, and each offers a unique perspective, with specific uses and limitations. Science provides another important way of understanding the world and, while it has many crucial advantages, as with all methods of interpretation, it also has limitations. Understanding the limits of science—including its subjectivity and uncertainty—does not render it useless. Because it is systematic, using testable, reliable data, it can allow us to determine causality and can help us generalize our conclusions. By understanding how scientific conclusions are reached, we are better equipped to use science as a tool of knowledge.

 


Biopsychology

2

Three brain-imaging scans are shown.
Figure 3.1 Different brain imaging techniques provide scientists with insight into different aspects of how the human brain functions. Left to right, PET scan (positron emission tomography), CT scan (computerized tomography), and fMRI (functional magnetic resonance imaging) are three types of scans. (credit “left”: modification of work by Health and Human Services Department, National Institutes of Health; credit “center”: modification of work by “Aceofhearts1968”/Wikimedia Commons; credit “right”: modification of work by Kim J, Matthews NL, Park S.)

Any textbook on psychology would be incomplete without reference to the brain. Every behavior, thought, or experience described in the other modules must be implemented in the brain. A detailed understanding of the human brain can help us make sense of human experience and behavior. For example, one well-established fact about human cognition is that it is limited. We cannot do two complex tasks at once: We cannot read and carry on a conversation at the same time, text and drive, or surf the Internet while listening to a lecture, at least not successfully or safely. We cannot even pat our head and rub our stomach at the same time (with exceptions, see “A Brain Divided”). Why is this? Many people have suggested that such limitations reflect the fact that the behaviors draw on the same resource; if one behavior uses up most of the resource there is not enough resource left for the other. But what might this limited resource be in the brain?

An MRI of the human brain delineating three major structures: the cerebral hemispheres, brain stem, and cerebellum.
Figure 1. An MRI of the human brain delineating three major structures: the cerebral hemispheres, brain stem, and cerebellum.

The brain uses oxygen and glucose, delivered via the blood. The brain is a large consumer of these metabolites, using 20% of the oxygen and calories we consume despite being only 2% of our total weight. However, as long as we are not oxygen-deprived or malnourished, we have more than enough oxygen and glucose to fuel the brain. Thus, insufficient “brain fuel” cannot explain our limited capacity. Nor is it likely that our limitations reflect too few neurons. The average human brain contains 100 billion neurons. It is also not the case that we use only 10% of our brain, a myth that was likely started to imply we had untapped potential. Modern neuroimaging (see “Studying the Human Brain”) has shown that we use all parts of the brain, just at different times, and certainly more than 10% at any one time.

If we have an abundance of brain fuel and neurons, how can we explain our limited cognitive abilities? Why can’t we do more at once? The most likely explanation is the way these neurons are wired up. We know, for instance, that many neurons in the visual cortex (the part of the brain responsible for processing visual information) are hooked up in such a way as to inhibit each other (Beck & Kastner, 2009). When one neuron fires, it suppresses the firing of other nearby neurons. If two neurons that are hooked up in an inhibitory way both fire, then neither neuron can fire as vigorously as it would otherwise. This competitive behavior among neurons limits how much visual information the brain can respond to at the same time. Similar kinds of competitive wiring among neurons may underlie many of our limitations. Thus, although talking about limited resources provides an intuitive description of our limited capacity behavior, a detailed understanding of the brain suggests that our limitations more likely reflect the complex way in which neurons talk to each other rather than the depletion of any specific resource.

Have you ever taken a device apart to find out how it works? Many of us have done so, whether to attempt a repair or simply to satisfy our curiosity. A device’s internal workings are often distinct from its user interface on the outside. For example, we don’t think about microchips and circuits when we turn up the volume on a mobile phone; instead, we think about getting the volume just right. Similarly, the inner workings of the human body are often distinct from the external expression of those workings. It is the job of psychologists to find the connection between these—for example, to figure out how the firings of millions of neurons become a thought.

This chapter strives to explain the biological mechanisms that underlie behavior. These physiological and anatomical foundations are the basis for many areas of psychology. In this chapter, you will become familiar with the structure and function of the nervous system. And, finally, you will learn how the nervous system interacts with the endocrine system.

Learning Objectives

By the end of this section, you will be able to:

  • Identify the basic parts of a neuron
  • Describe how neurons communicate with each other
  • Explain how drugs act as agonists or antagonists for a given neurotransmitter system

Psychologists striving to understand the human mind may study the nervous system. Learning how the body’s cells and organs function can help us understand the biological basis of human psychology. The nervous system is composed of two basic cell types: glial cells (also known as glia) and neurons. Glial cells are traditionally thought to play a supportive role to neurons, both physically and metabolically. Glial cells provide scaffolding on which the nervous system is built, help neurons line up closely with each other to allow neuronal communication, provide insulation to neurons, transport nutrients and waste products, and mediate immune responses. For years, researchers believed that there were many more glial cells than neurons; however, more recent work from Suzanna Herculano-Houzel’s laboratory has called this long-standing assumption into question and has provided important evidence that there may be a nearly 1:1 ratio of glial cells to neurons. This is important because it suggests that human brains are more similar to other primate brains than previously thought (Azevedo et al., 2009; Herculano-Houzel, 2012; Herculano-Houzel, 2009). Neurons, on the other hand, serve as interconnected information processors that are essential for all of the tasks of the nervous system. This section briefly describes the structure and function of neurons.

Imagine trying to string words together into a meaningful sentence without knowing the meaning of each word or its function (i.e., Is it a verb, a noun, or an adjective?). In a similar fashion, to appreciate how groups of cells work together in a meaningful way in the brain as a whole, we must first understand how individual cells in the brain function. Much like words, brain cells, called neurons, have an underlying structure that provides the foundation for their functional purpose. Have you ever seen a neuron? Did you know that the basic structure of a neuron is similar whether it is from the brain of a rat or a human? How do the billions of neurons in our brain allow us to do all the fun things we enjoy, such as texting a friend, cheering on our favorite sports team, or laughing?

Three drawings depicting hundreds of individual neurons as observed through a microscope.
Figure 1. Three drawings by Santiago Ramón y Cajal, taken from “Comparative study of the sensory areas of the human cortex”, pages 314, 361, and 363. Left: Nissl-stained visual cortex of a human adult. Middle: Nissl-stained motor cortex of a human adult. Right: Golgi-stained cortex of a 1½-month-old infant. [Image: Santiago Ramon y Cajal, https://goo.gl/zOb2l1, CC0 Public Domain, https://goo.gl/m25gce]

Neuron Structure

Neurons are the central building blocks of the nervous system, 100 billion strong at birth. Like all cells, neurons consist of several different parts, each serving a specialized function (Figure 3.8). A neuron’s outer surface is made up of a semipermeable membrane. This membrane allows smaller molecules and molecules without an electrical charge to pass through it while stopping larger or highly charged molecules.

An illustration shows a neuron with labeled parts for the cell membrane, dendrite, cell body, axon, and terminal buttons. A myelin sheath covers part of the neuron.
Figure 3.8 This illustration shows a prototypical neuron, which is being myelinated by a glial cell.

The nucleus of the neuron is located in the soma or cell body. The soma has branching extensions known as dendrites. The neuron is a small information processor, and dendrites serve as input sites where signals are received from other neurons. These signals are transmitted electrically across the soma and down a major extension from the soma known as the axon, which ends at multiple terminal buttons. The terminal buttons contain synaptic vesicles that house neurotransmitters, the chemical messengers of the nervous system.

Axons range in length from a fraction of an inch to several feet. In some axons, glial cells form a fatty substance known as the myelin sheath, which coats the axon and acts as an insulator, increasing the speed at which the signal travels. The myelin sheath is not continuous and there are small gaps that occur down the length of the axon. These gaps in the myelin sheath are known as the Nodes of Ranvier. The myelin sheath is crucial for the normal operation of the neurons within the nervous system: the loss of the insulation it provides can be detrimental to normal function. To understand how this works, let’s consider an example. PKU, a genetic disorder discussed earlier, causes a reduction in myelin and abnormalities in cortical and subcortical white matter structures. The disorder is associated with a variety of issues including severe cognitive deficits, exaggerated reflexes, and seizures (Anderson & Leuzzi, 2010; Huttenlocher, 2000). Another disorder, multiple sclerosis (MS), an autoimmune disorder, involves a large-scale loss of the myelin sheath on axons throughout the nervous system. The resulting interference in the electrical signal prevents the quick transmittal of information by neurons and can lead to a number of symptoms, such as dizziness, fatigue, loss of motor control, and sexual dysfunction. While some treatments may help to modify the course of the disease and manage certain symptoms, there is currently no known cure for multiple sclerosis.

In healthy individuals, the neuronal signal moves rapidly down the axon to the terminal buttons, where synaptic vesicles release neurotransmitters into the synaptic cleft (Figure 3.9). The synaptic cleft is a very small space between two neurons and is an important site where communication between neurons occurs. Once neurotransmitters are released into the synaptic cleft, they travel across it and bind with corresponding receptors on the dendrite of an adjacent neuron. Receptors, proteins on the cell surface where neurotransmitters attach, vary in shape, with different shapes “matching” different neurotransmitters.

How does a neurotransmitter “know” which receptor to bind to? The neurotransmitter and the receptor have what is referred to as a lock-and-key relationship. Specific neurotransmitters fit specific receptors similar to how a key fits a lock. The neurotransmitter binds to any receptor that it fits.

Image (a) shows the synaptic space between two neurons, with neurotransmitters being released into the synapse and attaching to receptors. Image (b) is a micrograph showing a spherical terminal button with part of the exterior removed, revealing a solid interior of small round parts.
Figure 3.9(a) The synaptic cleft is the space between the terminal button of one neuron and the dendrite of another neuron. (b) In this pseudo-colored image from a scanning electron microscope, a terminal button (green) has been opened to reveal the synaptic vesicles (orange and blue) inside. Each vesicle contains about 10,000 neurotransmitter molecules. (credit b: modification of work by Tina Carvalho, NIH-NIGMS; scale-bar data from Matt Russell)

The action potential is an all-or-none phenomenon. In simple terms, this means that an incoming signal from another neuron is either sufficient or insufficient to reach the threshold of excitation. There is no in-between, and there is no turning off an action potential once it starts. Think of it as sending an email or a text message. You can think about sending it all you want, but the message is not sent until you hit the send button. Furthermore, once you send the message, there is no stopping it.

Because it is all or none, the action potential is recreated, or propagated, at its full strength at every point along the axon. Much like the lit fuse of a firecracker, it does not fade away as it travels down the axon. It is this all-or-none property that explains the fact that your brain perceives an injury to a distant body part like your toe as equally painful as one to your nose.
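The all-or-none principle can be captured in a few lines of illustrative code. This is a toy threshold model, not a physiological simulation; the resting-potential and threshold values are textbook-typical numbers used only for illustration.

```python
# A toy threshold model of the all-or-none principle (illustrative values,
# not a physiological simulation).
THRESHOLD_MV = -55.0  # hypothetical threshold of excitation
RESTING_MV = -70.0    # hypothetical resting potential

def neuron_response(input_mv):
    """Either a full-strength action potential fires, or nothing does."""
    membrane_potential = RESTING_MV + input_mv
    if membrane_potential >= THRESHOLD_MV:
        return "full-strength action potential"
    return "no action potential"

for signal in (5.0, 14.9, 15.0, 40.0):
    print(f"input of {signal:5.1f} mV -> {neuron_response(signal)}")
# Inputs of 15.0 and 40.0 produce identical, full-strength spikes;
# 14.9 produces nothing at all.
```

Notice that a just-above-threshold input and a far-above-threshold input produce exactly the same response, which is the sense in which the action potential is all or none.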

As noted earlier, when the action potential arrives at the terminal button, the synaptic vesicles release their neurotransmitters into the synaptic cleft. The neurotransmitters travel across the synapse and bind to receptors on the dendrites of the adjacent neuron, and the process repeats itself in the new neuron (assuming the signal is sufficiently strong to trigger an action potential). Once the signal is delivered, excess neurotransmitters in the synaptic cleft drift away, are broken down into inactive fragments, or are reabsorbed in a process known as reuptake. Reuptake involves the neurotransmitter being pumped back into the neuron that released it, in order to clear the synapse (Figure 3.12). Clearing the synapse serves both to provide a clear “on” and “off” state between signals and to regulate the production of neurotransmitter (full synaptic vesicles provide signals that no additional neurotransmitters need to be produced).

The synaptic space between two neurons is shown. Some neurotransmitters that have been released into the synapse are attaching to receptors while others undergo reuptake into the axon terminal.
Figure 3.12 Reuptake involves moving a neurotransmitter from the synapse back into the axon terminal from which it was released.

There are several different types of neurotransmitters released by different neurons, and we can speak in broad terms about the kinds of functions associated with different neurotransmitters (Table 3.1). Much of what psychologists know about the functions of neurotransmitters comes from research on the effects of drugs in psychological disorders. Psychologists who take a biological perspective and focus on the physiological causes of behavior assert that psychological disorders like depression and schizophrenia are associated with imbalances in one or more neurotransmitter systems. In this perspective, psychotropic medications can help improve the symptoms associated with these disorders. Psychotropic medications are drugs that treat psychiatric symptoms by restoring neurotransmitter balance.

Major Neurotransmitters and How They Affect Behavior

Neurotransmitter | Involved in | Potential Effect on Behavior
Acetylcholine | Muscle action, memory | Increased arousal, enhanced cognition
Beta-endorphin | Pain, pleasure | Decreased anxiety, decreased tension
Dopamine | Mood, sleep, learning | Increased pleasure, suppressed appetite
Gamma-aminobutyric acid (GABA) | Brain function, sleep | Decreased anxiety, decreased tension
Glutamate | Memory, learning | Increased learning, enhanced memory
Norepinephrine | Heart, intestines, alertness | Increased arousal, suppressed appetite
Serotonin | Mood, sleep | Modulated mood, suppressed appetite

Table 3.1

Psychoactive drugs can act as agonists or antagonists for a given neurotransmitter system. Agonists are chemicals that mimic a neurotransmitter at the receptor site. An antagonist, on the other hand, blocks or impedes the normal activity of a neurotransmitter at the receptor. Agonists and antagonists represent drugs that are prescribed to correct the specific neurotransmitter imbalances underlying a person’s condition. For example, Parkinson’s disease, a progressive nervous system disorder, is associated with low levels of dopamine. Therefore, a common treatment strategy for Parkinson’s disease involves using dopamine agonists, which mimic the effects of dopamine by binding to dopamine receptors.

Certain symptoms of schizophrenia are associated with overactive dopamine neurotransmission. The antipsychotics used to treat these symptoms are antagonists for dopamine—they block dopamine’s effects by binding its receptors without activating them. Thus, they prevent dopamine released by one neuron from signaling information to adjacent neurons.

In contrast to agonists and antagonists, which both operate by binding to receptor sites, reuptake inhibitors prevent unused neurotransmitters from being transported back to the neuron. This allows neurotransmitters to remain active in the synaptic cleft for longer durations, increasing their effectiveness. Depression, which has been consistently linked with reduced serotonin levels, is commonly treated with selective serotonin reuptake inhibitors (SSRIs). By preventing reuptake, SSRIs strengthen the effect of serotonin, giving it more time to interact with serotonin receptors on dendrites. Common SSRIs on the market today include Prozac, Paxil, and Zoloft. The drug LSD is structurally very similar to serotonin, and it affects the same neurons and receptors as serotonin. Psychotropic drugs are not instant solutions for people suffering from psychological disorders. Often, an individual must take a drug for several weeks before seeing improvement, and many psychoactive drugs have significant negative side effects. Furthermore, individuals vary dramatically in how they respond to the drugs. To improve chances for success, it is not uncommon for people receiving pharmacotherapy to undergo psychological and/or behavioral therapies as well. Some research suggests that combining drug therapy with other forms of therapy tends to be more effective than any one treatment alone (for one such example, see March et al., 2007).
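To see why slowing reuptake leaves a neurotransmitter active longer, consider a toy decay model; the clearance rates below are made up for illustration and are not pharmacokinetic data. Each time step, some fraction of the transmitter is cleared from the cleft, and a reuptake inhibitor simply lowers that fraction.

```python
# A toy decay model with made-up clearance rates (not pharmacokinetics):
# each time step, a fixed fraction of the transmitter is removed from
# the synaptic cleft. A reuptake inhibitor lowers that fraction.
def time_steps_to_clear(clearance_rate, start=100.0, floor=1.0):
    """Count time steps until the cleft concentration falls below `floor`."""
    level, steps = start, 0
    while level > floor:
        level *= (1.0 - clearance_rate)
        steps += 1
    return steps

print("normal reuptake:    ", time_steps_to_clear(0.50), "time steps")
print("reuptake inhibited: ", time_steps_to_clear(0.10), "time steps")
```

With the slower clearance rate, the transmitter lingers many times longer in the simulated cleft, mirroring how an SSRI gives serotonin more time to interact with its receptors.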

Learning Objectives

By the end of this section, you will be able to:

  • Describe the difference between the central and peripheral nervous systems
  • Explain the difference between the somatic and autonomic nervous systems
  • Differentiate between the sympathetic and parasympathetic divisions of the autonomic nervous system
  • Distinguish between gray and white matter of the cerebral hemispheres

The mammalian nervous system is a complex biological organ, which enables many animals including humans to function in a coordinated fashion. The original design of this system is preserved across many animals through evolution; thus, adaptive physiological and behavioral functions are similar across many animal species. Comparative study of physiological functioning in the nervous systems of different animals lends insight into their behavior and their mental processing and makes it easier for us to understand the human brain and behavior. In addition, studying the development of the nervous system in a growing human provides a wealth of information about the change in its form and behaviors that result from this change. The nervous system is divided into central and peripheral nervous systems, and the two heavily interact with one another. The peripheral nervous system controls volitional (somatic nervous system) and nonvolitional (autonomic nervous system) behaviors using cranial and spinal nerves. The central nervous system is divided into forebrain, midbrain, and hindbrain, and each division performs a variety of tasks; for example, the cerebral cortex in the forebrain houses sensory, motor, and associative areas that gather sensory information, process information for perception and memory, and produce responses based on incoming and inherent information. To study the nervous system, a number of methods have evolved over time; these methods include examining brain lesions, microscopy, electrophysiology, electroencephalography, and many scanning technologies.

Evolution of the Nervous System

Many scientists and thinkers (Cajal, 1937; Crick & Koch, 1990; Edelman, 2004) believe that the human nervous system is the most complex machine known to man. Its complexity points to one undeniable fact—that it has evolved slowly over time from simpler forms. Evolution of the nervous system is intriguing not simply because we can marvel at this complicated biological structure, but because it inherits a lineage of a long history of many less complex nervous systems (Figure 1), and it documents a record of adaptive behaviors observed in life forms other than humans. Thus, evolutionary study of the nervous system is important, and it is the first step in understanding its design, its workings, and its functional interface with the environment.

The brains of various animals – mouse, cat, dog, rhesus monkey, chimpanzee and human presented in order of complexity with mouse having the least and human having the most folds and convolutions in the brain.
Figure 1 The brains of various animals

The brains of some animals, like apes, monkeys, and rodents, are structurally similar to humans (Figure 1), while others are not (e.g., invertebrates, single-celled organisms). Does anatomical similarity of these brains suggest that behaviors that emerge in these species are also similar? Indeed, many animals display behaviors that are similar to humans, e.g., apes use nonverbal communication signals with their hands and arms that resemble nonverbal forms of communication in humans (Gardner & Gardner, 1969; Goodall, 1986; Knapp & Hall, 2009). If we study very simple behaviors, like physiological responses made by individual neurons, then brain-based behaviors of invertebrates (Kandel & Schwartz, 1982) look very similar to humans, suggesting that from time immemorial such basic behaviors have been conserved in the brains of many simple animal forms and in fact are the foundation of more complex behaviors in animals that evolved later (Bullock, 1984).

Even at the micro-anatomical level, we note that individual neurons differ in complexity across animal species. Human neurons exhibit more intricate complexity than other animals; for example, neuronal processes (dendrites) in humans have many more branch points, branches, and spines.

Complexity in the structure of the nervous system, both at the macro- and micro-levels, gives rise to complex behaviors. We can observe similar movements of the limbs, as in nonverbal communication, in apes and humans, but the variety and intricacy of nonverbal behaviors using hands in humans surpasses that of apes. Deaf individuals who use American Sign Language (ASL) express themselves nonverbally with such fine gradation that many accents of ASL exist (Walker, 1987). Complexity of behavior with increasing complexity of the nervous system, especially the cerebral cortex, can be observed in the genus Homo (Figure 2). If we compare the sophistication of material culture in Homo habilis (2 million years ago; brain volume ~650 cm³) and Homo sapiens (300,000 years ago to now; brain volume ~1400 cm³), the evidence shows that Homo habilis used crude stone tools, whereas Homo sapiens has used modern tools to erect cities, develop written languages, embark on space travel, and study its own self. All of this is due to the increasing complexity of the nervous system.

Changes in cerebral volume across evolution – from Ardipithecus to Australopithecus to Homo Habilis to Homo Erectus to Homo Sapiens with the former having the least brain volume and the latter having the most.
Figure 2 Changes in cerebral volume across evolution

What has led to the complexity of the brain and nervous system through evolution, to its behavioral and cognitive refinement? Darwin (1859, 1871) proposed the two forces of natural and sexual selection as the engines behind this change. He prophesied, “psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation”; that is, psychology will be based on evolution (Rosenzweig, Breedlove, & Leiman, 2002).

Development of the Nervous System

While the study of change in the nervous system over eons is immensely captivating, studying the change in a single brain during individual development is no less engaging. In many ways the ontogeny (development) of the nervous system in an individual mimics the evolutionary advancement of this structure observed across many animal species. During development, the nervous tissue emerges from the ectoderm (one of the three layers of the mammalian embryo) through the process of neural induction. This process causes the formation of the neural tube, which extends in a rostrocaudal (head-to-tail) plane. The hollow tube closes on itself, like a seam, along the rostrocaudal direction. In some disease conditions, the neural tube does not close caudally, resulting in an abnormality called spina bifida. In this pathological condition, the lumbar and sacral segments of the spinal cord are disrupted.

As gestation progresses, the neural tube balloons up (cephalization) at the rostral end, and forebrain, midbrain, hindbrain, and the spinal cord can be visually delineated (day 40). About 50 days into gestation, six cephalic areas can be anatomically discerned (also see below for a more detailed description of these areas).

The progenitor cells (neuroblasts) that form the lining (neuroepithelium) of the neural tube generate all the neurons and glial cells of the central nervous system. During early stages of this development, neuroblasts rapidly divide and specialize into many varieties of neurons and glial cells, but this proliferation of cells is not uniform along the neural tube—that is why we see the forebrain and hindbrain expand into larger cephalic tissues than the midbrain. The neuroepithelium also generates a group of specialized cells that migrate outside the neural tube to form the neural crest. This structure gives rise to sensory and autonomic neurons in the peripheral nervous system.

The Structure of the Nervous System

The mammalian nervous system is divided into central and peripheral nervous systems.

The Peripheral Nervous System

The various components of the peripheral nervous system: the peripheral nervous system consists of two parts, the somatic and the autonomic nervous systems. The somatic nervous system comprises the cranial and spinal nerves, which process sensory information and control voluntary muscle movements. The autonomic nervous system comprises the sympathetic and parasympathetic nervous systems, which control other muscles and visceral organs.
Figure 3 The various components of the peripheral nervous system

The peripheral nervous system is divided into somatic and autonomic nervous systems (Figure 3). Whereas the somatic nervous system consists of cranial nerves (12 pairs) and spinal nerves (31 pairs) and is under the volitional control of the individual in maneuvering bodily muscles, the autonomic nervous system, which also runs through these nerves, operates largely outside voluntary control, regulating muscles and glands. The main divisions of the autonomic nervous system that control visceral structures are the sympathetic and parasympathetic nervous systems.

At an appropriate cue (say a fear-inducing object like a snake), the sympathetic division generally energizes many muscles (e.g., heart) and glands (e.g., adrenals), causing activity and release of hormones that lead the individual to negotiate the fear-causing snake with fight-or-flight responses. Whether the individual decides to fight the snake or run away from it, either action requires energy; in short, the sympathetic nervous system says “go, go, go.” The parasympathetic nervous system, on the other hand, curtails undue energy mobilization into muscles and glands and modulates the response by saying “stop, stop, stop.” This push–pull tandem system regulates fight-or-flight responses in all of us.

While it is clear that such a response would be critical for survival for our ancestors, who lived in a world full of real physical threats, many of the high-arousal situations we face in the modern world are more psychological in nature. For example, think about how you feel when you have to stand up and give a presentation in front of a roomful of people, or right before taking a big test. You are in no real physical danger in those situations, and yet you have evolved to respond to a perceived threat with the fight or flight response. This kind of response is not nearly as adaptive in the modern world; in fact, we suffer negative health consequences when faced constantly with psychological threats that we can neither fight nor flee. Recent research suggests that an increase in susceptibility to heart disease (Chandola, Brunner, & Marmot, 2006) and impaired function of the immune system (Glaser & Kiecolt-Glaser, 2005) are among the many negative consequences of persistent and repeated exposure to stressful situations. Some of this tendency for stress reactivity can be wired by early experiences of trauma.

Once the threat has been resolved, the parasympathetic nervous system takes over and returns bodily functions to a relaxed state. Heart rate and blood pressure return to normal, the pupils constrict, bladder control returns, and the liver begins to store glucose in the form of glycogen for future use. These restorative processes are associated with the activation of the parasympathetic nervous system.

The Central Nervous System

The central nervous system and its components: the CNS comprises four major distinct components: the Forebrain, the Midbrain, the Hindbrain, and the Spinal Cord. The Forebrain, or Prosencephalon, comprises the Thalamus (Diencephalon) and the Cerebrum (Telencephalon); the Cerebrum in turn comprises the Isocortex, the Limbic System, and the Basal Ganglia. The Midbrain, or Mesencephalon, is composed of the Superior and Inferior Colliculi and the Reticular Activating System. The Hindbrain, or Rhombencephalon, contains the Medulla, Pons, and Cerebellum, with the latter two being part of the Metencephalon.
Figure 4 the central nervous system and its components

The central nervous system is divided into a number of important parts (see Figure 4), including the spinal cord, each specialized to perform a set of specific functions. The telencephalon, or cerebrum, is a newer development in the evolution of the mammalian nervous system. In humans, it is about the size of a large napkin and, when crumpled into the skull, it forms furrows called sulci (singular form, sulcus). The bulges between sulci are called gyri (singular form, gyrus). The cortex is divided into two hemispheres, and each hemisphere is further divided into four lobes (Figure 5a), which have specific functions. The division of these lobes is based on two delineating sulci: the central sulcus divides the hemisphere into frontal and parietal-occipital lobes and the lateral sulcus marks the temporal lobe, which lies below.

Four lobes of the brain: in the front is the Frontal Lobe, which contains the olfactory bulb and is separated from the Parietal Lobe by the precentral and postcentral gyri, with the Central Sulcus in the middle. At the back of the brain is the Occipital Lobe; inferior to the Occipital Lobe are the Cerebellum and the Temporal Lobe, the latter separated from the Frontal Lobe by the Sylvian Fissure.
Figure 5a The lobes of the brain

Just in front of the central sulcus lies an area called the primary motor cortex (precentral gyrus), which connects to the muscles of the body, and on volitional command moves them. From mastication to movements in the genitalia, the body map is represented on this strip (Figure 5b).

Some body parts, like fingers, thumbs, and lips, occupy a greater representation on the strip than, say, the trunk. This disproportionate representation of the body on the primary motor cortex is called the magnification factor (Rolls & Cowey, 1970) and is seen in other motor and sensory areas. At the lower end of the central sulcus, close to the lateral sulcus, lies Broca’s area (Figure 6b) in the left frontal lobe, which is involved with language production. Damage to this part of the brain led Pierre Paul Broca, a French neuroscientist, to document in 1861 many different forms of aphasias, in which his patients would lose the ability to speak or would retain partial speech impoverished in syntax and grammar (AAAS, 1880). It is no wonder that others have found subvocal rehearsal and central executive processes of working memory in this frontal lobe (Smith & Jonides, 1997, 1999).

Specific body parts are mapped onto the brain along the central sulcus: the throat and surrounding areas, the face, neck, hand, arm, digits, trunk, leg, and genitalia.
Figure 5b. Specific body parts like the tongue or fingers are mapped onto certain areas of the brain including the primary motor cortex.

Just behind the central sulcus, in the parietal lobe, lies the primary somatosensory cortex (Figure 6a) on the postcentral gyrus, which represents the whole body, receiving inputs from the skin and muscles. The primary somatosensory cortex parallels, abuts, and connects heavily to the primary motor cortex and resembles it in terms of areas devoted to bodily representation. All spinal and some cranial nerves (e.g., the facial nerve) send sensory signals from skin (e.g., touch) and muscles to the primary somatosensory cortex. Close to the lower (ventral) end of this strip, curved inside the parietal lobe, is the taste area (secondary somatosensory cortex), which is involved with taste experiences that originate from the tongue, pharynx, epiglottis, and so forth.

The Primary Somatosensory Cortex provides innervation to face, hand, arm among other body parts.
Figure 6a The Primary Somatosensory Cortex

Just below the parietal lobe, and under the caudal end of the lateral fissure, in the temporal lobe, lies Wernicke’s area (Demonet et al., 1992). This area is involved with language comprehension and is connected to Broca’s area through the arcuate fasciculus, nerve fibers that connect these two regions. Damage to Wernicke’s area (Figure 6b) results in many kinds of agnosias; agnosia is defined as an inability to know or understand language and speech-related behaviors. So an individual may show word deafness, which is an inability to recognize spoken language, or word blindness, which is an inability to recognize written or printed language. Close in proximity to Wernicke’s area is the primary auditory cortex, which is involved with audition, and finally the brain region devoted to smell (olfaction) is tucked away inside the primary olfactory cortex (prepyriform cortex).

This image depicts Broca’s and Wernicke’s areas in the brain, with Broca’s area more anterior and Wernicke’s area more posterior, connected by the arcuate fasciculus.
Figure 6b Broca’s and Wernicke’s areas

At the very back of the cerebral cortex lies the occipital lobe, housing the primary visual cortex. The optic nerves travel to the thalamus (specifically, the lateral geniculate nucleus, LGN) and then on to the visual cortex, where the images received on the retina are projected (Hubel, 1995).

In the past 50 to 60 years, the visual sense and visual pathways have been studied extensively, and our understanding of them has increased manifold. We now understand that all objects that form images on the retina are transformed (transduction) into neural signals handed down to the visual cortex for further processing. In the visual cortex, the attributes (features) of the image, such as color, texture, and orientation, are decomposed and processed by different visual cortical modules (Van Essen, Anderson & Felleman, 1992) and then recombined to give rise to a singular perception of the image in question.

If we cut the cerebral hemispheres down the middle, a new set of structures comes into view. Many of these perform functions vital to our being. For example, the limbic system contains a number of nuclei that process memory (hippocampus and fornix) and attention and emotions (cingulate gyrus); the globus pallidus is involved with motor movements and their coordination; and the hypothalamus and thalamus are involved with drives, motivations, and the trafficking of sensory and motor throughputs. The hypothalamus plays a key role in regulating endocrine hormones in conjunction with the pituitary gland, which extends from the hypothalamus through a stalk (the infundibulum).

The interior of the brain, with structures including the olfactory bulb, fornix, cingulate cortex, thalamus, mammillary body, hippocampus, and amygdala.
Figure 7 The interior of the brain

As we descend below the thalamus, the midbrain comes into view, containing the superior and inferior colliculi, which process visual and auditory information; the substantia nigra, which is implicated in the notorious Parkinson’s disease; and the reticular formation, which regulates arousal, sleep, and temperature. A little lower, in the hindbrain, the pons processes sensory and motor information employing the cranial nerves and works as a bridge that connects the cerebral cortex with the medulla, reciprocally transferring information back and forth between the brain and the spinal cord. The medulla oblongata handles breathing, digestion, heart and blood vessel function, swallowing, and sneezing. The cerebellum controls motor movement coordination, balance, equilibrium, and muscle tone.

The midbrain and the hindbrain, which make up the brain stem, culminate in the spinal cord. Whereas in the cerebral cortex the gray matter (neuronal cell bodies) lies outside and the white matter (myelinated axons) inside, in the spinal cord this arrangement is reversed: the gray matter resides inside and the white matter outside. Paired spinal nerves exit the spinal cord, some directed towards the back (dorsal) and others towards the front (ventral). The dorsal (afferent) nerves receive sensory information from the skin and muscles, and the ventral (efferent) nerves send signals commanding muscles and organs to respond.

Gray Versus White Matter

The cerebral hemispheres contain both gray and white matter, so called because they appear grayish and whitish in dissections or in an MRI (magnetic resonance imaging; see “Studying the Human Brain”). The gray matter is composed of the neuronal cell bodies (see module, “Neurons”). The cell bodies (or somas) contain the genes of the cell and are responsible for metabolism (keeping the cell alive) and synthesizing proteins. In this way, the cell body is the workhorse of the cell. The white matter is composed of the axons of the neurons, in particular axons that are covered with a sheath of myelin (fatty support cells that are whitish in color). Axons conduct the electrical signals from the cell and are, therefore, critical to cell communication. People use the expression “use your gray matter” when they want a person to think harder. The “gray matter” in this expression is probably a reference to the cerebral hemispheres more generally, the gray cortical sheet (the convoluted surface of the cortex) being their most visible part. However, both the gray matter and the white matter are critical to the proper functioning of the mind. Losses of either result in deficits in language, memory, reasoning, and other mental functions. See Figure 3 for MRI slices showing the inner white matter that connects the cell bodies in the gray cortical sheet.


Figure 3. MRI slices of the human brain. Both the outer gray matter and inner white matter are visible in each image. The brain is a three-dimensional (3-D) structure, but an image is two-dimensional (2-D). Here, we show example slices of the three possible 2-D cuts through the brain: a sagittal slice (top image), a horizontal slice (bottom left), which is also known as a transverse or axial slice, and a coronal slice (bottom right). The bottom two images are color-coded to match the illustration of the relative orientations of the three slices in the top image.

Studying the Nervous System

The study of the nervous system involves anatomical and physiological techniques that have improved over the years in efficiency and caliber. Clearly, gross morphology of the nervous system requires an eye-level view of the brain and the spinal cord. However, to resolve minute components, optical and electron microscopic techniques are needed.

Light microscopes and, later, electron microscopes have changed our understanding of the intricate connections that exist among nerve cells. For example, modern staining procedures (immunocytochemistry) make it possible to see selected neurons that are of one type or another or are affected by growth. With better resolution of the electron microscopes, fine structures like the synaptic cleft between the pre- and post-synaptic neurons can be studied in detail. Along with the neuroanatomical techniques, a number of other methodologies aid neuroscientists in studying the function and physiology of the nervous system. These methods will be explored later on in the chapter.

Understanding the nervous system has been a long journey of inquiry, spanning several hundreds of years of meticulous studies carried out by some of the most creative and versatile investigators in the fields of philosophy, evolution, biology, physiology, anatomy, neurology, neuroscience, cognitive sciences, and psychology. Despite our profound understanding of this organ, its mysteries continue to surprise us, and its intricacies make us marvel at this complex structure unmatched in the universe.

 

Learning Objectives

By the end of this section, you will be able to:

  • Explain the functions of the spinal cord
  • Identify the hemispheres of the brain
  • Name and describe the basic functions of the four cerebral lobes: occipital, temporal, parietal, and frontal
  • Describe a split-brain patient and at least two important aspects of brain function that these patients reveal.

The brain is a remarkably complex organ composed of billions of interconnected neurons and glia. It is a bilateral, or two-sided, structure that can be separated into distinct lobes. Each lobe is associated with certain types of functions, but, ultimately, all of the areas of the brain interact with one another to provide the foundation for our thoughts and behaviors. In this section, we discuss the overall organization of the brain and the functions associated with different brain areas, beginning with what can be seen as an extension of the brain, the spinal cord.

The Spinal Cord

It can be said that the spinal cord is what connects the brain to the outside world. Because of it, the brain can act. The spinal cord is like a relay station, but a very smart one. It not only routes messages to and from the brain, but it also has its own system of automatic processes, called reflexes.

The top of the spinal cord is a bundle of nerves that merges with the brain stem, where the basic processes of life are controlled, such as breathing and digestion. In the opposite direction, the spinal cord ends just below the ribs—contrary to what we might expect, it does not extend all the way to the base of the spine.

The spinal cord is functionally organized into 31 segments, corresponding roughly with the vertebrae. Each segment is connected to a specific part of the body through the peripheral nervous system. Nerves branch out from the spine at each vertebra. Sensory nerves bring messages in; motor nerves send messages out to the muscles and organs. Messages travel to and from the brain through every segment.

Some sensory messages are immediately acted on by the spinal cord, without any input from the brain. Withdrawal from a hot object and the knee jerk are two examples. When a sensory message meets certain parameters, the spinal cord initiates an automatic reflex. The signal passes from the sensory nerve to a simple processing center, which initiates a motor command. Precious fractions of a second are saved because messages don’t have to travel to the brain, be processed, and be sent back. In matters of survival, the spinal reflexes allow the body to react extraordinarily fast.
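To see why the shortcut matters, consider a rough back-of-the-envelope calculation. This is only a sketch: the conduction velocity, path lengths, and processing time below are illustrative assumptions, not measured values.

```python
# Rough comparison of a spinal reflex loop versus a round trip through the brain.
# All numbers are illustrative assumptions chosen only to show the scale.

CONDUCTION_VELOCITY = 60.0  # meters/second, assumed for a myelinated fiber
SPINAL_LOOP = 2.0           # meters: hand -> spinal cord -> hand (assumed)
BRAIN_LOOP = 3.0            # meters: hand -> brain -> hand (assumed)
BRAIN_PROCESSING = 0.15     # seconds of assumed central processing time

reflex_time = SPINAL_LOOP / CONDUCTION_VELOCITY
brain_time = BRAIN_LOOP / CONDUCTION_VELOCITY + BRAIN_PROCESSING

print(f"Spinal reflex: ~{reflex_time * 1000:.0f} ms")  # ~33 ms
print(f"Via the brain: ~{brain_time * 1000:.0f} ms")   # ~200 ms
```

Even with generous assumptions, the spinal loop is several times faster, which is exactly the advantage the reflex arc provides.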

The spinal cord is protected by bony vertebrae and cushioned in cerebrospinal fluid, but injuries still occur. When the spinal cord is damaged in a particular segment, all lower segments are cut off from the brain, causing paralysis. Therefore, the lower on the spine damage occurs, the fewer functions an injured individual will lose.

Neuroplasticity

Bob Woodruff, a reporter for ABC, suffered a traumatic brain injury after a bomb exploded next to the vehicle he was in while covering a news story in Iraq. As a consequence of these injuries, Woodruff experienced many cognitive deficits including difficulties with memory and language. However, over time and with the aid of intensive amounts of cognitive and speech therapy, Woodruff has shown an incredible recovery of function (Fernandez, 2008, October 16).

One of the factors that made this recovery possible was neuroplasticity. Neuroplasticity refers to how the nervous system can change and adapt. Neuroplasticity can occur as a result of personal experience, developmental processes, or, as in Woodruff’s case, damage or injury. Neuroplasticity can involve the creation of new synapses, pruning of synapses that are no longer used, changes in glial cells, and even the birth of new neurons. Because of neuroplasticity, our brains are constantly changing and adapting, and while our nervous system is most plastic when we are very young, as Woodruff’s case suggests, it is still capable of remarkable changes later in life.

The Two Hemispheres

The surface of the brain, known as the cerebral cortex, is very uneven, characterized by a distinctive pattern of folds or bumps, known as gyri (singular: gyrus), and grooves, known as sulci (singular: sulcus), shown in Figure 3.15. These gyri and sulci form important landmarks that allow us to separate the brain into functional centers. The most prominent sulcus, known as the longitudinal fissure, is the deep groove that separates the brain into two halves or hemispheres: the left hemisphere and the right hemisphere.

An illustration of the brain’s exterior surface shows the ridges and depressions, and the deep fissure that runs through the center.
Figure 3.15 The surface of the brain is covered with gyri and sulci. A deep sulcus is called a fissure, such as the longitudinal fissure that divides the brain into left and right hemispheres. (credit: modification of work by Bruce Blaus)

There is evidence of specialization of function—referred to as lateralization—in each hemisphere, mainly regarding differences in language functions. The left hemisphere controls the right half of the body, and the right hemisphere controls the left half of the body. Decades of research on lateralization of function by Michael Gazzaniga and his colleagues suggest that a variety of functions ranging from cause-and-effect reasoning to self-recognition may follow patterns that suggest some degree of hemispheric dominance (Gazzaniga, 2005). For example, the left hemisphere has been shown to be superior for forming associations in memory, selective attention, and positive emotions. The right hemisphere, on the other hand, has been shown to be superior in pitch perception, arousal, and negative emotions (Ehret, 2006). However, it should be pointed out that research on which hemisphere is dominant in a variety of different behaviors has produced inconsistent results, and therefore, it is probably better to think of how the two hemispheres interact to produce a given behavior rather than attributing certain behaviors to one hemisphere versus the other (Banich & Heller, 1998).

The two hemispheres are connected by a thick band of neural fibers known as the corpus callosum, consisting of about 200 million axons. The corpus callosum allows the two hemispheres to communicate with each other and allows for information being processed on one side of the brain to be shared with the other side.

Normally, we are not aware of the different roles that our two hemispheres play in day-to-day functions, but there are people who come to know the capabilities and functions of their two hemispheres quite well. In some cases of severe epilepsy, doctors elect to sever the corpus callosum as a means of controlling the spread of seizures (Figure 3.16). While this is an effective treatment option, it results in individuals who have “split brains.” After surgery, these split-brain patients show a variety of interesting behaviors. For instance, a split-brain patient is unable to name a picture that is shown in the patient’s left visual field because the information is only available in the largely nonverbal right hemisphere. However, they are able to recreate the picture with their left hand, which is also controlled by the right hemisphere. When the more verbal left hemisphere sees the picture that the hand drew, the patient is able to name it (assuming the left hemisphere can interpret what was drawn by the left hand).

Illustrations (a) and (b) show the corpus callosum’s location in the brain in front and side views. Photograph (c) shows the corpus callosum in a dissected brain.
Figure 3.16(a, b) The corpus callosum connects the left and right hemispheres of the brain. (c) A scientist spreads this dissected sheep brain apart to show the corpus callosum between the hemispheres. (credit c: modification of work by Aaron Bornstein)

Much of what we know about the functions of different areas of the brain comes from studying changes in the behavior and ability of individuals who have suffered damage to the brain. For example, researchers study the behavioral changes caused by strokes to learn about the functions of specific brain areas. A stroke, caused by an interruption of blood flow to a region in the brain, causes a loss of brain function in the affected region. The damage can be in a small area, and, if it is, this gives researchers the opportunity to link any resulting behavioral changes to a specific area. The types of deficits displayed after a stroke will be largely dependent on where in the brain the damage occurred.

Consider Theona, an intelligent, self-sufficient woman, who is 62 years old. Recently, she suffered a stroke in the front portion of her right hemisphere. As a result, she has great difficulty moving her left leg. (As you learned earlier, the right hemisphere controls the left side of the body; also, the brain’s main motor centers are located at the front of the head, in the frontal lobe.) Theona has also experienced behavioral changes. For example, while in the produce section of the grocery store, she sometimes eats grapes, strawberries, and apples directly from their bins before paying for them. This behavior—which would have been very embarrassing to her before the stroke—is consistent with damage in another region in the frontal lobe—the prefrontal cortex, which is associated with judgment, reasoning, and impulse control.

Forebrain Structures

The two hemispheres of the cerebral cortex are part of the forebrain (Figure 3.17), which is the largest part of the brain. The forebrain contains the cerebral cortex and a number of other structures that lie beneath the cortex (called subcortical structures): thalamus, hypothalamus, pituitary gland, and the limbic system (a collection of structures). The cerebral cortex, which is the outer surface of the brain, is associated with higher-level processes such as consciousness, thought, emotion, reasoning, language, and memory. Each cerebral hemisphere can be subdivided into four lobes, each associated with different functions.

An illustration shows the position and size of the forebrain (the largest portion), midbrain (a small central portion), and hindbrain (a portion in the lower back part of the brain).
Figure 3.17 The brain and its parts can be divided into three main categories: the forebrain, midbrain, and hindbrain.

Lobes of the Brain

The four lobes of the brain are the frontal, parietal, temporal, and occipital lobes (Figure 3.18). The frontal lobe is located in the forward part of the brain, extending back to a fissure known as the central sulcus. The frontal lobe is involved in reasoning, motor control, emotion, and language. It contains the motor cortex, which is involved in planning and coordinating movement; the prefrontal cortex, which is responsible for higher-level cognitive functioning; and Broca’s area, which is essential for language production.

An illustration shows the four lobes of the brain.
Figure 3.18 The lobes of the brain are shown.

People who suffer damage to Broca’s area have great difficulty producing language of any form (Figure 3.21). For example, Padma was an electrical engineer who was socially active and a caring, involved parent. About twenty years ago, she was in a car accident and suffered damage to her Broca’s area. She completely lost the ability to speak and form any kind of meaningful language. There is nothing wrong with her mouth or her vocal cords, but she is unable to produce words. She can follow directions but can’t respond verbally, and she can read but no longer write. She can do routine tasks like running to the market to buy milk, but she cannot communicate verbally if a situation calls for it.

Probably the most famous case of frontal lobe damage is that of a man by the name of Phineas Gage. On September 13, 1848, Gage (age 25) was working as a railroad foreman in Vermont. He and his crew were using an iron rod to tamp explosives down into a blasting hole to remove rock along the railway’s path. Unfortunately, the iron rod created a spark that detonated the explosives, launching the rod out of the blasting hole, into Gage’s face, and through his skull (Figure 3.19). Although he was lying in a pool of his own blood with brain matter emerging from his head, Gage was conscious and able to get up, walk, and speak.

However, there is some debate on what long-term effects Gage experienced after the accident. Gage’s case occurred in the midst of a 19th-century debate over localization—regarding whether certain areas of the brain are associated with particular functions. On the basis of extremely limited information about Gage, the extent of his injury, and his life before and after the accident, scientists tended to find support for their own views, on whichever side of the debate they fell (Macmillan, 1999). What we can conclude from his accident is that he was able to live a full life after his brain injury and that the brain is incredibly resilient.

Image (a) is a photograph of Phineas Gage holding a metal rod. Image (b) is an illustration of a skull with a metal rod passing through it from the cheek area to the top of the skull.
Figure 3.19 (a) Phineas Gage holds the iron rod that penetrated his skull in an 1848 railroad construction accident. (b) Gage’s prefrontal cortex was severely damaged in the left hemisphere. The rod entered Gage’s face on the left side, passed behind his eye, and exited through the top of his skull, before landing about 80 feet away. (credit a: modification of work by Jack and Beverly Wilgus)

The brain’s parietal lobe is located immediately behind the frontal lobe and is involved in processing information from the body’s senses. It contains the somatosensory cortex, which is essential for processing sensory information from across the body, such as touch, temperature, and pain. The somatosensory cortex is organized topographically, which means that spatial relationships that exist in the body are generally maintained on the surface of the somatosensory cortex (Figure 3.20). For example, the portion of the cortex that processes sensory information from the hand is adjacent to the portion that processes information from the wrist.

A diagram shows the organization in the somatosensory cortex, with functions for these parts in this proximal sequential order: toes, ankles, knees, hips, trunk, shoulders, elbows, wrists, hands, fingers, thumbs, neck, eyebrows and eyelids, eyeballs, face, lips, jaw, tongue, salivation, chewing, and swallowing.
Figure 3.20 Spatial relationships in the body are mirrored in the organization of the somatosensory cortex.

The temporal lobe is located on the side of the head (temporal means “near the temples”), and is associated with hearing, memory, emotion, and some aspects of language. The auditory cortex, the main area responsible for processing auditory information, is located within the temporal lobe. Wernicke’s area, important for speech comprehension, is also located here. Whereas individuals with damage to Broca’s area have difficulty producing language, those with damage to Wernicke’s area can produce sensible language, but they are unable to understand it (Figure 3.21).

An illustration shows the locations of Broca’s and Wernicke’s areas.
Figure 3.21 Damage to either Broca’s area or Wernicke’s area can result in language deficits. The types of deficits are very different, however, depending on which area is affected.

The occipital lobe is located at the very back of the brain and contains the primary visual cortex, which is responsible for interpreting incoming visual information. The occipital cortex is organized retinotopically, which means there is a close relationship between the position of an object in a person’s visual field and the position of that object’s representation on the cortex. You will learn much more about how visual information is processed in the occipital lobe when you study sensation and perception.

Other Areas of the Forebrain

Other areas of the forebrain, located beneath the cerebral cortex, include the thalamus and the limbic system. The thalamus is a sensory relay for the brain. All of our senses, with the exception of smell, are routed through the thalamus before being directed to other areas of the brain for processing (Figure 3.22).

An illustration shows the location of the thalamus in the brain.
Figure 3.22 The thalamus serves as the relay center of the brain where most senses are routed for processing.

The limbic system is involved in processing both emotion and memory. Interestingly, the sense of smell projects directly to the limbic system; therefore, not surprisingly, the sense of smell can evoke emotional responses in ways that other sensory modalities cannot. The limbic system is made up of a number of different structures, but three of the most important are the hippocampus, the amygdala, and the hypothalamus (Figure 3.23). The hippocampus is an essential structure for learning and memory. The amygdala is involved in our experience of emotion and in tying emotional meaning to our memories. The hypothalamus regulates a number of homeostatic processes, including the regulation of body temperature, appetite, and blood pressure. The hypothalamus also serves as an interface between the nervous system and the endocrine system and is involved in the regulation of sexual motivation and behavior.

An illustration shows the locations of parts of the brain involved in the limbic system: the hypothalamus, amygdala, and hippocampus.
Figure 3.23 The limbic system is involved in mediating emotional response and memory.

The Case of Henry Molaison (H.M.)

In 1953, Henry Gustav Molaison (H. M.) was a 27-year-old man who experienced severe seizures. In an attempt to control his seizures, H. M. underwent brain surgery to remove his hippocampus and amygdala. Following the surgery, H. M.’s seizures became much less severe, but he also suffered some unexpected—and devastating—consequences of the surgery: he lost his ability to form many types of new memories. For example, he was unable to learn new facts, such as who was president of the United States. He was able to learn new skills, but afterward, he had no recollection of learning them. For example, while he might learn to use a computer, he would have no conscious memory of ever having used one. He could not remember new faces, and he was unable to remember events, even immediately after they occurred. Researchers were fascinated by his experience, and he is considered one of the most studied cases in medical and psychological history (Hardt, Einarsson, & Nader, 2010; Squire, 2009). Indeed, his case has provided tremendous insight into the role that the hippocampus plays in the consolidation of new learning into explicit memory.

Midbrain and Hindbrain Structures

The midbrain consists of structures located deep within the brain, between the forebrain and the hindbrain. The reticular formation is centered in the midbrain, but it actually extends up into the forebrain and down into the hindbrain. The reticular formation is important in regulating the sleep/wake cycle, arousal, alertness, and motor activity.

The substantia nigra (Latin for “black substance”) and the ventral tegmental area (VTA) are also located in the midbrain (Figure 3.24). Both regions contain cell bodies that produce the neurotransmitter dopamine, and both are critical for movement. Degeneration of the substantia nigra and VTA is involved in Parkinson’s disease. In addition, these structures are involved in mood, reward, and addiction (Berridge & Robinson, 1998; Gardner, 2011; George, Le Moal, & Koob, 2012).

An illustration shows the location of the substantia nigra and VTA in the brain.
Figure 3.24 The substantia nigra and ventral tegmental area (VTA) are located in the midbrain.

The hindbrain is located at the back of the head and looks like an extension of the spinal cord. It contains the medulla, pons, and cerebellum (Figure 3.25). The medulla controls the automatic processes of the autonomic nervous system, such as breathing, blood pressure, and heart rate. The word pons literally means “bridge,” and as the name suggests, the pons serves to connect the hindbrain to the rest of the brain. It also is involved in regulating brain activity during sleep. The medulla, pons, and midbrain together are known as the brainstem, which thus spans both the midbrain and the hindbrain.

An illustration shows the location of the pons, medulla, and cerebellum.
Figure 3.25 The pons, medulla, and cerebellum make up the hindbrain.

The cerebellum (Latin for “little brain”) receives messages from muscles, tendons, joints, and structures in our ear to control balance, coordination, movement, and motor skills. The cerebellum is also thought to be an important area for processing some types of memories. In particular, procedural memory, or memory involved in learning and remembering how to perform tasks, is thought to be associated with the cerebellum. Recall that H. M. was unable to form new explicit memories, but he could learn new tasks. This is likely due to the fact that H. M.’s cerebellum remained intact.

Learning Objectives

  • Name and describe the most common approaches to studying the human brain.
  • Distinguish among neuroimaging methods

Studying the Human Brain

How do we know what the brain does? We have gathered knowledge about the functions of the brain from many different methods. Each method is useful for answering distinct types of questions, but the strongest evidence for a specific role or function of a particular brain area is converging evidence; that is, similar findings reported from multiple studies using different methods.

One of the first organized attempts to study the functions of the brain was phrenology, a popular field of study in the first half of the 19th century. Phrenologists assumed that various features of the brain, such as its uneven surface, are reflected on the skull; therefore, they attempted to correlate bumps and indentations of the skull with specific functions of the brain. For example, they would claim that a very artistic person has ridges on the head that vary in size and location from those of someone who is very good at spatial reasoning. Although the assumption that the skull reflects the underlying brain structure has been proven wrong, phrenology nonetheless left a lasting mark on neuroscience by popularizing the idea that different parts of the brain are devoted to very specific functions that can be identified through scientific inquiry.

Neuroanatomy

Dissection of the brain, in either animals or cadavers, has been a critical tool of neuroscientists since 340 BC when Aristotle first published his dissections. Since then this method has advanced considerably with the discovery of various staining techniques that can highlight particular cells. Because the brain can be sliced very thinly, examined under the microscope, and particular cells highlighted, this method is especially useful for studying specific groups of neurons or small brain structures; that is, it has a very high spatial resolution. Dissections allow scientists to study changes in the brain that occur due to various diseases or experiences (e.g., exposure to drugs or brain injuries).

Virtual dissection studies with living humans are also conducted. Here, the brain is imaged using computerized axial tomography (CAT) or MRI scanners; they reveal with very high precision the various structures in the brain and can help detect changes in gray or white matter. These changes in the brain can then be correlated with behavior, such as performance on memory tests, and, therefore, implicate specific brain areas in certain cognitive functions.

Some researchers induce lesions or ablate (i.e., remove) parts of the brain in animals. If the animal’s behavior changes after the lesion, we can infer that the removed structure is important for that behavior. Lesions of human brains are studied in patient populations only; that is, patients who have lost a brain region due to a stroke or other injury, or who have had surgical removal of a structure to treat a particular disease (e.g., a callosotomy to control epilepsy, as in split-brain patients). From such case studies, we can infer brain function by measuring changes in the behavior of the patients before and after the lesion.

Neuroimaging

You have learned how brain injury can provide information about the functions of different parts of the brain. Increasingly, however, we are able to obtain that information using brain imaging techniques on individuals who have not suffered a brain injury. In this section, we take a more in-depth look at some of the techniques that are available for imaging the brain, including techniques that rely on radiation, magnetic fields, or electrical activity within the brain.

Techniques Involving Radiation

A computerized tomography (CT) scan involves taking a number of x-rays of a particular section of a person’s body or brain (Figure 3.26). The x-rays pass through tissues of different densities at different rates, allowing a computer to construct an overall image of the area of the body being scanned. A CT scan is often used to determine whether someone has a tumor or significant brain atrophy.
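The computation behind CT can be sketched in miniature. If a slice of the head is represented as a grid of tissue densities, each x-ray measurement is (approximately) the sum of the densities along the ray’s path; collecting such sums from many angles lets a computer solve for the underlying grid. A toy illustration of this forward step, with made-up density values:

```python
import numpy as np

# Toy 2-D "slice" of tissue densities (made-up values; 0 = air, higher = denser).
slice_densities = np.array([
    [0, 1, 1, 0],
    [1, 2, 5, 1],  # the 5 stands in for a dense anomaly, e.g., a tumor
    [1, 2, 2, 1],
    [0, 1, 1, 0],
])

# Each x-ray measurement approximates total attenuation along one ray.
row_rays = slice_densities.sum(axis=1)  # rays passing left-to-right
col_rays = slice_densities.sum(axis=0)  # rays passing top-to-bottom

print(row_rays)  # [2 9 6 2] -- the row containing the anomaly stands out
print(col_rays)  # [2 6 9 2] -- so does its column
```

A real scanner measures projections from many angles and mathematically inverts them (e.g., by filtered back-projection); only the basic idea of summing along rays is shown here.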

Image (a) shows a brain scan where the brain matter’s appearance is fairly uniform. Image (b) shows a section of the brain that looks different from the surrounding tissue and is labeled “tumor.”
Figure 3.26 A CT scan can be used to show brain tumors. (a) The image on the left shows a healthy brain, whereas (b) the image on the right indicates a brain tumor in the left frontal lobe. (credit a: modification of work by “Aceofhearts1968”/Wikimedia Commons; credit b: modification of work by Roland Schmitt et al)

Positron emission tomography (PET) scans create pictures of the living, active brain (Figure 3.27). An individual receiving a PET scan drinks or is injected with a mildly radioactive substance called a tracer. Once in the bloodstream, the amount of tracer in any given region of the brain can be monitored. As a brain area becomes more active, more blood flows to that area. A computer monitors the movement of the tracer and creates a rough map of active and inactive areas of the brain during a given behavior. PET scans show little detail, are unable to pinpoint events precisely in time, and require that the brain be exposed to radiation; therefore, this technique has largely been replaced by fMRI as a diagnostic tool. However, combined with CT, PET technology is still being used in certain contexts. For example, CT/PET scans allow better imaging of the activity of neurotransmitter receptors and open new avenues in schizophrenia research. In this hybrid CT/PET technology, CT contributes clear images of brain structures, while PET shows the brain’s activity.

A brain scan shows different parts of the brain in different colors.
Figure 3.27 A PET scan is helpful for showing activity in different parts of the brain. (credit: Health and Human Services Department, National Institutes of Health)

Techniques Involving Magnetic Fields

In magnetic resonance imaging (MRI), a person is placed inside a machine that generates a strong magnetic field. The magnetic field causes the hydrogen atoms in the body’s cells to move. When the magnetic field is turned off, the hydrogen atoms emit electromagnetic signals as they return to their original positions. Tissues of different densities give off different signals, which a computer interprets and displays on a monitor. Functional magnetic resonance imaging (fMRI) operates on the same principles, but it shows changes in brain activity over time by tracking blood flow and oxygen levels. The fMRI provides more detailed images of the brain’s structure, as well as better accuracy in time than is possible in PET scans (Figure 3.28). With their high level of detail, MRI and fMRI are often used to compare the brains of healthy individuals to the brains of individuals diagnosed with psychological disorders. This comparison helps determine what structural and functional differences exist between these populations.

A brain scan shows brain tissue in gray with some small areas highlighted red.
Figure 3.28 An fMRI shows activity in the brain over time. This image represents a single frame from an fMRI. (credit: modification of work by Kim J, Matthews NL, Park S.)
A researcher studies fMRI images on a computer monitor.
A researcher looking at the areas of activation in the brain of a study participant who had an fMRI scan – areas of brain activation are determined by the amount of blood flow to a certain area – the more blood flow, the higher the activation of that area of the brain. [Image: National Institute of Mental Health, CC0 Public Domain, https://goo.gl/m25gce]

In some situations, it is helpful to gain an understanding of the overall activity of a person’s brain, without needing information on the actual location of the activity. Electroencephalography (EEG) serves this purpose by providing a measure of the brain’s electrical activity. An array of electrodes is placed around a person’s head (Figure 3.29). The signals received by the electrodes result in a printout of the electrical activity of the brain, or brainwaves, showing both the frequency (number of waves per second) and amplitude (height) of the recorded brainwaves, with an accuracy within milliseconds. Such information is especially helpful to researchers studying sleep patterns among individuals with sleep disorders.
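Frequency and amplitude can be made concrete with a small simulation. This is only a sketch under simplified assumptions (a single pure oscillation, no noise or artifacts); real EEG analysis involves far more careful spectral estimation.

```python
import numpy as np

fs = 250                      # sampling rate in samples per second (assumed)
t = np.arange(0, 10, 1 / fs)  # 10 seconds of simulated "recording"

# Simulated brainwave: a 10 Hz (alpha-band) oscillation, 50 microvolts in amplitude.
signal = 50 * np.sin(2 * np.pi * 10 * t)

# Estimate the dominant frequency from the signal's spectrum.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
dominant = freqs[np.argmax(spectrum)]

print(f"Dominant frequency: {dominant:.1f} Hz")          # ~10.0 Hz
print(f"Peak amplitude: {signal.max():.0f} microvolts")  # ~50
```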

A photograph depicts a person looking at a computer screen and using the keyboard and mouse. The person wears a white cap covered in electrodes and wires.
Figure 3.29 Using caps with electrodes, modern EEG research can study the precise timing of overall brain activities. (credit: SMI Eye Tracking)

Learning Objectives

By the end of this section, you will be able to:

  • Identify the major glands of the endocrine system
  • Identify the hormones secreted by each gland
  • Describe each hormone’s role in regulating bodily functions

This module describes the relationship between hormones and behavior. Many readers are likely already familiar with the general idea that hormones can affect behavior: for instance, sex-hormone concentrations increase in the blood during puberty and decrease as we age, especially after about 50 years of age, and sexual behavior shows a similar pattern. Most people also know about the relationship between aggression and anabolic steroid hormones, and they know that administration of artificial steroid hormones sometimes results in uncontrollable, violent behavior called “roid rage.” Many different hormones can influence several types of behavior, but for the purpose of this module, we will restrict our discussion to just a few examples of hormones and behaviors. For example, are behavioral sex differences the result of hormones, the environment, or some combination of factors? Why are men much more likely than women to commit aggressive acts? Are hormones involved in mediating the so-called maternal “instinct”? Behavioral endocrinologists are interested in how the general physiological effects of hormones alter the development and expression of behavior and how behavior may influence the effects of hormones. This module describes, both phenomenologically and functionally, how hormones affect behavior.

To understand the hormone-behavior relationship, it is important to briefly describe hormones. Hormones are organic chemical messengers produced and released by specialized glands called endocrine glands. Hormones are released from these glands into the blood, where they may travel to act on target structures at some distance from their origin. Hormones are similar in function to neurotransmitters, the chemicals used by the nervous system in coordinating animals’ activities. However, hormones can operate over a greater distance and over a much greater temporal range than neurotransmitters (Focus Topic 1). Examples of hormones that influence behavior include steroid hormones such as testosterone (a common type of androgen), estradiol (a common type of estrogen), progesterone (a common type of progestin), and cortisol (a common type of glucocorticoid) (Table 1, A-B). Several types of protein or peptide (small protein) hormones also influence behavior, including oxytocin, vasopressin, prolactin, and leptin.

Focus Topic 1:
Neural Transmission versus Hormonal Communication

Although neural and hormonal communication both rely on chemical signals, several prominent differences exist. Communication in the nervous system is analogous to traveling on a train. You can use the train in your travel plans as long as tracks exist between your proposed origin and destination. Likewise, neural messages can travel only to destinations along existing nerve tracts. Hormonal communication, on the other hand, is like traveling in a car. You can drive to many more destinations than train travel allows because there are many more roads than railroad tracks. Similarly, hormonal messages can travel anywhere in the body via the circulatory system; any cell receiving blood is potentially able to receive a hormonal message.

Neural and hormonal communication differ in other ways as well. To illustrate them, consider the differences between digital and analog technologies. Neural messages are digital, all-or-none events that have rapid onset and offset: neural signals can take place in milliseconds. Accordingly, the nervous system mediates changes in the body that are relatively rapid. For example, the nervous system regulates immediate food intake and directs body movement. In contrast, hormonal messages are analog, graded events that may take seconds, minutes, or even hours to occur. Hormones can mediate long-term processes, such as growth, development, reproduction, and metabolism.

Hormonal and neural messages are both chemical in nature, and they are released and received by cells in a similar manner; however, there are important differences as well. Neurotransmitters, the chemical messengers used by neurons, travel a distance of only 20–30 nanometers (20–30 × 10⁻⁹ m) to the membrane of the postsynaptic neuron, where they bind with receptors. Hormones enter the circulatory system and may travel from 1 millimeter to more than 2 meters before arriving at a target cell, where they bind with specific receptors.
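The difference in scale is easy to underestimate, so a quick calculation using the figures above makes it vivid:

```python
# Signaling distances from the text, expressed in meters.
synaptic_cleft = 25e-9  # midpoint of the 20-30 nanometer range
hormone_max = 2.0       # hormones may travel 2 meters or more

ratio = hormone_max / synaptic_cleft
print(f"A hormone may travel roughly {ratio:.0e} times farther "
      f"than a neurotransmitter")  # ~8e+07, i.e., tens of millions of times
```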

Another distinction between neural and hormonal communication is the degree of voluntary control that can be exerted over their functioning. In general, there is more voluntary control of neural than of hormonal signals. It is virtually impossible to will a change in your thyroid hormone levels, for example, whereas moving your limbs on command is easy.

Although these are significant differences, the division between the nervous system and the endocrine system is becoming more blurred as we learn more about how the nervous system regulates hormonal communication. A better understanding of the interface between the endocrine system and the nervous system, called neuroendocrinology, is likely to yield important advances in the future study of the interaction between hormones and behavior.

Table 1-A: Prominent Hormones That Influence Behavior
Table 1-B: Prominent Hormones That Influence Behavior

Hormones coordinate the physiology and behavior of individuals by regulating, integrating, and controlling bodily functions. Over evolutionary time, hormones have often been co-opted by the nervous system to influence behavior to ensure reproductive success. For example, the same hormones, testosterone and estradiol, that cause gamete (egg or sperm) maturation also promote mating behavior. This dual hormonal function ensures that mating behavior occurs when animals have mature gametes available for fertilization. Another example of endocrine regulation of physiological and behavioral function is provided by pregnancy. Estrogens and progesterone concentrations are elevated during pregnancy, and these hormones are often involved in mediating maternal behavior in the mothers.

Not all cells are influenced by each and every hormone. Rather, any given hormone can directly influence only cells that have specific hormone receptors for that particular hormone. Cells that have these specific receptors are called target cells for the hormone. The interaction of a hormone with its receptor begins a series of cellular events that eventually lead to activation of enzymatic pathways or, alternatively, turn gene activation on or off to regulate protein synthesis. The newly synthesized proteins may activate or deactivate other genes, causing yet another cascade of cellular events. Importantly, sufficient numbers of appropriate hormone receptors must be available for a specific hormone to produce any effects. For example, testosterone is important for male sexual behavior. If men have too little testosterone, then sexual motivation may be low, and it can be restored by testosterone treatment. However, if men have normal or even elevated levels of testosterone yet display low sexual drive, then a lack of receptors might be the cause, and treatment with additional hormones will not be effective.
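The target-cell logic can be sketched as a simple lookup: a hormone circulating in the blood reaches essentially every cell, but only cells expressing a matching receptor respond. The cell and receptor names below are purely illustrative, not a real signaling model.

```python
# Minimal sketch of target-cell logic: the hormone is "broadcast" to all cells,
# but only those expressing a matching receptor count as target cells.
cells = {
    "muscle_cell": {"androgen_receptor"},
    "liver_cell": {"insulin_receptor", "glucocorticoid_receptor"},
    "skin_cell": set(),  # expresses none of the receptors tracked here
}

def target_cells(receptor: str) -> list[str]:
    """Return the cells that can respond to a hormone binding this receptor."""
    return [name for name, receptors in cells.items() if receptor in receptors]

print(target_cells("androgen_receptor"))  # ['muscle_cell']
print(target_cells("insulin_receptor"))   # ['liver_cell']
```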

The endocrine system consists of a series of glands that produce chemical substances known as hormones (Figure 3.30). Like neurotransmitters, hormones are chemical messengers that must bind to a receptor in order to send their signal. However, unlike neurotransmitters, which are released in close proximity to cells with their receptors, hormones are secreted into the bloodstream and travel throughout the body, affecting any cells that contain receptors for them. Thus, whereas neurotransmitters’ effects are localized, the effects of hormones are widespread. Also, hormones are slower to take effect, and tend to be longer-lasting.

A diagram of the human body illustrates the locations of the thymus, several parts within the brain (pineal gland, thalamus, hypothalamus, pituitary gland), several parts within the thyroid (cartilage of the larynx, thyroid gland, parathyroid glands, trachea), the adrenal glands, pancreas, uterus (female), ovaries (female), and testes (male).
Figure 3.30 The major glands of the endocrine system are shown.

Hormones are involved in regulating all sorts of bodily functions, and they are ultimately controlled through interactions between the hypothalamus (in the central nervous system) and the pituitary gland (in the endocrine system). Imbalances in hormones are related to a number of disorders. This section explores some of the major glands that make up the endocrine system and the hormones secreted by these glands (Table 3.2).

Major Glands

The pituitary gland descends from the hypothalamus at the base of the brain and acts in close association with it. The pituitary is often referred to as the “master gland” because its messenger hormones control all the other glands in the endocrine system, although it mostly carries out instructions from the hypothalamus. In addition to messenger hormones, the pituitary also secretes growth hormone, endorphins for pain relief, and a number of key hormones that regulate fluid levels in the body.

Located in the neck, the thyroid gland releases hormones that regulate growth, metabolism, and appetite. In hyperthyroidism, or Graves’ disease, the thyroid secretes too much of the hormone thyroxine, causing agitation, bulging eyes, and weight loss. In hypothyroidism, reduced hormone levels cause sufferers to experience tiredness, and they often complain of feeling cold. Fortunately, thyroid disorders are often treatable with medications that help reestablish a balance in the hormones secreted by the thyroid.

The adrenal glands sit atop our kidneys and secrete hormones involved in the stress response, such as epinephrine (adrenaline) and norepinephrine (noradrenaline). The pancreas is an internal organ that secretes hormones that regulate blood sugar levels: insulin and glucagon. These pancreatic hormones are essential for maintaining stable levels of blood sugar throughout the day by lowering blood glucose levels (insulin) or raising them (glucagon). People who suffer from diabetes do not produce enough insulin; therefore, they must take medications that stimulate or replace insulin production, and they must closely control the amount of sugars and carbohydrates they consume.

The gonads secrete sexual hormones, which are important in reproduction, and mediate both sexual motivation and behavior. The female gonads are the ovaries; the male gonads are the testes. Ovaries secrete estrogens and progesterone, and the testes secrete androgens, such as testosterone.

Major Endocrine Glands and Associated Hormone Functions

Endocrine Gland | Associated Hormones | Function
Hypothalamus | Releasing and inhibiting hormones, such as oxytocin | Regulate hormone release from the pituitary gland
Pituitary | Growth hormone; releasing and inhibiting hormones (such as thyroid stimulating hormone) | Regulate growth; regulate hormone release
Thyroid | Thyroxine, triiodothyronine | Regulate metabolism and appetite
Pineal | Melatonin | Regulate some biological rhythms, such as sleep cycles
Adrenal | Epinephrine, norepinephrine | Stress response; increase metabolic activities
Pancreas | Insulin, glucagon | Regulate blood sugar levels
Ovaries | Estrogen, progesterone | Mediate sexual motivation and behavior, reproduction
Testes | Androgens, such as testosterone | Mediate sexual motivation and behavior, reproduction

Table 3.2

Additional Supplemental Resources

Websites

  • Areas and Function of the Brain
    • Students will interact with the map and chart to review major areas of the brain and their functions. Toggle down on the top left menu to choose different structures to explore.
  • Explore the UCLA Laboratory of Neuro Imaging
    • We’ve built a diverse team of neurobiologists, mathematicians, and computer scientists, and a worldwide network of collaborators sharing data. Our goal is to increase the pace of discovery in neuroscience by better understanding how the brain works when it’s healthy and what goes wrong in disease.
  • Brain Museum
    • This web site provides browsers with images and information from one of the world’s largest collection of well-preserved, sectioned and stained brains of mammals. Viewers can see and download photographs of brains of over 100 different species of mammals (including humans) representing over 20 Mammalian Orders.

Videos

  • Self Reflected
    • Neuroscientist and artist Greg Dunn has created a map of the brain’s neural pathways and animated the firing of neurons. After you have learned about brain regions and neurons, this will provide a beautiful capstone to what you have learned. Closed captioning not available.
  • 3 Clues to Understanding Your Brain
    • Vilayanur Ramachandran tells us what brain damage can reveal about the connection between cerebral tissue and the mind, using three startling delusions as examples.
  • Crossing the Divide: How Neurons Talk to One Another
    • View the process of neurotransmission up close in this short video clip. Closed captioning available.
  • In-Depth Fight or Flight Response
    • This in-depth video explains the cellular processes that result in common “fight or flight” physiological responses, including hair-raising, sweating, and increased respiration. Includes play-by-play printout but not a full transcript.
  • Severed Corpus Callosum
    • Alan Alda interviews Michael Gazzaniga and a split-brain patient to determine the peculiarities of having a severed corpus callosum. Closed captioning available.
  • What is a Neuron?
    • This video includes information on topics such as the structure of a neuron. It is hosted by neuroscientist Alie Caldwell.  Check out her other videos in the series as well. Closed captioning available.
  • Modern ways of studying the brain | Khan Academy
    • This video gives an overview of some of the most common brain imaging tests including CAT, MRI, fMRI, MEG, EEG, and PET.
  • Navy SEALs Mental Training
    • Video segment from “The Brain: Mystery Explained” documentary featured on The History Channel, covering Navy SEALs mental training techniques: goal setting, mental rehearsal (visualization), self talk, and arousal control.
  • What Percentage of Your Brain do you Use?- TED-Ed
    • Two-thirds of the population believes a myth that has been propagated for over a century: that we use only 10% of our brains. Hardly! Our neuron-dense brains have evolved to use the least amount of energy while carrying the most information possible — a feat that requires the entire brain. Richard E. Cytowic debunks this neurological myth (and explains why we aren’t so good at multitasking).

States of Consciousness

3

A painting shows two children sleeping.
Figure 4.1 Sleep, which we all experience, is a quiet and mysterious pause in our daily lives. Two sleeping children are depicted in this 1895 oil painting titled Zwei schlafende Mädchen auf der Ofenbank, which translates as “two sleeping girls on the stove bench,” by Swiss painter Albert Anker.

Our lives involve regular, dramatic changes in the degree to which we are aware of our surroundings and our internal states. While awake, we feel alert and aware of the many important things going on around us. Our experiences change dramatically while we are in deep sleep and once again when we are dreaming. Some people also experience altered states of consciousness through meditation, hypnosis, or alcohol and other drugs.

This chapter will discuss states of consciousness with a particular emphasis on sleep. The different stages of sleep will be identified, and sleep disorders will be described. The chapter will close with discussions of altered states of consciousness produced by psychoactive drugs, hypnosis, and meditation.

Learning Objectives

By the end of this section, you will be able to:

  • Understand what is meant by consciousness
  • Explain how circadian rhythms are involved in regulating the sleep-wake cycle, and how circadian cycles can be disrupted
  • Discuss the concept of sleep debt

Consciousness describes our awareness of internal and external stimuli. Awareness of internal stimuli includes feeling pain, hunger, thirst, and sleepiness, as well as being aware of our thoughts and emotions. Awareness of external stimuli includes experiences such as seeing the light from the sun, feeling the warmth of a room, and hearing the voice of a friend.

We experience different states of consciousness and different levels of awareness on a regular basis. We might even describe consciousness as a continuum that ranges from full awareness to a deep sleep. Sleep is a state marked by relatively low levels of physical activity and reduced sensory awareness that is distinct from periods of rest that occur during wakefulness. Wakefulness is characterized by high levels of sensory awareness, thought, and behavior. Beyond being awake or asleep, there are many other states of consciousness people experience, including daydreaming, intoxication, and unconsciousness due to drug-induced anesthesia for medical purposes. Often, we are not completely aware of our surroundings, even when we are fully awake. For instance, have you ever daydreamed while driving home from work or school without really thinking about the drive itself? You were capable of engaging in all of the complex tasks involved with operating a motor vehicle even though you were not aware of doing so. Many of these processes, like much of psychological behavior, are rooted in our biology.

Biological Rhythms

Biological rhythms are internal rhythms of biological activity. A woman’s menstrual cycle is an example of a biological rhythm—a recurring, cyclical pattern of bodily changes. One complete menstrual cycle takes about 28 days—a lunar month—but many biological cycles are much shorter. For example, body temperature fluctuates cyclically over a 24-hour period (Figure 4.2). Alertness is associated with higher body temperatures, and sleepiness with lower body temperatures.

A line graph is titled “Circadian Change in Body Temperature (Source: Waterhouse et al., 2012).” The y-axis, labeled “temperature (degrees Fahrenheit),” ranges from 97.2 to 99.3. The x-axis, labeled “time,” begins at 12:00 A.M. and ends at 4:00 A.M. the following day. The subjects slept from 12:00 A.M. until 8:00 A.M., during which time their average body temperatures dropped from around 98.8 degrees at midnight to 97.6 degrees at 4:00 A.M. and then gradually rose back to nearly the same starting temperature by 8:00 A.M. The average body temperature fluctuated slightly throughout the day with an upward tilt, until the next sleep cycle, where the temperature again dropped.
Figure 4.2 This chart illustrates the circadian change in body temperature over 28 hours in a group of eight young men. Body temperature rises throughout the waking day, peaking in the afternoon, and falls during sleep with the lowest point occurring during the very early morning hours.

This pattern of temperature fluctuation, which repeats every day, is one example of a circadian rhythm. A circadian rhythm is a biological rhythm that takes place over a period of about 24 hours. Our sleep-wake cycle, which is linked to our environment’s natural light-dark cycle, is perhaps the most obvious example of a circadian rhythm, but we also have daily fluctuations in heart rate, blood pressure, blood sugar, and body temperature. Some circadian rhythms play a role in changes in our state of consciousness.
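As a toy model, the daily temperature curve can be approximated by a 24-hour sine wave. This is purely illustrative: real temperature rhythms are not perfectly sinusoidal, and the mean, amplitude, and phase below are assumed values, not data.

```python
import math

def body_temp(hour: float) -> float:
    """Toy circadian model: mean 98.2 F with a +/- 0.7 F swing over a 24-hour
    cycle, with the low point around 4:00 a.m. (all parameters assumed)."""
    return 98.2 + 0.7 * math.sin(2 * math.pi * (hour - 10) / 24)

for hour in (4, 10, 16, 22):
    print(f"{hour:02d}:00 -> {body_temp(hour):.1f} F")
# 04:00 -> 97.5 F  (early-morning low)
# 16:00 -> 98.9 F  (afternoon peak)
```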

If we have biological rhythms, then is there some sort of biological clock? In the brain, the hypothalamus, which lies above the pituitary gland, is the main center of homeostasis. Homeostasis is the tendency to maintain a balance, or optimal level, within a biological system.

The brain’s clock mechanism is located in an area of the hypothalamus known as the suprachiasmatic nucleus (SCN). The axons of light-sensitive neurons in the retina provide information to the SCN based on the amount of light present, allowing this internal clock to be synchronized with the outside world (Klein, Moore, & Reppert, 1991; Welsh, Takahashi, & Kay, 2010) (Figure 4.3).

In this graphic, the outline of a person’s head facing left is situated to the right of a picture of the sun, which is labeled ”light” with an arrow pointing to a location in the brain where light input is processed. Inside the head is an illustration of a brain with the following parts’ locations identified: Suprachiasmatic nucleus (SCN), Hypothalamus, Pituitary gland, Pineal gland, and Output rhythms: Physiology and Behavior.
Figure 4.3 The suprachiasmatic nucleus (SCN) serves as the brain’s clock mechanism. The clock sets itself with light information received through projections from the retina.

Problems With Circadian Rhythms

Generally, and for most people, our circadian cycles are aligned with the outside world. For example, most people sleep during the night and are awake during the day. One important regulator of sleep-wake cycles is the hormone melatonin. The pineal gland, an endocrine structure located inside the brain that releases melatonin, is thought to be involved in the regulation of various biological rhythms and of the immune system during sleep (Hardeland, Pandi-Perumal, & Cardinali, 2006). Melatonin release is stimulated by darkness and inhibited by light.

There are individual differences in regard to our sleep-wake cycle. For instance, some people would say they are morning people, while others would consider themselves to be night owls. These individual differences in circadian patterns of activity are known as a person’s chronotype, and research demonstrates that morning larks and night owls differ with regard to sleep regulation (Taillard, Philip, Coste, Sagaspe, & Bioulac, 2003). Sleep regulation refers to the brain’s control of switching between sleep and wakefulness as well as coordinating this cycle with the outside world.

Whether lark, owl, or somewhere in between, there are situations in which a person’s circadian clock gets out of synchrony with the external environment. One way that this happens involves traveling across multiple time zones. When we do this, we often experience jet lag. Jet lag is a collection of symptoms that results from the mismatch between our internal circadian cycles and our environment. These symptoms include fatigue, sluggishness, irritability, and insomnia (i.e., a consistent difficulty in falling or staying asleep for at least three nights a week over a month’s time) (Roth, 2007).

Individuals who do rotating shift work are also likely to experience disruptions in circadian cycles. Rotating shift work refers to a work schedule that changes from early to late on a daily or weekly basis. For example, a person may work from 7:00 a.m. to 3:00 p.m. on Monday, 3:00 a.m. to 11:00 a.m. on Tuesday, and 11:00 a.m. to 7:00 p.m. on Wednesday. In such instances, the individual’s schedule changes so frequently that it becomes difficult for a normal circadian rhythm to be maintained. This often results in sleeping problems, and it can lead to signs of depression and anxiety. These kinds of schedules are common for individuals working in health care professions and service industries, and they are associated with persistent feelings of exhaustion and agitation that can make someone more prone to making mistakes on the job (Gold et al., 1992; Presser, 1995).

Rotating shift work has pervasive effects on the lives and experiences of individuals engaged in that kind of work, which is clearly illustrated in stories reported in a qualitative study of the experiences of middle-aged nurses who worked rotating shifts (West, Boughton, & Byrnes, 2009). Several of the nurses interviewed commented that their work schedules affected their relationships with their families. One of the nurses said,

If you’ve had a partner who does work regular job 9 to 5 office hours . . . the ability to spend time, good time with them when you’re not feeling absolutely exhausted . . . that would be one of the problems that I’ve encountered. (West et al., 2009, p. 114)

While disruptions in circadian rhythms can have negative consequences, there are things we can do to help us realign our biological clocks with the external environment. Some of these approaches, such as using a bright light as shown in Figure 4.4, have been shown to alleviate some of the problems experienced by individuals suffering from jet lag or from the consequences of rotating shift work. Because the biological clock is driven by light, exposure to bright light during working shifts and dark exposure when not working can help combat insomnia and symptoms of anxiety and depression (Huang, Tsai, Chen, & Hsu, 2013).

A photograph shows a bright lamp.
Figure 4.4 Devices like this are designed to provide exposure to bright light to help people maintain a regular circadian cycle. They can be helpful for people working night shifts or for people affected by seasonal variations in light.

When people have difficulty getting sleep due to their work or the demands of day-to-day life, they accumulate a sleep debt. A person with a sleep debt does not get sufficient sleep on a chronic basis. The consequences of sleep debt include decreased levels of alertness and mental efficiency. Interestingly, since the advent of electric light, the amount of sleep that people get has declined. While we certainly welcome the convenience of having the darkness lit up, we also suffer the consequences of reduced amounts of sleep because we are more active during the nighttime hours than our ancestors were. As a result, many of us sleep less than 7–8 hours a night and accrue a sleep debt. While there is tremendous variation in any given individual’s sleep needs, the National Sleep Foundation (n.d.) cites research to estimate that newborns require the most sleep (between 12 and 18 hours a night) and that this amount declines to just 7–9 hours by the time we are adults.
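To get a feel for how quickly a sleep debt can build, here is a purely illustrative calculation: suppose a person needs 8 hours of sleep per night (within the adult range cited above) but averages only 6.5 hours (both numbers are assumptions chosen for the example). Over one week,

$$(8 - 6.5)\ \text{hours per night} \times 7\ \text{nights} = 10.5\ \text{hours}$$

of lost sleep accumulate, which amounts to more than a full night’s worth of sleep debt.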

If you lie down to take a nap and fall asleep very easily, chances are you have a sleep debt. Given that college students are notorious for suffering from significant sleep debt (Hicks, Fernandez, & Pelligrini, 2001; Hicks, Johnson, & Pelligrini, 1992; Miller, Shattuck, & Matsangas, 2010), chances are you and your classmates deal with sleep debt-related issues on a regular basis. In 2015, the National Sleep Foundation updated its sleep duration recommendations to better accommodate individual differences. Table 4.1 shows the new recommendations, which describe sleep durations that are “recommended,” “may be appropriate,” and “not recommended.”

Sleep Needs at Different Ages
Age | Recommended | May be appropriate | Not recommended
0–3 months | 14–17 hours | 11–13 hours or 18–19 hours | Fewer than 11 hours; more than 19 hours
4–11 months | 12–15 hours | 10–11 hours or 16–18 hours | Fewer than 10 hours; more than 18 hours
1–2 years | 11–14 hours | 9–10 hours or 15–16 hours | Fewer than 9 hours; more than 16 hours
3–5 years | 10–13 hours | 8–9 hours or 14 hours | Fewer than 8 hours; more than 14 hours
6–13 years | 9–11 hours | 7–8 hours or 12 hours | Fewer than 7 hours; more than 12 hours
14–17 years | 8–10 hours | 7 hours or 11 hours | Fewer than 7 hours; more than 11 hours
18–25 years | 7–9 hours | 6 hours or 10–11 hours | Fewer than 6 hours; more than 11 hours
26–64 years | 7–9 hours | 6 hours or 10 hours | Fewer than 6 hours; more than 10 hours
≥65 years | 7–8 hours | 5–6 hours or 9 hours | Fewer than 5 hours; more than 9 hours
Table 4.1

Sleep debt and sleep deprivation have significant negative psychological and physiological consequences (Figure 4.5). As mentioned earlier, lack of sleep can result in decreased mental alertness and cognitive function. In addition, sleep deprivation often results in depression-like symptoms. These effects can occur as a function of accumulated sleep debt or in response to more acute periods of sleep deprivation. It may surprise you to know that sleep deprivation is associated with obesity, increased blood pressure, increased levels of stress hormones, and reduced immune functioning (Banks & Dinges, 2007). A sleep-deprived individual generally will fall asleep more quickly than if she were not sleep deprived. Some sleep-deprived individuals have difficulty staying awake when they stop moving (for example, sitting and watching television or driving a car). That is why individuals suffering from sleep deprivation can also put themselves and others at risk when they get behind the wheel of a car or work with dangerous machinery. Some research suggests that sleep deprivation affects cognitive and motor function as much as, if not more than, alcohol intoxication (Williamson & Feyer, 2000). Research shows that the most severe effects of sleep deprivation occur when a person stays awake for more than 24 hours (Killgore & Weber, 2014; Killgore et al., 2007), or following repeated nights with fewer than four hours in bed (Wickens, Hutchins, Lauk, & Seebook, 2015). For example, irritability, distractibility, and impairments in cognitive and moral judgment can occur with fewer than four hours of sleep. If someone stays awake for 48 consecutive hours, they could start to hallucinate.

An illustration of the top half of a human body identifies the locations in the body that correspond with various adverse effects of sleep deprivation. The brain is labeled with “Irritability,” “Cognitive impairment,” “Memory lapses or loss,” “Impaired moral judgment,” “Severe yawning,” “Hallucinations,” and “Symptoms similar to ADHD.” The heart is labeled with “Risk of heart disease.” The muscles are labeled with “Increased reaction time,” “Decreased accuracy,” “Tremors,” and “Aches.” There is an organ near the stomach labeled “Risk of diabetes Type 2.” Various parts of the neck, arm, and underarm are labeled “Impaired immune system.” Other risks include “Growth suppression,” “Risk of obesity,” and “Decreased temperature.”
Figure 4.5 This figure illustrates some of the negative consequences of sleep deprivation. While cognitive deficits may be the most obvious, many body systems are negatively impacted by lack of sleep. (credit: modification of work by Mikael Häggström)

The amount of sleep we get varies across the lifespan. When we are very young, we spend up to 16 hours a day sleeping. As we grow older, we sleep less. In fact, a meta-analysis, which is a study that combines the results of many related studies, conducted within the last decade indicates that by the time we are 65 years old, we average fewer than 7 hours of sleep per day (Ohayon, Carskadon, Guilleminault, & Vitiello, 2004).

Learning Objectives

By the end of this section, you will be able to:

  • Describe areas of the brain involved in sleep
  • Understand hormone secretions associated with sleep
  • Describe several theories aimed at explaining the function of sleep

We spend approximately one-third of our lives sleeping. Given that the average life expectancy for U.S. citizens falls between 73 and 79 years (Singh & Siahpush, 2006), we can expect to spend approximately 25 years of our lives sleeping. Some animals never sleep (e.g., some fish and amphibian species); other animals sleep very little without apparent negative consequences (e.g., giraffes); yet others (e.g., rats) die after two weeks of sleep deprivation (Siegel, 2008). Why do we devote so much time to sleeping? Is it absolutely essential that we sleep? This section will consider these questions and explore various explanations for why we sleep.
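As a quick back-of-the-envelope check of that estimate (taking 76 years simply as the midpoint of the 73–79 range cited above):

$$\tfrac{1}{3} \times 76\ \text{years} \approx 25\ \text{years}$$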

What is Sleep?

You have read that sleep is distinguished by low levels of physical activity and reduced sensory awareness. As discussed by Siegel (2008), a definition of sleep must also include mention of the interplay of the circadian and homeostatic mechanisms that regulate sleep. Homeostatic regulation of sleep is evidenced by sleep rebound following sleep deprivation. Sleep rebound refers to the fact that a sleep-deprived individual will fall asleep more quickly during subsequent opportunities for sleep. Sleep is characterized by certain patterns of activity of the brain that can be visualized using electroencephalography (EEG), and different phases of sleep can be differentiated using EEG as well.

Sleep-wake cycles seem to be controlled by multiple brain areas acting in conjunction with one another. Some of these areas include the thalamus, the hypothalamus, and the pons. As already mentioned, the hypothalamus contains the SCN—the biological clock of the body—in addition to other nuclei that, in conjunction with the thalamus, regulate slow-wave sleep. The pons is important for regulating rapid eye movement (REM) sleep (National Institutes of Health, n.d.).

Sleep is also associated with the secretion and regulation of a number of hormones from several endocrine glands, including melatonin, follicle-stimulating hormone (FSH), luteinizing hormone (LH), and growth hormone (National Institutes of Health, n.d.). You have read that the pineal gland releases melatonin during sleep (Figure 4.6). Melatonin is thought to be involved in the regulation of various biological rhythms and the immune system (Hardeland et al., 2006). During sleep, the pituitary gland secretes both FSH and LH, which are important in regulating the reproductive system (Christensen et al., 2012; Sofikitis et al., 2008). The pituitary gland also secretes growth hormone during sleep, which plays a role in physical growth and maturation as well as other metabolic processes (Bartke, Sun, & Longo, 2013).

An illustration of a brain shows the locations of the hypothalamus, thalamus, pons, suprachiasmatic nucleus, pituitary gland, and pineal gland.
Figure 4.6 The pineal and pituitary glands secrete a number of hormones during sleep.

Why Do We Sleep?

Given the central role that sleep plays in our lives and the number of adverse consequences that have been associated with sleep deprivation, one would think that we would have a clear understanding of why it is that we sleep. Unfortunately, this is not the case; however, several hypotheses have been proposed to explain the function of sleep.

Adaptive Function of Sleep

One popular hypothesis of sleep incorporates the perspective of evolutionary psychology. Evolutionary psychology is a discipline that studies how universal patterns of behavior and cognitive processes have evolved over time as a result of natural selection. Variations and adaptations in cognition and behavior make individuals more or less successful in reproducing and passing their genes to their offspring. One hypothesis from this perspective might argue that sleep is essential to restore resources that are expended during the day. Just as bears hibernate in the winter when resources are scarce, perhaps people sleep at night to reduce their energy expenditures. While this is an intuitive explanation of sleep, there is little research that supports this explanation. In fact, it has been suggested that there is no reason to think that energetic demands could not be addressed with periods of rest and inactivity (Frank, 2006; Rial et al., 2007), and some research has actually found a negative correlation between energetic demands and the amount of time spent sleeping (Capellini, Barton, McNamara, Preston, & Nunn, 2008).

Another evolutionary hypothesis of sleep holds that our sleep patterns evolved as an adaptive response to predatory risks, which increase in darkness. Thus we sleep in safe areas to reduce the chance of harm. Again, this is an intuitive and appealing explanation for why we sleep. Perhaps our ancestors spent extended periods of time asleep to reduce attention to themselves from potential predators. Comparative research indicates, however, that the relationship that exists between predatory risk and sleep is very complex and equivocal. Some research suggests that species that face higher predatory risks sleep fewer hours than other species (Capellini et al., 2008), while other researchers suggest there is no relationship between the amount of time a given species spends in deep sleep and its predation risk (Lesku, Roth, Amlaner, & Lima, 2006).

It is quite possible that sleep serves no single universally adaptive function, and different species have evolved different patterns of sleep in response to their unique evolutionary pressures. While we have discussed the negative outcomes associated with sleep deprivation, it should be pointed out that there are many benefits that are associated with adequate amounts of sleep. A few such benefits listed by the National Sleep Foundation (n.d.) include maintaining a healthy weight, lowering stress levels, improving mood, and increasing motor coordination, as well as a number of benefits related to cognition and memory formation.

Cognitive Function of Sleep

Another theory regarding why we sleep involves sleep’s importance for cognitive function and memory formation (Rattenborg, Lesku, Martinez-Gonzalez, & Lima, 2007). Indeed, we know sleep deprivation results in disruptions in cognition and memory deficits (Brown, 2012), leading to impairments in our abilities to maintain attention, make decisions, and recall long-term memories. Moreover, these impairments become more severe as the amount of sleep deprivation increases (Alhola & Polo-Kantola, 2007). Furthermore, slow-wave sleep after learning a new task can improve resultant performance on that task (Huber, Ghilardi, Massimini, & Tononi, 2004) and seems essential for effective memory formation (Stickgold, 2005). Understanding the impact of sleep on cognitive function should help you understand that cramming all night for a test may not be effective and can even prove counterproductive.

Learning Objectives

By the end of this section, you will be able to:

  • Differentiate between REM and non-REM sleep
  • Describe the differences between the three stages of non-REM sleep
  • Understand the role that REM and non-REM sleep play in learning and memory

Sleep is not a uniform state of being. Instead, sleep is composed of several different stages that can be differentiated from one another by the patterns of brain wave activity that occur during each stage. These changes in brain wave activity can be visualized using EEG and are distinguished from one another by both the frequency and amplitude of brain waves (Figure 4.7). Sleep can be divided into two different general phases: REM sleep and non-REM (NREM) sleep. Rapid eye movement (REM) sleep is characterized by darting movements of the eyes under closed eyelids. Brain waves during REM sleep appear very similar to brain waves during wakefulness. In contrast, non-REM (NREM) sleep is subdivided into three stages distinguished from each other and from wakefulness by characteristic patterns of brain waves. The first three stages of sleep are NREM sleep, while the fourth and final stage of sleep is REM sleep. In this section, we will discuss each of these stages of sleep and their associated patterns of brain wave activity.

NREM Stages of Sleep

The first stage of NREM sleep is known as stage 1 sleep. Stage 1 sleep is a transitional phase that occurs between wakefulness and sleep, the period during which we drift off to sleep. During this time, there is a slowdown in both the rates of respiration and heartbeat. In addition, stage 1 sleep involves a marked decrease in both overall muscle tension and core body temperature.

In terms of brain wave activity, stage 1 sleep is associated with both alpha and theta waves. The early portion of stage 1 sleep produces alpha waves, which are relatively low frequency (8–13 Hz), high amplitude patterns of electrical activity (waves) that become synchronized (Figure 4.8). This pattern of brain wave activity resembles that of someone who is very relaxed, yet awake. As an individual continues through stage 1 sleep, there is an increase in theta wave activity. Theta waves are even lower frequency (4–7 Hz), higher amplitude brain waves than alpha waves. It is relatively easy to wake someone from stage 1 sleep; in fact, people often report that they have not been asleep if they are awoken during stage 1 sleep.
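To make the frequency comparison concrete, recall that a wave’s period T (the duration of one full cycle) is the reciprocal of its frequency f. Taking representative values from the ranges just given (these particular values are chosen only for illustration):

$$T = \frac{1}{f}: \qquad T_{\text{alpha}} = \frac{1}{10\ \text{Hz}} = 0.1\ \text{s}, \qquad T_{\text{theta}} = \frac{1}{5\ \text{Hz}} = 0.2\ \text{s}$$

Each theta cycle therefore lasts roughly twice as long as an alpha cycle, which is what “lower frequency” means when reading an EEG trace.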

A graph has a y-axis labeled “EEG” and an x-axis labeled “time (seconds).” Plotted along the y-axis and moving upward are the stages of sleep. First is REM, followed by Stage 3 NREM Delta, Stage 2 NREM Theta (sleep spindles; K-complexes), Stage 1 NREM Alpha, and Awake. Charted on the x-axis is time in seconds from 2–20 in 2-second intervals. Each sleep stage has associated wavelengths of varying amplitude and frequency. Relative to the others, “awake” has a very close wavelength and a medium amplitude. Stage 1 is characterized by a generally uniform wavelength and a relatively low amplitude which doubles and quickly reverts to normal every 2 seconds. Stage 2 shows a wavelength similar to stage 1. It introduces the K-complex from seconds 10 through 12, which is a short burst of doubled or tripled amplitude and decreased wavelength. Stage 3 has a more uniform wave with gradually increasing amplitude. Finally, REM sleep looks much like stage 2 without the K-complex.
Figure 4.8 Brainwave activity changes dramatically across the different stages of sleep.

As we move into stage 2 sleep, the body goes into a state of deep relaxation. Theta waves still dominate the activity of the brain, but they are interrupted by brief bursts of activity known as sleep spindles (Figure 4.9). A sleep spindle is a rapid burst of higher frequency brain waves that may be important for learning and memory (Fogel & Smith, 2011; Poe, Walsh, & Bjorness, 2010). In addition, the appearance of K-complexes is often associated with stage 2 sleep. A K-complex is a very high amplitude pattern of brain activity that may in some cases occur in response to environmental stimuli. Thus, K-complexes might serve as a bridge to higher levels of arousal in response to what is going on in our environments (Halász, 1993; Steriade & Amzica, 1998).

A graph has an x-axis labeled “time” and a y-axis labeled “voltage.” A line illustrates brainwaves, with two areas labeled “sleep spindle” and “K-complex.” The area labeled “sleep spindle” has decreased wavelength and moderately increased amplitude, while the area labeled “K-complex” has significantly higher amplitude and longer wavelength.
Figure 4.9 Stage 2 sleep is characterized by the appearance of both sleep spindles and K-complexes.

Stage 3 is often referred to as deep sleep or slow-wave sleep because this stage is characterized by low frequency (less than 3 Hz), high amplitude delta waves (Figure 4.10). During this time, an individual’s heart rate and respiration slow dramatically. It is much more difficult to awaken someone from sleep during stage 3 than during earlier stages. Interestingly, individuals who have increased levels of alpha brain wave activity (more often associated with wakefulness and transition into stage 1 sleep) during stage 3 often report that they do not feel refreshed upon waking, regardless of how long they slept (Stone, Taylor, McCrae, Kalsekar, & Lichstein, 2008).

Polysomnograph (a) shows the pattern of delta waves, which are low frequency and high amplitude. Delta waves are found mostly in stage 3 of sleep. Chart (b) shows brainwaves at various stages of sleep, with stage 3 highlighted.
Figure 4.10 (a) Delta waves, which are low frequency and high amplitude, characterize (b) slow-wave stage 3 sleep.

REM Sleep

As mentioned earlier, REM sleep is marked by rapid movements of the eyes. The brain waves associated with this stage of sleep are very similar to those observed when a person is awake, as shown in Figure 4.11, and this is the period of sleep in which dreaming occurs. It is also associated with paralysis of muscle systems in the body with the exception of those that make circulation and respiration possible. Therefore, no movement of voluntary muscles occurs during REM sleep in a normal individual; REM sleep is often referred to as paradoxical sleep because of this combination of high brain activity and lack of muscle tone. Like NREM sleep, REM has been implicated in various aspects of learning and memory (Wagner, Gais, & Born, 2001; Siegel, 2001).

Chart (a) is a polysomnograph with the period of rapid eye movement (REM) highlighted. Chart (b) shows brainwaves at various stages of sleep, with the “awake” stage highlighted to show its similarity to the wave pattern of “REM” in chart (a).
Figure 4.11 (a) A period of rapid eye movement is marked by the short red line segment. The brain waves associated with REM sleep, outlined in the red box in (a), look very similar to those seen (b) during wakefulness.

If people are deprived of REM sleep and then allowed to sleep without disturbance, they will spend more time in REM sleep in what would appear to be an effort to recoup the lost time in REM. This is known as the REM rebound, and it suggests that REM sleep is also homeostatically regulated. Aside from the role that REM sleep may play in processes related to learning and memory, REM sleep may also be involved in emotional processing and regulation. In such instances, REM rebound may actually represent an adaptive response to stress in non-depressed individuals by suppressing the emotional salience of aversive events that occurred in wakefulness (Suchecki, Tiba, & Machado, 2012). Sleep deprivation, in general, is associated with a number of negative consequences (Brown, 2012).

The hypnogram below (Figure 4.12) shows a person’s passage through the stages of sleep.

This is a hypnogram showing the transitions of the sleep cycle during a typical eight hour period of sleep. During the first hour, the person goes through stages 1 and 2 and ends at 3. In the second hour, sleep oscillates in stage 3 before attaining a 30-minute period of REM sleep. The third hour follows the same pattern as the second, but ends with a brief awake period. The fourth hour follows a similar pattern as the third, with a slightly longer REM stage. In the fifth hour, stage 3 is no longer reached. The sleep stages are fluctuating from 2, to 1, to REM, to awake, and then they repeat with shortening intervals until the end of the eighth hour when the person awakens.
Figure 4.12 A hypnogram is a diagram of the stages of sleep as they occur during a period of sleep. This hypnogram illustrates how an individual moves through the various stages of sleep.

Dreams and their associated meanings vary across different cultures and periods of time. By the late 19th century, Austrian psychiatrist Sigmund Freud had become convinced that dreams represented an opportunity to gain access to the unconscious. By analyzing dreams, Freud thought people could increase self-awareness and gain valuable insight to help them deal with the problems they faced in their lives. Freud made distinctions between the manifest content and the latent content of dreams. Manifest content is the actual content, or storyline, of a dream. Latent content, on the other hand, refers to the hidden meaning of a dream. For instance, if a woman dreams about being chased by a snake, Freud might have argued that this represents the woman’s fear of sexual intimacy, with the snake serving as a symbol of a man’s penis.

Freud was not the only theorist to focus on the content of dreams. The 20th-century Swiss psychiatrist Carl Jung believed that dreams allowed us to tap into the collective unconscious. The collective unconscious, as described by Jung, is a theoretical repository of information he believed to be shared by everyone. According to Jung, certain symbols in dreams reflected universal archetypes with meanings that are similar for all people regardless of culture or location.

The sleep and dreaming researcher Rosalind Cartwright, however, believes that dreams simply reflect life events that are important to the dreamer. Unlike Freud and Jung, Cartwright’s ideas about dreaming have found empirical support. For example, she and her colleagues published a study in which women going through a divorce were asked several times over a five-month period to report the degree to which their former spouses were on their minds. These same women were awakened during REM sleep in order to provide a detailed account of their dream content. There was a significant positive correlation between the degree to which women thought about their former spouses during waking hours and the number of times their former spouses appeared as characters in their dreams (Cartwright, Agargun, Kirkby, & Friedman, 2006). Recent research (Horikawa, Tamaki, Miyawaki, & Kamitani, 2013) has uncovered new techniques by which researchers may effectively detect and classify the visual images that occur during dreaming by using fMRI for neural measurement of brain activity patterns, opening the way for additional research in this area.

Alan Hobson, a neuroscientist, is credited with developing the activation-synthesis theory of dreaming. Early versions of this theory proposed that dreams were not the meaning-filled representations of angst proposed by Freud and others, but were rather the result of our brain attempting to make sense of (“synthesize”) the neural activity (“activation”) that was happening during REM sleep. Recent adaptations (e.g., Hobson, 2002) continue to update the theory based on accumulating evidence. For example, Hobson (2009) suggests that dreaming may represent a state of protoconsciousness. In other words, dreaming involves constructing a virtual reality in our heads that we might use to help us during wakefulness. Among a variety of neurobiological evidence, Hobson cites research on lucid dreams as an opportunity to better understand dreaming in general. Lucid dreams are dreams in which certain aspects of wakefulness are maintained during a dream state. In a lucid dream, a person becomes aware of the fact that they are dreaming, and as such, they can control the dream’s content (LaBerge, 1990).

Learning Objectives

By the end of this section, you will be able to:

  • Describe the symptoms and treatments of insomnia
  • Recognize the symptoms of several parasomnias
  • Describe the symptoms and treatments for sleep apnea
  • Recognize risk factors associated with sudden infant death syndrome (SIDS) and steps to prevent it
  • Describe the symptoms and treatments for narcolepsy

Many people experience disturbances in their sleep at some point in their lives. Depending on the population and the sleep disorder being studied, between 30% and 50% of the population suffers from a sleep disorder at some point in their lives (Bixler, Kales, Soldatos, Kales, & Healey, 1979; Hossain & Shapiro, 2002; Ohayon, 1997, 2002; Ohayon & Roth, 2002). This section will describe several sleep disorders as well as some of their treatment options.

Insomnia

Insomnia, a consistent difficulty in falling or staying asleep, is the most common of the sleep disorders. Individuals with insomnia often experience long delays between the times that they go to bed and actually fall asleep. In addition, these individuals may wake up several times during the night only to find that they have difficulty getting back to sleep. As mentioned earlier, one of the criteria for insomnia involves experiencing these symptoms for at least three nights a week for at least one month’s time (Roth, 2007).

It is not uncommon for people suffering from insomnia to experience increased levels of anxiety about their inability to fall asleep. This becomes a self-perpetuating cycle because increased anxiety leads to increased arousal, and higher levels of arousal make the prospect of falling asleep even more unlikely. Chronic insomnia is almost always associated with feeling overtired and may be associated with symptoms of depression.

There may be many factors that contribute to insomnia, including age, drug use, exercise, mental status, and bedtime routines. Not surprisingly, insomnia treatment may take one of several different approaches. People who suffer from insomnia might limit their use of stimulant drugs (such as caffeine) or increase their amount of physical exercise during the day. Some people might turn to over-the-counter (OTC) or prescribed sleep medications to help them sleep, but this should be done sparingly because many sleep medications result in dependence and alter the nature of the sleep cycle, and they can increase insomnia over time. Those who continue to have insomnia, particularly if it affects their quality of life, should seek professional treatment.

Some forms of psychotherapy, such as cognitive-behavioral therapy, can help sufferers of insomnia. Cognitive-behavioral therapy is a type of psychotherapy that focuses on cognitive processes and problem behaviors. The treatment of insomnia likely would include stress management techniques and changes in problematic behaviors that could contribute to insomnia (e.g., spending more waking time in bed). Cognitive-behavioral therapy has been demonstrated to be quite effective in treating insomnia (Savard, Simard, Ivers, & Morin, 2005; Williams, Roth, Vatthauer, & McCrae, 2013).

EVERYDAY CONNECTION: Solutions to Support Healthy Sleep

Has something like this ever happened to you? My college housemate got so stressed out during finals sophomore year that he drank almost a whole bottle of Nyquil to try to fall asleep. When he told me, I made him go see the college therapist.

Many college students struggle to get the recommended 7–9 hours of sleep each night. However, for some, it’s not because of all-night partying or late-night study sessions. It’s simply that they feel so overwhelmed and stressed that they cannot fall asleep or stay asleep. One or two nights of sleep difficulty is not unusual, but if you experience anything more than that, you should seek a doctor’s advice.

Here are some tips to maintain healthy sleep:

  • Stick to a sleep schedule, even on the weekends. Try going to bed and waking up at the same time every day to keep your biological clock in sync so your body gets in the habit of sleeping every night.
  • Avoid anything stimulating for an hour before bed. That includes exercise and bright light from devices.
  • Exercise daily.
  • Avoid naps.
  • Keep your bedroom temperature between 60 and 67 degrees Fahrenheit. People sleep better in cooler temperatures.
  • Avoid alcohol, cigarettes, caffeine, and heavy meals before bed. It may feel like alcohol helps you sleep, but it actually disrupts REM sleep and leads to frequent awakenings. Heavy meals may make you sleepy, but they can also lead to frequent awakenings due to gastric distress.
  • If you cannot fall asleep, leave your bed and do something else until you feel tired again. Train your body to associate the bed with sleeping rather than other activities like studying, eating, or watching television shows.

Parasomnias

A parasomnia is one of a group of sleep disorders in which unwanted, disruptive motor activity and/or experiences during sleep play a role. Parasomnias can occur in either REM or NREM phases of sleep. Sleepwalking, restless leg syndrome, and night terrors are all examples of parasomnias (Mahowald & Schenck, 2000).

Sleepwalking

In sleepwalking or somnambulism, the sleeper engages in relatively complex behaviors ranging from wandering about to driving an automobile. During periods of sleepwalking, sleepers often have their eyes open, but they are not responsive to attempts to communicate with them. Sleepwalking most often occurs during slow-wave sleep, but it can occur at any time during a sleep period in some affected individuals (Mahowald & Schenck, 2000).

Historically, somnambulism has been treated with a variety of pharmacotherapies ranging from benzodiazepines to antidepressants. However, the success rate of such treatments is questionable. Guilleminault et al. (2005) found that sleepwalking was not alleviated with the use of benzodiazepines. However, all of their somnambulistic patients who also suffered from sleep-related breathing problems showed a marked decrease in sleepwalking when their breathing problems were effectively treated.

REM Sleep Behavior Disorder (RBD)

REM sleep behavior disorder (RBD) occurs when the muscle paralysis associated with the REM sleep phase does not occur. Individuals who suffer from RBD have high levels of physical activity during REM sleep, especially during disturbing dreams. These behaviors vary widely, but they can include kicking, punching, scratching, yelling, and behaving like an animal that has been frightened or attacked. People who suffer from this disorder can injure themselves or their sleeping partners when engaging in these behaviors. Furthermore, these types of behaviors ultimately disrupt sleep, although affected individuals have no memories that these behaviors have occurred (Arnulf, 2012).

This disorder is associated with a number of neurodegenerative diseases such as Parkinson’s disease. In fact, this relationship is so robust that some view the presence of RBD as a potential aid in the diagnosis and treatment of a number of neurodegenerative diseases (Ferini-Strambi, 2011). Clonazepam, an anti-anxiety medication with sedative properties, is most often used to treat RBD. It is administered alone or in conjunction with doses of melatonin (the hormone secreted by the pineal gland). As part of treatment, the sleeping environment is often modified to make it a safer place for those suffering from RBD (Zangini, Calandra-Buonaura, Grimaldi, & Cortelli, 2011).

Other Parasomnias

A person with restless leg syndrome has uncomfortable sensations in the legs during periods of inactivity or when trying to fall asleep. This discomfort is relieved by deliberately moving the legs, which, not surprisingly, contributes to difficulty in falling or staying asleep. Restless leg syndrome is quite common and has been associated with a number of other medical diagnoses, such as chronic kidney disease and diabetes (Mahowald & Schenck, 2000). There are a variety of drugs that treat restless leg syndrome: benzodiazepines, opiates, and anticonvulsants (Restless Legs Syndrome Foundation, n.d.).

Night terrors result in a sense of panic in the sufferer and are often accompanied by screams and attempts to escape from the immediate environment (Mahowald & Schenck, 2000). Although individuals suffering from night terrors appear to be awake, they generally have no memories of the events that occurred, and attempts to console them are ineffective. Typically, individuals suffering from night terrors will fall back asleep again within a short time. Night terrors apparently occur during the NREM phase of sleep (Provini, Tinuper, Bisulli, & Lugaresi, 2011). Generally, treatment for night terrors is unnecessary unless there is some underlying medical or psychological condition that is contributing to the night terrors (Mayo Clinic, n.d.).

Sleep Apnea

Sleep apnea is defined by episodes during which a sleeper’s breathing stops. These episodes can last 10–20 seconds or longer and often are associated with brief periods of arousal. While individuals suffering from sleep apnea may not be aware of these repeated disruptions in sleep, they do experience increased levels of fatigue. Many individuals diagnosed with sleep apnea first seek treatment because their sleeping partners indicate that they snore loudly and/or stop breathing for extended periods of time while sleeping (Henry & Rosenthal, 2013). Sleep apnea is much more common in overweight people and is often associated with loud snoring. Surprisingly, sleep apnea may exacerbate cardiovascular disease (Sánchez-de-la-Torre, Campos-Rodriguez, & Barbé, 2012). While sleep apnea is less common in thin people, anyone, regardless of their weight, who snores loudly or gasps for air while sleeping, should be checked for sleep apnea.

While people are often unaware of their sleep apnea, they are keenly aware of some of the adverse consequences of insufficient sleep. Consider a patient who believed that as a result of his sleep apnea he “had three car accidents in six weeks. They were ALL my fault. Two of them I didn’t even know I was involved in until afterward” (Henry & Rosenthal, 2013, p. 52). It is not uncommon for people suffering from undiagnosed or untreated sleep apnea to fear that their careers will be affected by the lack of sleep, illustrated by this statement from another patient, “I’m in a job where there’s a premium on being mentally alert. I was really sleepy… and having trouble concentrating…. It was getting to the point where it was kind of scary” (Henry & Rosenthal, 2013, p. 52).

There are two types of sleep apnea: obstructive sleep apnea and central sleep apnea. Obstructive sleep apnea occurs when an individual’s airway becomes blocked during sleep, and air is prevented from entering the lungs. In central sleep apnea, disruptions in the signals sent from the brain that regulate breathing cause periods of interrupted breathing (White, 2005).

One of the most common treatments for sleep apnea involves the use of a special device during sleep. A continuous positive airway pressure (CPAP) device includes a mask that fits over the sleeper’s nose and mouth, connected to a pump that forces air into the person’s airways to keep them open, as shown in Figure 4.13. Some newer CPAP masks are smaller and cover only the nose. This treatment option has proven to be effective for people suffering from mild to severe cases of sleep apnea (McDaid et al., 2009). However, because consistent compliance by users of CPAP devices is a problem, alternative treatment options are being explored. Recently, a new EPAP (expiratory positive air pressure) device has shown promise in double-blind trials as one such alternative (Berry, Kryger, & Massie, 2011).

Photograph A shows a CPAP device. Photograph B shows a clear full face CPAP mask attached to a mannequin's head with straps.
Figure 4.13 (a) A typical CPAP device used in the treatment of sleep apnea is (b) affixed to the head with straps and a mask that covers the nose and mouth.

SIDS

In sudden infant death syndrome (SIDS), an infant stops breathing during sleep and dies. Infants younger than 12 months appear to be at the highest risk for SIDS, and boys have a greater risk than girls. A number of risk factors have been associated with SIDS, including premature birth, smoking within the home, and hyperthermia. There may also be differences in both brain structure and function in infants who die from SIDS (Berkowitz, 2012; Mage & Donner, 2006; Thach, 2005).

The substantial amount of research on SIDS has led to a number of recommendations to parents to protect their children (Figure 4.14). For one, research suggests that infants should be placed on their backs when put down to sleep, and their cribs should not contain any items which pose suffocation threats, such as blankets, pillows, or padded crib bumpers (cushions that cover the bars of a crib). Infants should not have caps placed on their heads when put down to sleep in order to prevent overheating, and people in the child’s household should abstain from smoking in the home. Recommendations like these have helped to decrease the number of infant deaths from SIDS in recent years (Mitchell, 2009; Task Force on Sudden Infant Death Syndrome, 2011).

The “Safe to Sleep” campaign logo shows a baby sleeping and the words “safe to sleep.”
Figure 4.14 The Safe to Sleep campaign educates the public about how to minimize risk factors associated with SIDS. This campaign is sponsored in part by the National Institute of Child Health and Human Development.

Narcolepsy

Unlike those with the other sleep disorders described in this section, a person with narcolepsy cannot resist falling asleep at inopportune times. These sleep episodes are often associated with cataplexy, which is a lack of muscle tone or muscle weakness, and in some cases involves complete paralysis of the voluntary muscles. This is similar to the kind of paralysis experienced by healthy individuals during REM sleep (Burgess & Scammell, 2012; Hishikawa & Shimizu, 1995; Luppi et al., 2011). Narcoleptic episodes take on other features of REM sleep as well. For example, around one-third of individuals diagnosed with narcolepsy experience vivid, dream-like hallucinations during narcoleptic attacks (Chokroverty, 2010).

Surprisingly, narcoleptic episodes are often triggered by states of heightened arousal or stress. The typical episode can last from a minute or two to half an hour. Once awakened from a narcoleptic attack, people report that they feel refreshed (Chokroverty, 2010). Obviously, regular narcoleptic episodes could interfere with the ability to perform one’s job or complete schoolwork, and in some situations, narcolepsy can result in significant harm and injury (e.g., driving a car or operating machinery or other potentially dangerous equipment).

Generally, narcolepsy is treated using psychomotor stimulant drugs, such as amphetamines (Mignot, 2012). These drugs promote increased levels of neural activity. Narcolepsy is associated with reduced levels of the signaling molecule hypocretin in some areas of the brain (De la Herrán-Arita & Drucker-Colín, 2012; Han, 2012), and the traditional stimulant drugs do not have direct effects on this system. Therefore, it is quite likely that new medications that are developed to treat narcolepsy will be designed to target the hypocretin system.

There is a tremendous amount of variability among sufferers, both in terms of how symptoms of narcolepsy manifest and the effectiveness of currently available treatment options. This is illustrated by McCarty’s (2010) case study of a 50-year-old woman who sought help for the excessive sleepiness during normal waking hours that she had experienced for several years. She indicated that she had fallen asleep at inappropriate or dangerous times, including while eating, while socializing with friends, and while driving her car. During periods of emotional arousal, the woman complained that she felt some weakness on the right side of her body. Although she did not experience any dream-like hallucinations, she was diagnosed with narcolepsy as a result of sleep testing. In her case, the fact that her cataplexy was confined to the right side of her body was quite unusual. Early attempts to treat her condition with a stimulant drug alone were unsuccessful. However, when a stimulant drug was used in conjunction with a popular antidepressant, her condition improved dramatically.

Learning Objectives

By the end of this section, you will be able to:

  • Describe the diagnostic criteria for substance use disorders
  • Identify the neurotransmitter systems impacted by various categories of drugs
  • Describe how different categories of drugs affect behavior and experience

While we all experience altered states of consciousness in the form of sleep on a regular basis, some people use drugs and other substances that result in altered states of consciousness as well. This section will present information relating to the use of various psychoactive drugs and problems associated with such use. This will be followed by brief descriptions of the effects of some of the more well-known drugs commonly used today.

Substance Use Disorders

The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), is used by clinicians to diagnose individuals suffering from various psychological disorders. Drug use disorders are addictive disorders, and the criteria for specific substance (drug) use disorders are described in DSM-5. A person who has a substance use disorder often uses more of the substance than they originally intended to and continues to use that substance despite experiencing significant adverse consequences. In individuals diagnosed with a substance use disorder, there is a compulsive pattern of drug use that is often associated with both physical and psychological dependence.

Physical dependence involves changes in normal bodily functions—the user will experience withdrawal from the drug upon cessation of use. In contrast, a person who has psychological dependence has an emotional, rather than physical, need for the drug and may use the drug to relieve psychological distress. Tolerance is linked to physiological dependence, and it occurs when a person requires more and more drug to achieve effects previously experienced at lower doses. Tolerance can cause the user to increase the amount of drug used to a dangerous level—even to the point of overdose and death.

Drug withdrawal includes a variety of negative symptoms experienced when drug use is discontinued. These symptoms usually are the opposite of the effects of the drug. For example, withdrawal from sedative drugs often produces unpleasant arousal and agitation. In addition to withdrawal, many individuals who are diagnosed with substance use disorders will also develop tolerance to these substances. Psychological dependence, or drug craving, is a recent addition to the diagnostic criteria for substance use disorder in DSM-5. This is an important factor because we can develop tolerance and experience withdrawal from any number of drugs that we do not abuse. In other words, physical dependence in and of itself is of limited utility in determining whether or not someone has a substance use disorder.

Drug Categories

The effects of all psychoactive drugs occur through their interactions with our endogenous neurotransmitter systems. Many of these drugs, and their relationships, are shown in Table 4.2. As you have learned, drugs can act as agonists or antagonists of a given neurotransmitter system. An agonist facilitates the activity of a neurotransmitter system, while an antagonist impedes it.

Drugs and Their Effects
Class of Drug | Examples | Effects on the Body | Effects When Used | Psychologically Addicting?
Stimulants | Cocaine, amphetamines (including some ADHD medications such as Adderall), methamphetamines, MDMA (“Ecstasy” or “Molly”) | Increased heart rate, blood pressure, body temperature | Increased alertness, mild euphoria, decreased appetite in low doses. High doses increase agitation and paranoia and can cause hallucinations. Some can cause heightened sensitivity to physical stimuli. High doses of MDMA can cause brain toxicity and death. | Yes
Sedative-Hypnotics (“Depressants”) | Alcohol, barbiturates (e.g., secobarbital, pentobarbital), benzodiazepines (e.g., Xanax) | Decreased heart rate, blood pressure | Low doses increase relaxation, decrease inhibitions. High doses can induce sleep, cause motor disturbance, memory loss, decreased respiratory function, and death. | Yes
Opiates | Opium, heroin, fentanyl, morphine, oxycodone, Vicodin, methadone, and other prescription pain relievers | Decreased pain, pupil dilation, decreased gut motility, decreased respiratory function | Pain relief, euphoria, sleepiness. High doses can cause death due to respiratory depression. | Yes
Hallucinogens | Marijuana, LSD, peyote, mescaline, DMT, dissociative anesthetics including ketamine and PCP | Increased heart rate and blood pressure that may dissipate over time | Mild to intense perceptual changes with high variability in effects based on strain, method of ingestion, and individual differences | Yes
Table 4.2

Alcohol and Other Depressants

Ethanol, which we commonly refer to as alcohol, is in a class of psychoactive drugs known as depressants (Figure 4.15). A depressant is a drug that tends to suppress central nervous system activity. Other depressants include barbiturates and benzodiazepines. These drugs share in common their ability to serve as agonists of the gamma-aminobutyric acid (GABA) neurotransmitter system. Because GABA has a quieting effect on the brain, GABA agonists also have a quieting effect; these types of drugs are often prescribed to treat both anxiety and insomnia.

An illustration of a GABA-gated chloride channel in a cell membrane shows receptor sites for barbiturate, benzodiazepine, GABA, alcohol, and neurosteroids, as well as three negatively-charged chloride ions passing through the channel. Each drug type has a specific shape, such as triangular, rectangular or square, which corresponds to a similarly shaped receptor spot.
Figure 4.15 The GABA-gated chloride (Cl⁻) channel is embedded in the cell membrane of certain neurons. The channel has multiple receptor sites where alcohol, barbiturates, and benzodiazepines bind to exert their effects. The binding of these molecules opens the chloride channel, allowing negatively charged chloride ions (Cl⁻) into the neuron’s cell body. Changing its charge in a negative direction pushes the neuron away from firing; thus, activating a GABA neuron has a quieting effect on the brain.

Acute alcohol administration results in a variety of changes to consciousness. At rather low doses, alcohol use is associated with feelings of euphoria. As the dose increases, people report feeling sedated. Generally, alcohol is associated with slowed reaction time, reduced visual acuity, lowered levels of alertness, and reduced behavioral control. With excessive alcohol use, a person might experience a complete loss of consciousness and/or difficulty remembering events that occurred during a period of intoxication (McKim & Hancock, 2013). In addition, if a pregnant woman consumes alcohol, her infant may be born with a cluster of birth defects and symptoms collectively called fetal alcohol spectrum disorder (FASD) or fetal alcohol syndrome (FAS).

With repeated use of many central nervous system depressants, such as alcohol, a person becomes physically dependent upon the substance and will exhibit signs of both tolerance and withdrawal. Psychological dependence on these drugs is also possible. Therefore, the abuse potential of central nervous system depressants is relatively high.

Drug withdrawal is usually an aversive experience, and it can be a life-threatening process in individuals who have a long history of very high doses of alcohol and/or barbiturates. This is of such concern that people who are trying to overcome addiction to these substances should only do so under medical supervision.

Stimulants

Stimulants are drugs that tend to increase overall levels of neural activity. Many of these drugs act as agonists of the dopamine neurotransmitter system. Dopamine activity is often associated with reward and craving; therefore, drugs that affect dopamine neurotransmission often have abuse liability. Drugs in this category include cocaine, amphetamines (including methamphetamine), cathinones (i.e., bath salts), MDMA (ecstasy), nicotine, and caffeine.

Cocaine can be taken in multiple ways. While many users snort cocaine, intravenous injection and inhalation (smoking) are also common. The freebase version of cocaine, known as crack, is a potent, smokable version of the drug. Like many other stimulants, cocaine agonizes the dopamine neurotransmitter system by blocking the reuptake of dopamine in the neuronal synapse.

DIG DEEPER: Methamphetamine

Methamphetamine in its smokable form, often called “crystal meth” due to its resemblance to rock crystal formations, is highly addictive. The smokable form reaches the brain very quickly to produce an intense euphoria that dissipates almost as fast as it arrives, prompting users to continue taking the drug. Users often consume the drug every few hours across days-long binges called “runs,” in which the user forgoes food and sleep. In the wake of the opiate epidemic, many drug cartels in Mexico are shifting from producing heroin to producing highly potent but inexpensive forms of methamphetamine. The low cost, coupled with a lower risk of overdose than with opiate drugs, is making crystal meth a popular choice among drug users today (NIDA, 2019). Using crystal meth poses a number of serious long-term health issues, including dental problems (often called “meth mouth”), skin abrasions caused by excessive scratching, memory loss, sleep problems, violent behavior, paranoia, and hallucinations. Methamphetamine addiction produces an intense craving that is difficult to treat.

Amphetamines have a mechanism of action quite similar to cocaine in that they block the reuptake of dopamine in addition to stimulating its release (Figure 4.16). While amphetamines are often abused, they are also commonly prescribed to children diagnosed with attention deficit hyperactivity disorder (ADHD). It may seem counterintuitive that stimulant medications are prescribed to treat a disorder that involves hyperactivity, but the therapeutic effect comes from increases in neurotransmitter activity within certain areas of the brain associated with impulse control. These brain areas include the prefrontal cortex and basal ganglia.

An illustration of a presynaptic cell and a postsynaptic cell shows these cells’ interactions with cocaine and dopamine molecules. The presynaptic cell contains two cylinder-shaped channels, one on each side near where it faces the postsynaptic cell. The postsynaptic cell contains several receptors, side-by-side across the area that faces the presynaptic cell. In the space between the two cells, there are both cocaine and dopamine molecules. One of the cocaine molecules attaches to one of the presynaptic cell’s channels. This cocaine molecule is labeled “bound cocaine.” An X-shape is shown over the top of the bound cocaine and the channel to indicate that the cocaine does not enter the presynaptic cell. A dopamine molecule is shown inside of the presynaptic cell’s other channel. Arrows connect this dopamine molecule to several others inside of the presynaptic cell. More arrows connect to more dopamine molecules, tracing their paths from the channel into the presynaptic cell, and out into the space between the presynaptic cell and the postsynaptic cell. Arrows extend from two of the dopamine molecules in this in-between space to the postsynaptic cell’s receptors. Only the dopamine molecules are shown binding to the postsynaptic cell’s receptors.
Figure 4.16 As one of their mechanisms of action, cocaine and amphetamines block the reuptake of dopamine from the synapse into the presynaptic cell.

In recent years, methamphetamine (meth) use has become increasingly widespread. Methamphetamine is a type of amphetamine that can be made from ingredients that are readily available (e.g., medications containing pseudoephedrine, a compound found in many over-the-counter cold and flu remedies). Despite recent changes in laws designed to make obtaining pseudoephedrine more difficult, methamphetamine continues to be an easily accessible and relatively inexpensive drug option (Shukla, Crump, & Chrisco, 2012).

Stimulant users seek a euphoric high: feelings of intense elation and pleasure, especially among those users who take the drug via intravenous injection or smoking. MDMA (3,4-methylenedioxy-methamphetamine, commonly known as “ecstasy” or “Molly”) is a mild stimulant with perception-altering effects. It is typically consumed in pill form. Users experience increased energy, feelings of pleasure, and emotional warmth. Repeated use of these stimulants can have significant adverse consequences. Users can experience physical symptoms that include nausea, elevated blood pressure, and increased heart rate. In addition, these drugs can cause feelings of anxiety, hallucinations, and paranoia (Fiorentini et al., 2011). Normal brain functioning is altered after repeated use of these drugs. For example, repeated use can lead to overall depletion among the monoamine neurotransmitters (dopamine, norepinephrine, and serotonin). The depletion of these neurotransmitters can lead to mood dysphoria and cognitive problems, which in turn can lead people to use stimulants such as cocaine and amphetamines compulsively, in part to try to re-establish their physical and psychological pre-use baseline (Jayanthi & Ramamoorthy, 2005; Rothman, Blough, & Baumann, 2007).

Caffeine is another stimulant drug. While it is probably the most commonly used drug in the world, the potency of this particular drug pales in comparison to the other stimulant drugs described in this section. Generally, people use caffeine to maintain increased levels of alertness and arousal. Caffeine is found in many common medicines (such as weight loss drugs), beverages, foods, and even cosmetics (Herman & Herman, 2013). While caffeine may have some indirect effects on dopamine neurotransmission, its primary mechanism of action involves antagonizing adenosine activity (Porkka-Heiskanen, 2011). Adenosine is a neurotransmitter that promotes sleep. As an adenosine antagonist, caffeine blocks adenosine receptors, thus decreasing sleepiness and promoting wakefulness.

While caffeine is generally considered a relatively safe drug, high blood levels of caffeine can result in insomnia, agitation, muscle twitching, nausea, irregular heartbeat, and even death (Reissig, Strain, & Griffiths, 2009; Wolt, Ganetsky, & Babu, 2012). In 2012, Kromann and Nielsen reported on a case study of a 40-year-old woman who suffered significant ill effects from her use of caffeine. The woman used caffeine in the past to boost her mood and to provide energy, but over the course of several years, she increased her caffeine consumption to the point that she was consuming three liters of soda each day. Although she had been taking a prescription antidepressant, her symptoms of depression continued to worsen and she began to suffer physically, displaying significant warning signs of cardiovascular disease and diabetes. Upon admission to an outpatient clinic for treatment of mood disorders, she met all of the diagnostic criteria for substance dependence and was advised to dramatically limit her caffeine intake. Once she was able to limit her use to less than 12 ounces of soda a day, both her mental and physical health gradually improved. Despite the prevalence of caffeine use and the large number of people who confess to suffering from caffeine addiction, this was the first published description of soda dependence to appear in the scientific literature.

Nicotine is highly addictive, and the use of tobacco products is associated with increased risks of heart disease, stroke, and a variety of cancers. Nicotine exerts its effects through its interaction with acetylcholine receptors. Acetylcholine functions as a neurotransmitter in motor neurons. In the central nervous system, it plays a role in arousal and reward mechanisms. Nicotine is most commonly used in the form of tobacco products like cigarettes or chewing tobacco; therefore, there is a tremendous interest in developing effective smoking cessation techniques. To date, people have used a variety of nicotine replacement therapies in addition to various psychotherapeutic options in an attempt to discontinue their use of tobacco products. In general, smoking cessation programs may be effective in the short term, but it is unclear whether these effects persist (Cropley, Theadom, Pravettoni, & Webb, 2008; Levitt, Shaw, Wong, & Kaczorowski, 2007; Smedslund, Fisher, Boles, & Lichtenstein, 2004). Vaping as a means to deliver nicotine is becoming increasingly popular, especially among teens and young adults. Vaping uses battery-powered devices, sometimes called e-cigarettes, that deliver liquid nicotine and flavorings as a vapor. Originally reported as a safe alternative to the known cancer-causing agents found in cigarettes, vaping is now known to be very dangerous and has led to serious lung disease and death in users.

Opioids

An opioid is one of a category of drugs that includes heroin, morphine, methadone, and codeine. Opioids have analgesic properties; that is, they decrease pain. Humans have an endogenous opioid neurotransmitter system—the body makes small quantities of opioid compounds that bind to opioid receptors, reducing pain and producing euphoria. Thus, opioid drugs, which mimic this endogenous painkilling mechanism, have an extremely high potential for abuse. Natural opioids, called opiates, are derivatives of opium, which is a naturally occurring compound found in the poppy plant. There are now several synthetic versions of opiate drugs (correctly called opioids) that have very potent painkilling effects, and they are often abused. For example, the National Institute on Drug Abuse has sponsored research that suggests the misuse and abuse of the prescription painkillers hydrocodone and oxycodone are significant public health concerns (Maxwell, 2006). In 2013, the U.S. Food and Drug Administration recommended tighter controls on their medical use.

Historically, heroin has been a major opioid drug of abuse (Figure 4.17). Heroin can be snorted, smoked, or injected intravenously. Heroin produces intense feelings of euphoria and pleasure, which are amplified when the heroin is injected intravenously. Following the initial “rush,” users experience 4–6 hours of “going on the nod,” alternating between conscious and semi-conscious states. Heroin users often shoot the drug directly into their veins. Some people who have injected many times into their arms will show “track marks,” while other users will inject into areas between their fingers or between their toes, so as not to show obvious track marks. Like all abusers of intravenous drugs, heroin users have an increased risk of contracting both tuberculosis and HIV.

Photograph A shows various paraphernalia spread out on a black surface. The items include a tourniquet, three syringes of varying widths, three cotton-balls, a tiny cooking vessel, a condom, a capsule of sterile water, and an alcohol swab. Photograph B shows a hand holding a spoon containing heroin tar above a small candle.
Figure 4.17 (a) Common paraphernalia for heroin preparation and use are shown here in a needle exchange kit. (b) Heroin is cooked on a spoon over a candle. (credit a: modification of work by Todd Huffman)

Aside from their utility as analgesic drugs, opioid-like compounds are often found in cough suppressants, anti-nausea, and anti-diarrhea medications. Given that withdrawal from a drug often involves an experience opposite to the effect of the drug, it should be no surprise that opioid withdrawal resembles a severe case of the flu. While opioid withdrawal can be extremely unpleasant, it is not life-threatening (Julien, 2005). Still, people experiencing opioid withdrawal may be given methadone to make the withdrawal from the drug less difficult. Methadone is a synthetic opioid that is less euphorigenic than heroin and similar drugs. Methadone clinics help people who previously struggled with opioid addiction manage withdrawal symptoms through the use of methadone. Other drugs, including the opioid buprenorphine, have also been used to alleviate symptoms of opiate withdrawal.

Codeine is an opioid with relatively low potency. It is often prescribed for minor pain, and it is available over-the-counter in some other countries. Like all opioids, codeine does have abuse potential. In fact, abuse of prescription opioid medications is becoming a major concern worldwide (Aquina, Marques-Baptista, Bridgeman, & Merlin, 2009; Casati, Sedefov, & Pfeiffer-Gerschel, 2012).

EVERYDAY CONNECTION: The Opioid Crisis

Few people in the United States remain untouched by the recent opioid epidemic. It seems like everyone knows a friend, family member, or neighbor who has died of an overdose. Opioid addiction reached crisis levels in the United States such that by 2019, an average of 130 people died each day of an opioid overdose (NIDA, 2019).

The crisis actually began in the 1990s, when pharmaceutical companies began mass-marketing pain-relieving opioid drugs like OxyContin with the promise (now known to be false) that they were non-addictive. Increased prescriptions led to greater rates of misuse, along with greater incidence of addiction, even among patients who used these drugs as prescribed. Physiologically, the body can become addicted to opiate drugs in less than a week, including when taken as prescribed. Withdrawal from opioids includes pain, which patients often misinterpret as pain caused by the problem that led to the original prescription, and which motivates patients to continue using the drugs.

The FDA’s 2013 recommendation for tighter controls on opiate prescriptions left many patients addicted to prescription drugs like OxyContin unable to obtain legitimate prescriptions. This created a black market for the drug, where prices soared to $80 or more for a single pill. To prevent withdrawal, many people turned to cheaper heroin, which could be bought for $5 a dose or less. To keep heroin affordable, many dealers began adding more potent synthetic opioids, including fentanyl and carfentanil, to increase the effects of heroin. These synthetic drugs are so potent that even small doses can cause overdose and death.

Large-scale public health campaigns by the National Institutes of Health and the National Institute on Drug Abuse have led to recent declines in the opioid crisis. These initiatives include increasing access to treatment and recovery services, increasing access to overdose-reversal drugs like naloxone, and implementing better public health monitoring systems (NIDA, 2019).

Hallucinogens

A hallucinogen is one of a class of drugs that results in profound alterations in sensory and perceptual experiences (Figure 4.18). In some cases, users experience vivid visual hallucinations. It is also common for these types of drugs to cause hallucinations of body sensations (e.g., feeling as if you are a giant) and a skewed perception of the passage of time.

An illustration shows a colorful spiral pattern.
Figure 4.18 Psychedelic images like this are often associated with hallucinogenic compounds. (credit: modification of work by “new 1lluminati”/Flickr)

As a group, hallucinogens are incredibly varied in terms of the neurotransmitter systems they affect. Mescaline and LSD are serotonin agonists, and PCP (angel dust) and ketamine (an animal anesthetic) act as antagonists of the NMDA glutamate receptor. In general, these drugs are not thought to possess the same sort of abuse potential as other classes of drugs discussed in this section.

A photograph shows a window with a neon sign. The sign includes the word “medical” above the shape of a marijuana leaf.
Figure 4.19 Medical marijuana shops are becoming more and more common in the United States. (credit: Laurie Avocado)

While medical marijuana laws have been passed on a state-by-state basis, federal laws still classify marijuana as an illicit substance, making research on its potentially beneficial medicinal uses problematic. There is quite a bit of controversy within the scientific community as to the extent to which marijuana might have medicinal benefits due to a lack of large-scale, controlled research (Bostwick, 2012). As a result, many scientists have urged the federal government to relax current marijuana laws and classifications in order to facilitate more widespread study of the drug’s effects (Aggarwal et al., 2009; Bostwick, 2012; Kogan & Mechoulam, 2007).

Until recently, the United States Department of Justice routinely arrested people involved and seized marijuana used in medicinal settings. In the latter part of 2013, however, the United States Department of Justice issued statements indicating that they would not continue to challenge state medical marijuana laws. This shift in policy may be in response to the scientific community’s recommendations and/or reflect changing public opinion regarding marijuana.

Learning Objectives

By the end of this section, you will be able to:

  • Define hypnosis and meditation
  • Understand the similarities and differences between hypnosis and meditation

Our states of consciousness change as we move from wakefulness to sleep. We also alter our consciousness through the use of various psychoactive drugs. This final section will consider hypnotic and meditative states as additional examples of altered states of consciousness experienced by some individuals.

Hypnosis

Hypnosis is a state of extreme self-focus and attention in which minimal attention is given to external stimuli. In the therapeutic setting, a clinician may use relaxation and suggestion in an attempt to alter the thoughts and perceptions of a patient. Hypnosis has also been used to draw out information believed to be buried deeply in someone’s memory. For individuals who are especially open to the power of suggestion, hypnosis can prove to be a very effective technique, and brain imaging studies have demonstrated that hypnotic states are associated with global changes in brain functioning (Del Casale et al., 2012; Guldenmund, Vanhaudenhuyse, Boly, Laureys, & Soddu, 2012).

Historically, hypnosis has been viewed with some suspicion because of its portrayal in popular media and entertainment (Figure 4.20). Therefore, it is important to make a distinction between hypnosis as an empirically based therapeutic approach versus as a form of entertainment. Contrary to popular belief, individuals undergoing hypnosis usually have clear memories of the hypnotic experience and are in control of their own behaviors. While hypnosis may be useful in enhancing memory or a skill, such enhancements are very modest in nature (Raz, 2011).

A poster titled “Barnum the Hypnotist” shows illustrations of a person performing hypnotism.
Figure 4.20 Popular portrayals of hypnosis have led to some widely-held misconceptions.

How exactly does a hypnotist bring a participant to a state of hypnosis? While there are variations, there are four parts that appear consistent in bringing people into the state of suggestibility associated with hypnosis (National Research Council, 1994). These components include:

  • The participant is guided to focus on one thing, such as the hypnotist’s words or a ticking watch.
  • The participant is made comfortable and is directed to be relaxed and sleepy.
  • The participant is told to be open to the process of hypnosis, trust the hypnotist, and let go.
  • The participant is encouraged to use his or her imagination.

These steps are conducive to being open to the heightened suggestibility of hypnosis.

People vary in terms of their ability to be hypnotized, but a review of available research suggests that most people are at least moderately hypnotizable (Kihlstrom, 2013). Hypnosis in conjunction with other techniques is used for a variety of therapeutic purposes and has been shown to be at least somewhat effective for pain management, treatment of depression and anxiety, smoking cessation, and weight loss (Alladin, 2012; Elkins, Johnson, & Fisher, 2012; Golden, 2012; Montgomery, Schnur, & Kravits, 2012).

How does hypnosis work? Two theories attempt to answer this question: One theory views hypnosis as dissociation and the other theory views it as the performance of a social role. According to the dissociation view, hypnosis is effectively a dissociated state of consciousness, much like our earlier example where you may drive to work, but you are only minimally aware of the process of driving because your attention is focused elsewhere. This theory is supported by Ernest Hilgard’s research into hypnosis and pain. In Hilgard’s experiments, he induced participants into a state of hypnosis and placed their arms into ice water. Participants were told they would not feel pain, but they could press a button if they did; while they reported not feeling pain, they did, in fact, press the button, suggesting a dissociation of consciousness while in the hypnotic state (Hilgard & Hilgard, 1994).

Taking a different approach to explain hypnosis, the social-cognitive theory of hypnosis sees people in hypnotic states as performing the social role of a hypnotized person. As you will learn when you study social roles, people’s behavior can be shaped by their expectations of how they should act in a given situation. Some view a hypnotized person’s behavior not as an altered or dissociated state of consciousness, but as their fulfillment of the social expectations for that role (Coe, 2009; Coe & Sarbin, 1966).

Meditation

Meditation is the act of focusing on a single target (such as the breath or a repeated sound) to increase awareness of the moment. While hypnosis is generally achieved through the interaction of a therapist and the person being treated, an individual can perform meditation alone. Often, however, people wishing to learn to meditate receive some training in techniques to achieve a meditative state.

Although there are a number of different techniques in use, the central feature of all meditation is clearing the mind in order to achieve a state of relaxed awareness and focus (Chen et al., 2013; Lang et al., 2012). Mindfulness meditation has recently become popular. In this variation, the meditator’s attention is focused on some internal process or an external object (Zeidan, Grant, Brown, McHaffie, & Coghill, 2012).

Meditative techniques have their roots in religious practices (Figure 4.21), but their use has grown in popularity among practitioners of alternative medicine. Research indicates that meditation may help reduce blood pressure, and the American Heart Association suggests that meditation might be used in conjunction with more traditional treatments as a way to manage hypertension, although there is not sufficient data for a recommendation to be made (Brook et al., 2013). Like hypnosis, meditation also shows promise in stress management, sleep quality (Caldwell, Harrison, Adams, Quin, & Greeson, 2010), treatment of mood and anxiety disorders (Chen et al., 2013; Freeman et al., 2010; Vøllestad, Nielsen, & Nielsen, 2012), and pain management (Reiner, Tibi, & Lipsitz, 2013).

Photograph A shows a statue of Buddha with eyes closed and legs crisscrossed. Photograph B shows a person in a similar position.
Figure 4.21 (a) This is a statue of a meditating Buddha, representing one of the many religious traditions of which meditation plays a part. (b) People practicing meditation may experience an alternate state of consciousness. (credit a: modification of work by Jim Epler; credit b: modification of work by Caleb Roenigk)

Sensation and Perception

4

A photograph shows a person playing a piano on the sidewalk near a busy intersection in a city.
Figure 5.1 If you were standing in the midst of this street scene, you would be absorbing and processing numerous pieces of sensory input. (credit: modification of work by Cory Zanker)

Imagine standing on a city street corner. You might be struck by movement everywhere as cars and people go about their business, by the sound of a street musician’s melody or a horn honking in the distance, by the smell of exhaust fumes or of food being sold by a nearby vendor, and by the sensation of hard pavement under your feet.

We rely on our sensory systems to provide important information about our surroundings. We use this information to successfully navigate and interact with our environment so that we can find nourishment, seek shelter, maintain social relationships, and avoid potentially dangerous situations.

This chapter will provide an overview of how sensory information is received and processed by the nervous system and how that affects our conscious experience of the world. We begin by learning the distinction between sensation and perception. Then we consider the physical properties of light and sound stimuli, along with an overview of the basic structure and function of the major sensory systems. The chapter will close with a discussion of a historically important theory of perception called Gestalt.

Learning Objectives

By the end of this section, you will be able to:

  • Distinguish between sensation and perception
  • Describe the concepts of absolute threshold and difference threshold
  • Discuss the roles attention, motivation, and sensory adaptation play in perception

Sensation

What does it mean to sense something? Sensory receptors are specialized neurons that respond to specific types of stimuli. When sensory information is detected by a sensory receptor, sensation has occurred. For example, light that enters the eye causes chemical changes in cells that line the back of the eye. These cells relay messages, in the form of action potentials (as you learned when studying biopsychology), to the central nervous system. The conversion from sensory stimulus energy to action potential is known as transduction.

You have probably known since elementary school that we have five senses: vision, hearing (audition), smell (olfaction), taste (gustation), and touch (somatosensation). It turns out that this notion of five senses is oversimplified. We also have sensory systems that provide information about balance (the vestibular sense), body position and movement (proprioception and kinesthesia), pain (nociception), and temperature (thermoception).

The sensitivity of a given sensory system to the relevant stimuli can be expressed as an absolute threshold. Absolute threshold refers to the minimum amount of stimulus energy that must be present for the stimulus to be detected 50% of the time. Another way to think about this is by asking how dim can a light be or how soft can a sound be and still be detected half of the time. The sensitivity of our sensory receptors can be quite amazing. It has been estimated that on a clear night, the most sensitive sensory cells in the back of the eye can detect a candle flame 30 miles away (Okawa & Sampath, 2007). Under quiet conditions, the hair cells (the receptor cells of the inner ear) can detect the tick of a clock 20 feet away (Galanter, 1962).
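To make the 50% criterion concrete, the sketch below models the probability of detecting a stimulus as a logistic function of its intensity, a common form for a psychometric function; the threshold and slope values are illustrative placeholders, not measured values.

```python
# A minimal sketch of how an absolute threshold is defined: the intensity
# at which a stimulus is detected exactly 50% of the time. Parameters are
# hypothetical, chosen only for illustration.
import math

def detection_probability(intensity, threshold=10.0, slope=1.5):
    """Logistic psychometric function: probability is exactly 0.5 at `threshold`."""
    return 1 / (1 + math.exp(-slope * (intensity - threshold)))

for intensity in (8, 10, 12):  # arbitrary stimulus units
    print(intensity, round(detection_probability(intensity), 2))
# 8 -> 0.05, 10 -> 0.5 (the absolute threshold), 12 -> 0.95
```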

It is also possible for us to get messages that are presented below the threshold for conscious awareness—these are called subliminal messages. A stimulus reaches a physiological threshold when it is strong enough to excite sensory receptors and send nerve impulses to the brain: This is an absolute threshold. A message below that threshold is said to be subliminal: We receive it, but we are not consciously aware of it. Over the years there has been a great deal of speculation about the use of subliminal messages in advertising, rock music, and self-help audio programs. Research evidence shows that in laboratory settings, people can process and respond to information outside of awareness. But this does not mean that we obey these messages like zombies; in fact, hidden messages have little effect on behavior outside the laboratory (Kunst-Wilson & Zajonc, 1980; Rensink, 2004; Nelson, 2008; Radel, Sarrazin, Legrain, & Gobancé, 2009; Loersch, Durso, & Petty, 2013).

Absolute thresholds are generally measured under incredibly controlled conditions in situations that are optimal for sensitivity. Sometimes, we are more interested in how much difference in stimuli is required to detect a difference between them. This is known as the just noticeable difference (jnd) or difference threshold. Unlike the absolute threshold, the difference threshold changes depending on the stimulus intensity. As an example, imagine yourself in a very dark movie theater. If an audience member were to receive a text message that caused the cell phone screen to light up, chances are that many people would notice the change in illumination in the theater. However, if the same thing happened in a brightly lit arena during a basketball game, very few people would notice. The cell phone brightness does not change, but its ability to be detected as a change in illumination varies dramatically between the two contexts. Ernst Weber proposed this principle in the 1830s, and it has become known as Weber’s law: the difference threshold is a constant fraction of the original stimulus, as the example illustrates.
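In symbols, Weber’s law states that ΔI / I = k, where I is the baseline intensity, ΔI is the just noticeable difference, and k is the Weber fraction. The short sketch below assumes a hypothetical Weber fraction of 0.05; real fractions differ across senses and across studies.

```python
# A minimal sketch of Weber's law with an illustrative 5% Weber fraction.
def just_noticeable_difference(intensity, weber_fraction=0.05):
    """Smallest detectable change at a given baseline, per Weber's law."""
    return intensity * weber_fraction

# A dim theater (low baseline) versus a bright arena (high baseline),
# in arbitrary luminance units:
for baseline in (10, 1000):
    print(baseline, just_noticeable_difference(baseline))
# At a baseline of 10 units, a change of 0.5 is noticeable; at 1000 units,
# the change must be 50 units -- the same 5% fraction of the original.
```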

Perception

While our sensory receptors are constantly collecting information from the environment, it is ultimately how we interpret that information that affects how we interact with the world. Perception refers to the way sensory information is organized, interpreted, and consciously experienced. Perception involves both bottom-up and top-down processing. Bottom-up processing refers to sensory information from a stimulus in the environment driving a process, and top-down processing refers to knowledge and expectancy driving a process, as shown in Figure 5.2 (Egeth & Yantis, 1997; Fine & Minnery, 2009; Yantis & Egeth, 1999).

The figure includes two vertical arrows. The first arrow comes from the word “Top” and points downward to the word “Down.” The explanation reads, “Top-down processing occurs when previous experience and expectations are first used to recognize stimuli.” The second arrow comes from the word “bottom” and points upward to the word “up.” The explanation reads, “Bottom-up processing occurs when we sense basic features of stimuli and then integrate them.”
Figure 5.2 Top-down and bottom-up are ways we process our perceptions.

Imagine that you and some friends are sitting in a crowded restaurant eating lunch and talking. It is very noisy, and you are concentrating on your friend’s face to hear what she is saying. Then the sound of breaking glass and the clang of metal pans hitting the floor rings out: the server has dropped a large tray of food. Although you were attending to your meal and conversation, that crashing sound would likely get through your attentional filters and capture your attention. You would have no choice but to notice it. That attentional capture would be caused by the sound from the environment: it would be bottom-up.

Alternatively, top-down processes are generally goal-directed, slow, deliberate, effortful, and under your control (Fine & Minnery, 2009; Miller & Cohen, 2001; Miller & D’Esposito, 2005). For instance, if you misplaced your keys, how would you look for them? If you had a yellow key fob, you would probably look for the yellowness of a certain size in specific locations, such as on the counter, coffee table, and other similar places. You would not look for yellowness on your ceiling fan, because you know keys are not normally lying on top of a ceiling fan. That act of searching for a certain size of yellowness in some locations and not others would be top-down—under your control and based on your experience.

One way to think of this concept is that sensation is a physical process, whereas perception is psychological. For example, upon walking into a kitchen and smelling the scent of baking cinnamon rolls, the sensation is the scent receptors detecting the odor of cinnamon, but the perception may be “Mmm, this smells like the bread Grandma used to bake when the family gathered for holidays.”

Although our perceptions are built from sensations, not all sensations result in perception. In fact, we often don’t perceive stimuli that remain relatively constant over prolonged periods of time. This is known as sensory adaptation. Imagine going to a city that you have never visited. You check in to the hotel, but when you get to your room, there is a road construction sign with a bright flashing light outside your window. Unfortunately, there are no other rooms available, so you are stuck with a flashing light. You decide to watch television to unwind. The flashing light was extremely annoying when you first entered your room. It was as if someone was continually turning a bright yellow spotlight on and off in your room, but after watching television for a short while, you no longer notice the light flashing. The light is still flashing and filling your room with yellow light every few seconds, and the photoreceptors in your eyes still sense the light, but you no longer perceive the rapid changes in lighting conditions. That you no longer perceive the flashing light demonstrates sensory adaptation and shows that while closely associated, sensation and perception are different.

There is another factor that affects sensation and perception: attention. Attention plays a significant role in determining what is sensed versus what is perceived. Imagine you are at a party full of music, chatter, and laughter. You get involved in an interesting conversation with a friend, and you tune out all the background noise. If someone interrupted you to ask what song had just finished playing, you would probably be unable to answer that question.

One of the most interesting demonstrations of how important attention is in determining our perception of the environment occurred in a famous study conducted by Daniel Simons and Christopher Chabris (1999). In this study, participants watched a video of people dressed in black and white passing basketballs. Participants were asked to count the number of times the team dressed in white passed the ball. During the video, a person dressed in a black gorilla costume walks among the two teams. You would think that someone would notice the gorilla, right? Nearly half of the people who watched the video didn’t notice the gorilla at all, despite the fact that he was clearly visible for nine seconds. Because participants were so focused on the number of times the team dressed in white was passing the ball, they completely tuned out other visual information. Inattentional blindness is the failure to notice something that is completely visible because the person was actively attending to something else and did not pay attention to other things (Mack & Rock, 1998; Simons & Chabris, 1999).

In a similar experiment, researchers tested inattentional blindness by asking participants to observe images moving across a computer screen. They were instructed to focus on either white or black objects, disregarding the other color. When a red cross passed across the screen, about one-third of subjects did not notice it (Figure 5.3) (Most, Simons, Scholl, & Chabris, 2000).

A photograph shows a person staring at a screen that displays one red cross toward the left side and numerous black and white shapes all over.
Figure 5.3 Nearly one-third of participants in a study did not notice that a red cross passed on the screen because their attention was focused on the black or white figures. (credit: Cory Zanker)

Motivation can also affect perception. Have you ever been expecting a really important phone call and, while taking a shower, you think you hear the phone ringing, only to discover that it is not? If so, then you have experienced how motivation to detect a meaningful stimulus can shift our ability to discriminate between a true sensory stimulus and background noise. Signal detection theory describes our ability to identify a stimulus when it is embedded in a distracting background. This might also explain why a mother is awakened by a quiet murmur from her baby but not by other sounds that occur while she is asleep. Signal detection theory has practical applications, such as increasing air traffic controller accuracy. Controllers need to be able to detect planes among many signals (blips) that appear on the radar screen and follow those planes as they move through the sky. In fact, the original work of the researcher who developed signal detection theory was focused on improving the sensitivity of air traffic controllers to plane blips (Swets, 1964).
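Signal detection theory quantifies this kind of sensitivity with the index d′ (d-prime), the standardized distance between an observer’s hit rate and false-alarm rate. A minimal sketch, using hypothetical rates for the air traffic controller example:

```python
# A minimal sketch of d' (d-prime), signal detection theory's sensitivity index.
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """d' = z(hit rate) - z(false-alarm rate): how reliably an observer
    separates true signals from background noise."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical controller: flags 90% of real plane blips, but also
# "sees" a plane in 10% of noise-only radar sweeps.
print(round(d_prime(0.90, 0.10), 2))  # ~2.56, a fairly sensitive observer
```

Separating sensitivity (d′) from response bias is what lets the theory account for both the expectant shower-listener who hears phantom rings and the sleeping mother who responds only to her baby.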

Our perceptions can also be affected by our beliefs, values, prejudices, expectations, and life experiences. As you will see later in this chapter, individuals who are deprived of the experience of binocular vision during critical periods of development have trouble perceiving depth (Fawcett, Wang, & Birch, 2005). The shared experiences of people within a given cultural context can have pronounced effects on perception. For example, Marshall Segall, Donald Campbell, and Melville Herskovits (1963) published the results of a multinational study in which they demonstrated that individuals from Western cultures were more prone to experience certain types of visual illusions than individuals from non-Western cultures, and vice versa. One such illusion that Westerners were more likely to experience was the Müller-Lyer illusion (Figure 5.4): The lines appear to be different lengths, but they are actually the same length.

Two vertical lines are shown on the left in (a). They each have V–shaped brackets on their ends, but one line has the brackets angled toward its center, and the other has the brackets angled away from its center. The lines are the same length, but the second line appears longer due to the orientation of the brackets on its endpoints. To the right of these lines is a two-dimensional drawing of walls meeting at 90-degree angles. Within this drawing are 2 lines which are the same length, but appear different lengths. Because one line is bordering a window on a wall that has the appearance of being farther away from the perspective of the viewer, it appears shorter than the other line which marks the 90 degree angle where the facing wall appears closer to the viewer’s perspective point.
Figure 5.4 In the Müller-Lyer illusion, lines appear to be of different lengths although they are identical. (a) Arrows at the ends of lines may make the line on the right appear longer, although the lines are the same length. (b) When applied to a three-dimensional image, the line on the right again may appear longer although both black lines are the same length.

These perceptual differences were consistent with differences in the types of environmental features experienced on a regular basis by people in a given cultural context. People in Western cultures, for example, have a perceptual context of buildings with straight lines, what Segall’s study called a carpentered world (Segall et al., 1966). In contrast, people from certain non-Western cultures with an uncarpentered view, such as the Zulu of South Africa, whose villages are made up of round huts arranged in circles, are less susceptible to this illusion (Segall et al., 1999). It is not just vision that is affected by cultural factors. Indeed, research has demonstrated that the ability to identify an odor, and to rate its pleasantness and intensity, varies cross-culturally (Ayabe-Kanamura, Saito, Distel, Martínez-Gómez, & Hudson, 1998).

Children described as thrill-seekers are more likely to show taste preferences for intense sour flavors (Liem, Westerbeek, Wolterink, Kok, & de Graaf, 2004), which suggests that basic aspects of personality might affect perception. Furthermore, individuals who hold positive attitudes toward reduced-fat foods are more likely to rate foods labeled as reduced-fat as tasting better than people who have less positive attitudes about these products (Aaron, Mela, & Evans, 1994).

Learning Objectives

By the end of this section, you will be able to:

  • Describe important physical features of wave forms
  • Show how physical properties of light waves are associated with perceptual experience
  • Show how physical properties of sound waves are associated with perceptual experience

Visual and auditory stimuli both occur in the form of waves. Although the two stimuli are very different in terms of composition, wave forms share similar characteristics that are especially important to our visual and auditory perceptions. In this section, we describe the physical properties of the waves as well as the perceptual experiences associated with them.

Amplitude and Wavelength

Two physical characteristics of a wave are amplitude and wavelength (Figure 5.5). The amplitude of a wave is the distance from the center line to the top point of the crest or the bottom point of the trough. Wavelength refers to the length of a wave from one peak to the next.

A diagram illustrates the basic parts of a wave. Moving from left to right, the wavelength line begins above a straight horizontal line and falls and rises equally above and below that line. One of the areas where the wavelength line reaches its highest point is labeled “Peak.” A horizontal bracket, labeled “Wavelength,” extends from this area to the next peak. One of the areas where the wavelength reaches its lowest point is labeled “Trough.” A vertical bracket, labeled “Amplitude,” extends from a “Peak” to a “Trough.”
Figure 5.5 The amplitude, or height, of a wave is measured from the center line to a peak or to a trough; the distance from peak to trough is twice the amplitude. The wavelength is measured from peak to peak.

Wavelength is directly related to the frequency of a given wave form. Frequency refers to the number of waves that pass a given point in a given time period and is often expressed in terms of hertz (Hz), or cycles per second. Longer wavelengths will have lower frequencies, and shorter wavelengths will have higher frequencies (Figure 5.6).

Stacked vertically are 5 waves of different colors and wavelengths. The top wave is red with a long wavelengths, which indicate a low frequency. Moving downward, the color of each wave is different: orange, yellow, green, and blue. Also moving downward, the wavelengths become shorter as the frequencies increase.
Figure 5.6 This figure illustrates waves of differing wavelengths/frequencies. At the top of the figure, the red wave has a long wavelength/short frequency. Moving from top to bottom, the wavelengths decrease, and frequencies increase.
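Numerically, frequency equals wave speed divided by wavelength, so for a fixed wave speed, halving the wavelength doubles the frequency. A brief sketch using approximate speeds for light and for sound in air:

```python
# A minimal sketch of the inverse relation between wavelength and frequency:
# frequency = wave speed / wavelength.
SPEED_OF_LIGHT = 3.0e8  # meters per second (approximate)
SPEED_OF_SOUND = 343.0  # meters per second in air at room temperature (approximate)

def frequency_hz(wave_speed, wavelength_m):
    """Number of wave cycles passing a fixed point each second, in hertz."""
    return wave_speed / wavelength_m

print(f"{frequency_hz(SPEED_OF_LIGHT, 700e-9):.2e} Hz")  # red light (~700 nm): ~4.3e14 Hz
print(f"{frequency_hz(SPEED_OF_LIGHT, 400e-9):.2e} Hz")  # violet light (~400 nm): ~7.5e14 Hz
print(f"{frequency_hz(SPEED_OF_SOUND, 1.0):.0f} Hz")     # a 1-meter sound wave: 343 Hz
```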

Light Waves

The visible spectrum is the portion of the larger electromagnetic spectrum that we can see. As Figure 5.7 shows, the electromagnetic spectrum encompasses all of the electromagnetic radiation that occurs in our environment and includes gamma rays, x-rays, ultraviolet light, visible light, infrared light, microwaves, and radio waves. The visible spectrum in humans is associated with wavelengths that range from 380 to 740 nm—a very small distance since a nanometer (nm) is one-billionth of a meter. Other species can detect other portions of the electromagnetic spectrum. For instance, honeybees can see light in the ultraviolet range (Wakakuwa, Stavenga, & Arikawa, 2007), and some snakes can detect infrared radiation in addition to more traditional visual light cues (Chen, Deng, Brauth, Ding, & Tang, 2012; Hartline, Kass, & Loop, 1978).

This illustration shows the wavelength, frequency, and size of objects across the electromagnetic spectrum. At the top, various wavelengths are given in sequence from small to large, with a parallel illustration of a wave with increasing frequency. These are the provided wavelengths, measured in meters: “Gamma ray 10 to the negative twelfth power,” “x-ray 10 to the negative tenth power,” “ultraviolet 10 to the negative eighth power,” “visible .5 times 10 to the negative sixth power,” “infrared 10 to the negative fifth power,” “microwave 10 to the negative second power,” and “radio 10 cubed.” Another section is labeled “About the size of” and lists from left to right: “Atomic nuclei,” “Atoms,” “Molecules,” “Protozoans,” “Pinpoints,” “Honeybees,” “Humans,” and “Buildings,” with an illustration of each. At the bottom is a line labeled “Frequency” with the following measurements in hertz: 10 to the powers of 20, 18, 16, 15, 12, 8, and 4. From left to right the line changes in color from purple to red with the remaining colors of the visible spectrum in between.
Figure 5.7 Light that is visible to humans makes up only a small portion of the electromagnetic spectrum.

In humans, light wavelength is associated with the perception of color (Figure 5.8). Within the visible spectrum, our experience of red is associated with longer wavelengths, greens are intermediate, and blues and violets are shorter in wavelength. (An easy way to remember this is the mnemonic ROYGBIV: red, orange, yellow, green, blue, indigo, violet.) The amplitude of light waves is associated with our experience of brightness or intensity of color, with larger amplitudes appearing brighter.

A line provides Wavelength in nanometers for “400,” “500,” “600,” and “700” nanometers. Within this line are all of the colors of the visible spectrum. Below this line, labeled from left to right are “Cosmic radiation,” “Gamma rays,” “X-rays,” “Ultraviolet,” then a small callout area for the line above containing the colors in the visual spectrum, followed by “Infrared,” “Terahertz radiation,” “Radar,” “Television and radio broadcasting,” and “AC circuits.”
Figure 5.8 Different wavelengths of light are associated with our perception of different colors. (credit: modification of work by Johannes Ahlmann)
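As a rough illustration of how wavelength maps onto perceived color, the sketch below bins the visible spectrum into conventional hue names; the band edges are approximate conventions rather than exact perceptual boundaries.

```python
# A minimal sketch binning visible-light wavelengths (in nanometers) into
# rough hue names. Band edges are approximate conventions, not exact
# perceptual boundaries.
VISIBLE_BANDS = [
    (380, "violet"), (450, "blue"), (495, "green"),
    (570, "yellow"), (590, "orange"), (620, "red"), (741, None),
]

def approximate_hue(wavelength_nm):
    """Return the rough color name for a wavelength in the visible spectrum."""
    for (low, name), (high, _) in zip(VISIBLE_BANDS, VISIBLE_BANDS[1:]):
        if low <= wavelength_nm < high:
            return name
    return "outside the visible spectrum"

print(approximate_hue(700))  # red (long wavelength)
print(approximate_hue(400))  # violet (short wavelength)
```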

Sound Waves

Like light waves, the physical properties of sound waves are associated with various aspects of our perception of sound. The frequency of a sound wave is associated with our perception of that sound’s pitch. High-frequency sound waves are perceived as high-pitched sounds, while low-frequency sound waves are perceived as low-pitched sounds. The audible range of sound frequencies is between 20 and 20,000 Hz, with the greatest sensitivity to those frequencies that fall in the middle of this range.

As was the case with the visible spectrum, other species show differences in their audible ranges. For instance, chickens have a very limited audible range, from 125 to 2,000 Hz. Mice have an audible range from 1,000 to 91,000 Hz, and the beluga whale’s audible range is from 1,000 to 123,000 Hz. Our pet dogs and cats have audible ranges of about 70–45,000 Hz and 45–64,000 Hz, respectively (Strain, 2003).

The loudness of a given sound is closely associated with the amplitude of the sound wave. Higher amplitudes are associated with louder sounds. Loudness is measured in terms of decibels (dB), a logarithmic unit of sound intensity. A typical conversation would correlate with 60 dB; a rock concert might check in at 120 dB (Figure 5.9). A whisper 5 feet away or rustling leaves are at the low end of our hearing range; sounds like a window air conditioner, a normal conversation, and even heavy traffic or a vacuum cleaner are within a tolerable range. However, there is the potential for hearing damage from about 80 dB to 130 dB: These are sounds of a food processor, power lawnmower, heavy truck (25 feet away), subway train (20 feet away), live rock music, and a jackhammer. About one-third of all hearing loss is due to noise exposure, and the louder the sound, the shorter the exposure needed to cause hearing damage (Le, Straatman, Lea, & Westerberg, 2017). Listening to music through earbuds at maximum volume (around 100–105 decibels) can cause noise-induced hearing loss after 15 minutes of exposure. Although listening to music at maximum volume may not seem to cause damage, it increases the risk of age-related hearing loss (Kujawa & Liberman, 2006). The threshold for pain is about 130 dB, a jet plane taking off, or a revolver firing at close range (Dunkle, 1982).

This illustration has a vertical bar in the middle labeled Decibels (dB) numbered 0 to 150 in intervals from the bottom to the top. To the left of the bar, the “sound intensity” of different sounds is labeled: “Hearing threshold” is 0; “Whisper” is 30, “soft music” is 40, “Refrigerator” is 45, “Safe” and “normal conversation” is 60, “Heavy city traffic” with “permanent damage after 8 hours of exposure” is 85, “Motorcycle” with “permanent damage after 6 hours exposure” is 95, “Earbuds max volume” with “permanent damage after 15 minutes exposure” is 105, “Risk of hearing loss” is 110, “pain threshold” is 130, “harmful” is 140, and “firearms” with “immediate permanent damage” is 150. To the right of the bar are photographs depicting “common sound”: At 20 decibels is a picture of rustling leaves; at 60 is two people talking, at 85 is traffic, at 105 is ear buds, at 120 is a music concert, and at 130 are jets.
Figure 5.9 This figure illustrates the loudness of common sounds. (credit “planes”: modification of work by Max Pfandl; credit “crowd”: modification of work by Christian Holmér; credit: “earbuds”: modification of work by “Skinny Guy Lover_Flickr”/Flickr; credit “traffic”: modification of work by “quinntheislander_Pixabay”/Pixabay; credit “talking”: modification of work by Joi Ito; credit “leaves”: modification of work by Aurelijus Valeiša)
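Because the decibel scale is logarithmic, each 10 dB step multiplies sound intensity by ten: a 120 dB rock concert is a million times more intense than a 60 dB conversation, not twice as intense. A minimal sketch, assuming the conventional hearing-threshold reference intensity of 10^-12 watts per square meter:

```python
# A minimal sketch of the decibel scale: dB = 10 * log10(I / I0), where I0
# is the conventional hearing-threshold reference intensity.
import math

REFERENCE = 1e-12  # hearing-threshold intensity in watts per square meter

def decibels(intensity_w_per_m2):
    """Convert a sound intensity to decibels relative to the hearing threshold."""
    return 10 * math.log10(intensity_w_per_m2 / REFERENCE)

print(decibels(1e-6))  # a normal conversation: 60.0 dB
print(decibels(1.0))   # a rock concert: 120.0 dB
```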

Although wave amplitude is generally associated with loudness, there is some interaction between frequency and amplitude in our perception of loudness within the audible range. For example, a 10 Hz sound wave is inaudible no matter the amplitude of the wave. A 1000 Hz sound wave, on the other hand, would vary dramatically in terms of perceived loudness as the amplitude of the wave increased.

LINK TO LEARNING: Watch this brief video about our perception of frequency and amplitude to learn more.

Of course, different musical instruments can play the same musical note at the same level of loudness, yet they still sound quite different. This is known as the timbre of a sound. Timbre refers to a sound’s purity, and it is affected by the complex interplay of frequency, amplitude, and timing of sound waves.

Learning Objectives

By the end of this section, you will be able to:

  • Describe the basic anatomy of the visual system
  • Discuss how rods and cones contribute to different aspects of vision
  • Describe how monocular and binocular cues are used in the perception of depth

The visual system constructs a mental representation of the world around us (Figure 5.10). This contributes to our ability to successfully navigate through physical space and interact with important individuals and objects in our environments. This section will provide an overview of the basic anatomy and function of the visual system. In addition, we will explore our ability to perceive color and depth.

Several photographs of peoples’ eyes are shown.
Figure 5.10 Our eyes take in sensory information that helps us understand the world around us. (credit “top left”: modification of work by “rajkumar1220″/Flickr”; credit “top right”: modification of work by Thomas Leuthard; credit “middle left”: modification of work by Demietrich Baker; credit “middle right”: modification of work by “kaybee07″/Flickr; credit “bottom left”: modification of work by “Isengardt”/Flickr; credit “bottom right”: modification of work by Willem Heerbaart)

Anatomy of the Visual System

The eye is the major sensory organ involved in vision (Figure 5.11). Light waves are transmitted across the cornea and enter the eye through the pupil. The cornea is the transparent covering over the eye. It serves as a barrier between the inner eye and the outside world, and it is involved in focusing light waves that enter the eye. The pupil is the small opening in the eye through which light passes, and the size of the pupil can change as a function of light levels as well as emotional arousal. When light levels are low, the pupil will become dilated, or expanded, to allow more light to enter the eye. When light levels are high, the pupil will constrict, or become smaller, to reduce the amount of light that enters the eye. The pupil’s size is controlled by muscles that are connected to the iris, which is the colored portion of the eye.

Different parts of the eye are labeled in this illustration. The cornea, pupil, iris, and lens are situated toward the front of the eye, and at the back are the optic nerve, fovea, and retina.
Figure 5.11 The anatomy of the eye is illustrated in this diagram.

After passing through the pupil, light crosses the lens, a curved, transparent structure that serves to provide additional focus. The lens is attached to muscles that can change its shape to aid in focusing light that is reflected from near or far objects. In a normal-sighted individual, the lens will focus images perfectly on a small indentation in the back of the eye known as the fovea, which is part of the retina, the light-sensitive lining of the eye. The fovea contains densely packed specialized photoreceptor cells (Figure 5.12). These photoreceptor cells, known as cones, are light-detecting cells. The cones are specialized types of photoreceptors that work best in bright light conditions. Cones are very sensitive to acute detail and provide tremendous spatial resolution. They also are directly involved in our ability to perceive color.

While cones are concentrated in the fovea, where images tend to be focused, rods, another type of photoreceptor, are located throughout the remainder of the retina. Rods are specialized photoreceptors that work well in low light conditions, and while they lack the spatial resolution and color function of the cones, they are involved in our vision in dimly lit environments as well as in our perception of movement on the periphery of our visual field.

This illustration shows light reaching the optic nerve, beneath which are Ganglion cells, and then rods and cones.
Figure 5.12 The two types of photoreceptors are shown in this image. Cones are colored green and the rods are blue.

We have all experienced the different sensitivities of rods and cones when making the transition from a brightly lit environment to a dimly lit environment. Imagine going to see a blockbuster movie on a clear summer day. As you walk from the brightly lit lobby into the dark theater, you notice that you immediately have difficulty seeing much of anything. After a few minutes, you begin to adjust to the darkness and can see the interior of the theater. In the bright environment, your vision was dominated primarily by cone activity. As you move to the dark environment, rod activity dominates, but there is a delay in transitioning between the phases. If your rods do not transform light into nerve impulses as easily and efficiently as they should, you will have difficulty seeing in dim light, a condition known as night blindness.

Rods and cones are connected (via several interneurons) to retinal ganglion cells. Axons from the retinal ganglion cells converge and exit through the back of the eye to form the optic nerve. The optic nerve carries visual information from the retina to the brain. There is a point in the visual field called the blind spot: Even when light from a small object is focused on the blind spot, we do not see it. We are not consciously aware of our blind spots for two reasons: First, each eye gets a slightly different view of the visual field; therefore, the blind spots do not overlap. Second, our visual system fills in the blind spot so that although we cannot respond to visual information that occurs in that portion of the visual field, we are also not aware that information is missing.

The optic nerve from each eye merges just below the brain at a point called the optic chiasm. As Figure 5.13 shows, the optic chiasm is an X-shaped structure that sits just below the cerebral cortex at the front of the brain. At the point of the optic chiasm, information from the right visual field (which comes from both eyes) is sent to the left side of the brain, and information from the left visual field is sent to the right side of the brain.

An illustration shows the location of the occipital lobe, optic chiasm, optic nerve, and the eyes in relation to their position in the brain and head.
Figure 5.13 This illustration shows the optic chiasm at the front of the brain and the pathways to the occipital lobe at the back of the brain, where visual sensations are processed into meaningful perceptions.

Once inside the brain, visual information is sent via a number of structures to the occipital lobe at the back of the brain for processing. Visual information might be processed in parallel pathways which can generally be described as the “what pathway” and the “where/how” pathway. The “what pathway” is involved in object recognition and identification, while the “where/how pathway” is involved with location in space and how one might interact with a particular visual stimulus (Milner & Goodale, 2008; Ungerleider & Haxby, 1994). For example, when you see a ball rolling down the street, the “what pathway” identifies what the object is, and the “where/how pathway” identifies its location or movement in space.

WHAT DO YOU THINK? The Ethics of Research Using Animals

David Hubel and Torsten Wiesel were awarded the Nobel Prize in Medicine in 1981 for their research on the visual system. They collaborated for more than twenty years and made significant discoveries about the neurology of visual perception (Hubel & Wiesel, 1959, 1962, 1963, 1970; Wiesel & Hubel, 1963). They studied animals, mostly cats and monkeys. Although they used several techniques, they did considerable single-unit recordings, during which tiny electrodes were inserted in the animal’s brain to determine when a single cell was activated. Among their many discoveries, they found that specific brain cells respond to lines with specific orientations (a property called orientation selectivity), and they mapped the way those cells are arranged in areas of the visual cortex known as columns and hypercolumns.

In some of their research, they sutured one eye of newborn kittens closed and followed the development of the kittens’ vision. They discovered there was a critical period of development for vision. If kittens were deprived of input from one eye, other areas of their visual cortex filled in the area that was normally used by the eye that was sewn closed. In other words, neural connections that exist at birth can be lost if they are deprived of sensory input.

What do you think about sewing a kitten’s eye closed for research? To many animal advocates, this would seem brutal, abusive, and unethical. What if you could do research that would help ensure babies and children born with certain conditions could develop normal vision instead of becoming blind? Would you want that research done? Would you conduct that research, even if it meant causing some harm to cats? Would you think the same way if you were the parent of such a child? What if you worked at the animal shelter?

Like virtually every other industrialized nation, the United States permits medical experimentation on animals, with few limitations (assuming sufficient scientific justification). The goal of any laws that exist is not to ban such tests but rather to limit unnecessary animal suffering by establishing standards for the humane treatment and housing of animals in laboratories.

As explained by Stephen Latham, the director of the Interdisciplinary Center for Bioethics at Yale (2012), possible legal and regulatory approaches to animal testing vary on a continuum from strong government regulation and monitoring of all experimentation at one end, to a self-regulated approach that depends on the ethics of the researchers at the other end. The United Kingdom has the most significant regulatory scheme, whereas Japan uses the self-regulation approach. The U.S. approach is somewhere in the middle, the result of a gradual blending of the two approaches.

There is no question that medical research is a valuable and important practice. The question is whether the use of animals is a necessary or even best practice for producing the most reliable results. Alternatives include the use of patient-drug databases, virtual drug trials, computer models and simulations, and noninvasive imaging techniques such as magnetic resonance imaging and computed tomography scans (“Animals in Science/Alternatives,” n.d.). Other techniques, such as microdosing, use humans not as test animals but as a means to improve the accuracy and reliability of test results. In vitro methods based on human cell and tissue cultures, stem cells, and genetic testing methods are also increasingly available.

Today, at the local level, any facility that uses animals and receives federal funding must have an Institutional Animal Care and Use Committee (IACUC) that ensures that the NIH guidelines are being followed. The IACUC must include researchers, administrators, a veterinarian, and at least one person with no ties to the institution: that is, a concerned citizen. This committee also performs inspections of laboratories and protocols.

Color and Depth Perception

We do not see the world in black and white; neither do we see it as two-dimensional (2-D) or flat (just height and width, no depth). Let’s look at how color vision works and how we perceive three dimensions (height, width, and depth).

Color Vision

Normal-sighted individuals have three different types of cones that mediate color vision. Each of these cone types is maximally sensitive to a slightly different wavelength of light. According to the trichromatic theory of color vision, shown in Figure 5.14, all colors in the spectrum can be produced by combining red, green, and blue. The three types of cones are each receptive to one of the colors.

A graph is shown with “sensitivity” plotted on the y-axis and “wavelength” in nanometers plotted along the x-axis with measurements of 400, 500, 600, and 700. Three lines in different colors move from the base to the peak of the y-axis, and back to the base. The blue line begins at 400 nm and hits its peak of sensitivity around 455 nanometers, before the sensitivity drops off at roughly the same rate at which it increased, returning to the lowest sensitivity around 530 nm. The green line begins at 400 nm and reaches its peak of sensitivity around 535 nanometers. Its sensitivity then decreases at roughly the same rate at which it increased, returning to the lowest sensitivity around 650 nm. The red line follows the same pattern as the first two, beginning at 400 nm, increasing and decreasing at the same rate, and hitting its peak sensitivity around 580 nanometers. Below this graph is a horizontal bar showing the colors of the visible spectrum.
Figure 5.14 This figure illustrates the different sensitivities for the three cone types found in a normal-sighted individual. (credit: modification of work by Vanessa Ezekowitz)

CONNECT THE CONCEPTS

Colorblindness: A Personal Story

Several years ago, I dressed to go to a public function and walked into the kitchen where my 7-year-old daughter sat. She looked up at me, and in her most stern voice, said, “You can’t wear that.” I asked, “Why not?” and she informed me the colors of my clothes did not match. She had complained frequently that I was bad at matching my shirts, pants, and ties, but this time, she sounded especially alarmed. As a single father with no one else to ask at home, I drove us to the nearest convenience store and asked the store clerk if my clothes matched. She said my pants were a bright green color, my shirt was a reddish-orange, and my tie was brown. She looked at me quizzically and said, “No way do your clothes match.” Over the next few days, I started asking my coworkers and friends if my clothes matched. After several days of being told that my coworkers just thought I had “a really unique style,” I made an appointment with an eye doctor and was tested (Figure 5.15). It was then that I found out that I was colorblind. I cannot differentiate between most greens, browns, and reds. Fortunately, other than unknowingly being badly dressed, my colorblindness rarely harms my day-to-day life.

The figure includes three large circles that are made up of smaller circles of varying shades and sizes. Inside each large circle is a number that is made visible only by its different color. The first circle has an orange number 12 in a background of green. The second color has a green number 74 in a background of orange. The third circle has a red and brown number 42 in a background of black and gray.
Figure 5.15 The Ishihara test evaluates color perception by assessing whether individuals can discern numbers that appear in a circle of dots of varying colors and sizes.

Some forms of color deficiency are rare. Seeing in grayscale (only shades of black and white) is extremely rare, and people who do so only have rods, which means they have very low visual acuity and cannot see very well. The most common X-linked inherited abnormality is red-green color blindness (Birch, 2012). Approximately 8% of males of European Caucasian descent, 5% of Asian males, 4% of African males, and less than 2% of indigenous American males, Australian males, and Polynesian males have red-green color deficiency (Birch, 2012). Comparatively, only about 0.4% of females of European Caucasian descent have red-green color deficiency (Birch, 2012).
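
The sex difference in these rates is what X-linked inheritance predicts: a male needs only one affected X chromosome, whereas a female needs two. The sketch below is a rough Hardy-Weinberg style estimate (illustrative only, using the 8% male rate from the text as the allele frequency); it shows that the expected female rate is roughly the square of the male rate.

    # Rough Hardy-Weinberg style estimate for an X-linked recessive trait.
    # Male prevalence ~= allele frequency q (males have one X chromosome);
    # female prevalence ~= q**2 (two affected X chromosomes are needed).
    q = 0.08  # male red-green deficiency rate cited in the text
    female_rate = q ** 2
    print(f"Expected female rate: {female_rate:.2%}")  # -> 0.64%, the same order as the observed ~0.4%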

The trichromatic theory of color vision is not the only theory—another major theory of color vision is known as the opponent-process theory. According to this theory, color is coded in opponent pairs: black-white, yellow-blue, and green-red. The basic idea is that some cells of the visual system are excited by one of the opponent colors and inhibited by the other. So, a cell that was excited by wavelengths associated with green would be inhibited by wavelengths associated with red, and vice versa. One of the implications of opponent processing is that we do not experience greenish-reds or yellowish-blues as colors. Another implication is that opponent processing produces negative afterimages. An afterimage describes the continuation of a visual sensation after the removal of the stimulus. For example, when you stare briefly at the sun and then look away from it, you may still perceive a spot of light although the stimulus (the sun) has been removed. When color is involved in the stimulus, the color pairings identified in the opponent-process theory lead to a negative afterimage. You can test this concept using the flag in Figure 5.16.

An illustration shows a green flag with thick, black-bordered yellow lines meeting slightly to the left of the center. A small white dot sits within the yellow space in the exact center of the flag.
Figure 5.16 Stare at the white dot for 30–60 seconds and then move your eyes to a blank piece of white paper. What do you see? This is known as a negative afterimage, and it provides empirical support for the opponent-process theory of color vision.

But these two theories—the trichromatic theory of color vision and the opponent-process theory—are not mutually exclusive. Research has shown that they just apply to different levels of the nervous system. For visual processing on the retina, the trichromatic theory applies: the three cone types are each most responsive to a different range of wavelengths, corresponding roughly to blue, green, and red. But once the signal moves past the retina on its way to the brain, the cells respond in a way consistent with opponent-process theory (Land, 1959; Kaiser, 1997).
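
This two-stage account can be made concrete with a toy model: treat each cone class as a bell-shaped sensitivity curve (peak wavelengths read off Figure 5.14; the Gaussian shape and width are illustrative assumptions), then recombine the cone outputs into opponent channels. The Python sketch below is only an illustration of the idea, not a model taken from the text.

    import math

    # Stage 1 (retina, trichromatic): three cone classes with different peaks.
    CONE_PEAKS_NM = {"blue": 455, "green": 535, "red": 580}  # peaks read off Figure 5.14
    WIDTH_NM = 50.0  # illustrative tuning width (an assumption)

    def cone_responses(wavelength_nm: float) -> dict:
        """Toy Gaussian sensitivity for each cone class."""
        return {name: math.exp(-((wavelength_nm - peak) / WIDTH_NM) ** 2)
                for name, peak in CONE_PEAKS_NM.items()}

    # Stage 2 (beyond the retina, opponent-process): recombine cone outputs
    # into opponent pairs, excited by one color and inhibited by the other.
    def opponent_channels(cones: dict) -> dict:
        return {
            "red_vs_green": cones["red"] - cones["green"],
            "blue_vs_yellow": cones["blue"] - (cones["red"] + cones["green"]) / 2,
        }

    # A long wavelength (650 nm) drives the red-green channel strongly positive.
    print(opponent_channels(cone_responses(650)))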

Depth Perception

Our ability to perceive spatial relationships in three-dimensional (3-D) space is known as depth perception. With depth perception, we can describe things as being in front, behind, above, below, or to the side of other things.

Our world is three-dimensional, so it makes sense that our mental representation of the world has three-dimensional properties. We use a variety of cues in a visual scene to establish our sense of depth. Some of these are binocular cues, which means that they rely on the use of both eyes. One example of a binocular depth cue is binocular disparity, the slightly different view of the world that each of our eyes receives. To experience this slightly different view, try this simple exercise: fully extend one arm, hold up a finger, and focus on that finger. Now, close your left eye without moving your head; then open your left eye and close your right eye, again without moving your head. You will notice that your finger seems to shift as you alternate between the two eyes because each eye has a slightly different view of your finger.
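
The finger demonstration can also be quantified with a little geometry: the angle between the two eyes’ lines of sight shrinks as the target gets farther away, so nearer objects produce larger disparities. Below is a minimal sketch, assuming a typical interocular distance of about 6.3 cm (a standard adult value, not given in the text).

    import math

    EYE_SEPARATION_CM = 6.3  # typical distance between the eyes (assumed)

    def vergence_angle_deg(distance_cm: float) -> float:
        """Angle between the two eyes' lines of sight to a target straight ahead."""
        return math.degrees(2 * math.atan((EYE_SEPARATION_CM / 2) / distance_cm))

    # Finger at arm's length vs. an object across the room:
    print(f"{vergence_angle_deg(60):.2f} deg")   # ~6.0 deg (large disparity -> near)
    print(f"{vergence_angle_deg(600):.2f} deg")  # ~0.6 deg (small disparity -> far)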

A 3-D movie works on the same principle: the special glasses you wear allow the two slightly different images projected onto the screen to be seen separately by your left and your right eye. As your brain processes these images, you have the illusion that the leaping animal or running person is coming right toward you.

Although we rely on binocular cues to experience depth in our 3-D world, we can also perceive depth in 2-D arrays. Think about all the paintings and photographs you have seen. Generally, you pick up on depth in these images even though the visual stimulus is 2-D. When we do this, we are relying on a number of monocular cues, or cues that require only one eye. If you think you can’t see depth with one eye, note that you don’t bump into things when using only one eye while walking—and, in fact, we have more monocular cues than binocular cues.

An example of a monocular cue would be what is known as linear perspective. Linear perspective refers to the fact that we perceive depth when we see two parallel lines that seem to converge in an image (Figure 5.17). Some other monocular depth cues are interposition, the partial overlap of objects, and the relative size and closeness of images to the horizon.

A photograph shows an empty road that continues toward the horizon.
Figure 5.17 We perceive depth in a two-dimensional figure like this one through the use of monocular cues such as linear perspective: the parallel edges of the road appear to converge as it narrows in the distance. (credit: Marc Dalmulder)

DIG DEEPER: Stereoblindness

Bruce Bridgeman was born with an extreme case of lazy eye that resulted in him being stereoblind, or unable to respond to binocular cues of depth. He relied heavily on monocular depth cues, but he never had a true appreciation of the 3-D nature of the world around him. This all changed one night in 2012 while Bruce was seeing a movie with his wife.

The movie the couple was going to see was shot in 3-D, and even though he thought it was a waste of money, Bruce paid for the 3-D glasses when he purchased his ticket. As soon as the film began, Bruce put on the glasses and experienced something completely new. For the first time in his life, he appreciated the true depth of the world around him. Remarkably, his ability to perceive depth persisted outside of the movie theater.

There are cells in the nervous system that respond to binocular depth cues. Normally, these cells require activation during early development in order to persist, so experts familiar with Bruce’s case (and others like his) assume that at some point in his development, Bruce must have experienced at least a fleeting moment of binocular vision. It was enough to ensure the survival of the cells in the visual system tuned to binocular cues. The mystery now is why it took Bruce nearly 70 years to have these cells activated (Peck, 2012).

 

Learning Objectives

By the end of this section, you will be able to:
  • Describe the basic anatomy and function of the auditory system
  • Explain how we encode and perceive pitch
  • Discuss how we localize sound

Our auditory system converts pressure waves into meaningful sounds. This translates into our ability to hear the sounds of nature, to appreciate the beauty of music, and to communicate with one another through spoken language. This section will provide an overview of the basic anatomy and function of the auditory system. It will include a discussion of how the sensory stimulus is translated into neural impulses, where in the brain that information is processed, how we perceive pitch, and how we know where sound is coming from.

Anatomy of the Auditory System

The ear can be separated into multiple sections. The outer ear includes the pinna, which is the visible part of the ear that protrudes from our heads, the auditory canal, and the tympanic membrane, or eardrum. The middle ear contains three tiny bones known as the ossicles, which are named the malleus (or hammer), incus (or anvil), and the stapes (or stirrup). The inner ear contains the semicircular canals, which are involved in balance and movement (the vestibular sense), and the cochlea. The cochlea is a fluid-filled, snail-shaped structure that contains the sensory receptor cells (hair cells) of the auditory system (Figure 5.18).

An illustration shows sound waves entering the “auditory canal” and traveling to the inner ear. The locations of the “pinna,” “tympanic membrane (eardrum)” are labeled, as well as parts of the inner ear: the “ossicles” and its subparts, the “malleus,” “incus,” and “stapes.” A callout leads to a close-up illustration of the inner ear that shows the locations of the “semicircular canals,” “utricle,” “oval window,” “saccule,” “cochlea,” and the “basilar membrane and hair cells.”
Figure 5.18 The ear is divided into the outer (pinna and tympanic membrane), middle (the three ossicles: malleus, incus, and stapes), and inner (cochlea and basilar membrane) divisions.

Sound waves travel along the auditory canal and strike the tympanic membrane, causing it to vibrate. This vibration results in the movement of the three ossicles. As the ossicles move, the stapes presses into a thin membrane of the cochlea known as the oval window. As the stapes presses into the oval window, the fluid inside the cochlea begins to move, which in turn stimulates hair cells, which are auditory receptor cells of the inner ear embedded in the basilar membrane. The basilar membrane is a thin strip of tissue within the cochlea.

The activation of hair cells is a mechanical process: the stimulation of the hair cell ultimately leads to activation of the cell. As hair cells become activated, they generate neural impulses that travel along the auditory nerve to the brain. Auditory information is shuttled to the inferior colliculus, the medial geniculate nucleus of the thalamus, and finally to the auditory cortex in the temporal lobe of the brain for processing. As in the visual system, there is also evidence suggesting that information about auditory recognition and localization is processed in parallel streams (Rauschecker & Tian, 2000; Renier et al., 2009).

Pitch Perception

Different frequencies of sound waves are associated with differences in our perception of the pitch of those sounds. Low-frequency sounds are lower-pitched, and high-frequency sounds are higher-pitched. How does the auditory system differentiate among various pitches?

Several theories have been proposed to account for pitch perception. We’ll discuss two of them here: temporal theory and place theory. The temporal theory of pitch perception asserts that frequency is coded by the activity level of a sensory neuron. This would mean that a given hair cell would fire action potentials related to the frequency of the sound wave. While this is a very intuitive explanation, we detect such a broad range of frequencies (20–20,000 Hz) that the frequency of action potentials fired by hair cells cannot account for the entire range. Because of properties related to sodium channels on the neuronal membrane that are involved in action potentials, there is a point at which a cell cannot fire any faster (Shamma, 2001).
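
A worked number shows why the firing rate falls short. Assuming an absolute refractory period of about one millisecond (a typical physiology figure, not stated in this chapter), a single neuron tops out near 1,000 action potentials per second, far below the 20,000 Hz upper end of human hearing.

    # Why temporal coding alone cannot span human hearing (20-20,000 Hz).
    REFRACTORY_PERIOD_S = 0.001  # ~1 ms absolute refractory period (assumed typical value)

    max_firing_rate_hz = 1 / REFRACTORY_PERIOD_S  # ~1,000 spikes per second
    highest_audible_hz = 20_000

    print(max_firing_rate_hz)                       # 1000.0
    print(highest_audible_hz / max_firing_rate_hz)  # a single cell falls short by ~20x

Groups of neurons firing in staggered volleys can extend temporal coding somewhat, but no single cell can track the highest audible frequencies.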

The place theory of pitch perception suggests that different portions of the basilar membrane are sensitive to sounds of different frequencies. More specifically, the base of the basilar membrane responds best to high frequencies and the tip of the basilar membrane responds best to low frequencies. Therefore, hair cells that are in the base portion would be labeled as high-pitch receptors, while those in the tip of the basilar membrane would be labeled as low-pitch receptors (Shamma, 2001).

In reality, both theories explain different aspects of pitch perception. At frequencies up to about 4000 Hz, it is clear that both the rate of action potentials and place contribute to our perception of pitch. However, much higher frequency sounds can only be encoded using place cues (Shamma, 2001).

Sound Localization

The ability to locate sound in our environments is an important part of hearing. Localizing sound could be considered similar to the way that we perceive depth in our visual fields. Like the monocular and binocular cues that provided information about depth, the auditory system uses both monaural (one-eared) and binaural (two-eared) cues to localize sound.

Each pinna interacts with incoming sound waves differently, depending on the sound’s source relative to our bodies. This interaction provides a monaural cue that is helpful in locating sounds that occur above or below and in front of or behind us. The sound waves received by your two ears from sounds that come from directly above, below, in front, or behind you would be identical; therefore, monaural cues are essential (Grothe, Pecka, & McAlpine, 2010).

Binaural cues, on the other hand, provide information on the location of a sound along a horizontal axis by relying on differences in patterns of vibration of the eardrum between our two ears. If a sound comes from an off-center location, it creates two types of binaural cues: interaural level differences and interaural timing differences. Interaural level difference refers to the fact that a sound coming from the right side of your body is more intense at your right ear than at your left ear because of the attenuation of the sound wave as it passes through your head. Interaural timing difference refers to the small difference in the time at which a given sound wave arrives at each ear (Figure 5.19). Certain brain areas monitor these differences to construct where along a horizontal axis a sound originates (Grothe et al., 2010).

A photograph of jets has an illustration of arced waves labeled “sound” coming from the jets. These extend to an outline of a human head, with arrows from the jets identifying the location of each ear.
Figure 5.19 Localizing sound involves the use of both monaural and binaural cues. (credit “plane”: modification of work by Max Pfandl)
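
A simple straight-line model makes the timing cue concrete: for a distant source at angle θ from straight ahead, the extra path to the far ear is roughly the interaural distance times sin θ. The sketch below assumes an ear-to-ear distance of about 20 cm and a speed of sound of 343 m/s; both numbers are illustrative, not from the text.

    import math

    EAR_SEPARATION_M = 0.20  # rough ear-to-ear distance (assumed)
    SPEED_OF_SOUND_M_S = 343.0

    def interaural_time_difference_s(azimuth_deg: float) -> float:
        """Extra travel time to the far ear for a distant source at the given
        azimuth (0 = straight ahead, 90 = directly to one side)."""
        return EAR_SEPARATION_M * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND_M_S

    print(f"{interaural_time_difference_s(0) * 1e6:.0f} us")   # 0 us: no cue straight ahead
    print(f"{interaural_time_difference_s(90) * 1e6:.0f} us")  # ~583 us at the side

Note that the model also echoes the earlier point about monaural cues: a source directly ahead (or directly behind) yields a timing difference of zero, so binaural cues alone cannot distinguish front from back.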

Hearing Loss

Deafness is the partial or complete inability to hear. Some people are born without hearing, which is known as congenital deafness. Other people suffer from conductive hearing loss, which is due to a problem delivering sound energy to the cochlea. Causes for conductive hearing loss include blockage of the ear canal, a hole in the tympanic membrane, problems with the ossicles, or fluid in the space between the eardrum and cochlea. Another group of people suffer from sensorineural hearing loss, which is the most common form of hearing loss. Sensorineural hearing loss can be caused by many factors, such as aging, head or acoustic trauma, infections and diseases (such as measles or mumps), medications, environmental effects such as noise exposure (noise-induced hearing loss, as shown in Figure 5.20), tumors, and toxins (such as those found in certain solvents and metals).

Photograph A shows Beyoncé performing at a concert. Photograph B shows a construction worker operating a jackhammer.
Figure 5.20 Environmental factors that can lead to sensorineural hearing loss include regular exposure to loud music or construction equipment. (a) Musical performers and (b) construction workers are at risk for this type of hearing loss. (credit a: modification of work by “GillyBerlin_Flickr”/Flickr; credit b: modification of work by Nick Allen)

Sound is transmitted mechanically from the eardrum through the ossicles to the oval window of the cochlea, and a failure at any point along this chain produces hearing loss. With conductive hearing loss, hearing problems are associated with a failure in the vibration of the eardrum and/or the movement of the ossicles. These problems are often dealt with through devices like hearing aids that amplify incoming sound waves to make the vibration of the eardrum and movement of the ossicles more likely to occur.

When the hearing problem is associated with a failure to transmit neural signals from the cochlea to the brain, it is called sensorineural hearing loss. One disease that results in sensorineural hearing loss is Ménière’s disease. Although not well understood, Ménière’s disease results in a degeneration of inner ear structures that can lead to hearing loss, tinnitus (constant ringing or buzzing), vertigo (a sense of spinning), and an increase in pressure within the inner ear (Semaan & Megerian, 2011). This kind of loss cannot be treated with hearing aids, but some individuals might be candidates for a cochlear implant as a treatment option. Cochlear implants are electronic devices that consist of a microphone, a speech processor, and an electrode array. The device receives incoming sound information and directly stimulates the auditory nerve to transmit information to the brain.

WHAT DO YOU THINK? Deaf Culture

In the United States and other places around the world, deaf people have their own language, schools, and customs. This is called deaf culture. In the United States, deaf individuals often communicate using American Sign Language (ASL); ASL has no verbal component and is based entirely on visual signs and gestures. One of the values of deaf culture is to continue traditions like using sign language rather than teaching deaf children to try to speak, read lips, or have cochlear implant surgery.

When a child is diagnosed as deaf, parents have difficult decisions to make. Should the child be enrolled in mainstream schools and taught to verbalize and read lips? Or should the child be sent to a school for deaf children to learn ASL and have significant exposure to deaf culture? Do you think there might be differences in the way that parents approach these decisions depending on whether or not they are also deaf?

Learning Objectives

By the end of this section, you will be able to:

  • Describe the basic functions of the chemical senses
  • Explain the basic functions of the somatosensory, nociceptive, and thermoceptive sensory systems
  • Describe the basic functions of the vestibular, proprioceptive, and kinesthetic sensory systems

Vision and hearing have received an incredible amount of attention from researchers over the years. While there is still much to be learned about how these sensory systems work, we have a much better understanding of them than our other sensory modalities. In this section, we will explore our chemical senses (taste and smell) and our body senses (touch, temperature, pain, balance, and body position).

The Chemical Senses

Taste (gustation) and smell (olfaction) are called chemical senses because both have sensory receptors that respond to molecules in the food we eat or in the air we breathe. There is a pronounced interaction between our chemical senses. For example, when we describe the flavor of a given food, we are really referring to both gustatory and olfactory properties of the food working in combination.

Taste (Gustation)

You have learned since elementary school that there are four basic groupings of taste: sweet, salty, sour, and bitter. Research demonstrates, however, that we have at least six taste groupings. Umami is our fifth taste. Umami is actually a Japanese word that roughly translates to yummy, and it is associated with a taste for monosodium glutamate (Kinnamon & Vandenbeuch, 2009). There is also a growing body of experimental evidence suggesting that we possess a taste for the fatty content of a given food (Mizushige, Inoue, & Fushiki, 2007).

Molecules from the food and beverages we consume dissolve in our saliva and interact with taste receptors on our tongue and in our mouth and throat. Taste buds are formed by groupings of taste receptor cells with hair-like extensions that protrude into the central pore of the taste bud (Figure 5.21). Taste buds have a life cycle of ten days to two weeks, so even destroying some by burning your tongue won’t have any long-term effect; they just grow right back. Taste molecules bind to receptors on these extensions and cause chemical changes within the sensory cell that result in neural impulses being transmitted to the brain via different nerves, depending on where the receptor is located. Taste information is transmitted to the medulla, thalamus, and limbic system, and to the gustatory cortex, which is tucked underneath the overlap between the frontal and temporal lobes (Maffei, Haley, & Fontanini, 2012; Roper, 2013).

Illustration A shows a taste bud in an opening of the tongue, with the “tongue surface,” “taste pore,” “taste receptor cell” and “nerves” labeled. Part B is a micrograph showing taste buds on a human tongue.
Figure 5.21 (a) Taste buds are composed of a number of individual taste receptor cells that transmit information to nerves. (b) This micrograph shows a close-up view of the tongue’s surface. (credit a: modification of work by Jonas Töle; credit b: scale-bar data from Matt Russell)

Smell (Olfaction)

Olfactory receptor cells are located in a mucous membrane at the top of the nose. Small hair-like extensions from these receptors serve as the sites for odor molecules dissolved in the mucus to interact with chemical receptors located on these extensions (Figure 5.22). Once an odor molecule has bound a given receptor, chemical changes within the cell result in signals being sent to the olfactory bulb: a bulb-like structure at the tip of the frontal lobe where the olfactory nerves begin. From the olfactory bulb, information is sent to regions of the limbic system and to the primary olfactory cortex, which is located very near the gustatory cortex (Lodovichi & Belluscio, 2012; Spors et al., 2013).

An illustration shows a side view of a human head and the location of the “nasal cavity,” “olfactory receptors,” and “olfactory bulb.”
Figure 5.22 Olfactory receptors are the hair-like parts that extend from the olfactory bulb into the mucous membrane of the nasal cavity.

There is tremendous variation in the sensitivity of the olfactory systems of different species. We often think of dogs as having olfactory systems far superior to our own, and indeed, dogs can do some remarkable things with their noses. There is some evidence to suggest that dogs can “smell” dangerous drops in blood glucose levels as well as cancerous tumors (Wells, 2010). Dogs’ extraordinary olfactory abilities may be due to the increased number of functional genes for olfactory receptors (between 800 and 1200), compared to the fewer than 400 observed in humans and other primates (Niimura & Nei, 2007).

Many species respond to chemical messages, known as pheromones, sent by another individual (Wysocki & Preti, 2004). Pheromonal communication often involves providing information about the reproductive status of a potential mate. So, for example, when a female rat is ready to mate, she secretes pheromonal signals that draw attention from nearby male rats. Pheromonal activation is actually an important component in eliciting sexual behavior in the male rat (Furlow, 1996, 2012; Purvis & Haynes, 1972; Sachs, 1997). There has also been a good deal of research (and controversy) about pheromones in humans (Comfort, 1971; Russell, 1976; Wolfgang-Kimball, 1992; Weller, 1998).

Touch, Thermoception, and Nociception

A number of receptors are distributed throughout the skin to respond to various touch-related stimuli (Figure 5.23). These receptors include Meissner’s corpuscles, Pacinian corpuscles, Merkel’s disks, and Ruffini corpuscles. Meissner’s corpuscles respond to pressure and lower frequency vibrations, and Pacinian corpuscles detect transient pressure and higher frequency vibrations. Merkel’s disks respond to light pressure, while Ruffini corpuscles detect stretch (Abraira & Ginty, 2013).

An illustration shows “skin surface” underneath which different receptors are identified: the “pacinian corpuscle,” “ruffini corpuscle,” “merkel’s disk,” and “meissner’s corpuscle.”
Figure 5.23 There are many types of sensory receptors located in the skin, each attuned to specific touch-related stimuli.

In addition to the receptors located in the skin, there are also a number of free nerve endings that serve sensory functions. These nerve endings respond to a variety of different types of touch-related stimuli and serve as sensory receptors for both thermoception (temperature perception) and nociception (a signal indicating potential harm and maybe pain) (Garland, 2012; Petho & Reeh, 2012; Spray, 1986). Sensory information collected from the receptors and free nerve endings travels up the spinal cord and is transmitted to regions of the medulla, thalamus, and ultimately to the somatosensory cortex, which is located in the postcentral gyrus of the parietal lobe.

Pain Perception

Pain is an unpleasant experience that involves both physical and psychological components. Feeling pain is quite adaptive because it makes us aware of an injury, and it motivates us to remove ourselves from the cause of that injury. In addition, pain also makes us less likely to suffer additional injury because we will be gentler with our injured body parts.

Generally speaking, pain can be considered to be neuropathic or inflammatory in nature. Pain that signals some type of tissue damage is known as inflammatory pain. In some situations, pain results from damage to neurons of either the peripheral or central nervous system. As a result, pain signals that are sent to the brain get exaggerated. This type of pain is known as neuropathic pain. Multiple treatment options for pain relief range from relaxation therapy to the use of analgesic medications to deep brain stimulation. The most effective treatment option for a given individual will depend on a number of considerations, including the severity and persistence of the pain and any medical/psychological conditions.

Some individuals are born without the ability to feel pain. This very rare genetic disorder is known as congenital insensitivity to pain (or congenital analgesia). While those with congenital analgesia can detect differences in temperature and pressure, they cannot experience pain. As a result, they often suffer significant injuries; young children with the disorder, for example, sustain serious mouth and tongue injuries from repeatedly biting themselves. Not surprisingly, individuals suffering from this disorder have much shorter life expectancies due to their injuries and secondary infections of injured sites (U.S. National Library of Medicine, 2013).

The Vestibular Sense, Proprioception, and Kinesthesia

The vestibular sense contributes to our ability to maintain balance and body posture. As Figure 5.24 shows, the major sensory organs (utricle, saccule, and the three semicircular canals) of this system are located next to the cochlea in the inner ear. The vestibular organs are fluid-filled and have hair cells, similar to the ones found in the auditory system, which respond to the movement of the head and gravitational forces. When these hair cells are stimulated, they send signals to the brain via the vestibular nerve. Although we may not be consciously aware of our vestibular system’s sensory information under normal circumstances, its importance is apparent when we experience motion sickness and/or dizziness related to infections of the inner ear (Khan & Chang, 2013).

An illustration of the vestibular system shows the locations of the three canals (“posterior canal,” “horizontal canal,” and “superior canal”) and the locations of the “utricle,” “oval window,” “cochlea,” “basilar membrane and hair cells,” “saccule,” and “vestibule.”
Figure 5.24 The major sensory organs of the vestibular system are located next to the cochlea in the inner ear. These include the utricle, saccule, and the three semicircular canals (posterior, superior, and horizontal).

In addition to maintaining balance, the vestibular system collects information critical for controlling movement and the reflexes that move various parts of our bodies to compensate for changes in body position. Therefore, both proprioception (perception of body position) and kinesthesia (perception of the body’s movement through space) interact with information provided by the vestibular system.

These sensory systems also gather information from receptors that respond to stretch and tension in muscles, joints, skin, and tendons (Lackner & DiZio, 2005; Proske, 2006; Proske & Gandevia, 2012). Proprioceptive and kinesthetic information travels to the brain via the spinal column. Several cortical regions in addition to the cerebellum receive information from and send information to the sensory organs of the proprioceptive and kinesthetic systems.

Learning Objectives

By the end of this section, you will be able to:

  • Explain the figure-ground relationship
  • Define Gestalt principles of grouping
  • Describe how perceptual set is influenced by an individual’s characteristics and mental state

In the early part of the 20th century, Max Wertheimer published a paper demonstrating that individuals perceived motion in rapidly flickering static images—an insight that came to him as he used a child’s toy tachistoscope. Wertheimer and his assistants Wolfgang Köhler and Kurt Koffka, who later became his partners, believed that perception involved more than simply combining sensory stimuli. This belief led to a new movement within the field of psychology known as Gestalt psychology. The word gestalt literally means form or pattern, but its use reflects the idea that the whole is different from the sum of its parts. In other words, the brain creates a perception that is more than simply the sum of available sensory inputs, and it does so in predictable ways. Gestalt psychologists translated these predictable ways into principles by which we organize sensory information. As a result, Gestalt psychology has been extremely influential in the area of sensation and perception (Rock & Palmer, 1990).

One Gestalt principle is the figure-ground relationship. According to this principle, we tend to segment our visual world into figure and ground. Figure is the object or person that is the focus of the visual field, while the ground is the background. As Figure 5.25 shows, our perception can vary tremendously, depending on what is perceived as figure and what is perceived as ground. Presumably, our ability to interpret sensory information depends on what we label as figure and what we label as ground in any particular case, although this assumption has been called into question (Peterson & Gibson, 1994; Vecera & O’Reilly, 1998).

An illustration shows two identical black face-like shapes that face towards one another, and one white vase-like shape that occupies all of the space in between them. Depending on which part of the illustration is focused on, either the black shapes or the white shape may appear to be the object of the illustration, leaving the other(s) perceived as negative space.
Figure 5.25 The concept of figure-ground relationship explains why this image can be perceived either as a vase or as a pair of faces.

Another Gestalt principle for organizing sensory stimuli into meaningful perception is proximity. This principle asserts that things that are close to one another tend to be grouped together, as Figure 5.26 illustrates.

Illustration A shows thirty-six dots in six evenly-spaced rows and columns. Illustration B shows thirty-six dots in six evenly-spaced rows but with the columns separated into three sets of two columns.
Figure 5.26 The Gestalt principle of proximity suggests that you see (a) one block of dots on the left side and (b) three columns on the right side.

How we read something provides another illustration of the proximity concept. For example, we read this sentence like this, notl iket hiso rt hat. We group the letters of a given word together because there are no spaces between the letters, and we perceive words because there are spaces between each word. Here are some more examples: Cany oum akes enseo ft hiss entence? What doth es e wor dsmea n?

We might also use the principle of similarity to group things in our visual fields. According to this principle, things that are alike tend to be grouped together (Figure 5.27). For example, when watching a football game, we tend to group individuals based on the colors of their uniforms. When watching an offensive drive, we can get a sense of the two teams simply by grouping along this dimension.

An illustration shows six rows of six dots each. The rows of dots alternate between blue and white colored dots.
Figure 5.27 When looking at this array of dots, we likely perceive alternating rows of colors. We are grouping these dots according to the principle of similarity.

Two additional Gestalt principles are the law of continuity (or good continuation) and closure. The law of continuity suggests that we are more likely to perceive continuous, smooth flowing lines rather than jagged, broken lines (Figure 5.28). The principle of closure states that we organize our perceptions into complete objects rather than as a series of parts (Figure 5.29).

An illustration shows two lines of diagonal dots that cross in the middle in the general shape of an “X.”
Figure 5.28 Good continuation would suggest that we are more likely to perceive this as two overlapping lines, rather than four lines meeting in the center.
An illustration shows fragmented lines that would form a circle if they were connected. Another illustration shows fragmented lines that would form a rectangle if they were connected.
Figure 5.29 Closure suggests that we will perceive a complete circle and rectangle rather than a series of segments.

According to Gestalt theorists, pattern perception, or our ability to discriminate among different figures and shapes, occurs by following the principles described above. You probably feel fairly certain that your perception accurately matches the real world, but this is not always the case. Our perceptions are based on perceptual hypotheses: educated guesses that we make while interpreting sensory information. These hypotheses are informed by a number of factors, including our personalities, experiences, and expectations. Together, these factors form our perceptual set: a predisposition to perceive stimuli in a particular way. For instance, research has demonstrated that those who are given verbal priming produce a biased interpretation of complex ambiguous figures (Goolkasian & Woodbury, 2010).

Case Study in Sensation and Perception

In 2011, the New York Times published a feature story on Krista and Tatiana Hogan, Canadian twin girls. These particular twins are unique because Krista and Tatiana are conjoined twins, connected at the head. There is evidence that the two girls are connected in a part of the brain called the thalamus, which is a major sensory relay center. Most incoming sensory information is sent through the thalamus before reaching higher regions of the cerebral cortex for processing.

The implications of this potential connection mean that it might be possible for one twin to experience the sensations of the other twin. For instance, if Krista is watching a particularly funny television program, Tatiana might smile or laugh even if she is not watching the program. This particular possibility has piqued the interest of many neuroscientists who seek to understand how the brain uses sensory information.

These twins represent an enormous resource in the study of the brain, and since their condition is very rare, it is likely that as long as their family agrees, scientists will follow these girls very closely throughout their lives to gain as much information as possible (Dominus, 2011).

Over time, it has become clear that while Krista and Tatiana share some sensory experiences and motor control, they remain two distinct individuals, which provides tremendous insight for researchers interested in the mind and the brain (Egnor, 2017).

In observational research, scientists are conducting a clinical or case study when they focus on one person or just a few individuals. Indeed, some scientists spend their entire careers studying just 10–20 individuals. Why would they do this? Obviously, when they focus their attention on a very small number of people, they can gain a tremendous amount of insight into those cases. The richness of information that is collected in clinical or case studies is unmatched by any other single research method. This allows the researcher to have a very deep understanding of the individuals and the particular phenomenon being studied.

If clinical or case studies provide so much information, why are they not more frequent among researchers? As it turns out, the major benefit of this particular approach is also a weakness. As mentioned earlier, this approach is often used when studying individuals who are interesting to researchers because they have a rare characteristic. Therefore, the individuals who serve as the focus of case studies are not like most other people. If scientists ultimately want to explain all behavior, focusing attention on such a special group of people can make it difficult to generalize any observations to the larger population as a whole. Generalizing refers to the ability to apply the findings of a particular research project to larger segments of society. Again, case studies provide enormous amounts of information, but since the cases are so specific, the potential to apply what’s learned to the average person may be very limited.

Additional Supplemental Resources

Websites

  • International Association for the Study of Pain 
    • The International Association for the Study of Pain brings together scientists, clinicians, health-care providers, and policymakers to stimulate and support the study of pain and translate that knowledge into improved pain relief worldwide.

Videos

  • Awareness Test 
    • Watch this video very closely; it is a great example of change blindness and selective attention. Closed captioning available.
  • Ambiguous Vase 
    • Description of a famous ambiguous figure. Closed captioning available.
  • The “Door” Study 
    • Video footage of classic change blindness research. Closed captioning available.
  • Young Woman Illusions
    • Animated demonstration of a famous ambiguous figure.
  • Ted-Ed: What is color? 
    • How does color work? In this Ted-Ed video, you’ll learn about the properties of color, and how frequency plays a role in our perception of color.  A variety of discussion and assessment questions are included with the video (free registration is required to access the questions). Closed captioning available.
  • Ted-Ed: How optical illusions trick your brain 
    • Watch this Ted-Ed video to learn more about the ways in which our eyes and brain are tricked by optical illusions.  What does this tell us about the inner workings of our brains?  A variety of discussion and assessment questions are included with the video (free registration is required to access the questions). Closed captioning available.
  • How the Ear Works 
    • How does the ear work? In this short video clip, you’ll learn about the inner workings of the human ear. Closed captioning available.
  • The mysterious science of pain – Joshua W. Pate
    • Explore the biological and psychological factors that influence how we experience pain and how our nervous system reacts to harmful stimuli.  Joshua W. Pate investigates the experience of pain.
  • Crash Course Video #5 – Sensation and Perception 
    • This video on sensation and perception covers topics including absolute threshold, Weber’s Law, signal detection theory, and vision. Closed captioning available.
  • Crash Course Video #6 – Homunculus 
    • This video on the homunculus covers the senses of hearing, taste, smell, and touch. Closed captioning available.
  • Crash Course Video #7 – Perceiving is Believing 
    • This video on perceiving includes information on form perception, depth perception, and monocular cues. Closed captioning available.

 

Access for free at https://openstax.org/books/psychology-2e/pages/1-introduction

Learning

5

A photograph shows a baby turtle moving across sand toward the ocean. A photograph shows a young child standing on a surfboard in a small wave.
Figure 6.1 Loggerhead sea turtle hatchlings are born knowing how to find the ocean and how to swim. Unlike the sea turtle, humans must learn how to swim (and surf). (credit “turtle”: modification of work by Becky Skiba, USFWS; credit “surfer”: modification of work by Mike Baird)

The summer sun shines brightly on a deserted stretch of beach. Suddenly, a tiny grey head emerges from the sand, then another and another. Soon the beach is teeming with loggerhead sea turtle hatchlings (Figure 6.1). Although only minutes old, the hatchlings know exactly what to do. Their flippers are not very efficient for moving across the hot sand, yet they continue onward, instinctively. Some are quickly snapped up by gulls circling overhead and others become lunch for hungry ghost crabs that dart out of their holes. Despite these dangers, the hatchlings are driven to leave the safety of their nest and find the ocean.

Not far down this same beach, Ben and his son, Julian, paddle out into the ocean on surfboards. A wave approaches. Julian crouches on his board, then jumps up and rides the wave for a few seconds before losing his balance. He emerges from the water in time to watch his father ride the face of the wave.

Unlike baby sea turtles, which know how to find the ocean and swim with no help from their parents, we are not born knowing how to swim (or surf). Yet we humans pride ourselves on our ability to learn. In fact, over thousands of years and across cultures, we have created institutions devoted entirely to learning. But have you ever asked yourself how exactly it is that we learn? What processes are at work as we come to know what we know? This chapter focuses on the primary ways in which learning occurs.

Learning Objectives

By the end of this section, you will be able to:

  • Explain how learned behaviors are different from instincts and reflexes
  • Define learning
  • Recognize and define three basic forms of learning—classical conditioning, operant conditioning, and observational learning

Birds build nests and migrate as winter approaches. Infants suckle at their mother’s breast. Dogs shake water off their wet fur. Salmon swim upstream to spawn, and spiders spin intricate webs. What do these seemingly unrelated behaviors have in common? They all are unlearned behaviors. Both instincts and reflexes are innate (unlearned) behaviors that organisms are born with. Reflexes are a motor or neural reaction to a specific stimulus in the environment. They tend to be simpler than instincts, involve the activity of specific body parts and systems (e.g., the knee-jerk reflex and the contraction of the pupil in bright light), and involve more primitive centers of the central nervous system (e.g., the spinal cord and the medulla). In contrast, instincts are innate behaviors that are triggered by a broader range of events, such as maturation and the change of seasons. They are more complex patterns of behavior, involve the movement of the organism as a whole (e.g., sexual activity and migration), and involve higher brain centers.

Both reflexes and instincts help an organism adapt to its environment and do not have to be learned. For example, every healthy human baby has a sucking reflex, present at birth. Babies are born knowing how to suck on a nipple, whether artificial (from a bottle) or human. Nobody teaches the baby to suck, just as no one teaches a sea turtle hatchling to move toward the ocean. Learning, like reflexes and instincts, allows an organism to adapt to its environment. But unlike instincts and reflexes, learned behaviors involve change and experience: learning is a relatively permanent change in behavior or knowledge that results from experience. In contrast to the innate behaviors discussed above, learning involves acquiring knowledge and skills through experience. Looking back at our surfing scenario, Julian will have to spend much more time training with his surfboard before he learns how to ride the waves like his father.

Learning to surf, as well as any complex learning process (e.g., learning about the discipline of psychology), involves a complex interaction of conscious and unconscious processes. Learning has traditionally been studied in terms of its simplest components—the associations our minds automatically make between events. Our minds have a natural tendency to connect events that occur closely together or in sequence. Associative learning occurs when an organism makes connections between stimuli or events that occur together in the environment. You will see that associative learning is central to all three basic learning processes discussed in this chapter: classical conditioning tends to involve unconscious processes, operant conditioning tends to involve conscious processes, and observational learning adds social and cognitive layers to all the basic associative processes, both conscious and unconscious. These learning processes will be discussed in detail later in the chapter, but it is helpful to have a brief overview of each as you begin to explore how learning is understood from a psychological perspective.

In classical conditioning, also known as Pavlovian conditioning, organisms learn to associate events—or stimuli—that repeatedly happen together. We experience this process throughout our daily lives. For example, you might see a flash of lightning in the sky during a storm and then hear a loud boom of thunder. The sound of the thunder naturally makes you jump (loud noises have that effect by reflex). Because lightning reliably predicts the impending boom of thunder, you may associate the two and jump when you see lightning. Psychological researchers study this associative process by focusing on what can be seen and measured—behaviors. Researchers ask: if one stimulus triggers a reflex, can we train a different stimulus to trigger that same reflex?

In operant conditioning, organisms learn, again, to associate events—a behavior and its consequence (reinforcement or punishment). A pleasant consequence encourages more of that behavior in the future, whereas a punishment deters the behavior. Imagine you are teaching your dog, Hodor, to sit. You tell Hodor to sit and give him a treat when he does. After repeated experiences, Hodor begins to associate the act of sitting with receiving a treat. He learns that the consequence of sitting is that he gets a doggie biscuit (Figure 6.2). Conversely, if a dog is punished when exhibiting a behavior, it becomes conditioned to avoid that behavior (e.g., receiving a small shock when crossing the boundary of an invisible electric fence).

A photograph shows a dog standing at attention and smelling a treat in a person’s hand.
Figure 6.2 In operant conditioning, a response is associated with a consequence. This dog has learned that certain behaviors result in receiving a treat. (credit: Crystal Rolfe)
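
The core idea, that reinforcement makes a behavior more likely and punishment makes it less likely, can be sketched as a toy learning rule. Nothing in the chapter specifies these equations; the code below is purely illustrative.

    def update_response_probability(p: float, reinforced: bool, rate: float = 0.2) -> float:
        """Toy 'law of effect': reinforcement pushes the response probability
        toward 1, punishment pushes it toward 0. Purely illustrative."""
        target = 1.0 if reinforced else 0.0
        return p + rate * (target - p)

    # Hodor starts out sitting on command only occasionally;
    # each treat makes the behavior more likely.
    p = 0.1
    for _ in range(10):
        p = update_response_probability(p, reinforced=True)
    print(f"P(sit) after 10 rewarded trials: {p:.2f}")  # ~0.90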

Observational learning extends the effective range of both classical and operant conditioning. In contrast to classical and operant conditioning, in which learning occurs only through direct experience, observational learning is the process of watching others and then imitating what they do. A lot of learning among humans and other animals comes from observational learning. To get an idea of the extra effective range that observational learning brings, consider Ben and his son Julian from the introduction. How might observation help Julian learn to surf, as opposed to learning by trial and error alone? By watching his father, he can imitate the moves that bring success and avoid the moves that lead to failure. Can you think of something you have learned how to do after watching someone else?

All of the approaches covered in this chapter are part of a particular tradition in psychology, called behaviorism, which we discuss in the next section. However, these approaches do not represent the entire study of learning. Separate traditions of learning have taken shape within different fields of psychology, such as memory and cognition, so you will find that other chapters will round out your understanding of the topic. Over time these traditions tend to converge. For example, in this chapter, you will see how cognition has come to play a larger role in behaviorism, whose more extreme adherents once insisted that behaviors are triggered by the environment with no intervening thought.

Learning Objectives

By the end of this section, you will be able to:

  • Explain how classical conditioning occurs
  • Summarize the processes of acquisition, extinction, spontaneous recovery, generalization, and discrimination

Does the name Ivan Pavlov ring a bell? Even if you are new to the study of psychology, chances are that you have heard of Pavlov and his famous dogs.

Pavlov (1849–1936), a Russian scientist, performed extensive research on dogs and is best known for his experiments in classical conditioning (Figure 6.3). As we discussed briefly in the previous section, classical conditioning is a process by which we learn to associate stimuli and, consequently, to anticipate events.

A portrait shows Ivan Pavlov.
Figure 6.3 Ivan Pavlov’s research on the digestive system of dogs unexpectedly led to his discovery of the learning process now known as classical conditioning.

Pavlov came to his conclusions about how learning occurs completely by accident. Pavlov was a physiologist, not a psychologist. Physiologists study the life processes of organisms, from the molecular level to the level of cells, organ systems, and entire organisms. Pavlov’s area of interest was the digestive system (Hunt, 2007). In his studies with dogs, Pavlov measured the amount of saliva produced in response to various foods. Over time, Pavlov (1927) observed that the dogs began to salivate not only at the taste of food, but also at the sight of food, at the sight of an empty food bowl, and even at the sound of the laboratory assistants’ footsteps. Salivating to food in the mouth is reflexive, so no learning is involved. However, dogs don’t naturally salivate at the sight of an empty bowl or the sound of footsteps.

These unusual responses intrigued Pavlov, and he wondered what accounted for what he called the dogs’ “psychic secretions” (Pavlov, 1927). To explore this phenomenon in an objective manner, Pavlov designed a series of carefully controlled experiments to see which stimuli would cause the dogs to salivate. He was able to train the dogs to salivate in response to stimuli that clearly had nothing to do with food, such as the sound of a bell, a light, and a touch on the leg. Through his experiments, Pavlov realized that an organism has two types of responses to its environment: (1) unconditioned (unlearned) responses, or reflexes, and (2) conditioned (learned) responses.

In Pavlov’s experiments, the dogs salivated each time meat powder was presented to them. The meat powder in this situation was an unconditioned stimulus (UCS): a stimulus that elicits a reflexive response in an organism. The dogs’ salivation was an unconditioned response (UCR): a natural (unlearned) reaction to a given stimulus. Before conditioning, think of the dogs’ stimulus and response like this:

Meat powder (UCS)→Salivation (UCR)

In classical conditioning, a neutral stimulus is presented immediately before an unconditioned stimulus. Pavlov would sound a tone (like ringing a bell) and then give the dogs the meat powder (Figure 6.4). The tone was the neutral stimulus (NS), which is a stimulus that does not naturally elicit a response. Prior to conditioning, the dogs did not salivate when they just heard the tone because the tone had no association for the dogs.

 

Tone (NS) + Meat Powder (UCS)→Salivation (UCR)

When Pavlov paired the tone with the meat powder over and over again, the previously neutral stimulus (the tone) also began to elicit salivation from the dogs. Thus, the neutral stimulus became the conditioned stimulus (CS), which is a stimulus that elicits a response after repeatedly being paired with an unconditioned stimulus. Eventually, the dogs began to salivate to the tone alone, just as they previously had salivated at the sound of the assistants’ footsteps. The behavior caused by the conditioned stimulus is called the conditioned response (CR). In the case of Pavlov’s dogs, they had learned to associate the tone (CS) with being fed, and they began to salivate (CR) in anticipation of food.

 

Tone (CS)→Salivation (CR)

 

Two illustrations are labeled “before conditioning” and show a dog salivating over a dish of food, and a dog not salivating while a bell is rung. An illustration labeled “during conditioning” shows a dog salivating over a bowl of food while a bell is rung. An illustration labeled “after conditioning” shows a dog salivating while a bell is rung.
Figure 6.4 Before conditioning, an unconditioned stimulus (food) produces an unconditioned response (salivation), and a neutral stimulus (bell) does not produce a response. During conditioning, the unconditioned stimulus (food) is presented repeatedly just after the presentation of the neutral stimulus (bell). After conditioning, the neutral stimulus alone produces a conditioned response (salivation), thus becoming a conditioned stimulus.

Real-World Application of Classical Conditioning

How does classical conditioning work in the real world? Consider the case of Moisha, who was diagnosed with cancer. When she received her first chemotherapy treatment, she vomited shortly after the chemicals were injected. In fact, on every trip to the doctor for chemotherapy treatment, she vomited shortly after the drugs were injected. Moisha’s treatment was a success and her cancer went into remission. Now, when she visits her oncologist’s office every 6 months for a check-up, she becomes nauseous. In this case, the chemotherapy drugs are the unconditioned stimulus (UCS), vomiting is the unconditioned response (UCR), the doctor’s office is the conditioned stimulus (CS) after being paired with the UCS, and nausea is the conditioned response (CR).

Let’s assume that the chemotherapy drugs that Moisha takes are given through a syringe injection. After entering the doctor’s office, Moisha sees a syringe and then gets her medication. In addition to the doctor’s office, Moisha will learn to associate the syringe with the medication and will respond to syringes with nausea. This is an example of higher-order (or second-order) conditioning, in which the conditioned stimulus (the doctor’s office) serves to condition another stimulus (the syringe). It is hard to achieve anything above second-order conditioning. For example, if someone rang a bell every time Moisha received a syringe injection of chemotherapy drugs in the doctor’s office, Moisha would likely never get sick in response to the bell.

Consider another example of classical conditioning. Let’s say you have a cat named Tiger, who is quite spoiled. You keep her food in a separate cabinet, and you also have a special electric can opener that you use only to open cans of cat food. For every meal, Tiger hears the distinctive sound of the electric can opener (“zzhzhz”) and then gets her food. Tiger quickly learns that when she hears “zzhzhz” she is about to get fed. What do you think Tiger does when she hears the electric can opener? She will likely get excited and run to where you are preparing her food. This is an example of classical conditioning. In this case, what are the UCS, CS, UCR, and CR?

What if the cabinet holding Tiger’s food becomes squeaky? In that case, Tiger hears “squeak” (the cabinet), “zzhzhz” (the electric can opener), and then she gets her food. Tiger will learn to get excited when she hears the “squeak” of the cabinet. Pairing a new neutral stimulus (“squeak”) with the conditioned stimulus (“zzhzhz”) is called higher-order conditioning, or second-order conditioning. This means you are using the conditioned stimulus of the can opener to condition another stimulus: the squeaky cabinet (Figure 6.5). It is hard to achieve anything above second-order conditioning. For example, if you ring a bell, open the cabinet (“squeak”), use the can opener (“zzhzhz”), and then feed Tiger, Tiger will likely never get excited when hearing the bell alone.

A diagram is labeled “Higher-Order / Second-Order Conditioning” and has three rows. The first row shows an electric can opener labeled “conditioned stimulus” followed by a plus sign and then a dish of food labeled “unconditioned stimulus,” followed by an equal sign and a picture of a salivating cat labeled “unconditioned response.” The second row shows a squeaky cabinet door labeled “second-order stimulus” followed by a plus sign and then an electric can opener labeled “conditioned stimulus,” followed by an equal sign and a picture of a salivating cat labeled “conditioned response.” The third row shows a squeaky cabinet door labeled “second-order stimulus” followed by an equal sign and a picture of a salivating cat labeled “conditioned response.”
Figure 6.5 In higher-order conditioning, an established conditioned stimulus is paired with a new neutral stimulus (the second-order stimulus), so that eventually the new stimulus also elicits the conditioned response, without the initial conditioned stimulus being presented.

Now that you know how classical conditioning works and have seen several examples, let’s take a look at some of the general processes involved. In classical conditioning, the initial period of learning is known as acquisition, when an organism learns to connect a neutral stimulus and an unconditioned stimulus. During acquisition, the neutral stimulus begins to elicit the conditioned response, and eventually, the neutral stimulus becomes a conditioned stimulus capable of eliciting the conditioned response by itself. Timing is important for conditioning to occur. Typically, there should only be a brief interval between the presentation of the conditioned stimulus and the unconditioned stimulus. Depending on what is being conditioned, sometimes this interval is as little as five seconds (Chance, 2009). However, with other types of conditioning, the interval can be up to several hours.

Taste aversion is a type of conditioning in which an interval of several hours may pass between the conditioned stimulus (something ingested) and the unconditioned stimulus (nausea or illness). Here’s how it works. Between classes, you and a friend grab a quick lunch from a food cart on campus. You share a dish of chicken curry and head off to your next class. A few hours later, you feel nauseous and become ill. Although your friend is fine and you determine that you have intestinal flu (the food is not the culprit), you’ve developed a taste aversion; the next time you are at a restaurant and someone orders curry, you immediately feel ill. While the chicken dish is not what made you sick, you are experiencing taste aversion: you’ve been conditioned to be averse to a food after a single, bad experience.

How does this occur—conditioning based on a single instance and involving an extended time lapse between the event and the negative stimulus? Research into taste aversion suggests that this response may be an evolutionary adaptation designed to help organisms quickly learn to avoid harmful foods (Garcia & Rusiniak, 1980; Garcia & Koelling, 1966). Not only may this contribute to species survival via natural selection, but it may also help us develop strategies for challenges such as helping cancer patients through the nausea induced by certain treatments (Holmes, 1993; Jacobsen et al., 1993; Hutton, Baracos, & Wismer, 2007; Skolin et al., 2006). Garcia and Koelling (1966) showed not only that taste aversions could be conditioned, but also that there were biological constraints to learning. In their study, separate groups of rats were conditioned to associate either a flavor with illness or lights and sounds with illness. Results showed that all rats exposed to flavor-illness pairings learned to avoid the flavor, but none of the rats exposed to lights and sounds with illness learned to avoid lights or sounds. This added evidence to the idea that classical conditioning could contribute to species survival by helping organisms learn to avoid stimuli that posed real dangers to health and welfare.

Robert Rescorla demonstrated how powerfully an organism can learn to predict the UCS from the CS. Take, for example, the following two situations. Ari’s dad always has dinner on the table every day at 6:00. Soraya’s mom switches it up so that some days they eat dinner at 6:00, some days they eat at 5:00, and other days they eat at 7:00. For Ari, 6:00 reliably and consistently predicts dinner, so Ari will likely start feeling hungry every day right before 6:00, even if he’s had a late snack. Soraya, on the other hand, will be less likely to associate 6:00 with dinner, since 6:00 does not always predict that dinner is coming. Rescorla, along with his colleague at Yale University, Alan Wagner, developed a mathematical formula that could be used to calculate the probability that an association would be learned, given the ability of a conditioned stimulus to predict the occurrence of an unconditioned stimulus and other factors; today this is known as the Rescorla-Wagner model (Rescorla & Wagner, 1972).
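
For a single conditioned stimulus, the model’s trial-by-trial update rule is usually written in the following standard form:

    \Delta V = \alpha \beta (\lambda - V)

Here V is the current associative strength of the conditioned stimulus, alpha and beta are learning-rate parameters reflecting the salience of the CS and the UCS, and lambda is the maximum associative strength the UCS can support. The difference (lambda - V) acts as a measure of surprise: when the UCS is poorly predicted, the difference is large and learning is fast; as prediction improves, the difference shrinks and learning slows.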

Once we have established the connection between the unconditioned stimulus and the conditioned stimulus, how do we break that connection and get the dog, cat, or child to stop responding? In Tiger’s case, imagine what would happen if you stopped using the electric can opener for her food and began to use it only for human food. Now, Tiger would hear the can opener, but she would not get food. In classical conditioning terms, you would be giving the conditioned stimulus, but not the unconditioned stimulus. Pavlov explored this scenario in his experiments with dogs: sounding the tone without giving the dogs the meat powder. Soon the dogs stopped responding to the tone. Extinction is the decrease in the conditioned response when the unconditioned stimulus is no longer presented with the conditioned stimulus. When presented with the conditioned stimulus alone, the dog, cat, or another organism would show a weaker and weaker response, and finally no response. In classical conditioning terms, there is a gradual weakening and disappearance of the conditioned response.

What happens when learning is not used for a while—when what was learned lies dormant? As we just discussed, Pavlov found that when he repeatedly presented the bell (conditioned stimulus) without the meat powder (unconditioned stimulus), extinction occurred; the dogs stopped salivating to the bell. However, after a couple of hours of resting from this extinction training, the dogs again began to salivate when Pavlov rang the bell. What do you think would happen with Tiger’s behavior if your electric can opener broke, and you did not use it for several months? When you finally got it fixed and started using it to open Tiger’s food again, Tiger would remember the association between the can opener and her food—she would get excited and run to the kitchen when she heard the sound. The behavior of Pavlov’s dogs and Tiger illustrates a concept Pavlov called spontaneous recovery: the return of a previously extinguished conditioned response following a rest period (Figure 6.7).

A chart has an x-axis labeled “time” and a y-axis labeled “strength of CR;” there are four columns of graphed data. The first column is labeled “acquisition (CS + UCS) and the line rises steeply from the bottom to the top. The second column is labeled “Extinction (CS alone)” and the line drops rapidly from the top to the bottom. The third column is labeled “Pause” and has no line. The fourth column has a line that begins midway and drops sharply to the bottom. At the point where the line begins, it is labeled “Spontaneous recovery of CR”; the halfway point on the line is labeled “Extinction (CS alone).”
Figure 6.7 This is the curve of acquisition, extinction, and spontaneous recovery. The rising curve shows the conditioned response quickly getting stronger through the repeated pairing of the conditioned stimulus and the unconditioned stimulus (acquisition). Then the curve decreases, which shows how the conditioned response weakens when only the conditioned stimulus is presented (extinction). After a break or pause from conditioning, the conditioned response reappears (spontaneous recovery).

Of course, these processes also apply to humans. For example, let’s say that every day when you walk to campus, an ice cream truck passes your route. Day after day, you hear the truck’s music (neutral stimulus), so you finally stop and purchase a chocolate ice cream bar. You take a bite (unconditioned stimulus) and then your mouth waters (unconditioned response). This initial period of learning is known as acquisition, when you begin to connect the neutral stimulus (the sound of the truck) and the unconditioned stimulus (the taste of the chocolate ice cream in your mouth). During acquisition, the conditioned response gets stronger and stronger through repeated pairings of the conditioned stimulus and unconditioned stimulus. Several days (and ice cream bars) later, you notice that your mouth begins to water (conditioned response) as soon as you hear the truck’s musical jingle—even before you bite into the ice cream bar.

Then one day you head down the street. You hear the truck’s music (conditioned stimulus), and your mouth waters (conditioned response). However, when you get to the truck, you discover that they are all out of ice cream. You leave disappointed. The next few days you pass by the truck and hear the music, but don’t stop to get an ice cream bar because you’re running late for class. You begin to salivate less and less when you hear the music, until by the end of the week your mouth no longer waters when you hear the tune. This illustrates extinction. The conditioned response weakens when only the conditioned stimulus (the sound of the truck) is presented, without being followed by the unconditioned stimulus (chocolate ice cream in the mouth).

Then the weekend comes. You don’t have to go to class, so you don’t pass the truck. Monday morning arrives and you take your usual route to campus. You round the corner and hear the truck again. What do you think happens? Your mouth begins to water again. Why? After a break from conditioning, the conditioned response reappears, which indicates spontaneous recovery.
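
The rising and falling curves in Figure 6.7 fall out directly from the Rescorla-Wagner update rule shown earlier. The following minimal Python sketch applies that rule with illustrative parameter values (the learning rate of 0.3 and asymptote of 1.0 are arbitrary choices, not estimates from any experiment):

    # Sketch of acquisition and extinction via the Rescorla-Wagner rule.
    # Parameter values are illustrative, not fitted to data.
    def update(v, rate, lam):
        """One trial: move associative strength v toward the asymptote lam."""
        return v + rate * (lam - v)

    v = 0.0      # associative strength of the CS (tone, truck jingle, ...)
    rate = 0.3   # combined learning rate (alpha * beta)

    # Acquisition: CS paired with UCS, so the asymptote lam = 1.0.
    for trial in range(1, 11):
        v = update(v, rate, lam=1.0)
        print(f"acquisition trial {trial}: V = {v:.2f}")

    # Extinction: CS presented alone, so lam = 0.0 and V decays.
    for trial in range(1, 11):
        v = update(v, rate, lam=0.0)
        print(f"extinction trial {trial}: V = {v:.2f}")

Notice that this simple rule cannot produce spontaneous recovery: once V has decayed, it stays low. That gap is one reason psychologists generally treat extinction as new learning that suppresses the original association, rather than as erasure of it.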

Acquisition and extinction involve the strengthening and weakening, respectively, of a learned association. Two other learning processes—stimulus discrimination and stimulus generalization—are involved in determining which stimuli will trigger learned responses. Animals (including humans) need to distinguish between stimuli—for example, between sounds that predict a threatening event and sounds that do not—so that they can respond appropriately (such as running away if the sound is threatening). When an organism learns to respond differently to various stimuli that are similar, it is called stimulus discrimination. In classical conditioning terms, the organism demonstrates the conditioned response only to the conditioned stimulus. Pavlov’s dogs discriminated between the basic tone that sounded before they were fed and other tones (e.g., the doorbell) because the other sounds did not predict the arrival of food. Similarly, Tiger, the cat, discriminated between the sound of the can opener and the sound of the electric mixer. When the electric mixer is going, Tiger is not about to be fed, so she does not come running to the kitchen looking for food. In our other example, Moisha, the cancer patient, discriminated between oncologists and other types of doctors. She learned not to feel ill when visiting doctors for other types of appointments, such as her annual physical.

On the other hand, when an organism demonstrates the conditioned response to stimuli that are similar to the conditioned stimulus, it is called stimulus generalization, the opposite of stimulus discrimination. The more similar a stimulus is to the conditioned stimulus, the more likely the organism is to give the conditioned response. For instance, if the electric mixer sounds very similar to the electric can opener, Tiger may come running after hearing its sound. But if you do not feed her following the electric mixer sound, and you continue to feed her consistently after the electric can opener sound, she will quickly learn to discriminate between the two sounds (provided they are sufficiently dissimilar that she can tell them apart). In our other example, Moisha continued to feel ill whenever visiting other oncologists or other doctors in the same building as her oncologist.
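
The claim that responding grows more likely as a stimulus grows more similar to the conditioned stimulus can also be pictured quantitatively. A common idealization (not something presented in this chapter) is a generalization gradient in which the conditioned response falls off smoothly with dissimilarity; the Gaussian shape and parameter values in this Python sketch are purely illustrative:

    import math

    def conditioned_response(strength, distance, width=1.0):
        """Toy generalization gradient: the response declines as the test
        stimulus grows less similar to the trained CS. `distance` is 0 for
        the CS itself; a smaller `width` models sharper stimulus
        discrimination (e.g., Tiger after discrimination training)."""
        return strength * math.exp(-(distance ** 2) / (2 * width ** 2))

    for d in (0.0, 0.5, 1.0, 2.0):
        print(f"dissimilarity {d}: response = {conditioned_response(1.0, d):.2f}")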

Behaviorism

John B. Watson, shown in Figure 6.8, is considered the founder of behaviorism. Behaviorism is a school of thought that arose during the first part of the 20th century, which incorporates elements of Pavlov’s classical conditioning (Hunt, 2007). In stark contrast with Freud, who considered the reasons for behavior to be hidden in the unconscious, Watson championed the idea that all behavior can be studied as a simple stimulus-response reaction, without regard for internal processes. Watson argued that in order for psychology to become a legitimate science, it must shift its concern away from internal mental processes because mental processes cannot be seen or measured. Instead, he asserted that psychology must focus on outward observable behavior that can be measured.

A photograph shows John B. Watson.
Figure 6.8 John B. Watson used the principles of classical conditioning in the study of human emotion.

Watson’s ideas were influenced by Pavlov’s work. According to Watson, human behavior, just like animal behavior, is primarily the result of conditioned responses. Whereas Pavlov’s work with dogs involved the conditioning of reflexes, Watson believed the same principles could be extended to the conditioning of human emotions (Watson, 1919). Thus began Watson’s work with his graduate student Rosalie Rayner and a baby called Little Albert. Through their experiments with Little Albert, Watson and Rayner (1920) demonstrated how fears can be conditioned.

In 1920, Watson was the chair of the psychology department at Johns Hopkins University. Through his position at the university, he came to meet Little Albert’s mother, Arvilla Merritte, who worked at a campus hospital (DeAngelis, 2010). Watson offered her a dollar to allow her son to be the subject of his experiments in classical conditioning. Through these experiments, Little Albert was exposed to and conditioned to fear certain things.

Initially, he was presented with various neutral stimuli, including a rabbit, a dog, a monkey, masks, cotton wool, and a white rat. He was not afraid of any of these things. Then Watson, with the help of Rayner, conditioned Little Albert to associate these stimuli with an emotion—fear. For example, Watson handed Little Albert the white rat, and Little Albert enjoyed playing with it. Then Watson made a loud sound, by striking a hammer against a metal bar hanging behind Little Albert’s head, each time Little Albert touched the rat. Little Albert was frightened by the sound—demonstrating a reflexive fear of sudden loud noises—and began to cry. Watson repeatedly paired the loud sound with the white rat. Soon Little Albert became frightened by the white rat alone. In this case, what are the UCS, CS, UCR, and CR?

Days later, Little Albert demonstrated stimulus generalization—he became afraid of other furry things: a rabbit, a furry coat, and even a Santa Claus mask (Figure 6.9). Watson had succeeded in conditioning a fear response in Little Albert, thus demonstrating that emotions could become conditioned responses. It had been Watson’s intention to produce a phobia—a persistent, excessive fear of a specific object or situation—through conditioning alone, thus countering Freud’s view that phobias are caused by deep, hidden conflicts in the mind. However, there is no evidence that Little Albert experienced phobias in later years. Little Albert’s mother moved away, ending the experiment. While Watson’s research provided new insight into conditioning, it would be considered unethical by today’s standards.

A photograph shows a man wearing a mask with a white beard; his face is close to a baby who is crawling away. A caption reads, “Now he fears even Santa Claus.”
Figure 6.9 Through stimulus generalization, Little Albert came to fear furry things, including Watson in a Santa Claus mask.

Learning Objectives

By the end of this section, you will be able to:

  • Define operant conditioning
  • Explain the difference between reinforcement and punishment
  • Distinguish between reinforcement schedules

The previous section of this chapter focused on the type of associative learning known as classical conditioning. Remember that in classical conditioning, something in the environment triggers a reflex automatically, and researchers train the organism to react to a different stimulus. Now we turn to the second type of associative learning, operant conditioning. In operant conditioning, organisms learn to associate a behavior and its consequence (Table 6.1). A pleasant consequence makes that behavior more likely to be repeated in the future. For example, Spirit, a dolphin at the National Aquarium in Baltimore, does a flip in the air when her trainer blows a whistle. The consequence is that she gets a fish.

Classical and Operant Conditioning Compared
Classical Conditioning Operant Conditioning
Conditioning approach An unconditioned stimulus (such as food) is paired with a neutral stimulus (such as a bell). The neutral stimulus eventually becomes the conditioned stimulus, which brings about the conditioned response (salivation). The target behavior is followed by reinforcement or punishment to either strengthen or weaken it, so that the learner is more likely to exhibit the desired behavior in the future.
Stimulus timing The stimulus occurs immediately before the response. The stimulus (either reinforcement or punishment) occurs soon after the response.
Table 6.1

Psychologist B. F. Skinner saw that classical conditioning is limited to existing behaviors that are reflexively elicited, and it doesn’t account for new behaviors such as riding a bike. He proposed a theory about how such behaviors come about. Skinner believed that behavior is motivated by the consequences we receive for the behavior: the reinforcements and punishments. His idea that learning is the result of consequences is based on the law of effect, which was first proposed by psychologist Edward Thorndike. According to the law of effect, behaviors that are followed by consequences that are satisfying to the organism are more likely to be repeated, and behaviors that are followed by unpleasant consequences are less likely to be repeated (Thorndike, 1911). Essentially, if an organism does something that brings about a desired result, the organism is more likely to do it again. If an organism does something that does not bring about the desired result, the organism is less likely to do it again. An example of the law of effect is in employment. One of the reasons (and often the main reason) we show up for work is because we get paid to do so. If we stop getting paid, we will likely stop showing up—even if we love our job.

Working with Thorndike’s law of effect as his foundation, Skinner began conducting scientific experiments on animals (mainly rats and pigeons) to determine how organisms learn through operant conditioning (Skinner, 1938). He placed these animals inside an operant conditioning chamber, which has come to be known as a “Skinner box” (Figure 6.10). A Skinner box contains a lever (for rats) or disk (for pigeons) that the animal can press or peck for a food reward via the dispenser. Speakers and lights can be associated with certain behaviors. A recorder counts the number of responses made by the animal.

A photograph shows B.F. Skinner. An illustration shows a rat in a Skinner box: a chamber with a speaker, lights, a lever, and a food dispenser.
Figure 6.10 (a) B. F. Skinner developed operant conditioning for the systematic study of how behaviors are strengthened or weakened according to their consequences. (b) In a Skinner box, a rat presses a lever in an operant conditioning chamber to receive a food reward. (credit a: modification of work by “Silly rabbit”/Wikimedia Commons)

In discussing operant conditioning, we use several everyday words—positive, negative, reinforcement, and punishment—in a specialized manner. In operant conditioning, positive and negative do not mean good and bad. Instead, positive means you are adding something, and negative means you are taking something away. Reinforcement means you are increasing a behavior, and punishment means you are decreasing a behavior. Reinforcement can be positive or negative, and punishment can also be positive or negative. All reinforcers (positive or negative) increase the likelihood of a behavioral response. All punishers (positive or negative) decrease the likelihood of a behavioral response. Now let’s combine these four terms: positive reinforcement, negative reinforcement, positive punishment, and negative punishment (Table 6.2).

Positive and Negative Reinforcement and Punishment
Reinforcement Punishment
Positive Something is added to increase the likelihood of a behavior. Something is added to decrease the likelihood of a behavior.
Negative Something is removed to increase the likelihood of a behavior. Something is removed to decrease the likelihood of a behavior.
Table 6.2
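
Because students often tangle these four terms, it can help to see that all of Table 6.2 reduces to two yes/no questions: is a stimulus added or removed, and is the behavior meant to increase or decrease? A minimal Python sketch (the function and argument names are invented for illustration):

    def operant_term(stimulus_added: bool, behavior_increases: bool) -> str:
        """Name the operant procedure from its two defining features:
        positive/negative = stimulus added/removed;
        reinforcement/punishment = behavior increases/decreases."""
        sign = "positive" if stimulus_added else "negative"
        kind = "reinforcement" if behavior_increases else "punishment"
        return f"{sign} {kind}"

    # A seatbelt alarm stops when you buckle up: a stimulus is removed,
    # and buckling up becomes more likely.
    print(operant_term(stimulus_added=False, behavior_increases=True))
    # -> negative reinforcement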

Reinforcement

The most effective way to teach a person or animal a new behavior is with positive reinforcement. In positive reinforcement, a desirable stimulus is added to increase a behavior.

For example, you tell your five-year-old son, Jerome, that if he cleans his room, he will get a toy. Jerome quickly cleans his room because he wants a new art set. Let’s pause for a moment. Some people might say, “Why should I reward my child for doing what is expected?” But in fact, we are constantly and consistently rewarded in our lives. Our paychecks are rewards, as are high grades and acceptance into our preferred school. Being praised for doing a good job and for passing a driver’s test is also a reward.

Positive reinforcement as a learning tool is extremely effective. It has been found that one of the most effective ways to increase achievement in school districts with below-average reading scores was to pay the children to read. Specifically, second-grade students in Dallas were paid $2 each time they read a book and passed a short quiz about the book. The result was a significant increase in reading comprehension (Fryer, 2010). What do you think about this program?

If Skinner were alive today, he would probably think this was a great idea. He was a strong proponent of using operant conditioning principles to influence students’ behavior at school. In fact, in addition to the Skinner box, he also invented what he called a teaching machine that was designed to reward small steps in learning (Skinner, 1961)—an early forerunner of computer-assisted learning. His teaching machine tested students’ knowledge as they worked through various school subjects. If students answered questions correctly, they received immediate positive reinforcement and could continue; if they answered incorrectly, they did not receive any reinforcement. The idea was that students would spend additional time studying the material to increase their chance of being reinforced the next time (Skinner, 1961).

In negative reinforcement, an undesirable stimulus is removed to increase a behavior. For example, car manufacturers use the principles of negative reinforcement in their seatbelt systems, which go “beep, beep, beep” until you fasten your seatbelt. The annoying sound stops when you exhibit the desired behavior, increasing the likelihood that you will buckle up in the future. Negative reinforcement is also used frequently in horse training. Riders apply pressure—by pulling the reins or squeezing their legs—and then remove the pressure when the horse performs the desired behavior, such as turning or speeding up. The pressure is the negative stimulus that the horse wants to remove.

Punishment

Many people confuse negative reinforcement with punishment in operant conditioning, but they are two very different mechanisms. Remember that reinforcement, even when it is negative, always increases a behavior. In contrast, punishment always decreases a behavior. In positive punishment, you add an undesirable stimulus to decrease a behavior. An example of positive punishment is scolding a student to get the student to stop texting in class. In this case, a stimulus (the reprimand) is added in order to decrease the behavior (texting in class). In negative punishment, you remove a pleasant stimulus to decrease behavior. For example, when a child misbehaves, a parent can take away a favorite toy. In this case, a stimulus (the toy) is removed in order to decrease the behavior.

Punishment, especially when it is immediate, is one way to decrease undesirable behavior. For example, imagine your four-year-old son, Brandon, hits his younger brother. You have Brandon write 100 times “I will not hit my brother” (positive punishment). Chances are he won’t repeat this behavior.

While strategies like this are common today, in the past children were often subject to physical punishment, such as spanking. It’s important to be aware of some of the drawbacks in using physical punishment on children. First, punishment may teach fear. A child who is spanked may become fearful of the situation in which the punishment occurred, but he also may become fearful of the person who delivered the punishment—you, his parent. Similarly, children who are punished by teachers may come to fear the teacher and try to avoid school (Gershoff et al., 2010). Consequently, many schools in the United States have banned corporal punishment. Second, punishment may cause children to become more aggressive and prone to antisocial behavior and delinquency (Gershoff, 2002). They see their parents resort to spanking when they become angry and frustrated, so, in turn, they may act out this same behavior when they become angry and frustrated. For example, because you spank Brandon when you are angry with him for his misbehavior, he might start hitting his friends when they won’t share their toys.

While positive punishment can be effective in some cases, Skinner suggested that the use of punishment should be weighed against the possible negative effects. Today’s psychologists and parenting experts favor reinforcement over punishment—they recommend that you catch your child doing something good and reward her for it.

Shaping

In his operant conditioning experiments, Skinner often used an approach called shaping. Instead of rewarding only the target behavior, in shaping, we reward successive approximations of a target behavior. Why is shaping needed? Remember that in order for reinforcement to work, the organism must first display the behavior. Shaping is needed because it is extremely unlikely that an organism will display anything but the simplest of behaviors spontaneously. In shaping, behaviors are broken down into many small, achievable steps. The specific steps used in the process are the following (a toy simulation sketch follows the list):

  1. Reinforce any response that resembles the desired behavior.
  2. Then reinforce the response that more closely resembles the desired behavior. You will no longer reinforce the previously reinforced response.
  3. Next, begin to reinforce the response that even more closely resembles the desired behavior.
  4. Continue to reinforce closer and closer approximations of the desired behavior.
  5. Finally, only reinforce the desired behavior.
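
As promised above, here is a toy Python simulation of those five steps. Everything in it is invented for illustration: behavior is idealized as a single number, the “organism” emits responses scattered around its current tendency, reinforcement pulls that tendency toward the reinforced response, and the criterion for reinforcement tightens with each step:

    import random

    def shape(target, steps=5, trials_per_step=200):
        """Reinforce successive approximations of `target`."""
        mu = 0.0  # the organism's current behavioral tendency
        for step in range(1, steps + 1):
            # The criterion band shrinks as the tendency nears the target,
            # so earlier-reinforced responses stop earning reinforcement.
            band = 0.5 * abs(target - mu) + 0.1
            for _ in range(trials_per_step):
                response = random.gauss(mu, 1.0)    # behavior varies by trial
                if abs(response - target) <= band:  # close enough: reinforce
                    mu += 0.2 * (response - mu)     # reinforced responses recur
            print(f"step {step}: tendency = {mu:.2f}, band = {band:.2f}")
        return mu

    shape(target=3.0)

Each pass through the outer loop mirrors steps 1 through 5: responses that merely resemble the goal are reinforced at first, and only ever-closer approximations earn reinforcement as training proceeds.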

Shaping is often used in teaching a complex behavior or chain of behaviors. Skinner used shaping to teach pigeons not only such relatively simple behaviors as pecking a disk in a Skinner box, but also many unusual and entertaining behaviors, such as turning in circles, walking in figure eights, and even playing ping pong; the technique is commonly used by animal trainers today. An important part of shaping is stimulus discrimination. Recall Pavlov’s dogs—he trained them to respond to the tone of a bell, and not to similar tones or sounds. This discrimination is also important in operant conditioning and in shaping behavior.

It’s easy to see how shaping is effective in teaching behaviors to animals, but how does shaping work with humans? Let’s consider parents whose goal is to have their child learn to clean his room. They use shaping to help him master steps toward the goal. Instead of performing the entire task, they set up these steps and reinforce each step. First, he cleans up one toy. Second, he cleans up five toys. Third, he chooses whether to pick up ten toys or put his books and clothes away. Fourth, he cleans up everything except two toys. Finally, he cleans his entire room.

Primary and Secondary Reinforcers

Rewards such as stickers, praise, money, toys, and more can be used to reinforce learning. Let’s go back to Skinner’s rats again. How did the rats learn to press the lever in the Skinner box? They were rewarded with food each time they pressed the lever. For animals, food would be an obvious reinforcer.

What would be a good reinforcer for humans? For your child Jerome, it was the promise of a toy when he cleaned his room. How about Sydney, the soccer player? If you gave Sydney a piece of candy every time Sydney scored a goal, you would be using a primary reinforcer. Primary reinforcers are reinforcers that have innate reinforcing qualities. These kinds of reinforcers are not learned. Water, food, sleep, shelter, sex, and touch, among others, are primary reinforcers. Pleasure is also a primary reinforcer. Organisms do not lose their drive for these things. For most people, jumping in a cool lake on a very hot day would be reinforcing and the cool lake would be innately reinforcing—the water would cool the person off (a physical need), as well as provide pleasure.

A secondary reinforcer has no inherent value and only has reinforcing qualities when linked with a primary reinforcer. Praise, linked to affection, is one example of a secondary reinforcer, as when you called out “Great shot!” every time Sydney made a goal. Another example, money, is only worth something when you can use it to buy other things—either things that satisfy basic needs (food, water, shelter—all primary reinforcers) or other secondary reinforcers. If you were on a remote island in the middle of the Pacific Ocean and you had stacks of money, the money would not be useful if you could not spend it. What about the stickers on the behavior chart? They also are secondary reinforcers.

Sometimes, instead of stickers on a sticker chart, a token is used. Tokens, which are also secondary reinforcers, can then be traded in for rewards and prizes. Entire behavior management systems, known as token economies, are built around the use of these kinds of token reinforcers. Token economies have been found to be very effective at modifying behavior in a variety of settings such as schools, prisons, and mental hospitals. For example, a study by Cangi and Daly (2013) found that the use of a token economy increased appropriate social behaviors and reduced inappropriate behaviors in a group of autistic school children. Autistic children tend to exhibit disruptive behaviors such as pinching and hitting. When the children in the study exhibited appropriate behavior (not hitting or pinching), they received a “quiet hands” token. When they hit or pinched, they lost a token. The children could then exchange specified amounts of tokens for minutes of playtime.

EVERYDAY CONNECTION: Behavior Modification in Children

Parents and teachers often use behavior modification to change a child’s behavior. Behavior modification uses the principles of operant conditioning to accomplish behavior change so that undesirable behaviors are switched for more socially acceptable ones. Some teachers and parents create a sticker chart, in which several behaviors are listed (Figure 6.11). Sticker charts are a form of token economies, as described in the text. Each time children perform the behavior, they get a sticker, and after a certain number of stickers, they get a prize or reinforcer. The goal is to increase acceptable behaviors and decrease misbehavior. Remember, it is best to reinforce desired behaviors, rather than to use punishment. In the classroom, the teacher can reinforce a wide range of behaviors, from students raising their hands to walking quietly in the hall, to turning in their homework. At home, parents might create a behavior chart that rewards children for things such as putting away toys, brushing their teeth, and helping with dinner. In order for behavior modification to be effective, the reinforcement needs to be connected with the behavior; the reinforcement must matter to the child and be done consistently.

A photograph shows a child placing stickers on a chart hanging on the wall.
Figure 6.11 Sticker charts are a form of positive reinforcement and a tool for behavior modification. Once this child earns a certain number of stickers for demonstrating a desired behavior, she will be rewarded with a trip to the ice cream parlor. (credit: Abigail Batchelder)

Time-out is another popular technique used in behavior modification with children. It operates on the principle of negative punishment. When a child demonstrates an undesirable behavior, she is removed from the desirable activity at hand (Figure 6.12). For example, say that Sophia and her brother Mario are playing with building blocks. Sophia throws some blocks at her brother, so you give her a warning that she will go to time-out if she does it again. A few minutes later, she throws more blocks at Mario. You remove Sophia from the room for a few minutes. When she comes back, she doesn’t throw blocks.

There are several important points that you should know if you plan to implement time-out as a behavior modification technique. First, make sure the child is being removed from a desirable activity and placed in a less desirable location. If the activity is something undesirable for the child, this technique will backfire because it is more enjoyable for the child to be removed from the activity. Second, the length of the time-out is important. The general rule of thumb is one minute for each year of the child’s age. Sophia is five; therefore, she sits in a time-out for five minutes. Setting a timer helps children know how long they have to sit in time-out. Finally, as a caregiver, keep several guidelines in mind over the course of a time-out: remain calm when directing your child to time-out; ignore your child during a time-out (because caregiver attention may reinforce misbehavior); and give the child a hug or a kind word when time-out is over.

Photograph A shows several children climbing on playground equipment. Photograph B shows a child sitting alone on a bench.
Figure 6.12 Time-out is a popular form of negative punishment used by caregivers. When a child misbehaves, he or she is removed from a desirable activity in an effort to decrease the unwanted behavior. For example, (a) a child might be playing on the playground with friends and push another child; (b) the child who misbehaved would then be removed from the activity for a short period of time. (credit a: modification of work by Simone Ramella; credit b: modification of work by “Spring Dew”/Flickr)

Reinforcement Schedules

Remember, the best way to teach a person or animal a behavior is to use positive reinforcement. For example, Skinner used positive reinforcement to teach rats to press a lever in a Skinner box. At first, the rat might randomly hit the lever while exploring the box, and out would come a pellet of food. After eating the pellet, what do you think the hungry rat did next? It hit the lever again and received another pellet of food. Each time the rat hit the lever, a pellet of food came out. When an organism receives a reinforcer each time it displays a behavior, it is called continuous reinforcement. This reinforcement schedule is the quickest way to teach someone a behavior, and it is especially effective in training a new behavior. Let’s look back at the dog that was learning to sit earlier in the chapter. Now, each time he sits, you give him a treat. Timing is important here: you will be most successful if you present the reinforcer immediately after he sits so that he can make an association between the target behavior (sitting) and the consequence (getting a treat).

Once a behavior is trained, researchers and trainers often turn to another type of reinforcement schedule—partial reinforcement. In partial reinforcement, also referred to as intermittent reinforcement, the person or animal does not get reinforced every time they perform the desired behavior. There are several different types of partial reinforcement schedules (Table 6.3). These schedules are described as either fixed or variable, and as either interval or ratio. Fixed refers to the number of responses between reinforcements, or the amount of time between reinforcements, which is set and unchanging. Variable refers to the number of responses or amount of time between reinforcements, which varies or changes. Interval means the schedule is based on the time between reinforcements, and ratio means the schedule is based on the number of responses between reinforcements.

Reinforcement Schedules
Reinforcement Schedule Description Result Example
Fixed interval Reinforcement is delivered at predictable time intervals (e.g., after 5, 10, 15, and 20 minutes). Moderate response rate with significant pauses after reinforcement Hospital patient uses patient-controlled, doctor-timed pain relief
Variable interval Reinforcement is delivered at unpredictable time intervals (e.g., after 5, 7, 10, and 20 minutes). Moderate yet steady response rate Checking Facebook
Fixed ratio Reinforcement is delivered after a predictable number of responses (e.g., after 2, 4, 6, and 8 responses). High response rate with pauses after reinforcement Piecework—factory worker getting paid for every x number of items manufactured
Variable ratio Reinforcement is delivered after an unpredictable number of responses (e.g., after 1, 4, 5, and 9 responses). High and steady response rate Gambling
Table 6.3
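
The two distinctions in the table, fixed versus variable and interval versus ratio, can be captured in a few lines of code. In this minimal Python sketch (the schedule abbreviations, function name, and the defaults n=5 and t=60 are invented for illustration), fixed schedules set a constant requirement for the next reinforcer, while variable schedules draw a fresh requirement each time with the same average:

    import random

    def next_requirement(schedule, n=5, t=60):
        """After each reinforcer, decide what the next one requires.
        Ratio schedules count responses; interval schedules count elapsed
        time (the first response after the interval is reinforced)."""
        if schedule == "FR":   # fixed ratio, e.g., piecework
            return ("responses", n)
        if schedule == "VR":   # variable ratio, e.g., a slot machine
            return ("responses", random.randint(1, 2 * n - 1))  # mean n
        if schedule == "FI":   # fixed interval, e.g., hourly pain medication
            return ("seconds", t)
        if schedule == "VI":   # variable interval, e.g., surprise inspections
            return ("seconds", random.uniform(0, 2 * t))        # mean t
        raise ValueError(f"unknown schedule: {schedule}")

    print(next_requirement("VR"))   # e.g., ('responses', 7)

The sketch also hints at why variable schedules sustain such steady responding: because the requirement is redrawn after every reinforcer, a long run of unreinforced responses never tells the learner that reinforcement has stopped.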

Now let’s combine these four terms. A fixed interval reinforcement schedule is when behavior is rewarded after a set amount of time. For example, June undergoes major surgery in a hospital. During recovery, she is expected to experience pain and will require prescription medications for pain relief. June is given an IV drip with a patient-controlled painkiller. Her doctor sets a limit: one dose per hour. June pushes a button when the pain becomes difficult, and she receives a dose of medication. Since the reward (pain relief) only occurs on a fixed interval, there is no point in exhibiting the behavior when it will not be rewarded.

With a variable interval reinforcement schedule, the person or animal gets the reinforcement based on varying amounts of time, which are unpredictable. Say that Manuel is the manager at a fast-food restaurant. Every once in a while someone from the quality control division comes to Manuel’s restaurant. If the restaurant is clean and the service is fast, everyone on that shift earns a $20 bonus. Manuel never knows when the quality control person will show up, so he always tries to keep the restaurant clean and ensures that his employees provide prompt and courteous service. His productivity regarding prompt service and keeping a clean restaurant is steady because he wants his crew to earn the bonus.

With a fixed ratio reinforcement schedule, there are a set number of responses that must occur before the behavior is rewarded. Carla sells glasses at an eyeglass store, and she earns a commission every time she sells a pair of glasses. She always tries to sell people more pairs of glasses, including prescription sunglasses or a backup pair, so she can increase her commission. She does not care if the person really needs the prescription sunglasses; Carla just wants her bonus. The quality of what Carla sells does not matter because her commission is not based on quality; it’s only based on the number of pairs sold. This distinction in the quality of performance can help determine which reinforcement method is most appropriate for a particular situation. Fixed ratios are better suited to optimize the quantity of output, whereas a fixed interval, in which the reward is not quantity based, can lead to a higher quality of output.

In a variable ratio reinforcement schedule, the number of responses needed for a reward varies. This is the most powerful partial reinforcement schedule. An example of the variable ratio reinforcement schedule is gambling. Imagine that Sarah—generally a smart, thrifty woman—visits Las Vegas for the first time. She is not a gambler, but out of curiosity, she puts a quarter into the slot machine, and then another, and another. Nothing happens. Two dollars in quarters later, her curiosity is fading, and she is just about to quit. But then, the machine lights up, bells go off, and Sarah gets 50 quarters back. That’s more like it! Sarah gets back to inserting quarters with renewed interest, and a few minutes later she has used up all her gains and is $10 in the hole. Now might be a sensible time to quit. And yet, she keeps putting money into the slot machine because she never knows when the next reinforcement is coming. She keeps thinking that with the next quarter she could win $50, or $100, or even more. Because the reinforcement schedule in most types of gambling has a variable ratio schedule, people keep trying and hoping that the next time they will win big. This is one of the reasons that gambling is so addictive—and so resistant to extinction.

In operant conditioning, extinction of a reinforced behavior occurs at some point after reinforcement stops, and the speed at which this happens depends on the reinforcement schedule. In a variable ratio schedule, the point of extinction comes very slowly, as described above. But in the other reinforcement schedules, extinction may come quickly. For example, if June presses the button for the pain relief medication before the allotted time her doctor has approved, no medication is administered. She is on a fixed interval reinforcement schedule (dosed hourly), so extinction occurs quickly when reinforcement doesn’t come at the expected time. Among the reinforcement schedules, variable ratio is the most productive and the most resistant to extinction. Fixed interval is the least productive and the easiest to extinguish (Figure 6.13).

A graph has an x-axis labeled “Time” and a y-axis labeled “Cumulative number of responses.” Two lines labeled “Variable Ratio” and “Fixed Ratio” have similar, steep slopes. The variable ratio line remains straight and is marked in random points where reinforcement occurs. The fixed ratio line has consistently spaced marks indicating where reinforcement has occurred, but after each reinforcement, there is a small drop in the line before it resumes its overall slope. Two lines labeled “Variable Interval” and “Fixed Interval” have similar slopes at roughly a 45-degree angle. The variable interval line remains straight and is marked in random points where reinforcement occurs. The fixed interval line has consistently spaced marks indicating where reinforcement has occurred, but after each reinforcement, there is a drop in the line.
Figure 6.13 The four reinforcement schedules yield different response patterns. The variable ratio schedule is unpredictable and yields high and steady response rates, with little if any pause after reinforcement (e.g., a gambler). A fixed ratio schedule is predictable and produces a high response rate, with a short pause after reinforcement (e.g., eyeglass saleswoman). The variable interval schedule is unpredictable and produces a moderate, steady response rate (e.g., restaurant manager). The fixed interval schedule yields a scallop-shaped response pattern, reflecting a significant pause after reinforcement (e.g., surgery patient).

CONNECT THE CONCEPTS: Gambling and the Brain

Skinner (1953) stated, “If the gambling establishment cannot persuade a patron to turn over money with no return, it may achieve the same effect by returning part of the patron’s money on a variable-ratio schedule” (p. 397).

Skinner uses gambling as an example of the power of the variable-ratio reinforcement schedule for maintaining behavior even during long periods without any reinforcement. In fact, Skinner was so confident in his knowledge of gambling addiction that he even claimed he could turn a pigeon into a pathological gambler (“Skinner’s Utopia,” 1971). It is indeed true that variable-ratio schedules keep behavior quite persistent—just imagine the frequency of a child’s tantrums if a parent gives in even once to the behavior. The occasional reward makes it almost impossible to stop the behavior.

Recent research in rats has failed to support Skinner’s idea that training on variable-ratio schedules alone causes pathological gambling (Laskowski et al., 2019). However, other research suggests that gambling does seem to work on the brain in the same way as most addictive drugs, and so there may be some combination of brain chemistry and reinforcement schedule that could lead to problem gambling (Figure 6.14). Specifically, modern research shows the connection between gambling and the activation of the reward centers of the brain that use the neurotransmitter (brain chemical) dopamine (Murch & Clark, 2016). Interestingly, gamblers don’t even have to win to experience the “rush” of dopamine in the brain. “Near misses,” or almost winning but not actually winning, also have been shown to increase activity in the ventral striatum and other brain reward centers that use dopamine (Chase & Clark, 2010). These brain effects are almost identical to those produced by addictive drugs like cocaine and heroin (Murch & Clark, 2016). Based on the neuroscientific evidence showing these similarities, the DSM-5 now considers gambling an addiction, while earlier versions of the DSM classified gambling as an impulse control disorder.

A photograph shows four digital gaming machines.
Figure 6.14 Some research suggests that pathological gamblers use gambling to compensate for abnormally low levels of the hormone norepinephrine, which is associated with stress and is secreted in moments of arousal and thrill. (credit: Ted Murphy)

In addition to dopamine, gambling also appears to involve other neurotransmitters, including norepinephrine and serotonin (Potenza, 2013). Norepinephrine is secreted when a person feels stress, arousal, or thrill. It may be that pathological gamblers use gambling to increase their levels of this neurotransmitter. Deficiencies in serotonin might also contribute to compulsive behavior, including a gambling addiction (Potenza, 2013).

It may be that pathological gamblers’ brains are different than those of other people, and perhaps this difference may somehow have led to their gambling addiction, as these studies seem to suggest. However, it is very difficult to ascertain the cause because it is impossible to conduct a true experiment (it would be unethical to try to turn randomly assigned participants into problem gamblers). Therefore, it may be that causation actually moves in the opposite direction—perhaps the act of gambling somehow changes neurotransmitter levels in some gamblers’ brains. It also is possible that some overlooked factor, or confounding variable, played a role in both the gambling addiction and the differences in brain chemistry.

 

Cognition and Latent Learning

Strict behaviorists like Watson and Skinner focused exclusively on studying behavior rather than cognition (such as thoughts and expectations). In fact, Skinner was such a staunch believer that cognition didn’t matter that his ideas were considered radical behaviorism. Skinner considered the mind a “black box”—something completely unknowable—and, therefore, something not to be studied. However, another behaviorist, Edward C. Tolman, had a different opinion. Tolman’s experiments with rats demonstrated that organisms can learn even if they do not receive immediate reinforcement (Tolman & Honzik, 1930; Tolman, Ritchie, & Kalish, 1946). This finding was in conflict with the prevailing idea at the time that reinforcement must be immediate in order for learning to occur, thus suggesting a cognitive aspect to learning.

In the experiments, Tolman placed hungry rats in a maze with no reward for finding their way through it. He also studied a comparison group that was rewarded with food at the end of the maze. As the unreinforced rats explored the maze, they developed a cognitive map: a mental picture of the layout of the maze (Figure 6.15). After 10 sessions in the maze without reinforcement, food was placed in a goal box at the end of the maze. As soon as the rats became aware of the food, they found their way through the maze just as quickly as the comparison group, which had been rewarded with food all along. This is known as latent learning: learning that occurs but is not observable in behavior until there is a reason to demonstrate it.

An illustration shows three rats in a maze, with a starting point and food at the end.
Figure 6.15 Psychologist Edward Tolman found that rats use cognitive maps to navigate through a maze. Have you ever worked your way through various levels on a video game? You learned when to turn left or right, move up or down. In that case, you were relying on a cognitive map, just like the rats in a maze. (credit: modification of work by “FutUndBeidl”/Flickr)

Latent learning also occurs in humans. Children may learn by watching the actions of their parents but only demonstrate it at a later date when the learned material is needed. For example, suppose that Ravi’s dad drives him to school every day. In this way, Ravi learns the route from his house to his school, but he’s never driven there himself, so he has not had a chance to demonstrate that he’s learned the way. One morning Ravi’s dad has to leave early for a meeting, so he can’t drive Ravi to school. Instead, Ravi follows the same route on his bike that his dad would have taken in the car. This demonstrates latent learning. Ravi had learned the route to school but had no need to demonstrate this knowledge earlier.
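
One way to make “cognitive map” and “latent learning” concrete is to treat the map as a graph assembled during unrewarded exploration; the knowledge produces no visible behavior until a goal gives it a use. In this minimal Python sketch, the maze layout and all the names are invented, and graph search simply stands in for whatever the rat’s brain actually does:

    from collections import deque

    # Exploration phase (no reward): the animal records which locations
    # connect to which, building a cognitive map as an adjacency list.
    cognitive_map = {
        "start": ["A", "B"],
        "A": ["start", "C"],
        "B": ["start", "D"],
        "C": ["A", "goal_box"],
        "D": ["B"],
        "goal_box": ["C"],
    }

    def route(graph, start, goal):
        """Breadth-first search over the stored map. The map was learned
        earlier; only now, with a reason to reach the goal, does the
        learning show up in behavior (latent learning)."""
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in graph[path[-1]]:
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None

    # Food appears in the goal box: the previously silent map yields a
    # quick route, as in Tolman's experiments.
    print(route(cognitive_map, "start", "goal_box"))
    # -> ['start', 'A', 'C', 'goal_box']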

EVERYDAY CONNECTION: This Place Is Like a Maze

Have you ever gotten lost in a building and couldn’t find your way back out? While that can be frustrating, you’re not alone. At one time or another, we’ve all gotten lost in places like a museum, hospital, or university library. Whenever we go someplace new, we build a mental representation—or cognitive map—of the location, as Tolman’s rats built a cognitive map of their maze. However, some buildings are confusing because they include many areas that look alike or have short lines of sight. Because of this, it’s often difficult to predict what’s around a corner or decide whether to turn left or right to get out of a building. Psychologist Laura Carlson (2010) suggests that what we place in our cognitive map can impact our success in navigating through the environment. She suggests that paying attention to specific features upon entering a building, such as a picture on the wall, a fountain, a statue, or an escalator, adds information to our cognitive map that can be used later to help us find our way out of the building.

Thinking and Intelligence

6

Three side by side images are shown. On the left is a person lying in the grass with a book, looking off into the distance. In the middle is a sculpture of a person sitting on rock, with chin rested on hand, and the elbow of that hand rested on knee. The third is a drawing of a person sitting cross-legged with his head resting on his hand, elbow on knee.
Figure 7.1 Thinking is an important part of our human experience, and one that has captivated people for centuries. Today, it is one area of psychological study. The 19th-century Girl with a Book by José Ferraz de Almeida Júnior, the 20th-century sculpture The Thinker by Auguste Rodin, and Shi Ke’s 10th-century painting Huike Thinking all reflect the fascination with the process of human thought. (credit “middle”: modification of work by Jason Rogers; credit “right”: modification of work by Tang Zu-Ming)

What is the best way to solve a problem? How does a person who has never seen or touched snow in real life develop an understanding of the concept of snow? How do young children acquire the ability to learn language with no formal instruction? Psychologists who study thinking explore questions like these and are called cognitive psychologists.

Cognitive psychologists also study intelligence. What is intelligence, and how does it vary from person to person? Are “street smarts” a kind of intelligence, and if so, how do they relate to other types of intelligence? What does an IQ test really measure? These questions and more will be explored in this chapter as you study thinking and intelligence.

In other chapters, we discussed the cognitive processes of perception, learning, and memory. In this chapter, we will focus on high-level cognitive processes. As a part of this discussion, we will consider thinking and briefly explore the development and use of language. We will also discuss problem solving and creativity before ending with a discussion of how intelligence is measured and how our biology and environments interact to affect intelligence. After finishing this chapter, you will have a greater appreciation of the higher-level cognitive processes that contribute to our distinctiveness as a species.

Learning Objectives

By the end of this section, you will be able to:

  • Describe cognition
  • Distinguish concepts and prototypes
  • Explain the difference between natural and artificial concepts
  • Describe how schemata are organized and constructed

Imagine all of your thoughts as if they were physical entities, swirling rapidly inside your mind. How is it possible that the brain is able to move from one thought to the next in an organized, orderly fashion? The brain is endlessly perceiving, processing, planning, organizing, and remembering—it is always active. Yet, you don’t notice most of your brain’s activity as you move throughout your daily routine. This is only one facet of the complex processes involved in cognition. Simply put, cognition is thinking, and it encompasses the processes associated with perception, knowledge, problem solving, judgment, language, and memory. Scientists who study cognition are searching for ways to understand how we integrate, organize, and utilize our conscious cognitive experiences without being aware of all of the unconscious work that our brains are doing (for example, Kahneman, 2011).

Cognition

Upon waking each morning, you begin thinking—contemplating the tasks that you must complete that day. In what order should you run your errands? Should you go to the bank, the cleaners, or the grocery store first? Can you get these things done before you head to class or will they need to wait until school is done? These thoughts are one example of cognition at work. Exceptionally complex, cognition is an essential feature of human consciousness, yet not all aspects of cognition are consciously experienced.

Cognitive psychology is the field of psychology dedicated to examining how people think. It attempts to explain how and why we think the way we do by studying the interactions among human thinking, emotion, creativity, language, and problem solving, in addition to other cognitive processes. Cognitive psychologists strive to determine and measure different types of intelligence, why some people are better at problem solving than others, and how emotional intelligence affects success in the workplace, among countless other topics. They also sometimes focus on how we organize thoughts and information gathered from our environments into meaningful categories of thought, which will be discussed later.

Concepts and Prototypes

The human nervous system is capable of handling endless streams of information. The senses serve as the interface between the mind and the external environment, receiving stimuli and translating them into nerve impulses that are transmitted to the brain. The brain then processes this information and uses the relevant pieces to create thoughts, which can then be expressed through language or stored in memory for future use. To make this process more complex, the brain does not gather information from external environments only. When thoughts are formed, the mind synthesizes information from emotions and memories (Figure 7.2). Emotion and memory are powerful influences on both our thoughts and behaviors.

The outline of a human head is shown. There is a box containing “Information, sensations” in front of the head. An arrow from this box points to another box containing “Emotions, memories” located where the front of the person's brain would be. An arrow from this second box points to a third box containing “Thoughts” located where the back of the person's brain would be. There are two arrows coming from “Thoughts.” One arrow points back to the second box, “Emotions, memories,” and the other arrow points to a fourth box, “Behavior.”
Figure 7.2 Sensations and information are received by our brains, filtered through emotions and memories, and processed to become thoughts.

In order to organize this staggering amount of information, the mind has developed a “file cabinet” of sorts. The different files stored in the file cabinet are called concepts. Concepts are categories or groupings of linguistic information, images, ideas, or memories, such as life experiences. Concepts are, in many ways, big ideas that are generated by observing details, and categorizing and combining these details into cognitive structures. You use concepts to see the relationships among the different elements of your experiences and to keep the information in your mind organized and accessible.

Concepts are informed by our semantic memory (you will learn more about semantic memory in a later chapter) and are present in every aspect of our lives; however, one of the easiest places to notice concepts is inside a classroom, where they are discussed explicitly. When you study United States history, for example, you learn about more than just individual events that have happened in America’s past. You absorb a large quantity of information by listening to and participating in discussions, examining maps, and reading first-hand accounts of people’s lives. Your brain analyzes these details and develops an overall understanding of American history. In the process, your brain gathers details that inform and refine your understanding of related concepts like democracy, power, and freedom.

Concepts can be complex and abstract, like justice, or more concrete, like types of birds. In psychology, for example, Piaget’s stages of development are abstract concepts. Some concepts, like tolerance, are agreed upon by many people because they have been used in various ways over many years. Other concepts, like the characteristics of your ideal friend or your family’s birthday traditions, are personal and individualized. In this way, concepts touch every aspect of our lives, from our many daily routines to the guiding principles behind the way governments function.

Another technique used by your brain to organize information is the identification of prototypes for the concepts you have developed. A prototype is the best example or representation of a concept. For example, what comes to your mind when you think of a dog? Most likely your early experiences with dogs will shape what you imagine. If your first pet was a Golden Retriever, there is a good chance that this would be your prototype for the category of dogs.

Natural and Artificial Concepts

In psychology, concepts can be divided into two categories, natural and artificial. Natural concepts are created “naturally” through your experiences and can be developed from either direct or indirect experiences. For example, if you live in Essex Junction, Vermont, you have probably had a lot of direct experience with snow. You’ve watched it fall from the sky, you’ve seen lightly falling snow that barely covers the windshield of your car, and you’ve shoveled out 18 inches of fluffy white snow as you’ve thought, “This is perfect for skiing.” You’ve thrown snowballs at your best friend and gone sledding down the steepest hill in town. In short, you know snow. You know what it looks like, smells like, tastes like, and feels like. If, however, you’ve lived your whole life on the island of Saint Vincent in the Caribbean, you may never have actually seen snow, much less tasted, smelled, or touched it. You know snow from the indirect experience of seeing pictures of falling snow—or from watching films that feature snow as part of the setting. Either way, snow is a natural concept because you can construct an understanding of it through direct observations, experiences with snow, or indirect knowledge (such as from films or books) (Figure 7.3).

Photograph A shows a snow covered landscape with the sun shining over it. Photograph B shows a sphere shaped object perched atop the corner of a cube shaped object. There is also a triangular object shown.
Figure 7.3 (a) Our concept of snow is an example of a natural concept—one that we understand through direct observation and experience. (b) In contrast, artificial concepts are ones that we know by a specific set of characteristics that they always exhibit, such as what defines different basic shapes. (credit a: modification of work by Maarten Takens; credit b: modification of work by “Shayan (USA)”/Flickr)

An artificial concept, on the other hand, is a concept that is defined by a specific set of characteristics. Various properties of geometric shapes, like squares and triangles, serve as useful examples of artificial concepts. A triangle always has three angles and three sides. A square always has four equal sides and four right angles. Mathematical formulas, like the equation for area (length × width), are artificial concepts defined by specific sets of characteristics that are always the same. Artificial concepts can enhance the understanding of a topic by building on one another. For example, before learning the concept of “area of a square” (and the formula to find it), you must understand what a square is. Once the concept of “area of a square” is understood, an understanding of area for other geometric shapes can be built upon the original understanding of area. The use of artificial concepts to define an idea is crucial to communicating with others and engaging in complex thought. According to Goldstone and Kersten (2003), concepts act as building blocks and can be connected in countless combinations to create complex thoughts.
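Because an artificial concept is fully specified by its defining characteristics, it can even be written down as an exact rule. The short Python sketch below illustrates this; the helper names are invented for illustration, and the square test is deliberately simplified (it checks only side lengths, not angles).

```python
# Artificial concepts expressed as exact rules. Simplified for
# illustration: a real test for "square" would also check the angles.

def is_square(sides: list[float]) -> bool:
    """A square always has four sides of equal length."""
    return len(sides) == 4 and len(set(sides)) == 1

def area_of_square(side: float) -> float:
    """Area = length x width; for a square, both equal the side length."""
    return side * side

print(is_square([3, 3, 3, 3]))  # True
print(area_of_square(3))        # 9
```

Note how the second concept builds on the first, just as the passage describes.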

Schemata

A schema is a mental construct consisting of a cluster or collection of related concepts (Bartlett, 1932). There are many different types of schemata, and they all have one thing in common: schemata are a method of organizing information that allows the brain to work more efficiently. When a schema is activated, the brain makes immediate assumptions about the person or object being observed.

There are several types of schemata. A role schema makes assumptions about how individuals in certain roles will behave (Callero, 1994). For example, imagine you meet someone who introduces himself as a firefighter. When this happens, your brain automatically activates the “firefighter schema” and begins making assumptions that this person is brave, selfless, and community-oriented. Despite not knowing this person, you have already unknowingly made judgments about him. Schemata also help you fill in gaps in the information you receive from the world around you. While schemata allow for more efficient information processing, there can be problems with schemata, regardless of whether they are accurate: Perhaps this particular firefighter is not brave; he just works as a firefighter to pay the bills while studying to become a children’s librarian.

An event schema, also known as a cognitive script, is a set of behaviors that can feel like a routine. Think about what you do when you walk into an elevator (Figure 7.4). First, the doors open and you wait to let exiting passengers leave the elevator car. Then, you step into the elevator and turn around to face the doors, looking for the correct button to push. You never face the back of the elevator, do you? And when you’re riding in a crowded elevator and you can’t face the front, it feels uncomfortable, doesn’t it? Interestingly, event schemata can vary widely among different cultures and countries. For example, while it is quite common for people to greet one another with a handshake in the United States, in Tibet, you greet someone by sticking your tongue out at them, and in Belize, you bump fists (Cairns Regional Council, n.d.).

A crowded elevator is shown. There are many people standing close to one another.
Figure 7.4 What event schema do you perform when riding in an elevator? (credit: “Gideon”/Flickr)

Because event schemata are automatic, they can be difficult to change. Imagine that you are driving home from work or school. This event schema involves getting in the car, shutting the door, and buckling your seatbelt before putting the key in the ignition. You might perform this script two or three times each day. As you drive home, you hear your phone’s ring tone. Typically, the event schema that occurs when you hear your phone ringing involves locating the phone and answering it or responding to your latest text message. So without thinking, you reach for your phone, which could be in your pocket, in your bag, or on the passenger seat of the car. This powerful event schema is informed by your pattern of behavior and the pleasurable stimulation that a phone call or text message gives your brain. Because it is a schema, it is extremely challenging for us to stop reaching for the phone, even though we know that we endanger our own lives and the lives of others while we do it (Neyfakh, 2013) (Figure 7.5).

A person’s right hand is holding a cellular phone. The person is in the driver’s seat of an automobile while on the road.
Figure 7.5 Texting while driving is dangerous, but it is a difficult event schema for some people to resist.

Remember the elevator? It feels almost impossible to walk in and not face the door. Our powerful event schema dictates our behavior in the elevator, and it is no different with our phones. Current research suggests that it is the habit, or event schema, of checking our phones in many different situations that makes refraining from checking them while driving especially difficult (Bayer & Campbell, 2012). Because texting and driving has become a dangerous epidemic in recent years, psychologists are looking at ways to help people interrupt the “phone schema” while driving. Event schemata like these are the reason why many habits are difficult to break once they have been acquired. As we continue to examine thinking, keep in mind how powerful the forces of concepts and schemata are to our understanding of the world.

Learning Objectives

By the end of this section, you will be able to:

  • Define language and demonstrate familiarity with the components of language
  • Understand the development of language
  • Explain the relationship between language and thinking

Language is a communication system that involves using words and systematic rules to organize those words to transmit information from one individual to another. While language is a form of communication, not all communication is language. Many species communicate with one another through their postures, movements, odors, or vocalizations. This communication is crucial for species that need to interact and develop social relationships with their conspecifics. However, many people have asserted that it is language that makes humans unique among all of the animal species (Corballis & Suddendorf, 2007; Tomasello & Rakoczy, 2003). This section will focus on what distinguishes language as a special form of communication, how the use of language develops, and how language affects the way we think.

Components of Language

Language, be it spoken, signed, or written, has specific components: a lexicon and grammar. Lexicon refers to the words of a given language. Thus, lexicon is a language’s vocabulary. Grammar refers to the set of rules that are used to convey meaning through the use of the lexicon (Fernández & Cairns, 2011). For instance, English grammar dictates that most verbs receive an “-ed” at the end to indicate past tense.

Words are formed by combining the various phonemes that make up the language. A phoneme (e.g., the sounds “ah” vs. “eh”) is a basic sound unit of a given language, and different languages have different sets of phonemes. Phonemes are combined to form morphemes, which are the smallest units of language that convey some type of meaning (e.g., “I” is both a phoneme and a morpheme). We use semantics and syntax to construct language. Semantics and syntax are part of a language’s grammar. Semantics refers to the process by which we derive meaning from morphemes and words. Syntax refers to the way words are organized into sentences (Chomsky, 1965; Fernández & Cairns, 2011).

We apply the rules of grammar to organize the lexicon in novel and creative ways, which allow us to communicate information about both concrete and abstract concepts. We can talk about our immediate and observable surroundings as well as the surface of unseen planets. We can share our innermost thoughts, our plans for the future, and debate the value of a college education. We can provide detailed instructions for cooking a meal, fixing a car, or building a fire. Through our use of words and language, we are able to form, organize, and express ideas, schema, and artificial concepts.

Language Development

Given the remarkable complexity of a language, one might expect that mastering a language would be an especially arduous task; indeed, for those of us trying to learn a second language as adults, this might seem to be true. However, young children master language very quickly with relative ease. B. F. Skinner (1957) proposed that language is learned through reinforcement. Noam Chomsky (1965) criticized this behaviorist approach, asserting instead that the mechanisms underlying language acquisition are biologically determined. The use of language develops in the absence of formal instruction and appears to follow a very similar pattern in children from vastly different cultures and backgrounds. It would seem, therefore, that we are born with a biological predisposition to acquire a language (Chomsky, 1965; Fernández & Cairns, 2011). Moreover, it appears that there is a critical period for language acquisition, such that this proficiency at acquiring language is maximal early in life; generally, as people age, the ease with which they acquire and master new languages diminishes (Johnson & Newport, 1989; Lenneberg, 1967; Singleton, 1995).

Children begin to learn about language from a very early age (Table 7.1). In fact, it appears that this is occurring even before we are born. Newborns show a preference for their mother’s voice and appear to be able to discriminate between the language spoken by their mother and other languages. Babies are also attuned to the languages being used around them and show preferences for videos of faces that are moving in synchrony with the audio of spoken language versus videos that do not synchronize with the audio (Blossom & Morgan, 2006; Pickens, 1994; Spelke & Cortelyou, 1981).

Stages of Language and Communication Development
Stage Age Developmental Language and Communication
1 0–3 months Reflexive communication
2 3–8 months Reflexive communication; interest in others
3 8–13 months Intentional communication; sociability
4 12–18 months First words
5 18–24 months Simple sentences of two words
6 2–3 years Sentences of three or more words
7 3–5 years Complex sentences; has conversations
Table 7.1

DIG DEEPER: The Case of Genie

In the fall of 1970, a social worker in the Los Angeles area found a 13-year-old girl who was being raised in extremely neglectful and abusive conditions. The girl, who came to be known as Genie, had lived most of her life tied to a potty chair or confined to a crib in a small room that was kept closed with the curtains drawn. For a little over a decade, Genie had virtually no social interaction and no access to the outside world. As a result of these conditions, Genie was unable to stand up, chew solid food, or speak (Fromkin, Krashen, Curtiss, Rigler, & Rigler, 1974; Rymer, 1993). The police took Genie into protective custody.

Genie’s abilities improved dramatically following her removal from her abusive environment, and early on, it appeared she was acquiring language—much later than would be predicted by critical period hypotheses that had been posited at the time (Fromkin et al., 1974). Genie managed to amass an impressive vocabulary in a relatively short amount of time. However, she never developed a mastery of the grammatical aspects of language (Curtiss, 1981). Perhaps being deprived of the opportunity to learn language during a critical period impeded Genie’s ability to fully acquire and use language.

You may recall that each language has its own set of phonemes that are used to generate morphemes, words, and so on. Babies can discriminate among the sounds that make up a language (for example, they can tell the difference between the “s” in vision and the “ss” in fission); early on, they can differentiate between the sounds of all human languages, even those that do not occur in the languages that are used in their environments. However, by the time that they are about 1 year old, they can only discriminate among those phonemes that are used in the language or languages in their environments (Jensen, 2011; Werker & Lalonde, 1988; Werker & Tees, 1984).

After the first few months of life, babies enter what is known as the babbling stage, during which time they tend to produce single syllables that are repeated over and over. As time passes, more variations appear in the syllables that they produce. During this time, it is unlikely that the babies are trying to communicate; they are just as likely to babble when they are alone as when they are with their caregivers (Fernández & Cairns, 2011). Interestingly, babies who are raised in environments in which sign language is used will also begin to show babbling in the gestures of their hands during this stage (Petitto, Holowka, Sergio, Levy, & Ostry, 2004).

Generally, a child’s first word is uttered sometime between the ages of 1 year and 18 months, and for the next few months, the child will remain in the “one word” stage of language development. During this time, children know a number of words, but they only produce one-word utterances. The child’s early vocabulary is limited to familiar objects or events, often nouns. Although children in this stage only make one-word utterances, these words often carry larger meaning (Fernández & Cairns, 2011). So, for example, a child saying “cookie” could be identifying a cookie or asking for a cookie.

As a child’s lexicon grows, she begins to utter simple sentences and to acquire new vocabulary at a very rapid pace. In addition, children begin to demonstrate a clear understanding of the specific rules that apply to their language(s). Even the mistakes that children sometimes make provide evidence of just how much they understand about those rules. This is sometimes seen in the form of overgeneralization. In this context, overgeneralization refers to an extension of a language rule to an exception to the rule. For example, in English, it is usually the case that an “s” is added to the end of a word to indicate plurality. For example, we speak of one dog versus two dogs. Young children will overgeneralize this rule to cases that are exceptions to the “add an s to the end of the word” rule and say things like “those two gooses” or “three mouses.” Clearly, the rules of the language are understood, even if the exceptions to the rules are still being learned (Moskowitz, 1978).

Language and Thought

When we speak one language, we agree that words are representations of ideas, people, places, and events. The given language that children learn is connected to their culture and surroundings. But can words themselves shape the way we think about things? Psychologists have long investigated the question of whether language shapes thoughts and actions, or whether our thoughts and beliefs shape our language. Two researchers, Edward Sapir and Benjamin Lee Whorf, began this investigation in the 1940s. They wanted to understand how the language habits of a community encourage members of that community to interpret language in a particular manner (Sapir, 1941/1964). Sapir and Whorf proposed that language determines thought. For example, in some languages, there are many different words for love. However, in English, we use the word love for all types of love. Does this affect how we think about love depending on the language that we speak (Whorf, 1956)? Researchers have since identified this view as too absolute, pointing out a lack of empiricism behind what Sapir and Whorf proposed (Abler, 2013; Boroditsky, 2011; van Troyer, 1994). Today, psychologists continue to study and debate the relationship between language and thought.

Learning Objectives

By the end of this section, you will be able to:

  • Describe problem solving strategies
  • Define algorithm and heuristic
  • Explain some common roadblocks to effective problem solving and decision making

People face problems every day—usually, multiple problems throughout the day. Sometimes these problems are straightforward: To double a recipe for pizza dough, for example, all that is required is that each ingredient in the recipe is doubled. Sometimes, however, the problems we encounter are more complex. For example, say you have a work deadline, and you must mail a printed copy of a report to your supervisor by the end of the business day. The report is time-sensitive and must be sent overnight. You finished the report last night, but your printer will not work today. What should you do? First, you need to identify the problem and then apply a strategy for solving the problem.

Problem-Solving Strategies

When you are presented with a problem, whether it is a complex mathematical problem or a broken printer, how do you solve it? Before finding a solution to the problem, the problem must first be clearly identified. After that, one of many problem-solving strategies can be applied, hopefully resulting in a solution.

A problem-solving strategy is a plan of action used to find a solution. Different strategies have different action plans associated with them (Table 7.2). For example, a well-known strategy is trial and error. The old adage, “If at first you don’t succeed, try, try again,” describes trial and error. In terms of your broken printer, you could try checking the ink levels, and if that doesn’t work, you could check to make sure the paper tray isn’t jammed. Or maybe the printer isn’t actually connected to your laptop. When using trial and error, you would continue to try different solutions until you solved your problem. Although trial and error is not typically one of the most time-efficient strategies, it is a commonly used one.

Problem-Solving Strategies
Method Description Example
Trial and error Continue trying different solutions until problem is solved Restarting phone, turning off WiFi, turning off Bluetooth in order to determine why your phone is malfunctioning
Algorithm Step-by-step problem-solving formula Instruction manual for installing new software on your computer
Heuristic General problem-solving framework Working backwards; breaking a task into steps
Table 7.2

Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed. Algorithms are used frequently in our everyday lives, especially in computer science. When you run a search on the Internet, search engines like Google use algorithms to decide which entries will appear first in your list of results. Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used?
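To make the idea concrete, here is a minimal Python sketch of an algorithm, echoing the recipe-doubling example from earlier in this section. The function name scale_recipe and the ingredient amounts are illustrative inventions, not part of the text; the point is simply that following the same fixed steps always produces the same result.

```python
# A minimal sketch of an algorithm: a fixed sequence of steps that
# yields the same result every time it runs. Ingredient names and
# amounts are illustrative.

def scale_recipe(recipe: dict[str, float], factor: float) -> dict[str, float]:
    """Multiply every ingredient amount by the same factor."""
    return {ingredient: amount * factor for ingredient, amount in recipe.items()}

dough = {"flour_cups": 2.0, "water_cups": 0.75, "yeast_tsp": 1.0, "salt_tsp": 0.5}
print(scale_recipe(dough, 2))  # the doubled recipe, identical on every run
```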

A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite its time-saving characteristics, it is not always the best method for making a rational decision. Different types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when one of five conditions is met (Pratkanis, 1989):

  • When one is faced with too much information
  • When the time to make a decision is limited
  • When the decision to be made is unimportant
  • When there is access to very little information to use in making the decision
  • When an appropriate heuristic happens to come to mind in the same moment

Working backward is a useful heuristic in which you begin solving the problem by focusing on the end result. Consider this example: You live in Washington, D.C., and have been invited to a wedding at 4 PM on Saturday in Philadelphia. Knowing that Interstate 95 tends to back up any day of the week, you need to plan your route and time your departure accordingly. If you want to be at the wedding service by 3:30 PM, and it takes 2.5 hours to get to Philadelphia without traffic, what time should you leave your house? You use the working backward heuristic to plan the events of your day on a regular basis, probably without even thinking about it.
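For readers who want to see the working-backward arithmetic spelled out, here is a short Python sketch. The 45-minute traffic buffer and the specific date are illustrative assumptions, not figures from the text.

```python
from datetime import datetime, timedelta

# Work backward from the goal: start at the required arrival time and
# subtract each preceding step to find the latest departure time.
arrival = datetime(2024, 6, 1, 15, 30)   # seated at the wedding by 3:30 PM
drive = timedelta(hours=2, minutes=30)   # D.C. to Philadelphia without traffic
traffic_buffer = timedelta(minutes=45)   # illustrative cushion for I-95 backups

departure = arrival - drive - traffic_buffer
print(departure.strftime("%I:%M %p"))    # 12:15 PM
```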

Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps. Students often use this common method to complete a large research project or a long essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough draft, develop a final draft, organize the references list, and proofread their work before turning in the project. The large task becomes less overwhelming when it is broken down into a series of small steps.

EVERYDAY CONNECTION: Solving Puzzles

Problem-solving abilities can improve with practice. Many people challenge themselves every day with puzzles and other mental exercises to sharpen their problem-solving skills. Sudoku puzzles appear daily in most newspapers. Typically, a sudoku puzzle is a 9×9 grid. The simple sudoku below (Figure 7.7) is a 4×4 grid. To solve the puzzle, fill in the empty boxes with a single digit: 1, 2, 3, or 4. Here are the rules: The numbers must total 10 in each bolded box, each row, and each column; however, each digit can only appear once in a bolded box, row, and column. Time yourself as you solve this puzzle and compare your time with a classmate.

A four column by four row Sudoku puzzle is shown. The top left cell contains the number 3. The top right cell contains the number 2. The bottom right cell contains the number 1. The bottom left cell contains the number 4. The cell at the intersection of the second row and the second column contains the number 4. The cell to the right of that contains the number 1. The cell below the cell containing the number 1 contains the number 2. The cell to the left of the cell containing the number 2 contains the number 3.
Figure 7.7 How long did it take you to solve this sudoku puzzle? (You can see the answer at the end of this section.)
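If you would rather hand the trial and error to a computer, the Python sketch below solves the 4×4 puzzle in Figure 7.7 by backtracking: it places a digit, moves on, and undoes the guess whenever it reaches a dead end. Because each row, column, and bolded 2×2 box must contain the digits 1 through 4 exactly once, the required total of 10 follows automatically (1 + 2 + 3 + 4 = 10). The starting grid is transcribed from the figure; zeros mark the blanks.

```python
# Backtracking solver for the 4x4 sudoku in Figure 7.7.
grid = [
    [3, 0, 0, 2],
    [0, 4, 1, 0],
    [0, 3, 2, 0],
    [4, 0, 0, 1],
]

def valid(g, r, c, d):
    """Can digit d go at row r, column c without repeating?"""
    if d in g[r] or d in (g[i][c] for i in range(4)):
        return False
    br, bc = 2 * (r // 2), 2 * (c // 2)  # top-left corner of the 2x2 box
    return all(g[br + i][bc + j] != d for i in range(2) for j in range(2))

def solve(g):
    for r in range(4):
        for c in range(4):
            if g[r][c] == 0:
                for d in (1, 2, 3, 4):
                    if valid(g, r, c, d):
                        g[r][c] = d
                        if solve(g):
                            return True
                        g[r][c] = 0  # dead end: undo and try the next digit
                return False  # no digit fits here; backtrack
    return True  # no blanks remain, so the grid is solved

solve(grid)
print(*grid, sep="\n")
```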

Here is another popular type of puzzle (Figure 7.8) that challenges your spatial reasoning skills. Connect all nine dots with four connecting straight lines without lifting your pencil from the paper:

A square shaped outline contains three rows and three columns of dots with equal space between them.
Figure 7.8 Did you figure it out? (The answer is at the end of this section.) Once you understand how to crack this puzzle, you won’t forget.

Take a look at the “Puzzling Scales” logic puzzle below (Figure 7.9). Sam Loyd, a well-known puzzle master, created and refined countless puzzles throughout his lifetime (Cyclopedia of Puzzles, n.d.).

A puzzle involving a scale is shown. At the top of the figure it reads: “Sam Loyd’s Puzzling Scales.” The first row of the puzzle shows a balanced scale with 3 blocks and a top on the left and 12 marbles on the right. Below this row it reads: “Since the scales now balance.” The next row of the puzzle shows a balanced scale with just the top on the left, and 1 block and 8 marbles on the right. Below this row it reads: “And balance when arranged this way.” The third row shows an unbalanced scale with the top on the left side, which is much lower than the right side. The right side is empty. Below this row it reads: “Then how many marbles will it require to balance with that top?”
Figure 7.9 What steps did you take to solve this puzzle? You can read the solution at the end of this section.

 

Pitfalls to Problem Solving

Not all problems are successfully solved, however. What challenges stop us from successfully solving a problem? Albert Einstein once said, “Insanity is doing the same thing over and over again and expecting a different result.” Imagine a person in a room that has four doorways. One doorway that has always been open in the past is now locked. The person, accustomed to exiting the room by that particular doorway, keeps trying to get out through the same doorway even though the other three doorways are open. The person is stuck—but she just needs to go to another doorway, instead of trying to get out through the locked doorway. A mental set occurs when you persist in approaching a problem in a way that has worked in the past but is clearly not working now.

Functional fixedness is a type of mental set where you cannot perceive an object being used for something other than what it was designed for. Duncker (1945) conducted foundational research on functional fixedness. He created an experiment in which participants were given a candle, a book of matches, and a box of thumbtacks. They were instructed to use those items to attach the candle to the wall so that it did not drip wax onto the table below. Participants had to overcome functional fixedness to solve the problem (Figure 7.10). During the Apollo 13 mission to the moon, NASA engineers at Mission Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft. An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift air filter, which saved the lives of the astronauts.

Figure a shows a book of matches, a box of thumbtacks, and a candle. Figure b shows the candle standing in the box that held the thumbtacks. A thumbtack attaches the box holding the candle to the wall.
Figure 7.10 In Duncker’s classic study, participants were provided the three objects in the top panel and asked to solve the problem. The solution is shown in the bottom portion.

In order to make good decisions, we use our knowledge and our reasoning. Often, this knowledge and reasoning is sound and solid. Sometimes, however, we are swayed by biases or by others manipulating a situation. For example, let’s say you and three friends wanted to rent a house and had a combined target budget of $1,600. The realtor shows you only very run-down houses for $1,600 and then shows you a very nice house for $2,000. Might you ask each person to pay more in rent to get the $2,000 home? Split four ways, that would be $500 each instead of $400. Why would the realtor show you the run-down houses and the nice house? The realtor may be exploiting your anchoring bias. An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem. In this case, you’re so focused on the amount of money you are willing to spend that you may not recognize what kinds of houses are available at that price point.

The confirmation bias is the tendency to focus on information that confirms your existing beliefs. For example, if you think that your professor is not very nice, you notice all of the instances of rude behavior exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily basis. Hindsight bias leads you to believe that the event you just experienced was predictable, even though it really wasn’t. In other words, you knew all along that things would turn out the way they did. Representative bias describes a faulty way of thinking, in which you unintentionally stereotype someone or something; for example, you may assume that your professors spend their free time reading books and engaging in intellectual conversation because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors.

Finally, the availability heuristic is a heuristic in which you make a decision based on an example, information, or recent experience that is readily available to you, even though it may not be the best example to inform your decision. Biases tend to “preserve that which is already established—to maintain our preexisting knowledge, beliefs, attitudes, and hypotheses” (Aronson, 1995; Kahneman, 2011). These biases are summarized in Table 7.3.

Summary of Decision Biases
Bias Description
Anchoring Tendency to focus on one particular piece of information when making decisions or problem-solving
Confirmation Focuses on information that confirms existing beliefs
Hindsight Belief that the event just experienced was predictable
Representative Unintentional stereotyping of someone or something
Availability Decision is based upon either an available precedent or an example that may be faulty
Table 7.3

Learning Objectives

By the end of this section, you will be able to:

  • Define intelligence
  • Explain the triarchic theory of intelligence
  • Identify the difference between intelligence theories
  • Explain emotional intelligence
  • Define creativity

Classifying Intelligence

What exactly is intelligence? The way that researchers have defined the concept of intelligence has been modified many times since the birth of psychology. British psychologist Charles Spearman believed intelligence consisted of one general factor, called g, which could be measured and compared among individuals. Spearman focused on the commonalities among various intellectual abilities and de-emphasized what made each unique. Long before modern psychology developed, however, ancient philosophers, such as Aristotle, held a similar view (Cianciolo & Sternberg, 2004).

Other psychologists believe that instead of a single factor, intelligence is a collection of distinct abilities. In the 1940s, Raymond Cattell proposed a theory of intelligence that divided general intelligence into two components: crystallized intelligence and fluid intelligence (Cattell, 1963). Crystallized intelligence is characterized as acquired knowledge and the ability to retrieve it. When you learn, remember, and recall information, you are using crystallized intelligence. You use crystallized intelligence all the time in your coursework by demonstrating that you have mastered the information covered in the course. Fluid intelligence encompasses the ability to see complex relationships and solve problems. Navigating your way home after being detoured onto an unfamiliar route because of road construction would draw upon your fluid intelligence. Fluid intelligence helps you tackle complex, abstract challenges in your daily life, whereas crystallized intelligence helps you overcome concrete, straightforward problems (Cattell, 1963).

Other theorists and psychologists believe that intelligence should be defined in more practical terms. For example, what types of behaviors help you get ahead in life? Which skills promote success? Think about this for a moment. Being able to recite all 45 presidents of the United States in order is an excellent party trick, but will knowing this make you a better person?

Robert Sternberg developed another theory of intelligence, which he titled the triarchic theory of intelligence because it sees intelligence as comprised of three parts (Sternberg, 1988): practical, creative, and analytical intelligence (Figure 7.12).

Three boxes are arranged in a triangle. The top box contains “Analytical intelligence; academic problem solving and computation.” There is a line with arrows on both ends connecting this box to another box containing “Practical intelligence; street smarts and common sense.” Another line with arrows on both ends connects this box to another box containing “Creative intelligence; imaginative and innovative problem solving.” Another line with arrows on both ends connects this box to the first box described, completing the triangle.
Figure 7.12 Sternberg’s theory identifies three types of intelligence: practical, creative, and analytical.

Practical intelligence, as proposed by Sternberg, is sometimes compared to “street smarts.” Being practical means you find solutions that work in your everyday life by applying knowledge based on your experiences. This type of intelligence appears to be separate from the traditional understanding of IQ; individuals who score high in practical intelligence may or may not have comparable scores in creative and analytical intelligence (Sternberg, 1988).

Analytical intelligence is closely aligned with academic problem solving and computations. Sternberg says that analytical intelligence is demonstrated by an ability to analyze, evaluate, judge, compare, and contrast. When reading a classic novel for a literature class, for example, it is usually necessary to compare the motives of the main characters of the book or analyze the historical context of the story. In a science course such as anatomy, you must study the processes by which the body uses various minerals in different human systems. In developing an understanding of this topic, you are using analytical intelligence. When solving a challenging math problem, you would apply analytical intelligence to analyze different aspects of the problem and then solve it section by section.

Creative intelligence is marked by inventing or imagining a solution to a problem or situation. Creativity in this realm can include finding a novel solution to an unexpected problem or producing a beautiful work of art or a well-developed short story. Imagine for a moment that you are camping in the woods with some friends and realize that you’ve forgotten your camp coffee pot. The person in your group who figures out a way to successfully brew coffee for everyone would be credited as having higher creative intelligence.

Multiple Intelligences Theory was developed by Howard Gardner, a Harvard psychologist and former student of Erik Erikson. Gardner’s theory, which has been refined for more than 30 years, is a more recent development among theories of intelligence. In Gardner’s theory, each person possesses at least eight intelligences. Among these eight intelligences, a person typically excels in some and falters in others (Gardner, 1983). Table 7.4 describes each type of intelligence.

Multiple Intelligences
Intelligence Type Characteristics Representative Career
Linguistic intelligence Perceives different functions of language, different sounds and meanings of words, may easily learn multiple languages Journalist, novelist, poet, teacher
Logical-mathematical intelligence Capable of seeing numerical patterns, strong ability to use reason and logic Scientist, mathematician
Musical intelligence Understands and appreciates rhythm, pitch, and tone; may play multiple instruments or perform as a vocalist Composer, performer
Bodily kinesthetic intelligence High ability to control the movements of the body and use the body to perform various physical tasks Dancer, athlete, athletic coach, yoga instructor
Spatial intelligence Ability to perceive the relationship between objects and how they move in space Choreographer, sculptor, architect, aviator, sailor
Interpersonal intelligence Ability to understand and be sensitive to the various emotional states of others Counselor, social worker, salesperson
Intrapersonal intelligence Ability to access personal feelings and motivations, and use them to direct behavior and reach personal goals Key component of personal success over time
Naturalist intelligence High capacity to appreciate the natural world and interact with the species within it Biologist, ecologist, environmentalist
Table 7.4

Gardner’s theory is relatively new and needs additional research to better establish empirical support. At the same time, his ideas challenge the traditional idea of intelligence to include a wider variety of abilities, although it has been suggested that Gardner simply relabeled what other theorists called “cognitive styles” as “intelligences” (Morgan, 1996). Furthermore, developing traditional measures of Gardner’s intelligences is extremely difficult (Furnham, 2009; Gardner & Moran, 2006; Klein, 1997).

Gardner’s inter- and intrapersonal intelligences are often combined into a single type: emotional intelligence. Emotional intelligence encompasses the ability to understand the emotions of yourself and others, show empathy, understand social relationships and cues, and regulate your own emotions and respond in culturally appropriate ways (Parker, Saklofske, & Stough, 2009). People with high emotional intelligence typically have well-developed social skills. Some researchers, including Daniel Goleman, the author of Emotional Intelligence: Why It Can Matter More than IQ, argue that emotional intelligence is a better predictor of success than traditional intelligence (Goleman, 1995). However, emotional intelligence has been widely debated, with researchers pointing out inconsistencies in how it is defined and described, as well as questioning results of studies on a subject that is difficult to measure and study empirically (Locke, 2005; Mayer, Salovey, & Caruso, 2004).

The most comprehensive theory of intelligence to date is the Cattell-Horn-Carroll (CHC) theory of cognitive abilities (Schneider & McGrew, 2018). In this theory, abilities are related and arranged in a hierarchy with general abilities at the top, broad abilities in the middle, and narrow (specific) abilities at the bottom. The narrow abilities are the only ones that can be directly measured; however, they are integrated within the other abilities. At the general level is general intelligence. Next, the broad level consists of general abilities such as fluid reasoning, short-term memory, and processing speed. Finally, as the hierarchy continues, the narrow level includes specific forms of cognitive abilities. For example, short-term memory would further break down into memory span and working memory capacity.
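One way to picture the CHC hierarchy is as a small tree, sketched below in Python. The abilities listed under each broad ability are a partial, illustrative selection rather than the full CHC taxonomy.

```python
# A sketch of the CHC hierarchy: general intelligence at the top,
# broad abilities in the middle, narrow abilities at the leaves.
# The narrow abilities shown are an illustrative subset.
chc = {
    "general intelligence (g)": {
        "fluid reasoning": ["induction", "deduction"],
        "short-term memory": ["memory span", "working memory capacity"],
        "processing speed": ["perceptual speed", "reaction time"],
    }
}

# Only the narrow abilities at the leaves are measured directly;
# broad and general scores are inferred from them.
for broad, narrow in chc["general intelligence (g)"].items():
    print(f"{broad}: measured via {', '.join(narrow)}")
```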

Intelligence can also have different meanings and values in different cultures. If you live on a small island, where most people get their food by fishing from boats, it would be important to know how to fish and how to repair a boat. If you were an exceptional angler, your peers would probably consider you intelligent. If you were also skilled at repairing boats, your intelligence might be known across the whole island. Think about your own family’s culture. What values are important for Latinx families? Italian families? In Irish families, hospitality and telling an entertaining story are marks of the culture. If you are a skilled storyteller, other members of Irish culture are likely to consider you intelligent.

Some cultures place a high value on working together as a collective. In these cultures, the importance of the group supersedes the importance of individual achievement. When you visit such a culture, how well you relate to the values of that culture exemplifies your cultural intelligence, sometimes referred to as cultural competence.

Creativity

Creativity is the ability to generate, create, or discover new ideas, solutions, and possibilities. Very creative people often have intense knowledge about something, work on it for years, look at novel solutions, seek out the advice and help of other experts, and take risks. Although creativity is often associated with the arts, it is actually a vital form of intelligence that drives people in many disciplines to discover something new. Creativity can be found in every area of life, from the way you decorate your residence to a new way of understanding how a cell works.

Creativity is often assessed as a function of one’s ability to engage in divergent thinking. Divergent thinking can be described as thinking “outside the box”; it allows an individual to arrive at unique, multiple solutions to a given problem. In contrast, convergent thinking describes the ability to provide a correct or well-established answer or solution to a problem (Cropley, 2006; Guilford, 1967).

Learning Objectives

By the end of this section, you will be able to:

  • Explain how intelligence tests are developed
  • Describe the history of the use of IQ tests
  • Describe the purposes and benefits of intelligence testing

While you’re likely familiar with the term “IQ” and associate it with the idea of intelligence, what does IQ really mean? IQ stands for intelligence quotient and describes a score earned on a test designed to measure intelligence. You’ve already learned that there are many ways psychologists describe intelligence (or more aptly, intelligences). Similarly, IQ tests—the tools designed to measure intelligence—have been the subject of debate throughout their development and use.

When might an IQ test be used? What do we learn from the results, and how might people use this information? While there are certainly many benefits to intelligence testing, it is important to also note the limitations and controversies surrounding these tests. For example, IQ tests have sometimes been used as arguments in support of insidious purposes, such as the eugenics movement (Severson, 2011). The infamous Supreme Court case Buck v. Bell legalized the forced sterilization of some people deemed “feeble-minded” through this type of testing, resulting in about 65,000 sterilizations (Buck v. Bell, 274 U.S. 200; Ko, 2016). Today, only professionals trained in psychology can administer IQ tests, and the purchase of most tests requires an advanced degree in psychology. Other professionals in the field, such as social workers and psychiatrists, cannot administer IQ tests. In this section, we will explore what intelligence tests measure, how they are scored, and how they were developed.

Measuring Intelligence

It seems that the human understanding of intelligence is somewhat limited when we focus on traditional or academic-type intelligence. How then, can intelligence be measured? And when we measure intelligence, how do we ensure that we capture what we’re really trying to measure (in other words, that IQ tests function as valid measures of intelligence)? In the following paragraphs, we will explore how intelligence tests were developed and the history of their use.

The IQ test has been synonymous with intelligence for over a century. In the late 1800s, Sir Francis Galton developed the first broad test of intelligence (Flanagan & Kaufman, 2004). Although he was not a psychologist, his contributions to the concepts of intelligence testing are still felt today (Gordon, 1995). Reliable intelligence testing (you may recall from earlier chapters that reliability refers to a test’s ability to produce consistent results) began in earnest during the early 1900s with a researcher named Alfred Binet (Figure 7.13). Binet was asked by the French government to develop an intelligence test to use on children to determine which ones might have difficulty in school; it included many verbally based tasks. American researchers soon realized the value of such testing. Lewis Terman, a Stanford professor, modified Binet’s work by standardizing the administration of the test and tested thousands of different-aged children to establish an average score for each age. As a result, the test was normed and standardized, which means that the test was administered consistently to a large enough representative sample of the population that the range of scores resulted in a bell curve (bell curves will be discussed later). Standardization means that the manner of administration, scoring, and interpretation of results is consistent. Norming involves giving a test to a large population so data can be collected comparing groups, such as age groups. The resulting data provide norms, or referential scores, by which to interpret future scores. Norms are not expectations of what a given group should know but a demonstration of what that group does know. Norming and standardizing the test ensures that new scores are reliable. This new version of the test was called the Stanford-Binet Intelligence Scale (Terman, 1916). Remarkably, an updated version of this test is still widely used today.

Photograph A shows a portrait of Alfred Binet. Photograph B shows six sketches of human faces. Above these faces is the label “Guide for Binet-Simon Scale. 223” The faces are arranged in three rows of two, and these rows are labeled “1, 2, and 3.” At the bottom it reads: “The psychological clinic is indebted for the loan of these cuts and those on p. 225 to the courtesy of Dr. Oliver P. Cornman, Associate Superintendent of Schools of Philadelphia, and Chairman of Committee on Backward Children Investigation. See Report of Committee, Dec. 31, 1910, appendix.”
Figure 7.13 (a) French psychologist Alfred Binet helped to develop intelligence testing. (b) This page is from a 1908 version of the Binet-Simon Intelligence Scale. Children being tested were asked which face, of each pair, was prettier.

In 1939, David Wechsler, a psychologist who spent part of his career working with World War I veterans, developed a new IQ test in the United States. Wechsler combined several subtests from other intelligence tests used between 1880 and World War I. These subtests tapped into a variety of verbal and nonverbal skills because Wechsler believed that intelligence encompassed “the global capacity of a person to act purposefully, to think rationally, and to deal effectively with his environment” (Wechsler, 1958, p. 7). He named the test the Wechsler-Bellevue Intelligence Scale (Wechsler, 1981). This combination of subtests became one of the most extensively used intelligence tests in the history of psychology. Although its name was later changed to the Wechsler Adult Intelligence Scale (WAIS) and has been revised several times, the aims of the test remain virtually unchanged since its inception (Boake, 2002). Today, there are three intelligence tests credited to Wechsler, the Wechsler Adult Intelligence Scale-fourth edition (WAIS-IV), the Wechsler Intelligence Scale for Children (WISC-V), and the Wechsler Preschool and Primary Scale of Intelligence—IV (WPPSI-IV) (Wechsler, 2012). These tests are used widely in schools and communities throughout the United States, and they are periodically normed and standardized as a means of recalibration. As a part of the recalibration process, the WISC-V was given to thousands of children across the country, and children taking the test today are compared with their same-age peers (Figure 7.13).

The WISC-V is composed of 14 subtests grouped into five indices: Verbal Comprehension, Visual Spatial, Fluid Reasoning, Working Memory, and Processing Speed. When the test is complete, individuals receive a score for each of the five indices and a Full Scale IQ score. The method of scoring reflects the understanding that intelligence is composed of multiple abilities in several cognitive realms, and it focuses on the mental processes that the child used to arrive at his or her answers to each test item.

The periodic recalibrations have led to a curious observation known as the Flynn effect. Named after James Flynn, who was among the first to describe the trend, the Flynn effect refers to the observation that each generation has a significantly higher IQ than the last. Flynn himself argues, however, that increased IQ scores do not necessarily mean that younger generations are more intelligent per se (Flynn, Shaughnessy, & Fulgham, 2012).

Ultimately, we are still left with the question of how valid intelligence tests are. Certainly, the most modern versions of these tests tap into more than verbal competencies, yet the specific skills that should be assessed in IQ testing, the degree to which any test can truly measure an individual’s intelligence, and the use of the results of IQ tests are still issues of debate (Gresham & Witt, 1997; Flynn, Shaughnessy, & Fulgham, 2012; Richardson, 2002; Schlinger, 2003).

The Bell Curve

The results of intelligence tests follow the bell curve, a graph in the general shape of a bell. When the bell curve is used in psychological testing, the graph demonstrates a normal distribution of a trait, in this case, intelligence, in the human population. Many human traits naturally follow the bell curve. For example, if you lined up all your female schoolmates according to height, it is likely that a large cluster of them would be the average height for an American woman: 5’4”–5’6”. This cluster would fall in the center of the bell curve, representing the average height for American women (Figure 7.14). There would be fewer women who stand closer to 4’11”. The same would be true for women of above-average height: those who stand closer to 5’11”. The trick to finding a bell curve in nature is to use a large sample size. Without a large sample size, it is less likely that the bell curve will represent the wider population. A representative sample is a subset of the population that accurately represents the general population. If, for example, you measured the height of the women in your classroom only, you might not actually have a representative sample. Perhaps the women’s basketball team wanted to take this course together, and they are all in your class. Because basketball players tend to be taller than average, the women in your class may not be a good representative sample of the population of American women. But if your sample included all the women at your school, it is likely that their heights would form a natural bell curve.
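The sampling point lends itself to a quick simulation. In the sketch below, every number is invented for illustration: a large simulated population of heights recovers the true average, while a small biased sample, like the basketball team, badly misrepresents it.

from random import gauss, seed

seed(1)
# Simulated heights in inches for a large population, averaging about 5'5".
population = [gauss(65, 2.5) for _ in range(100_000)]
# A biased classroom sample: twelve women who are all 5'8" or taller.
team = [h for h in population if h >= 68][:12]

print(sum(population) / len(population))   # about 65.0: the large sample recovers the mean
print(sum(team) / len(team))               # about 69: the biased sample overshoots it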

A graph of a bell curve is labeled “Height of U.S. Women.” The x axis is labeled “Height” and the y axis is labeled “Frequency.” Between the heights of five feet tall and five feet and five inches tall, the frequency rises to a curved peak, then begins dropping off at the same rate until it hits five feet ten inches tall.
Figure 7.14 Are you of below-average, average, or above-average height?

The same principles apply to intelligence test scores. Individuals earn a score called an intelligence quotient (IQ). Over the years, different types of IQ tests have evolved, but the way scores are interpreted remains the same. The average IQ score on an IQ test is 100. Standard deviations describe how data are dispersed in a population and give context to large data sets. The bell curve uses the standard deviation to show how all scores are dispersed from the average score (Figure 7.15). In modern IQ testing, one standard deviation is 15 points. So a score of 85 would be described as “one standard deviation below the mean.” How would you describe a score of 115 and a score of 70? Any IQ score that falls within one standard deviation above and below the mean (between 85 and 115) is considered average, and 68% of the population has IQ scores in this range. An IQ score of 130 or above is considered a superior level.
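These percentages follow directly from the normal model, and you can verify them with Python’s standard library; nothing below is data, just a normal distribution with mean 100 and standard deviation 15.

from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

print(iq.cdf(115) - iq.cdf(85))   # about 0.68: within one SD of the mean
print(1 - iq.cdf(130))            # about 0.023: at or above 130 ("superior")
print(iq.cdf(70))                 # about 0.023: at or below 70

On this model, a score of 115 sits at roughly the 84th percentile and a score of 70 at roughly the 2nd, which matches the 2.2% figure cited below.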

A graph of a bell curve is labeled “Intelligence Quotient Score.” The x axis is labeled “IQ,” and the y axis is labeled “Population.” Beginning at an IQ of 60, the population rises to a curved peak at an IQ of 100 and then drops off at the same rate ending near zero at an IQ of 140.
Figure 7.15 The majority of people have an IQ score between 85 and 115.

Only 2.2% of the population has an IQ score below 70 (American Psychological Association [APA], 2013). A score of 70 or below indicates significant cognitive delays. When these are combined with major deficits in adaptive functioning, a person is diagnosed with an intellectual disability (American Association on Intellectual and Developmental Disabilities, 2013). This condition was formerly known as mental retardation; the accepted term now is intellectual disability, which has four subtypes: mild, moderate, severe, and profound (Table 7.5). The Diagnostic and Statistical Manual of Mental Disorders lists criteria for each subgroup (APA, 2013).

Characteristics of Cognitive Disorders

| Intellectual Disability Subtype | Percentage of Population with Intellectual Disabilities | Description |
| Mild | 85% | 3rd- to 6th-grade skill level in reading, writing, and math; may be employed and live independently |
| Moderate | 10% | Basic reading and writing skills; functional self-care skills; requires some oversight |
| Severe | 5% | Functional self-care skills; requires oversight of daily environment and activities |
| Profound | <1% | May be able to communicate verbally or nonverbally; requires intensive oversight |

Table 7.5

On the other end of the intelligence spectrum are those individuals whose IQs fall into the highest ranges. Consistent with the bell curve, about 2% of the population falls into this category. People are considered gifted if they have an IQ score of 130 or higher, or superior intelligence in a particular area. Long ago, popular belief suggested that people of high intelligence were maladjusted. This idea was disproven through a groundbreaking study of gifted children. In 1921, Lewis Terman began a longitudinal study of over 1500 children with IQs over 135 (Terman, 1925). His findings showed that these children became well-educated, successful adults who were, in fact, well-adjusted (Terman & Oden, 1947). Additionally, Terman’s study showed that the subjects were above average in physical build and attractiveness, dispelling an earlier popular notion that highly intelligent people were “weaklings.” Some people with very high IQs elect to join Mensa, an organization dedicated to identifying, researching, and fostering intelligence. Members must have an IQ score in the top 2% of the population, and they may be required to pass other exams in their application to join the group.

DIG DEEPER: What’s in a Name? 

In the past, individuals with IQ scores below 70 and significant adaptive and social functioning delays were diagnosed with mental retardation. When this diagnosis was first named, the title held no social stigma. In time, however, the degrading word “retard” grew out of the diagnostic term and was frequently used as a taunt, especially among young people, until “mentally retarded” and “retard” were widely heard as insults. As such, the DSM-5 now labels this diagnosis “intellectual disability.” Many states once had a Department of Mental Retardation to serve those diagnosed with such cognitive delays; most have since renamed it the Department of Developmental Disabilities or something similar.

Erin Johnson’s younger brother Matthew has Down syndrome. She wrote this piece about what her brother taught her about the meaning of intelligence:

His whole life, learning has been hard. Entirely possible – just different. He has always excelled with technology – typing his thoughts was more effective than writing them or speaking them. Nothing says “leave me alone” quite like a text that reads, “Do Not Call Me Right Now.” He is fully capable of reading books up to about a third-grade level, but he didn’t love it and used to always ask others to read to him. That all changed when his nephew came along, because he willingly reads to him, and it is the most heart-swelling, smile-inducing experience I have ever had the pleasure of witnessing.

When it comes down to it, Matt can learn. He does learn. It just takes longer, and he has to work harder for it, which if we’re being honest, is not a lot of fun. He is extremely gifted in learning things he takes an interest in, and those things often seem a bit “strange” to others. But no matter. It just proves my point – he can learn. That does not mean he will learn at the same pace, or even to the same level. It also, unfortunately, does not mean he will be allotted the same opportunities to learn as many others.

Here’s the scoop. We are all wired with innate abilities to retain and apply our learning and natural curiosities and passions that fuel our desire to learn. But our abilities and curiosities may not be the same.

The world doesn’t work this way though, especially not for my brother and his counterparts. Have him read aloud a book about skunks, and you may not get a whole lot from him. But have him tell you about skunks straight out of his memory, and hold onto your hats. He can hack the school’s iPad system, but he can’t tell you how he did it. He can write out every direction for a drive to our grandparents’ home in Florida, but he can’t drive.

Society is quick to deem him disabled and use demeaning language like the r-word to describe him, but in reality, we haven’t necessarily given him opportunities to showcase the learning he can do. In my case, I can escape the need to memorize how to change the oil in my car without anyone assuming I can’t do it, or calling me names when they find out I can’t. But Matthew can’t get through a day at his job without someone assuming he needs help. He is bright. Brighter than most anyone would assume. Maybe we need to redefine what is smart.

My brother doesn’t fit in the narrow schema of intelligence that is accepted in our society. But intelligence is far more than being able to solve 525 x 62 or properly introduce yourself to another. Why can’t we assume the intelligence of someone who can recite all of a character’s lines in a movie or remember my birthday a year after I told him/her a single time? Why is it we allow a person’s diagnosis or appearance to make us not just wonder if, but entirely doubt that they are capable? Maybe we need to cut away the sides of the box we have created for people so everyone can fit.

My brother can learn. It may not be what you know. It may be knowledge you would deem unimportant. It may not follow a traditional learning trajectory. But the fact remains – he can learn. Everyone can learn. And even though it is harder for him and harder for others still, he is not a “retard.” Nobody is.

When you use the r-word, you are insinuating that an individual, whether someone with a disability or not, is unintelligent, foolish, and purposeless. This in turn tells a person with a disability that they too are unintelligent, foolish, and purposeless. Because the word was historically used to describe individuals with disabilities and twisted from its original meaning to fit a cruel new context, it is forevermore associated with people like my brother. No matter how a person looks or learns or behaves, the r-word is never a fitting term. It’s time we waved it goodbye.

Why Measure Intelligence?

The value of IQ testing is most evident in educational or clinical settings. Children who seem to be experiencing learning difficulties or severe behavioral problems can be tested to ascertain whether the child’s difficulties can be partly attributed to an IQ score that is significantly different from the mean for her age group. Without IQ testing—or another measure of intelligence—children and adults needing extra support might not be identified effectively. In addition, IQ testing is used in courts to determine whether a defendant has special or extenuating circumstances that preclude him from participating in some way in a trial. People also use IQ testing results to seek disability benefits from the Social Security Administration.

Learning Objectives

By the end of this section, you will be able to:

  • Describe how genetics and environment affect intelligence
  • Explain the relationship between IQ scores and socioeconomic status
  • Describe the difference between a learning disability and a developmental disorder

High Intelligence: Nature or Nurture?

Where does high intelligence come from? Some researchers believe that intelligence is a trait inherited from a person’s parents. Scientists who research this topic typically use twin studies to determine the heritability of intelligence. The Minnesota Study of Twins Reared Apart is one of the most well-known twin studies. In this investigation, researchers found that identical twins raised together and identical twins raised apart exhibit a higher correlation between their IQ scores than siblings or fraternal twins raised together (Bouchard, Lykken, McGue, Segal, & Tellegen, 1990). The findings from this study reveal a genetic component to intelligence (Figure 7.16). At the same time, other psychologists believe that intelligence is shaped by a child’s developmental environment. If parents provide their children with intellectual stimulation, even before birth, the thinking goes, the children are likely to absorb the benefits of that stimulation, and it will be reflected in their intelligence levels.

A chart shows correlations of IQs for people of varying relationships. The bottom is labeled “Percent IQ Correlation” and the left side is labeled “Relationship.” The percent IQ Correlation for relationships where no genes are shared, including adoptive parent-child pairs, similarly aged unrelated children raised together, and adoptive siblings are around 21 percent, 30 percent, and 32 percent, respectively. The percent IQ Correlation for relationships where 25 percent of genes are shared, as in half-siblings, is around 33 percent. The percent IQ Correlation for relationships where 50 percent of genes are shared, including parent-children pairs, and fraternal twins raised together, are roughly 44 percent and 62 percent, respectively. A relationship where 100 percent of genes are shared, as in identical twins raised apart, results in a nearly 80 percent IQ correlation.
Figure 7.16 The correlations of IQs of unrelated versus related persons reared apart or together suggest a genetic component to intelligence.
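The chapter does not walk through the arithmetic behind “a genetic component,” but behavioral geneticists often turn correlations like those in Figure 7.16 into a rough heritability estimate. The sketch below uses approximate values read off the chart together with Falconer’s classic formula; neither the exact numbers nor the formula appears in the text, so treat the results as illustrative only.

# Approximate correlations read off Figure 7.16 (illustrative only).
r_mz_apart = 0.80      # identical twins (100% shared genes) reared apart
r_dz_together = 0.62   # fraternal twins (50% shared genes) reared together

# Identical twins reared apart share genes but not home environment,
# so their IQ correlation is itself a crude heritability estimate:
h2_direct = r_mz_apart                           # about 0.80

# Falconer's formula doubles the identical-fraternal gap; substituting
# the reared-apart value for reared-together is a simplification:
h2_falconer = 2 * (r_mz_apart - r_dz_together)   # about 0.36

print(h2_direct, round(h2_falconer, 2))

That the two shortcuts disagree so widely is itself instructive: the estimate depends heavily on which relationships you compare, which is part of why the nature-versus-nurture debate described next persists.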

The reality is that aspects of each idea are probably correct. In fact, one study suggests that although genetics seem to be in control of the level of intelligence, the environmental influences provide both stability and change to trigger manifestation of cognitive abilities (Bartels, Rietveld, Van Baal, & Boomsma, 2002). Certainly, there are behaviors that support the development of intelligence, but the genetic component of high intelligence should not be ignored. As with all heritable traits, however, it is not always possible to isolate how and when high intelligence is passed on to the next generation.

Range of reaction is the theory that each person responds to the environment in a unique way based on his or her genetic makeup. According to this idea, your genetic potential is a fixed quantity, but whether you reach your full intellectual potential depends on the environmental stimulation you experience, especially in childhood. Think about this scenario: A couple adopts a child who has average genetic intellectual potential. They raise her in an extremely stimulating environment. What will happen to the couple’s new daughter? It is likely that the stimulating environment will improve her intellectual outcomes over the course of her life. But what happens if the experiment is reversed and a child with an extremely strong genetic background is placed in an unstimulating environment? Interestingly, a longitudinal study of highly gifted individuals found that “the two extremes of optimal and pathological experience are both represented disproportionately in the backgrounds of creative individuals”; however, those who experienced supportive family environments were more likely to report being happy (Csikszentmihalyi & Csikszentmihalyi, 1993, p. 187).

Another challenge to determining the origins of high intelligence is the confounding nature of our human social structures. It is troubling to note that some ethnic groups perform better on IQ tests than others—and it is likely that the results do not have much to do with the quality of each ethnic group’s intellect. The same is true for socioeconomic status. Children who live in poverty experience more pervasive, daily stress than children who do not worry about the basic needs of safety, shelter, and food. These worries can negatively affect how the brain functions and develops, causing a dip in IQ scores. Mark Kishiyama and his colleagues determined that children living in poverty demonstrated reduced prefrontal brain functioning comparable to that of children with damage to the lateral prefrontal cortex (Kishiyama, Boyce, Jimenez, Perry, & Knight, 2009).

The debate around the foundations and influences on intelligence exploded in 1969, when an educational psychologist named Arthur Jensen published the article “How Much Can We Boost IQ and Scholastic Achievement?” in the Harvard Educational Review. Jensen had administered IQ tests to diverse groups of students, and his results led him to the conclusion that IQ is determined by genetics. He also posited that intelligence was made up of two types of abilities: Level I and Level II. In his theory, Level I is responsible for rote memorization, whereas Level II is responsible for conceptual and analytical abilities. According to his findings, Level I was consistent across the human race; Level II, however, exhibited differences among ethnic groups (Modgil & Routledge, 1987). Jensen’s most controversial conclusion was that Level II intelligence is most prevalent among Asians, then Caucasians, then African Americans. Robert Williams was among those who called out racial bias in Jensen’s results (Williams, 1970).

Obviously, Jensen’s interpretation of his own data caused an intense response in a nation that continued to grapple with the effects of racism (Fox, 2012). However, Jensen’s ideas were not solitary or unique; rather, they represented one of many examples of psychologists asserting racial differences in IQ and cognitive ability. In fact, Rushton and Jensen (2005) reviewed three decades’ worth of research on the relationship between race and cognitive ability. Jensen’s belief in the inherited nature of intelligence and in the validity of the IQ test as the truest measure of intelligence is at the core of his conclusions. If, however, you believe that intelligence is more than Levels I and II, or that IQ tests do not control for socioeconomic and cultural differences among people, then perhaps you can dismiss Jensen’s conclusions as a single window that looks out on the complicated and varied landscape of human intelligence.

In a related story, parents of African American students filed a case against the State of California in 1979 because they believed that the testing method used to identify students with learning disabilities was culturally unfair: the tests had been normed and standardized using white children (Larry P. v. Riles). The testing method disproportionately identified African American children as having an intellectual disability, resulting in many students being incorrectly classified as “mentally retarded.”

What are Learning Disabilities?

Learning disabilities are cognitive disorders that affect different areas of cognition, particularly language or reading. It should be pointed out that learning disabilities are not the same thing as intellectual disabilities. Learning disabilities are considered specific neurological impairments rather than global intellectual or developmental disabilities. A person with a language disability has difficulty understanding or using spoken language, whereas someone with a reading disability, such as dyslexia, has difficulty processing what he or she is reading.

Often, learning disabilities are not recognized until a child reaches school age. One confounding aspect of learning disabilities is that they most often affect children with average to above-average intelligence. In other words, the disability is specific to a particular area and is not a measure of overall intellectual ability. At the same time, learning disabilities tend to exhibit comorbidity with other disorders, like attention-deficit/hyperactivity disorder (ADHD). Anywhere from 30% to 70% of individuals with diagnosed cases of ADHD also have some sort of learning disability (Riccio, Gonzales, & Hynd, 1994). Let’s take a look at three examples of common learning disabilities: dysgraphia, dyslexia, and dyscalculia.

Dysgraphia

Children with dysgraphia have a learning disability that results in a struggle to write legibly. The physical task of writing with a pen and paper is extremely challenging for the person. These children often have extreme difficulty putting their thoughts down on paper (Smits-Engelsman & Van Galen, 1997). This difficulty is inconsistent with a person’s IQ. That is, based on the child’s IQ and/or abilities in other areas, a child with dysgraphia should be able to write, but can’t. Children with dysgraphia may also have problems with spatial abilities.

Students with dysgraphia need academic accommodations to help them succeed in school. These accommodations can provide students with alternative assessment opportunities to demonstrate what they know (Barton, 2003). For example, a student with dysgraphia might be permitted to take an oral exam rather than a traditional paper-and-pencil test. Treatment is usually provided by an occupational therapist, although there is some question as to how effective such treatment is (Zwicker, 2005).

Dyslexia

Dyslexia is the most common learning disability in children. An individual with dyslexia exhibits an inability to correctly process letters. The neurological mechanism for sound processing does not work properly in someone with dyslexia. As a result, dyslexic children may not understand sound-letter correspondence. A child with dyslexia may mix up letters within words and sentences—letter reversals, such as those shown in Figure 7.17, are a hallmark of this learning disability—or skip whole words while reading. A dyslexic child may have difficulty spelling words correctly while writing. Because of the disordered way that the brain processes letters and sounds, learning to read is a frustrating experience. Some dyslexic individuals cope by memorizing the shapes of most words, but they never actually learn to read (Berninger, 2008).

Two columns and five rows all containing the word “teapot” are shown. “Teapot” is written ten times with the letters jumbled, sometimes appearing backwards and upside down.
Figure 7.17 These written words show variations of the word “teapot” as written by individuals with dyslexia.

Dyscalculia

Dyscalculia is difficulty in learning or comprehending arithmetic. This learning disability is often first evident when children exhibit difficulty discerning how many objects are in a small group without counting them. Other symptoms may include struggling to memorize math facts, organize numbers, or fully differentiate between numerals, math symbols, and written numbers (such as “3” and “three”).

Additional Supplemental Resources

Websites

  • Quick Draw 
    • Use Google’s Quick, Draw! web app on your phone to quickly draw five things for Google’s artificially intelligent neural net. When you are done, the app will show you what it thought each of the drawings was. How does this relate to the psychological ideas of concepts, prototypes, and schemas? The app works best in Chrome if used in a web browser.
  • Speech and Language Developmental Milestones
    • This article lists information about a variety of different topics relating to speech development, including how speech develops and what research is currently being done regarding speech development.
  • Human Intelligence
    • The Human intelligence site includes biographical profiles of people who have influenced the development of intelligence theory and testing, in-depth articles exploring current controversies related to human intelligence, and resources for teachers.
  • The Jam Experiment
    • In 2000, psychologists Sheena Iyengar and Mark Lepper from Columbia and Stanford University published a study about the paradox of choice.  This is the original journal article.
  • Mensa 
    • Mensa, the high IQ society, provides a forum for intellectual exchange among its members. There are members in more than 100 countries around the world.  Anyone with an IQ in the top 2% of the population can join.
  • The Turing Test
    • The Turing test, developed in the 1950s, refers to behavioral tests for the presence of mind, thought, or intelligence in putatively minded entities such as machines.
  • Center for Parent Resources on Intellectual Disability
    • Your central “Hub” of information and products created for the network of Parent Centers serving families of children with disabilities.

Videos

  • Why our IQ levels are higher than our grandparents’ 
    • How have average IQ levels changed over time? Hear James Flynn discuss the “Flynn Effect” in this Ted Talk. Closed captioning available.
  • How to Make Choosing Easier 
    • We all want customized experiences and products — but when faced with 700 options, consumers freeze up. With fascinating new research, Sheena Iyengar demonstrates how businesses (and others) can improve the experience of choosing. This is the same researcher that is featured in your midterm exam.
  • IQ Score Distribution
    • What does an IQ Score distribution look like?  Where do most people fall on an IQ Score distribution?  Find out more in this video. Closed captioning available.
  • How I Hacked Online Dating – Ted Talk 
    • How do we solve problems?  How can data help us to do this?  Follow Amy Webb’s story of how she used algorithms to help her find her way to true love. Closed captioning available.
  • Ted-Ed: Do animals have language?
    • In this Ted-Ed video, explore some of the ways in which animals communicate, and determine whether or not this communication qualifies as language.  A variety of discussion and assessment questions are included with the video (free registration is required to access the questions). Closed captioning available.
  • Ted-Ed: The benefits of a bilingual brain
    • Watch this Ted-Ed video to learn more about the benefits of speaking multiple languages, including how bilingualism helps the brain to process information, strengthens the brain, and keeps the speaker more engaged in their world.  A variety of discussion and assessment questions are included with the video (free registration is required to access the questions). Closed captioning available.
  • Crash Course Video #15 – Cognition: How Your Mind Can Amaze and Betray You
    • This video on how your mind can amaze and betray you includes information on topics such as concepts, prototypes, problem-solving, and mistakes in thinking. Closed captioning available.
  • Crash Course Video #16 – Language
    • This video on language includes information on topics such as the development of language, language theories, and brain areas involved in language, as well as language disorders. Closed captioning available.
  • Crash Course Video #23 – Controversy of Intelligence
    • This video on the controversy of intelligence includes information on topics such as theories of intelligence, emotional intelligence, and measuring intelligence. Closed captioning available.
  • Crash Course Video #24 – Brains vs Bias
    • This video on brains vs. bias includes information on topics such as intelligence testing, testing bias, and stereotype threat. Closed captioning available.

Access for free at https://openstax.org/books/psychology-2e/pages/1-introduction

Memory

7

A photograph shows a camera and a pile of photographs.
Figure 8.1 Photographs can trigger our memories and bring past experiences back to life. (credit: modification of work by Cory Zanker)

We may be top-notch learners, but if we don’t have a way to store what we’ve learned, what good is the knowledge we’ve gained?

Take a few minutes to imagine what your day might be like if you could not remember anything you had learned. You would have to figure out how to get dressed. What clothing should you wear, and how do buttons and zippers work? You would need someone to teach you how to brush your teeth and tie your shoes. Who would you ask for help with these tasks, since you wouldn’t recognize the faces of these people in your house? Wait . . . is this even your house? Uh oh, your stomach begins to rumble and you feel hungry. You’d like something to eat, but you don’t know where the food is kept or even how to prepare it. Oh dear, this is getting confusing. Maybe it would be best just to go back to bed. A bed . . . what is a bed?

We have an amazing capacity for memory, but how, exactly, do we process and store information? Are there different kinds of memory, and if so, what characterizes the different types? How, exactly, do we retrieve our memories? And why do we forget? This chapter will explore these questions as we learn about memory.

Learning Objectives

By the end of this section, you will be able to:

  • Discuss the three basic functions of memory
  • Describe the three stages of memory storage
  • Describe and distinguish between procedural and declarative memory and semantic and episodic memory

Memory is an information processing system; therefore, we often compare it to a computer. Memory is the set of processes used to encode, store, and retrieve information over different periods of time (Figure 8.2).

A diagram shows three boxes, placed in a row from left to right, respectively titled “Encoding,” “Storage,” and “Retrieval.” One right-facing arrow connects “Encoding” to “Storage” and another connects “Storage” to “Retrieval.”
Figure 8.2 Encoding involves the input of information into the memory system. Storage is the retention of the encoded information. Retrieval, or getting the information out of memory and back into awareness, is the third function.

We get information into our brains through a process called encoding, which is the input of information into the memory system. Once we receive sensory information from the environment, our brains label or code it. We organize the information with other similar information and connect new concepts to existing concepts. Encoding information occurs through automatic processing and effortful processing.

If someone asks you what you ate for lunch today, more than likely you could recall this information quite easily. This is known as automatic processing, or the encoding of details like time, space, frequency, and the meaning of words. Automatic processing is usually done without any conscious awareness. Recalling the last time you studied for a test is another example of automatic processing. But what about the actual test material you studied? It probably required a lot of work and attention on your part in order to encode that information. This is known as effortful processing (Figure 8.3).

A photograph shows a person driving a car.
Figure 8.3 When you first learn new skills such as driving a car, you have to put forth effort and attention to encode information about how to start a car, how to brake, how to handle a turn, and so on. Once you know how to drive, you can encode additional information about this skill automatically. (credit: Robert Couse-Baker)

What are the most effective ways to ensure that important memories are well encoded? Even a simple sentence is easier to recall when it is meaningful (Anderson, 1984). Read the following sentences (Bransford & McCarrell, 1974), then look away and count backward from 30 by threes to zero, and then try to write down the sentences (no peeking back at this page!).

  1. The notes were sour because the seams split.
  2. The voyage wasn’t delayed because the bottle shattered.
  3. The haystack was important because the cloth ripped.

How well did you do? By themselves, the statements that you wrote down were most likely confusing and difficult for you to recall. Now, try writing them again, using the following prompts: bagpipe, ship christening, and parachutist. Next, count backward from 40 by fours, then check yourself to see how well you recalled the sentences this time. You can see that the sentences are now much more memorable because each of the sentences was placed in context. Material is far better encoded when you make it meaningful.

There are three types of encoding. The encoding of words and their meaning is known as semantic encoding. It was first demonstrated by William Bousfield (1953) in an experiment in which he asked people to memorize words. The 60 words were actually divided into 4 categories of meaning, although the participants did not know this because the words were randomly presented. When they were asked to remember the words, they tended to recall them in categories, showing that they paid attention to the meanings of the words as they learned them.

Visual encoding is the encoding of images, and acoustic encoding is the encoding of sounds, words in particular. To see how visual encoding works, read over this list of words: car, level, dog, truth, book, value. If you were asked later to recall the words from this list, which ones do you think you’d most likely remember? You would probably have an easier time recalling the words car, dog, and book, and a more difficult time recalling the words level, truth, and value. Why is this? Because you can recall images (mental pictures) more easily than words alone. When you read the words car, dog, and book you created images of these things in your mind. These are concrete, high-imagery words. On the other hand, abstract words like level, truth, and value are low-imagery words. High-imagery words are encoded both visually and semantically (Paivio, 1986), thus building a stronger memory.

Now let’s turn our attention to acoustic encoding. You are driving in your car and a song comes on the radio that you haven’t heard in at least 10 years, but you sing along, recalling every word. In the United States, children often learn the alphabet through song, and they learn the number of days in each month through rhyme: “Thirty days hath September, / April, June, and November; / All the rest have thirty-one, / Save February, with twenty-eight days clear, / And twenty-nine each leap year.” These lessons are easy to remember because of acoustic encoding. We encode the sounds the words make. This is one of the reasons why much of what we teach young children is done through song, rhyme, and rhythm.

Which of the three types of encoding do you think would give you the best memory of verbal information? Some years ago, psychologists Fergus Craik and Endel Tulving (1975) conducted a series of experiments to find out. Participants were given words along with questions about them. The questions required the participants to process the words at one of the three levels. The visual processing questions included such things as asking the participants about the font of the letters. The acoustic processing questions asked the participants about the sound or rhyming of the words, and the semantic processing questions asked the participants about the meaning of the words. After participants were presented with the words and questions, they were given an unexpected recall or recognition task.

Words that had been encoded semantically were better remembered than those encoded visually or acoustically. Semantic encoding involves a deeper level of processing than the shallower visual or acoustic encoding. Craik and Tulving concluded that we process verbal information best through semantic encoding, especially if we apply what is called the self-reference effect. The self-reference effect is the tendency for an individual to have a better memory for information that relates to oneself in comparison to material that has less personal relevance (Rogers, Kuiper, & Kirker, 1977). Could semantic encoding be beneficial to you as you attempt to memorize the concepts in this chapter?

Storage

Once the information has been encoded, we have to somehow retain it. Our brains take the encoded information and place it in storage. Storage is the creation of a permanent record of information.

In order for a memory to go into storage (i.e., long-term memory), it has to pass through three distinct stages: sensory memory, short-term memory, and finally long-term memory. These stages were first proposed by Richard Atkinson and Richard Shiffrin (1968). Their model of human memory (Figure 8.4), called Atkinson and Shiffrin’s model, is based on the belief that we process memories in the same way that a computer processes information.

A flow diagram consists of four boxes with connecting arrows. The first box is labeled “sensory input.” An arrow leads to the second box, which is labeled “sensory memory.” An arrow leads to the third box which is labeled “short-term memory (STM).” An arrow points to the fourth box, labeled “long-term memory (LTM),” and an arrow points in the reverse direction from the fourth to the third box. Above the short-term memory box, an arrow leaves the top-right of the box and curves around to point back to the top-left of the box; this arrow is labeled “rehearsal.” Both the “sensory memory” and “short-term memory” boxes have an arrow beneath them pointing to the text “information not transferred is lost.”
Figure 8.4 According to the Atkinson-Shiffrin model of memory, information passes through three distinct stages in order for it to be stored in long-term memory.

Atkinson and Shiffrin’s model is not the only model of memory. Baddeley and Hitch (1974) proposed a working memory model in which short-term memory has different forms. In their model, storing memories in short-term memory is like opening different files on a computer and adding information. The working memory files hold a limited amount of information. The type of short-term memory (or computer file) depends on the type of information received. There are memories in visual-spatial form, as well as memories of spoken or written material, and they are stored in three short-term systems: a visuospatial sketchpad, an episodic buffer (Baddeley, 2000), and a phonological loop. According to Baddeley and Hitch, a central executive part of memory supervises or controls the flow of information to and from the three short-term systems, and the central executive is responsible for moving information into long-term memory.
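Because the model is explicitly computational, a toy simulation can make it concrete. Everything below is a hypothetical sketch of the Atkinson-Shiffrin stores (the class and method names are invented, and the capacities are only the chapter’s rough figures): unattended input never leaves sensory memory, short-term memory holds only a handful of items, and rehearsal is what copies an item into long-term memory.

# Toy sketch of the Atkinson-Shiffrin three-store model (names invented).
class MemoryModel:
    def __init__(self, stm_capacity=7):      # Miller's "magic number" 7 +/- 2
        self.stm_capacity = stm_capacity
        self.stm = []                         # short-term store: seconds, small
        self.ltm = set()                      # long-term store: indefinite

    def perceive(self, stimulus, attended=False):
        """Sensory memory: unattended input is lost within a second or two."""
        if not attended:
            return                            # information not transferred is lost
        if len(self.stm) >= self.stm_capacity:
            self.stm.pop(0)                   # displacement: the oldest item drops out
        self.stm.append(stimulus)

    def rehearse(self, stimulus):
        """Rehearsal moves information from STM into LTM."""
        if stimulus in self.stm:
            self.ltm.add(stimulus)

memory = MemoryModel()
memory.perceive("professor's outfit")               # unattended: never reaches STM
memory.perceive("exam definition", attended=True)   # attended: enters STM
memory.rehearse("exam definition")                  # rehearsed: copied into LTM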

Sensory Memory

In the Atkinson-Shiffrin model, stimuli from the environment are processed first in sensory memory: storage of brief sensory events, such as sights, sounds, and tastes. It is very brief storage—up to a couple of seconds. We are constantly bombarded with sensory information. We cannot absorb all of it, or even most of it. And most of it has no impact on our lives. For example, what was your professor wearing the last class period? As long as the professor was dressed appropriately, it does not really matter what she was wearing. We discard sensory information about sights, sounds, smells, and even textures that we do not view as valuable. If we view something as valuable, the information will move into our short-term memory system.

Short-Term Memory

Short-term memory (STM) is a temporary storage system that processes incoming sensory memory. The terms short-term and working memory are sometimes used interchangeably, but they are not exactly the same. Short-term memory is more accurately described as a component of working memory. Short-term memory takes information from sensory memory and sometimes connects that memory to something already in long-term memory. Short-term memory storage lasts 15 to 30 seconds. Think of it as the information you have displayed on your computer screen, such as a document, spreadsheet, or website. Then, the information in STM goes to long-term memory (you save it to your hard drive), or it is discarded (you delete a document or close a web browser).

Rehearsal moves information from short-term memory to long-term memory. Active rehearsal is a way of attending to information to move it from short-term to long-term memory. During active rehearsal, you repeat (practice) the information to be remembered. If you repeat it enough, it may be moved into long-term memory. For example, this type of active rehearsal is the way many children learn their ABCs by singing the alphabet song. Alternatively, elaborative rehearsal is the act of linking new information you are trying to learn to existing information that you already know. For example, if you meet someone at a party and your phone is dead but you want to remember his phone number, which starts with area code 203, you might remember that your uncle Abdul lives in Connecticut and has a 203 area code. This way, when you try to remember the phone number of your new prospective friend, you will easily remember the area code. Craik and Lockhart (1972) proposed the levels of processing hypothesis that states the deeper you think about something, the better you remember it.

You may find yourself asking, “How much information can our memory handle at once?” To explore the capacity and duration of your short-term memory, have a partner read the strings of random numbers (Figure 8.5) out loud to you, beginning each string by saying, “Ready?” and ending each by saying, “Recall,” at which point you should try to write down the string of numbers from memory.

A series of numbers includes two rows, with six numbers in each row. From left to right, the numbers increase from four digits to five, six, seven, eight, and nine digits. The first row includes “9754,” “68259,” “913825,” “5316842,” “86951372,” and “719384273,” and the second row includes “6419,” “67148,” “648327,” “5963827,” “51739826,” and “163875942.”
Figure 8.5 Work through this series of numbers using the recall exercise explained above to determine the longest string of digits that you can store.

Note the longest string at which you got the series correct. For most people, the capacity will probably be close to 7 plus or minus 2. In 1956, George Miller reviewed most of the research on the capacity of short-term memory and found that people can retain between 5 and 9 items, so he reported the capacity of short-term memory was the “magic number” 7 plus or minus 2. However, more contemporary research has found working memory capacity is 4 plus or minus 1 (Cowan, 2010). Generally, recall is somewhat better for random numbers than for random letters (Jacobs, 1887) and also often slightly better for information we hear (acoustic encoding) rather than information we see (visual encoding) (Anderson, 1969).
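If you do not have a partner handy, a few lines of Python can run the digit-span exercise for you. This is only a sketch: the string stays visible in your terminal scrollback (so no peeking), and it tests lengths of four through nine digits, as in Figure 8.5.

import random

for length in range(4, 10):                  # 4- to 9-digit strings, as in Figure 8.5
    target = "".join(random.choice("0123456789") for _ in range(length))
    input(f"Ready? {target}   (memorize it, then press Enter)")
    answer = input("Recall: ")
    if answer != target:
        print(f"Missed at {length} digits; your span is about {length - 1}.")
        break
else:
    print("You recalled all nine digits!")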

Memory trace decay and interference are two factors that affect short-term memory retention. Peterson and Peterson (1959) investigated short-term memory using three-letter sequences called trigrams (e.g., CLS) that had to be recalled after various time intervals between 3 and 18 seconds. Participants remembered about 80% of the trigrams after a 3-second delay, but only 10% after a delay of 18 seconds, which led them to conclude that short-term memory decayed in 18 seconds. During decay, the memory trace becomes less activated over time, and the information is forgotten. However, Keppel and Underwood (1962) examined only the first trials of the trigram task and found that proactive interference also affected short-term memory retention. During proactive interference, previously learned information interferes with the ability to learn new information. Both memory trace decay and proactive interference affect short-term memory. Once information reaches long-term memory, it has to be consolidated both at the synaptic level, which takes a few hours, and into the memory system, which can take weeks or longer.
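The chapter gives no decay equation, but the two Peterson and Peterson data points are enough for a back-of-the-envelope estimate of how quickly the trace fades. Assuming, purely for illustration, that recall falls off exponentially, the two points pin down a time constant:

import math

# Peterson & Peterson (1959): ~80% recall after 3 s, ~10% after 18 s.
r3, r18 = 0.80, 0.10

# If recall(t) is proportional to exp(-t / tau), then
# tau = (t2 - t1) / ln(r1 / r2):
tau = (18 - 3) / math.log(r3 / r18)

print(round(tau, 1))   # about 7.2 s: without rehearsal, the trace fades within seconds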

Long-term Memory

Long-term memory (LTM) is the continuous storage of information. Unlike short-term memory, long-term memory storage capacity is believed to be unlimited. It encompasses all the things you can remember that happened more than just a few minutes ago. One cannot really consider long-term memory without thinking about the way it is organized. Really quickly, what is the first word that comes to mind when you hear “peanut butter”? Did you think of jelly? If you did, you probably have associated peanut butter and jelly in your mind. It is generally accepted that memories are organized in semantic (or associative) networks (Collins & Loftus, 1975). A semantic network consists of concepts, and as you may recall from what you’ve learned about memory, concepts are categories or groupings of linguistic information, images, ideas, or memories, such as life experiences. Although individual experiences and expertise can affect concept arrangement, concepts are believed to be arranged hierarchically in the mind (Anderson & Reder, 1999; Johnson & Mervis, 1997, 1998; Palmer, Jones, Hennessy, Unze, & Pick, 1989; Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976; Tanaka & Taylor, 1991). Related concepts are linked, and the strength of the link depends on how often two concepts have been associated.

Semantic networks differ depending on personal experiences. Importantly for memory, activating any part of a semantic network also activates the concepts linked to that part to a lesser degree. The process is known as spreading activation (Collins & Loftus, 1975). If one part of a network is activated, it is easier to access the associated concepts because they are already partially activated. When you remember or recall something, you activate a concept, and the related concepts are more easily remembered because they are partially activated. However, the activations do not spread in just one direction. When you remember something, you usually have several routes to get the information you are trying to access, and the more links you have to a concept, the better your chances of remembering.
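Spreading activation is naturally described as a graph algorithm, and a minimal sketch makes the mechanism visible. The network contents and the 0.5 decay factor below are invented for illustration; the point is that activating one concept leaves its neighbors, and their neighbors more weakly, partially active.

# Toy semantic network; contents and decay factor are illustrative.
network = {
    "peanut butter": ["jelly", "bread", "sandwich"],
    "jelly": ["peanut butter", "jam"],
    "picnic": ["blanket", "plate", "basket"],
    "plate": ["picnic", "dinner"],
}

def activate(concept, strength=1.0, decay=0.5, levels=2, activation=None):
    """Activating a concept partially activates the concepts linked to it."""
    if activation is None:
        activation = {}
    activation[concept] = max(activation.get(concept, 0.0), strength)
    if levels > 0:
        for neighbor in network.get(concept, []):
            activate(neighbor, strength * decay, decay, levels - 1, activation)
    return activation

print(activate("picnic"))
# {'picnic': 1.0, 'blanket': 0.5, 'plate': 0.5, 'dinner': 0.25, 'basket': 0.5}

The partial activation of “plate” here is exactly the kind of priming discussed later in this chapter, where reading about a picnic makes the anagram AETPL resolve to “plate.”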

There are two types of long-term memory: explicit and implicit (Figure 8.6). Understanding the difference between explicit memory and implicit memory is important because aging, particular types of brain trauma, and certain disorders can impact explicit and implicit memory in different ways. Explicit memories are those we consciously try to remember, recall, and report. For example, if you are studying for your chemistry exam, the material you are learning will be part of your explicit memory. In keeping with the computer analogy, some information in your long-term memory would be like the information you have saved on the hard drive. It is not there on your desktop (your short-term memory), but most of the time you can pull up this information when you want it. Not all long-term memories are strong memories, and some memories can only be recalled using prompts. For example, you might easily recall a fact, such as the capital of the United States, but you might struggle to recall the name of the restaurant at which you had dinner when you visited a nearby city last summer. A prompt, such as that the restaurant was named after its owner, might help you recall the name of the restaurant. Explicit memory is sometimes referred to as declarative memory because it can be put into words. Explicit memory is divided into episodic memory and semantic memory.

Episodic memory is information about events we have personally experienced (i.e., an episode). For instance, the memory of your last birthday is an episodic memory. Usually, episodic memory is reported as a story. The concept of episodic memory was first proposed in the 1970s (Tulving, 1972). Since then, Tulving and others have reformulated the theory, and currently, scientists believe that episodic memory is memory about happenings in particular places at particular times—the what, where, and when of an event (Tulving, 2002). It involves recollection of visual imagery as well as the feeling of familiarity (Hassabis & Maguire, 2007). Semantic memory is knowledge about words, concepts, and language-based facts, and it is typically reported as facts. Semantic means having to do with language and knowledge about language. For example, the answers to questions such as “What is the definition of psychology?” and “Who was the first African American president of the United States?” are stored in your semantic memory.

Implicit memories are long-term memories that are not part of our consciousness. Although implicit memories are learned outside of our awareness and cannot be consciously recalled, implicit memory is demonstrated in the performance of some task (Roediger, 1990; Schacter, 1987). Implicit memory has been studied with cognitive demand tasks, such as performance on artificial grammars (Reber, 1976), word memory (Jacoby, 1983; Jacoby & Witherspoon, 1982), and learning unspoken and unwritten contingencies and rules (Greenspoon, 1955; Giddan & Eriksen, 1959; Krieckhaus & Eriksen, 1960). Returning to the computer metaphor, implicit memories are like a program running in the background, and you are not aware of their influence. Implicit memories can influence observable behaviors as well as cognitive tasks. In either case, you usually cannot put the memory into words that adequately describe the task. There are several types of implicit memories, including procedural, priming, and emotional conditioning.

A diagram consists of three rows of boxes. The box in the top row is labeled “long-term memory;” a line from the box separates into two lines leading to two boxes on the second row, labeled “explicit memory” and “implicit memory.” From each of the second row boxes, lines split and lead to additional boxes. From the “explicit memory” box are two boxes labeled “episodic (events and experiences)” and “semantic (concepts and facts).” From the “implicit memory” box are three boxes labeled “procedural (How to do things),” “Priming (stimulus exposure affects responses to a later stimulus),” and “emotional conditioning (Classically conditioned emotional responses).”
Figure 8.6 There are two components of long-term memory: explicit and implicit. Explicit memory includes episodic and semantic memory. Implicit memory includes procedural memory and things learned through conditioning.

Implicit procedural memory is often studied using observable behaviors (Adams, 1957; Lacey & Smith, 1954; Lazarus & McCleary, 1951). Implicit procedural memory stores information about the way to do something, and it is the memory for skilled actions, such as brushing your teeth, riding a bicycle, or driving a car. You were probably not that good at riding a bicycle or driving a car the first time you tried, but you were much better after doing those things for a year. Your improved bicycle riding was due to having learned to balance. You likely thought about staying upright in the beginning, but now you just do it. Moreover, you probably are good at staying balanced, but cannot tell someone the exact way you do it. Similarly, when you first learned to drive, you probably thought about a lot of things that you now do without much thought. When you first learned to do these tasks, someone may have told you how to do them, but everything you have learned since those instructions that you cannot readily explain to someone else is implicit memory.

Implicit priming is another type of implicit memory (Schacter, 1992). During priming, exposure to a stimulus affects the response to a later stimulus. Stimuli can vary and may include words, pictures, and other stimuli, all of which can elicit a response or increase recognition. For instance, some people really enjoy picnics. They love going into nature, spreading a blanket on the ground, and eating a delicious meal. Now, unscramble the following letters to make a word.

 

AETPL

 

What word did you come up with? Chances are good that it was “plate.”

Had you read, “Some people really enjoy growing flowers. They love going outside to their garden, fertilizing their plants, and watering their flowers,” you probably would have come up with the word “petal” instead of plate.

Do you recall the earlier discussion of semantic networks? The reason people are more likely to come up with “plate” after reading about a picnic is that plate is associated (linked) with picnic. Plate was primed by activating the semantic network. Similarly, “petal” is linked to flower and is primed by flower. Priming is also the reason you probably said jelly in response to peanut butter.

Implicit emotional conditioning is the type of memory involved in classically conditioned emotional responses (Olson & Fazio, 2001). These emotional associations cannot be reported or recalled but can be linked to different stimuli. For example, specific smells can cause specific emotional responses for some people. If there is a smell that makes you feel positive and nostalgic, and you don’t know where that response comes from, it is an implicit emotional response. Similarly, most people have a song that causes a specific emotional response. That song’s effect could be an implicit emotional memory (Yang, Xu, Du, Shi, & Fang, 2011).

LINK TO LEARNING: Watch this video about superior autobiographical memory from the television news show 60 Minutes to learn more.

So you have worked hard to encode (via effortful processing) and store some important information for your upcoming final exam. How do you get that information back out of storage when you need it? The act of getting information out of memory storage and back into conscious awareness is known as retrieval. This would be similar to finding and opening a paper you had previously saved on your computer’s hard drive. Now it’s back on your desktop, and you can work with it again. Our ability to retrieve information from long-term memory is vital to our everyday functioning. You must be able to retrieve information from memory in order to do everything from knowing how to brush your hair and teeth, to driving to work, to knowing how to perform your job once you get there.

There are three ways you can retrieve information out of your long-term memory storage system: recall, recognition, and relearning. Recall is what we most often think about when we talk about memory retrieval: it means you can access information without cues. For example, you would use recall for an essay test. Recognition happens when you identify information that you have previously learned after encountering it again. It involves a process of comparison. When you take a multiple-choice test, you are relying on recognition to help you choose the correct answer. Here is another example. Let’s say you graduated from high school 10 years ago, and you have returned to your hometown for your 10-year reunion. You may not be able to recall all of your classmates, but you recognize many of them based on their yearbook photos.

The third form of retrieval is relearning, and it’s just what it sounds like. It involves learning information that you previously learned. Whitney took Spanish in high school, but after high school, she did not have the opportunity to speak Spanish. Whitney is now 31, and her company has offered her an opportunity to work in their Mexico City office. In order to prepare herself, she enrolls in a Spanish course at the local community center. She’s surprised at how quickly she’s able to pick up the language after not speaking it for 13 years; this is an example of relearning.

Learning Objectives

By the end of this section, you will be able to:

  • Explain the brain functions involved in memory
  • Recognize the roles of the hippocampus, amygdala, and cerebellum

Are memories stored in just one part of the brain, or are they stored in many different parts of the brain? Karl Lashley began exploring this problem, about 100 years ago, by making lesions in the brains of animals such as rats and monkeys. He was searching for evidence of the engram: the group of neurons that serve as the “physical representation of memory” (Josselyn, 2010). First, Lashley (1950) trained rats to find their way through a maze. Then, he used the tools available at the time—in this case, a soldering iron—to create lesions in the rats’ brains, specifically in the cerebral cortex. He did this because he was trying to erase the engram, or the original memory trace that the rats had of the maze.

Lashley did not find evidence of the engram, and the rats were still able to find their way through the maze, regardless of the size or location of the lesion. Based on his creation of lesions and the animals’ reaction, he formulated the equipotentiality hypothesis: if part of one area of the brain involved in memory is damaged, another part of the same area can take over that memory function (Lashley, 1950). Although Lashley’s early work did not confirm the existence of the engram, modern psychologists are making progress locating it. For example, Eric Kandel has spent decades studying the synapse and its role in controlling the flow of information through neural circuits needed to store memories (Mayford, Siegelbaum, & Kandel, 2012).

Many scientists believe that the entire brain is involved with memory. However, since Lashley’s research, other scientists have been able to look more closely at the brain and memory. They have argued that memory is located in specific parts of the brain, and specific neurons can be recognized for their involvement in forming memories. The main parts of the brain involved with memory are the amygdala, the hippocampus, the cerebellum, and the prefrontal cortex (Figure 8.8).

An illustration of a brain shows the location of the amygdala, hippocampus, cerebellum, and prefrontal cortex.
Figure 8.8 The amygdala is involved in fear and fear memories. The hippocampus is associated with declarative and episodic memory as well as recognition memory. The cerebellum plays a role in processing procedural memories, such as how to play the piano. The prefrontal cortex appears to be involved in remembering semantic tasks.

The Amygdala

First, let’s look at the role of the amygdala in memory formation. The main job of the amygdala is to regulate emotions, such as fear and aggression (Figure 8.8). The amygdala plays a part in how memories are stored because storage is influenced by stress hormones. For example, researchers experimented with rats and the fear response (Josselyn, 2010). Using Pavlovian conditioning, they paired a neutral tone with a foot shock, producing a fear memory in the rats. After conditioning, each time the rats heard the tone, they would freeze (a defense response in rats), indicating a memory for the impending shock. The researchers then induced cell death in neurons in the lateral amygdala, the specific area of the brain responsible for fear memories, and found that the fear memory faded (became extinct). Because of its role in processing emotional information, the amygdala is also involved in memory consolidation: the process of transferring new learning into long-term memory. The amygdala seems to facilitate encoding memories at a deeper level when the event is emotionally arousing.

The Hippocampus

Another group of researchers also experimented with rats to learn how the hippocampus functions in memory processing (Figure 8.8). They created lesions in the hippocampi of the rats and found that the rats demonstrated memory impairment on various tasks, such as object recognition and maze running. They concluded that the hippocampus is involved in memory, specifically normal recognition memory as well as spatial memory (when the memory tasks are like recall tests) (Clark, Zola, & Squire, 2000). Another job of the hippocampus is to project information to cortical regions that give memories meaning and connect them with other memories. It also plays a part in memory consolidation: the process of transferring new learning into long-term memory.

Injury to this area leaves us unable to process new declarative memories. One famous patient, known for years only as H. M., had both his left and right temporal lobes (hippocampi) removed in an attempt to help control the seizures he had been suffering from for years (Corkin, Amaral, González, Johnson, & Hyman, 1997). As a result, his declarative memory was significantly affected, and he could not form new semantic knowledge. He lost the ability to form new memories, yet he could still remember information and events that had occurred prior to the surgery.

The Cerebellum and Prefrontal Cortex

Although the hippocampus seems to be more of a processing area for explicit memories, you could still lose it and be able to create implicit memories (procedural memory, motor learning, and classical conditioning), thanks to your cerebellum (Figure 8.8). For example, one classical conditioning procedure trains subjects to blink when they are given a puff of air to the eyes. When researchers damaged the cerebellums of rabbits, they discovered that the rabbits were not able to learn the conditioned eye-blink response (Steinmetz, 1999; Green & Woodruff-Pak, 2000).

Other researchers have used brain scans, including positron emission tomography (PET) scans, to learn how people process and retain information. From these studies, it seems the prefrontal cortex is involved. In one study, participants had to complete two different tasks: either looking for the letter a in words (considered a perceptual task) or categorizing a noun as either living or non-living (considered a semantic task) (Kapur et al., 1994). Participants were then asked which words they had previously seen. Recall was much better for the semantic task than for the perceptual task. According to PET scans, there was much more activation in the left inferior prefrontal cortex in the semantic task. In another study, encoding was associated with left frontal activity, while retrieval of information was associated with the right frontal region (Craik et al., 1999).

Neurotransmitters

There also appear to be specific neurotransmitters involved with the process of memory, such as epinephrine, dopamine, serotonin, glutamate, and acetylcholine (Myhrer, 2003). There continues to be discussion and debate among researchers as to which neurotransmitter plays which specific role (Blockland, 1996). Although we don’t yet know which role each neurotransmitter plays in memory, we do know that communication among neurons via neurotransmitters is critical for developing new memories. Repeated activity by neurons leads to increased neurotransmitter release in the synapses and to more efficient and more numerous synaptic connections. This is how memory consolidation occurs.

It is also believed that strong emotions trigger the formation of strong memories, and weaker emotional experiences form weaker memories; this is called arousal theory (Christianson, 1992). For example, strong emotional experiences can trigger the release of neurotransmitters, as well as hormones, which strengthen memory; therefore, our memory for an emotional event is usually better than our memory for a non-emotional event. When humans and animals are stressed, the brain secretes more of the neurotransmitter glutamate, which helps them remember the stressful event (McGaugh, 2003). This is clearly evidenced by what is known as the flashbulb memory phenomenon.

A flashbulb memory is an exceptionally clear recollection of an important event (Figure 8.9). Where were you when you first heard about the pandemic? Can you remember where you were and what you were doing? This is an example of a flashbulb memory: a record of an atypical and unusual event that has very strong emotional associations.

Learning Objectives

By the end of this section, you will be able to:

  • Compare and contrast the two types of amnesia
  • Discuss the unreliability of eyewitness testimony
  • Discuss encoding failure
  • Discuss the various memory errors
  • Compare and contrast the two types of interference

You may pride yourself on your amazing ability to remember the birthdates and ages of all of your friends and family members, or you may be able to recall vivid details of your 5th birthday party at Chuck E. Cheese’s. However, all of us have at times felt frustrated, and even embarrassed, when our memories have failed us. There are several reasons why this happens.

Amnesia

Amnesia is the loss of long-term memory that occurs as a result of disease, physical trauma, or psychological trauma. Endel Tulving (2002) and his colleagues at the University of Toronto studied K. C. for years. K. C. suffered a traumatic head injury in a motorcycle accident and then had severe amnesia. Tulving writes,

the outstanding fact about K.C.’s mental make-up is his utter inability to remember any events, circumstances, or situations from his own life. His episodic amnesia covers his whole life, from birth to the present. The only exception is the experiences that, at any time, he has had in the last minute or two. (Tulving, 2002, p. 14)

Anterograde Amnesia

There are two common types of amnesia: anterograde amnesia and retrograde amnesia (Figure 8.10). Anterograde amnesia is commonly caused by brain trauma, such as a blow to the head. With anterograde amnesia, you cannot remember new information, although you can remember information and events that happened prior to your injury. The hippocampus is usually affected (McLeod, 2011). This suggests that damage to the brain has resulted in the inability to transfer information from short-term to long-term memory; that is, the inability to consolidate memories.

Many people with this form of amnesia are unable to form new episodic or semantic memories but are still able to form new procedural memories (Bayley & Squire, 2002). This was true of H. M., who was discussed earlier. The brain damage caused by his surgery resulted in anterograde amnesia. H. M. would read the same magazine over and over, having no memory of ever reading it—it was always new to him. He also could not remember people he had met after his surgery. If you were introduced to H. M. and then you left the room for a few minutes, he would not know you upon your return and would introduce himself to you again. However, when presented with the same puzzle several days in a row, although he did not remember having seen the puzzle before, his speed at solving it became faster each day (because of relearning) (Corkin, 1965, 1968).

A single-line flow diagram compares two types of amnesia. In the center is a box labeled “event” with arrows extending from both sides. Extending to the left is an arrow pointing left to the word “past”; the arrow is labeled “retrograde amnesia.” Extending to the right is an arrow pointing right to the word “present”; the arrow is labeled “anterograde amnesia.”
Figure 8.10 This diagram illustrates the timeline of retrograde and anterograde amnesia. Memory problems that extend back in time before the injury and prevent retrieval of information previously stored in long-term memory are known as retrograde amnesia. Conversely, memory problems that extend forward in time from the point of injury and prevent the formation of new memories are called anterograde amnesia.

Retrograde Amnesia

Retrograde amnesia is the loss of memory for events that occurred prior to the trauma. People with retrograde amnesia cannot remember some or even all of their past. They have difficulty remembering episodic memories. What if you woke up in the hospital one day and there were people surrounding your bed claiming to be your spouse, your children, and your parents? The trouble is you don’t recognize any of them. You were in a car accident, suffered a head injury, and now have retrograde amnesia. You don’t remember anything about your life prior to waking up in the hospital. This may sound like the stuff of Hollywood movies, and Hollywood has been fascinated with the amnesia plot for nearly a century, from the 1915 film Garden of Lies all the way to recent movies such as the Jason Bourne spy thrillers. However, for real-life sufferers of retrograde amnesia, like former NFL football player Scott Bolzan, the story is not a Hollywood movie. Bolzan fell, hit his head, and deleted 46 years of his life in an instant. He is now living with one of the most extreme cases of retrograde amnesia on record.

Memory Construction and Reconstruction

The formulation of new memories is sometimes called construction, and the process of bringing up old memories is called reconstruction. Yet as we retrieve our memories, we also tend to alter and modify them. A memory pulled from long-term storage into short-term memory is flexible. New events can be added and we can change what we think we remember about past events, resulting in inaccuracies and distortions. People may not intend to distort facts, but it can happen in the process of retrieving old memories and combining them with new memories (Roediger & DeSoto, 2015).

Suggestibility

When someone witnesses a crime, that person’s memory of the details of the crime is very important in catching the suspect. Because memory is so fragile, witnesses can be easily (and often accidentally) misled due to the problem of suggestibility. Suggestibility describes the effects of misinformation from external sources that leads to the creation of false memories. In the fall of 2002, there was a sniper in the DC area. These shootings went on in a variety of places for over three weeks. During this time, as you can imagine, people were terrified to leave their homes, go shopping, or even walk through their neighborhoods. Police officers and the FBI worked frantically to solve the crimes, and a tip hotline was set up. Law enforcement received over 140,000 tips, which resulted in approximately 35,000 possible suspects (Newseum, n.d.).

Most of the tips were dead ends until a white van was spotted at the site of one of the shootings. The police chief went on national television with a picture of the white van. After the news conference, several other eyewitnesses called to say that they too had seen a white van fleeing from the scene of the shooting. At the time, there were more than 70,000 white vans in the area. Police officers, as well as the general public, focused almost exclusively on white vans because they believed the eyewitnesses. Other tips were ignored. When the suspects were finally caught, they were driving a blue sedan.

As illustrated by this example, we are vulnerable to the power of suggestion, simply based on something we see on the news. Or we can claim to remember something that in fact is only a suggestion someone made. It is the suggestion that is the cause of the false memory.

Eyewitness Misidentification

Even though memory and the process of reconstruction can be fragile, police officers, prosecutors, and the courts often rely on eyewitness identification and testimony in the prosecution of criminals. However, faulty eyewitness identification and testimony can lead to wrongful convictions (Figure 8.11).

A bar graph is titled “Leading cause of wrongful conviction in DNA exoneration cases (source: Innocence Project).” The x-axis is labeled “leading cause,” and the y-axis is labeled “percentage of wrongful convictions (first 239 DNA exonerations).” Four bars show data: “eyewitness misidentification” is the leading cause in about 75% of cases, “forensic science” in about 49% of cases, “false confession” in about 23% of cases, and “informant” in about 18% of cases.
Figure 8.11 In studying cases where DNA evidence has exonerated people from crimes, the Innocence Project discovered that eyewitness misidentification is the leading cause of wrongful convictions (Yeshiva University, 2009).

How does this happen? In 1984, Jennifer Thompson, then a 22-year-old college student in North Carolina, was attacked. As she was being attacked, she tried to memorize every detail of her attacker’s face and physical characteristics, vowing that if she survived, she would help get him convicted. After the police were contacted, a composite sketch was made of the suspect, and Jennifer was shown six photos. She chose two, one of which was of Ronald Cotton. After looking at the photos for 4–5 minutes, she said, “Yeah. This is the one,” and then she added, “I think this is the guy.” When the detective asked, “You’re sure? Positive?”, she said that it was him. Then she asked the detective if she did OK, and he reinforced her choice by telling her she did great. These kinds of unintended cues and suggestions by police officers can lead witnesses to identify the wrong suspect. Because the district attorney was concerned about her initial lack of certainty, she viewed a lineup of seven men. She said she was trying to decide between numbers 4 and 5, finally deciding that Cotton, number 5, who was 22 years old, “Looks most like him.”

By the time the trial began, Jennifer Thompson had absolutely no doubt that she was attacked by Ronald Cotton. She testified at the court hearing, and her testimony was compelling enough that it helped convict him. How did she go from, “I think it’s the guy” and it “Looks most like him,” to such certainty? Wells and Quinlivan (2009) assert it’s suggestive police identification procedures, such as stacking lineups to make the defendant stand out, telling the witness which person to identify, and confirming witnesses’ choices by telling them “Good choice,” or “You picked the guy.”

After Cotton was convicted, he was sentenced to prison for life plus 50 years. After 4 years in prison, he was able to get a new trial. Jennifer Thompson once again testified against him, and this time Ronald Cotton was given two life sentences. After he had served 11 years in prison, DNA evidence finally demonstrated that Ronald Cotton was innocent: he had spent over a decade in prison for a crime he did not commit.

The Misinformation Effect

Cognitive psychologist Elizabeth Loftus has conducted extensive research on memory. She has studied false memories as well as recovered memories of childhood abuse. Loftus also developed the misinformation effect paradigm, which holds that after exposure to additional and possibly inaccurate information, a person may misremember the original event.

According to Loftus, an eyewitness’s memory of an event is very flexible due to the misinformation effect. To test this theory, Loftus and John Palmer (1974) asked 45 U.S. college students to estimate the speed of cars using different forms of questions (Figure 8.12). The participants were shown films of car accidents and were asked to play the role of the eyewitness and describe what happened. They were asked, “About how fast were the cars going when they (smashed, collided, bumped, hit, contacted) each other?” The participants estimated the speed of the cars based on the verb used.

Participants who heard the word “smashed” estimated that the cars were traveling at a much higher speed than participants who heard the word “contacted.” The implied information about speed, based on the verb they heard, had an effect on the participants’ memory of the accident. In a follow-up one week later, participants were asked if they saw any broken glass (none was shown in the accident pictures). Participants who had been in the “smashed” group were more than twice as likely to indicate that they remembered seeing glass. Loftus and Palmer demonstrated that a leading question encouraged participants not only to remember the cars as going faster, but also to falsely remember that they saw broken glass.

Photograph A shows two cars that have crashed into each other. Part B is a bar graph titled “perceived speed based on questioner’s verb (source: Loftus and Palmer, 1974).” The x-axis is labeled “questioner’s verb,” and the y-axis is labeled “perceived speed (mph).” Five bars show data: “smashed” was perceived at about 41 mph, “collided” at about 39 mph, “bumped” at about 37 mph, “hit” at about 34 mph, and “contacted” at about 32 mph.
Figure 8.12 When people are asked leading questions about an event, their memory of the event may be altered. (credit a: modification of work by Rob Young)

Controversies over Repressed and Recovered Memories

Other researchers have described how whole events, not just words, can be falsely recalled, even when they did not happen. The idea that memories of traumatic events could be repressed has been a theme in the field of psychology, beginning with Sigmund Freud, and the controversy surrounding the idea continues today.

Recall of false autobiographical memories is called false memory syndrome. This syndrome has received a lot of publicity, particularly as it relates to memories of traumatic events, such as childhood abuse, that have no independent witnesses—often the only witnesses are the perpetrator and the victim.

On one side of the debate are those who have recovered memories of childhood abuse years after the abuse occurred. Proponents of this view argue that some children’s experiences have been so traumatizing and distressing that the children must lock those memories away in order to lead some semblance of a normal life. They believe that repressed memories can be locked away for decades and later recalled intact through hypnosis and guided imagery techniques (Devilly, 2007).

On the other side, Loftus has challenged the idea that individuals can repress memories of traumatic events from childhood and then recover those memories years later through therapeutic techniques such as hypnosis, guided visualization, and age regression. Loftus is not saying that childhood trauma doesn’t happen, but she does question whether or not those memories are accurate, and she is skeptical of the questioning process used to access these memories, given that even the slightest suggestion from the therapist can lead to misinformation effects.

Ever since Loftus published her first studies on the suggestibility of eyewitness testimony in the 1970s, social scientists, police officers, therapists, and legal practitioners have been aware of the flaws in interview practices. Consequently, steps have been taken to decrease the suggestibility of witnesses. One way is to modify how witnesses are questioned. When interviewers use neutral and less leading language, children more accurately recall what happened and who was involved (Goodman, 2006; Pipe, 1996; Pipe, Lamb, Orbach, & Esplin, 2004). Another change is in how police lineups are conducted. It’s recommended that a blind photo lineup be used. This way the person administering the lineup doesn’t know which photo belongs to the suspect, minimizing the possibility of giving leading cues. Additionally, judges in some states now inform jurors about the possibility of misidentification. Judges can also suppress eyewitness testimony if they deem it unreliable.

Forgetting

“I’ve a grand memory for forgetting,” quipped Robert Louis Stevenson. Forgetting refers to the loss of information from long-term memory. We all forget things, like a loved one’s birthday, someone’s name, or where we put our car keys. As you’ve come to see, memory is fragile, and forgetting can be frustrating and even embarrassing. But why do we forget? To answer this question, we will look at several perspectives on forgetting.

Encoding Failure

Sometimes memory loss happens before the actual memory process begins, which is an encoding failure. We can’t remember something if we never stored it in our memory in the first place. This would be like trying to find a book on your e-reader that you never actually purchased and downloaded. Often, in order to remember something, we must pay attention to the details and actively work to process the information (effortful encoding). Much of the time we don’t do this. For instance, think of how many times in your life you’ve seen a penny. Can you accurately recall what the front of a U.S. penny looks like? When researchers Raymond Nickerson and Marilyn Adams (1979) asked this question, they found that most Americans could not. The reason is most likely encoding failure. Most of us never encode the details of the penny. We only encode enough information to be able to distinguish it from other coins. If we don’t encode the information, then it’s not in our long-term memory, and we will not be able to remember it.

Four illustrations of nickels have minor differences in the placement and orientation of text.
Figure 8.13 Can you tell which coin, (a), (b), (c), or (d), is the accurate depiction of a US nickel? The correct answer is (c).

Memory Errors

Psychologist Daniel Schacter (2001), a well-known memory researcher, offers seven ways our memories fail us. He calls them the seven sins of memory and categorizes them into three groups: forgetting, distortion, and intrusion (Table 8.1).

Schacter’s Seven Sins of Memory (Table 8.1)

  • Transience (forgetting): accessibility of memory decreases over time. Example: forgetting events that occurred long ago.
  • Absentmindedness (forgetting): forgetting caused by lapses in attention. Example: forgetting where your phone is.
  • Blocking (forgetting): accessibility of information is temporarily blocked. Example: tip-of-the-tongue experiences.
  • Misattribution (distortion): the source of a memory is confused. Example: recalling a dream memory as a waking memory.
  • Suggestibility (distortion): false memories. Example: false memories resulting from leading questions.
  • Bias (distortion): memories are distorted by one’s current belief system. Example: aligning memories of the past with current beliefs.
  • Persistence (intrusion): inability to forget undesirable memories. Example: intrusive memories of traumatic events.

Let’s look at the first sin of the forgetting errors: transience, which means that memories can fade over time. Here’s an example of how this happens. Nathan’s English teacher has assigned his students to read the novel To Kill a Mockingbird. Nathan comes home from school and tells his mom he has to read this book for class. “Oh, I loved that book!” she says. Nathan asks her what the book is about, and after some hesitation, she says, “Well . . . I know I read the book in high school, and I remember that one of the main characters is named Scout, and her father is an attorney, but I honestly don’t remember anything else.” Nathan wonders if his mother actually read the book, and his mother is surprised she can’t recall the plot. What is going on here is storage decay: unused information tends to fade with the passage of time.

In 1885, German psychologist Hermann Ebbinghaus analyzed the process of memorization. First, he memorized lists of nonsense syllables. Then he measured how much he learned (retained) when he attempted to relearn each list. He tested himself over different periods of time from 20 minutes later to 30 days later. The result is his famous forgetting curve (Figure 8.14). Due to storage decay, an average person will lose 50% of the memorized information after 20 minutes and 70% of the information after 24 hours (Ebbinghaus, 1885/1964). Your memory for new information decays quickly and then eventually levels out.

A line graph has an x-axis labeled “elapsed time since learning” with a scale listing these intervals: 0, 20, and 60 minutes; 9, 24, and 48 hours; and 6 and 31 days. The y-axis is labeled “retention (%)” with a scale of zero to 100. The line reflects these approximate data points: 0 minutes is 100%, 20 minutes is 55%, 60 minutes is 40%, 9 hours is 37%, 24 hours is 30%, 48 hours is 25%, 6 days is 20%, and 31 days is 10%.
Figure 8.14 The Ebbinghaus forgetting curve shows how quickly memory for new information decays.
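
Ebbinghaus reported his results as savings scores rather than a single equation, but the shape of the curve can be roughly illustrated in code. The sketch below assumes a simple power-law decay, R(t) = A · t^(−B); the model and its constants are illustrative assumptions fit by eye to two approximate points from Figure 8.14, not values Ebbinghaus reported.

```python
# Illustrative power-law model of the Ebbinghaus forgetting curve,
# R(t) = A * t**(-B), with t in hours. A and B are assumptions chosen
# to pass near two points from Figure 8.14 (~55% retention at
# 20 minutes, ~30% at 24 hours); Ebbinghaus reported savings scores,
# not these constants.
A, B = 0.47, 0.14

def retention(hours: float) -> float:
    """Estimated fraction of memorized material still retained."""
    return min(1.0, A * hours ** -B)

for label, hours in [("20 minutes", 1 / 3), ("24 hours", 24), ("31 days", 31 * 24)]:
    print(f"{label}: ~{retention(hours):.0%} retained")
```

Running this prints roughly 55%, 30%, and 19% retained, reproducing the steep early drop that then levels out; the real curve in Figure 8.14 falls closer to 10% at 31 days, which is why the power law should be read only as a rough sketch.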

Are you constantly losing your cell phone? Have you ever driven back home to make sure you turned off the stove? Have you ever walked into a room for something, but forgotten what it was? You probably answered yes to at least one, if not all, of these examples—but don’t worry, you are not alone. We are all prone to committing the memory error known as absentmindedness, which describes lapses in memory caused by breaks in attention or our focus being somewhere else.

A photograph shows Morgan Freeman.
Figure 8.15 Blocking is also known as the tip-of-the-tongue (TOT) phenomenon. The memory is right there, but you can’t seem to recall it, just like not being able to remember the name of that very famous actor, Morgan Freeman. (credit: modification of work by D. Miller)

Now let’s take a look at the three errors of distortion: misattribution, suggestibility, and bias. Misattribution happens when you confuse the source of your information. Let’s say Alejandra was dating Lucia and they saw the first Hobbit movie together. Then they broke up and Alejandra saw the second Hobbit movie with someone else. Later that year, Alejandra and Lucia get back together. One day, they are discussing how the Hobbit books and movies are different and Alejandra says to Lucia, “I loved watching the second movie with you and seeing you jump out of your seat during that super scary part.” When Lucia responded with a puzzled and then angry look, Alejandra realized she’d committed the error of misattribution.

The second distortion error is suggestibility. Suggestibility is similar to misattribution, since both involve false memories, but they differ in where the false memory comes from. With misattribution, you create the false memory entirely on your own. With suggestibility, the false memory comes from someone else, such as a therapist or police interviewer asking leading questions of a witness during an interview.

Memories can also be affected by bias, which is the final distortion error. Schacter (2001) says that your feelings and view of the world can actually distort your memory of past events. There are several types of bias:

  • Stereotypical bias involves racial and gender biases. For example, when Asian American and European American research participants were presented with a list of names, they more frequently incorrectly remembered typical African American names such as Jamal and Tyrone to be associated with the occupation basketball player, and they more frequently incorrectly remembered typical White names such as Greg and Howard to be associated with the occupation of politician (Payne, Jacoby, & Lambert, 2004).
  • Egocentric bias involves enhancing our memories of the past (Payne et al., 2004). Did you really score the winning goal in that big soccer match, or did you just assist?
  • Hindsight bias happens when we think an outcome was inevitable after the fact. This is the “I knew it all along” phenomenon. The reconstructive nature of memory contributes to hindsight bias (Carli, 1999). We remember untrue events that seem to confirm that we knew the outcome all along.

Have you ever had a song play over and over in your head? How about a memory of a traumatic event, something you really do not want to think about? When you keep remembering something, to the point where you can’t “get it out of your head” and it interferes with your ability to concentrate on other things, it is called persistence. It’s Schacter’s seventh and last memory error. It’s actually a failure of our memory system because we involuntarily recall unwanted memories, particularly unpleasant ones (Figure 8.16). For instance, you witness a horrific car accident on the way to work one morning, and you can’t concentrate on work because you keep remembering the scene.

A photograph shows two soldiers physically fighting.
Figure 8.16 Many veterans of military conflicts involuntarily recall unwanted, unpleasant memories. (credit: Department of Defense photo by U.S. Air Force Tech. Sgt. Michael R. Holzworth)

Interference

Sometimes information is stored in our memory, but for some reason it is inaccessible. This is known as interference, and there are two types: proactive interference and retroactive interference (Figure 8.17). Have you ever gotten a new phone number or moved to a new address, but right afterward kept telling people the old (and wrong) phone number or address? When the new year starts, do you find you accidentally write the previous year? These are examples of proactive interference: when old information hinders the recall of newly learned information. Retroactive interference happens when information learned more recently hinders the recall of older information. For example, this week you are studying memory and learn about the Ebbinghaus forgetting curve. Next week you study lifespan development and learn about Erikson’s theory of psychosocial development, but thereafter you have trouble remembering Ebbinghaus’s work because you can only remember Erikson’s theory.

A diagram illustrates two types of interference. In the first example, a box with the text “learn combination to high school locker, 17–04–32” is connected by a right-pointing arrow to a box labeled “memory of old locker combination interferes with recall of new gym locker combination, ??–??–??”; the arrow is labeled “proactive interference (old information hinders recall of new information).” In the second example, a box labeled “learn sibling’s new college email address, npatel@siblingcollege.edu” is connected by a left-pointing arrow to a box labeled “knowledge of new email address interferes with recall of old email address, nvayala@???”; the arrow is labeled “retroactive interference (new information hinders recall of old information).”
Figure 8.17 Sometimes forgetting is caused by a failure to retrieve information. This can be due to interference, either retroactive or proactive.

Learning Objectives

By the end of this section, you will be able to:

  • Recognize and apply memory-enhancing strategies
  • Recognize and apply effective study techniques

Most of us suffer from memory failures of one kind or another, and most of us would like to improve our memories so that we don’t forget where we put the car keys or, more importantly, the material we need to know for an exam. In this section, we’ll look at some ways to help you remember better, and at some strategies for more effective studying.

Memory-Enhancing Strategies

What are some everyday ways we can improve our memory, including recall? To help make sure information goes from short-term memory to long-term memory, you can use memory-enhancing strategies. One strategy is rehearsal, or the conscious repetition of information to be remembered (Craik & Watkins, 1973). Think about how you learned your multiplication tables as a child. You may recall that 6 x 6 = 36, 6 x 7 = 42, and 6 x 8 = 48. Memorizing these facts is rehearsal.

Another strategy is chunking: you organize information into manageable bits or chunks (Bodie, Powers, & Fitch-Hauser, 2006). Chunking is useful when trying to remember information like dates and phone numbers. Instead of trying to remember 5205550467, you remember the number as 520-555-0467. So, if you met an interesting person at a party and you wanted to remember his phone number, you would naturally chunk it, and you could repeat the number over and over, which is the rehearsal strategy.
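
As a toy illustration (not a memory experiment), the following Python sketch regroups the ten digits above into the familiar 3-3-4 phone-number chunks; the grouping sizes are simply the North American phone format, chosen here for the example.

```python
def chunk(digits: str, sizes=(3, 3, 4)) -> str:
    """Regroup a digit string into smaller chunks, e.g., 3-3-4 for a phone number."""
    parts, start = [], 0
    for size in sizes:
        parts.append(digits[start:start + size])
        start += size
    return "-".join(parts)

print(chunk("5205550467"))  # 520-555-0467: three chunks instead of ten digits
```

Three chunks sit far more comfortably within the limited capacity of short-term memory than ten unrelated digits do.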

You could also enhance memory by using elaborative rehearsal: a technique in which you think about the meaning of new information and its relation to knowledge already stored in your memory (Tigner, 1999). Elaborative rehearsal involves both linking the information to knowledge already stored and repeating the information. For example, in this case, you could remember that 520 is an area code for Arizona and the person you met is from Arizona. This would help you better remember the 520 prefix. If the information is retained, it goes into long-term memory.

Mnemonic devices are memory aids that help us organize information for encoding (Figure 8.18). They are especially useful when we want to recall larger bits of information such as steps, stages, phases, and parts of a system (Bellezza, 1981). Brian needs to learn the order of the planets in the solar system, but he’s having a hard time remembering the correct order. His friend Kelly suggests a mnemonic device that can help him remember. Kelly tells Brian to simply remember the name Mr. VEM J. SUN, and he can easily recall the correct order of the planets: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune. You might use a mnemonic device to help you remember someone’s name, a mathematical formula, or the order of mathematical operations.

A photograph shows a person’s two hands clenched into fists so the knuckles show. The knuckles are labeled with the months and the number of days in each month, with the knuckle protrusions corresponding to the months with 31 days, and the indentations between knuckles corresponding to February and the months with 30 days.
Figure 8.18 This is a knuckle mnemonic to help you remember the number of days in each month. Months with 31 days are represented by the protruding knuckles and shorter months fall in the spots between knuckles. (credit: modification of work by Cory Zanker)

It seems the more vivid or unusual the mnemonic, the easier it is to remember. The key to using any mnemonic successfully is to find a strategy that works for you.

What if you want to remember items you need to pick up at the store? Simply say them out loud to yourself. A series of studies (MacLeod, Gopie, Hourihan, Neary, & Ozubko, 2010) found that saying a word out loud improves your memory for the word because it increases the word’s distinctiveness. Feel silly saying random grocery items aloud? This technique works equally well if you just mouth the words. Using these techniques increased participants’ memory for the words by more than 10%. These techniques can also be used to help you study.

How to Study Effectively

Based on the information presented in this chapter, here are some strategies and suggestions to help you hone your study techniques (Figure 8.19). The key with any of these strategies is to figure out what works best for you.

A photograph shows students studying.
Figure 8.19 Memory techniques can be useful when studying for class. (credit: Barry Pousman)
  • Use elaborative rehearsal: In a famous article, Fergus Craik and Robert Lockhart (1972) discussed their belief that information we process more deeply goes into long-term memory. Their theory is called levels of processing. If we want to remember a piece of information, we should think about it more deeply and link it to other information and memories to make it more meaningful. For example, if we are trying to remember that the hippocampus is involved with memory processing, we might envision a hippopotamus with an excellent memory and then we could better remember the hippocampus.
  • Apply the self-reference effect: As you go through the process of elaborative rehearsal, it would be even more beneficial to make the material you are trying to memorize personally meaningful to you. In other words, make use of the self-reference effect. Write notes in your own words. Write definitions from the text, and then rewrite them in your own words. Relate the material to something you have already learned for another class, or think about how you can apply the concepts to your own life. When you do this, you are building a web of retrieval cues that will help you access the material when you want to remember it.
  • Use distributed practice: Study across time in short durations rather than trying to cram it all in at once. Memory consolidation takes time, and studying across time allows time for memories to consolidate. In addition, cramming can cause the links between concepts to become so active that you get stuck in a link, and it prevents you from accessing the rest of the information that you learned.
  • Rehearse, rehearse, rehearse: Review the material over time, in spaced and organized study sessions. Organize and study your notes, and take practice quizzes/exams. Link the new information to other information you already know well.
  • Study efficiently: Students are great highlighters, but highlighting is not very efficient because students spend too much time restudying things they already know. Instead of highlighting, use index cards. Write the question on one side and the answer on the other side. When you study, separate your cards into those you got right and those you got wrong. Study the ones you got wrong and keep sorting. Eventually, all your cards will be in the pile you answered correctly (a minimal code sketch of this sorting loop appears after this list).
  • Be aware of interference: To reduce the likelihood of interference, study during a quiet time without interruptions or distractions (like television or music).
  • Keep moving: Of course, you already know that exercise is good for your body, but did you also know it’s also good for your mind? Research suggests that regular aerobic exercise (anything that gets your heart rate elevated) is beneficial for memory (van Praag, 2008). Aerobic exercise promotes neurogenesis: the growth of new brain cells in the hippocampus, an area of the brain known to play a role in memory and learning.
  • Get enough sleep: While you are sleeping, your brain is still at work. During sleep, the brain organizes and consolidates information to be stored in long-term memory (Abel & Bäuml, 2013).
  • Make use of mnemonic devices: As you learned earlier in this chapter, mnemonic devices often help us to remember and recall information. There are different types of mnemonic devices, such as the acronym. An acronym is a word formed by the first letter of each of the words you want to remember. For example, even if you live near one, you might have difficulty recalling the names of all five Great Lakes. What if I told you to think of the word Homes? HOMES is an acronym that represents Huron, Ontario, Michigan, Erie, and Superior: the five Great Lakes. Another type of mnemonic device is an acrostic: you make a phrase of all the first letters of the words. For example, if you are taking a math test and you are having difficulty remembering the order of operations, recalling the following sentence will help you: “Please Excuse My Dear Aunt Sally,” because the order of mathematical operations is Parentheses, Exponents, Multiplication, Division, Addition, Subtraction. There also are jingles, which are rhyming tunes that contain keywords related to the concept, such as i before e, except after c.
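
Since an acronym is built mechanically from first letters, it is easy to mirror in code. This minimal sketch simply joins the initial letter of each word; the word lists are the examples from the bullet above.

```python
def acronym(words: list[str]) -> str:
    """Join the first letter of each word to form a mnemonic acronym."""
    return "".join(word[0].upper() for word in words)

print(acronym(["Huron", "Ontario", "Michigan", "Erie", "Superior"]))  # HOMES
print(acronym(["Mercury", "Venus", "Earth", "Mars", "Jupiter",
               "Saturn", "Uranus", "Neptune"]))  # MVEMJSUN, read as "Mr. VEM J. SUN"
```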
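
The index-card routine from the “Study efficiently” item above is itself a small algorithm: quiz every card, keep the misses, and repeat until nothing is missed. Here is a minimal interactive sketch of that loop; the two sample cards are hypothetical, and the sketch is deliberately simpler than a full spaced-repetition system.

```python
import random

def drill(cards: dict[str, str]) -> None:
    """Quiz every card, collect the ones answered wrong, and repeat
    until the "wrong" pile is empty."""
    remaining = list(cards.items())
    while remaining:
        random.shuffle(remaining)
        missed = []
        for question, answer in remaining:
            if input(f"{question} ").strip().lower() != answer.lower():
                print(f"  answer: {answer}")
                missed.append((question, answer))  # back into the "wrong" pile
        remaining = missed  # the next pass studies only the misses

# Hypothetical two-card deck drawn from this chapter:
drill({
    "Which brain structure is central to fear memories?": "amygdala",
    "Transferring new learning into long-term memory is called?": "consolidation",
})
```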

Additional Supplemental Resources

Websites

  • The Innocence Project 
    • The Innocence Project is a non-profit that exonerates the wrongly convicted through DNA testing and reforms the criminal justice system to prevent future injustices.

Videos

  • Bugs Bunny Effect 
    • Elizabeth Loftus describes the creation of a false memory. Closed captioning not available.
  • Ted-Ed: How memories form and how we lose them 
    • In this Ted-Ed video, learn more about the way our brains work to form memories, as well as how we can lose memories over time.  A variety of discussion and assessment questions are included with the video (free registration is required to access the questions). Closed captioning available.
  • Eyewitness Testimony (Video) Part 1 
  • Eyewitness Testimony (Video) Part 2
    • In these two videos, follow the story of Ronald Cotton, who was falsely accused and convicted of a crime he didn’t commit, and learn how fragile our memory really is. Closed captioning not available.
  • Tip of the Tongue 
    • What is the Tip of the Tongue phenomenon?  Watch this video to learn more about how you can overcome its effects. Closed captioning available.
  • How reliable is your memory?  
    • In this TED talk, psychologist Elizabeth Loftus discusses her research and work in the field of memory, as well as a variety of ways in which our memory can be unreliable. Closed captioning available.
  • Crash Course Video #13 – How We Make Memories 
    • This video on how we make memories includes information on topics such as stages of memory, mnemonics, and levels of processing. Closed captioning available.
  • Crash Course Video #14 – Remembering and Forgetting 
    • This video on remembering and forgetting includes information on topics such as implicit and explicit memory, encoding, retrieval, and the misinformation effect. Closed captioning available.
  • Chunking: Learning Technique for Better Memory and Understanding
    • Try chunking next time you feel the limits of your working memory, just as clever restaurants chunk their menus into starters, mains, and desserts, with 3–4 options each. Chunking makes it easier to compare options and make a decision.
  • The Memory Palace: Can You Do It?
    • A Memory Palace is an imaginary location in your mind where you can store mental images to remember facts, strings of numbers, shopping lists, or all kinds of things. It is hugely popular among memory champions.
  • What happens when you remove the hippocampus? – Sam Kean
    • When Henry Molaison (now widely known as H.M.) cracked his skull in an accident, he began blacking out and having seizures. In an attempt to cure him, daredevil surgeon Dr. William Scoville removed H.M.’s hippocampus. Luckily, the seizures did go away — but so did his long-term memory! Sam Kean walks us through this astonishing medical case, detailing everything H.M. taught us about the brain and memory.
  • Brain Games Car Crash Memory Experiment
    • Clip of the car crash memory experiment from the Brain Games episode “Retrain Your Brain.” All rights belong to the National Geographic Channel.


Lifespan Development

8

A picture shows two intertwined hands. One is the large hand of an adult, and the other is the tiny hand of an infant. The infant’s entire hand grasp is about the size of a single adult finger.
Figure 9.1 How have you changed since childhood? How are you the same? What will your life be like 25 years from now? Fifty years from now? Lifespan development studies how you change as well as how you remain the same over the course of your life. (credit: modification of work by Giles Cook)

Welcome to the story of your life. In this chapter we explore the fascinating tale of how you have grown and developed into the person you are today. We also look at some ideas about who you will grow into tomorrow. Yours is a story of lifespan development (Figure 9.1), from the start of life to the end.

The process of human growth and development is more obvious in infancy and childhood, yet your development is happening this moment and will continue, minute by minute, for the rest of your life. Who you are today and who you will be in the future depends on a blend of genetics, environment, culture, relationships, and more, as you continue through each phase of life. You have experienced firsthand much of what is discussed in this chapter. Now consider what psychological science has to say about your physical, cognitive, and psychosocial development, from the womb to the tomb.

Learning Objectives

By the end of this section, you will be able to:

  • Define and distinguish between the three domains of development: physical, cognitive, and psychosocial
  • Discuss the normative approach to development
  • Understand the three major issues in development: continuity and discontinuity, one common course of development or many unique courses of development, and nature versus nurture

Developmental psychologists study how humans change and grow from conception through childhood, adolescence, adulthood, and death. From the moment we are conceived until the moment we die, we continue to develop. Developmental psychologists view development as a lifelong process that can be studied scientifically across three developmental domains—physical, cognitive, and psychosocial development. Physical development involves growth and changes in the body and brain, the senses, motor skills, and health and wellness. Cognitive development involves learning, attention, memory, language, thinking, reasoning, and creativity. Psychosocial development involves emotions, personality, and social relationships. We refer to these domains throughout the chapter.

CONNECT THE CONCEPTS: Research Methods in Developmental Psychology

You’ve learned about a variety of research methods used by psychologists. Developmental psychologists use many of these approaches in order to better understand how individuals change mentally and physically over time. These methods include naturalistic observations, case studies, surveys, and experiments, among others.

Naturalistic observations involve observing behavior in its natural context. A developmental psychologist might observe how children behave on a playground, at a zoo, or in school.