This article provides notes on human memory.
Note # 1. Human Memory – Two Influential Views:
Psychologists, like other scientists, often construct models of the processes they study. These models are overviews describing the nature and operation of the processes in question. Following this standard scientific pattern, psychologists have proposed several models of human memory. Here, we’ll focus on two such models that have been very influential.
The Atkinson and Shiffrin Model:
Many researchers have found computer memory to be useful as a working model for human memory—a way of thinking about it generally. And this basic analogy played a key role in a highly influential model of memory proposed by Atkinson and Shiffrin (1968), sometimes known as the modal model of memory.
These researchers noted that both human memory and computer memory must accomplish three basic tasks:
(1) Encoding—converting information into a form that can be entered into memory;
(2) Storage—somehow retaining information over varying periods of time; and
(3) Retrieval— locating and accessing specific information when it is needed at later times.
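The three tasks just listed can be sketched, very loosely, in code. This is purely an illustration of the computer-memory analogy, not a claim about how the brain implements these processes; the dictionary store and the function names are assumptions made for the example.

```python
# A minimal sketch of encoding, storage, and retrieval, using a dict as
# the store (illustrative only; the brain does not work like a dict).
store = {}

def encode(event):
    """Encoding: convert raw input into a storable representation."""
    return event.strip().lower()

def put(key, event):
    """Storage: retain the encoded representation over time."""
    store[key] = encode(event)

def retrieve(key):
    """Retrieval: locate and access the information when needed."""
    return store.get(key)

put("capital", "  Paris ")
print(retrieve("capital"))  # paris
```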
Taking note of this basic fact, Atkinson and Shiffrin went on to propose a model of human memory.
Let’s consider storage first. The model proposed by Atkinson and Shiffrin suggests that we possess three distinct systems for storing information. One of these, known as sensory memory, provides temporary storage of information brought to us by our senses. If you’ve ever watched someone wave a flashlight in a dark room and perceived trails of light behind it, you are familiar with the operation of sensory memory.
A second type of memory is known as short-term memory. (As we’ll soon see, psychologists now usually refer to this kind of memory as working memory). Short-term memory holds relatively small amounts of information for brief periods of time, usually thirty seconds or less. This is the memory system you use when you look up a phone number and dial it.
Our third memory system, long-term memory, allows us to retain vast amounts of information for very long periods of time. It is this memory system that permits you to remember events that happened a few hours ago, yesterday, last month—or many years in the past. And it is long-term memory that allows you to remember factual information such as the capital of your state or the name of the president.
How does information move from one memory system to another? Atkinson and Shiffrin proposed that this involves the operation of active control processes that act as filters, determining which information will be retained. Information in sensory memory enters short-term memory when it becomes the focus of our attention, whereas sensory impressions that do not engage attention fade and quickly disappear.
So, where memory is concerned, selective attention—our ability to pay attention to only some aspects of the world around us while largely ignoring others—often plays a crucial role. In contrast, information in short-term memory enters long-term storage through elaborative rehearsal—when we think about its meaning and relate it to other information already in long-term memory.
Unless we engage in such cognitive effort, information in short-term memory, too, quickly fades away and is lost. In contrast, merely repeating information silently to ourselves (maintenance rehearsal) does not necessarily move information from short-term to long-term memory.
In sum, the Atkinson and Shiffrin model linked the study of human memory firmly to the general information-processing perspective that is an important aspect of all cognitive psychology today. Let's turn to the modern view of memory—one that pulls together advances in memory research and in our understanding of how consciousness (including memory) emerges from the functioning of the brain.
Neural Network Models: Parallel Processing of Information:
As you read these words, you are performing some amazing feats of memory. For instance, you are able to recognize each word very quickly (most of us can read several hundred words per minute) and to understand its meaning.
This means that you must somehow recognize each letter and the patterns these letters make (specific words), and must do this for literally thousands of different words as you read. How can we account for the speed with which we can accomplish this and many other cognitive tasks? The answer proposed by many psychologists is through parallel processing.
Modern Computers are Serial Devices:
They work on information one step at a time. In contrast, our brains appear to process information in a parallel fashion; this means that many modules—collections of interconnected neurons—process information in different ways simultaneously. These modules may be scattered widely at different locations in the brain.
Neural network models suggest that it is the rich interconnectedness of our neural units that accounts for our ability to process information so quickly. These models also propose that information in memory is not located in a specific place within the brain; rather, it is represented by patterns of activation that spread over many processing units and by the strength of the activation across these various units.
Perhaps the following analogy, offered by Lindsay and Reed (1995), will help: Think of neural networks as being like a spider web with millions of strands connecting various units.
The tighter these strands, the stronger the connections among various units. Incoming information “pulls” on certain strands, thus activating other units, just as a fly that lands on a spider’s web sends vibrations along the strands to other locations—including the center, where the spider is located. No analogy is perfect, and this one certainly isn’t, but it may help you to grasp the nature of this model of memory.
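The spider-web analogy can be made a little more concrete with a toy spreading-activation sketch. All the units, connection strengths, and the decay factor below are made-up illustrative values, not part of any actual neural network model: activating one unit "pulls on the strands," passing a fraction of its activation to connected units in proportion to connection strength.

```python
# Toy spreading activation over a tiny network of units.
# Connection strengths (0-1) are illustrative assumptions.
connections = {
    "fly":    {"spider": 0.9, "wing": 0.4},
    "spider": {"web": 0.8},
    "wing":   {},
    "web":    {},
}

def spread(start, activation=1.0, decay=0.5, floor=0.01):
    """Return the activation each unit receives when `start` fires.
    Activation weakens with each hop (decay) and stops below a floor."""
    levels = {start: activation}
    frontier = [start]
    while frontier:
        unit = frontier.pop()
        for neighbor, strength in connections[unit].items():
            passed = levels[unit] * strength * decay
            if passed > floor and passed > levels.get(neighbor, 0.0):
                levels[neighbor] = passed
                frontier.append(neighbor)
    return levels

print(spread("fly"))  # strongly connected units receive more activation
```

Note that the "memory" here is not stored at any single node; it is the pattern of activation levels across the whole network, which is the key idea of such models.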
Note # 2. Kinds of Information Stored in Memory:
As you know from your own experience, your memory holds many kinds of information. Some of it is factual; for instance, you heard today that one of your good friends is moving to another town, and you remember this information and think about it as you drive to work. Similarly, you have a vast store of information we might term "general knowledge"—everything from "The earth revolves around the sun" to "Hawaii is located in the Pacific Ocean, somewhere between California and China."
But your memory holds much more than factual information. Can you play a musical instrument? Ride a bicycle? Type on a keyboard by touch? If so, you realize that you also have another, distinctly different type of information stored in memory—information that allows you to perform such activities. So memory actually holds several kinds of information. Putting that important issue aside, we’ll now examine the kinds of information stored in memory. This, in turn, will help us develop a clearer picture of memory itself.
Working Memory: The Workbench of Consciousness:
In a sense, working memory is the workbench of consciousness—the “place” where information we are using right now is held and processed. Let’s take a look first at how working memory operates, and then at an influential model that describes its basic nature.
Evidence for the existence of this kind of memory system was provided by several findings, the most important of which involved what is known as the serial position curve. The serial position curve refers to the fact that when we memorize a list of words (or other stimuli), the words at the beginning and at the end of the list are remembered better than words in the middle.
Why does this effect occur? One possible answer, supported by the results of many studies, involves the existence of two memory systems—one that holds information for a few seconds and another that stores information for longer periods of time. You remember very well the last words you heard—a recency effect—because they are still present in working memory when you are asked to recall them.
And you remember the words at the start of the list—a primacy effect—because they have already been entered into long-term memory. Words in the middle, in contrast, have vanished from working memory and are not present in long-term memory. The result? You remember few of them at this point in time.
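The two-store explanation can be sketched as a toy model. Every probability below is a made-up illustrative value, not data from any study: early items get extra rehearsal and reach long-term memory, the last few items are still sitting in working memory at recall time, and middle items fare worst.

```python
# A minimal two-store sketch of the serial position curve.
def recall_probability(position, list_length, wm_span=4):
    """Recall chance for an item at a given list position (0-based)."""
    long_term = max(0.0, 0.9 - 0.15 * position)  # primacy: rehearsal fades with position
    working = 0.95 if position >= list_length - wm_span else 0.0  # recency
    return max(long_term, working)

curve = [round(recall_probability(p, 15), 2) for p in range(15)]
print(curve)  # high at both ends, low in the middle: the U-shaped curve
```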
But assuming that working memory exists, how much can it hold? Research findings again suggest a clear answer: As a storage system, working memory can hold only about seven (plus or minus two) discrete items. Beyond that point the system becomes overloaded, and if new information enters, existing information is lost. However, each of these “items” can contain several separate bits of information—bits that are somehow related and can be grouped together into meaningful units.
When this is the case, each piece of information is described as a chunk, and the total amount of information held in chunks can be quite large. For example, consider the following list of letters: IBFIMBWBMATWIAC. After hearing or reading it once, how many could you remember? Probably no more than about seven. But imagine that instead, the letters were presented as follows: FBI, IBM, BMW, TWA, CIA.
Could you remember more now? In all likelihood you could, because now the letters are grouped in meaningful chunks—the initials of famous organizations. Because of the process of chunking, working memory can hold a larger amount of information than you might guess, even though it can retain only five to nine separate items at once.
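The letter example above can be sketched in a few lines of code. The set of "known chunks" is an illustrative assumption standing in for the familiar acronyms already in long-term memory: the same letters count as five items when they match familiar groups, and fifteen items when they do not.

```python
# A minimal sketch of chunking: familiar groups collapse many letters
# into a few meaningful units that fit within the 7 +/- 2 span.
KNOWN_CHUNKS = {"FBI", "IBM", "BMW", "TWA", "CIA"}

def chunk(letters, chunk_size=3):
    """Group a letter string into chunks if each group is a familiar
    unit; otherwise fall back to single letters."""
    groups = [letters[i:i + chunk_size] for i in range(0, len(letters), chunk_size)]
    if all(g in KNOWN_CHUNKS for g in groups):
        return groups          # a few meaningful chunks
    return list(letters)       # many meaningless items

print(len(chunk("FBIIBMBMWTWACIA")))  # 5 chunks: well within the span
print(len(chunk("IBFIMBWBMATWIAC")))  # 15 single letters: overloads the span
```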
Processing in Working Memory: There’s a Lot Going On!
How, precisely, does working memory operate? While there’s not total agreement on this issue, there is growing evidence for a model proposed by Baddeley (1992).
According to this theory, working memory consists of three major parts:
(1) A phonological loop that processes information relating to the sounds of words;
(2) A visuospatial sketch pad that processes visual and spatial information (i.e., information about the visual appearance of objects, such as color and shape and where they are located in space); and
(3) A central executive that supervises and coordinates the other two components.
Several kinds of evidence offer support for this view of working memory. Many studies employing neuroimaging—scans of people’s brains while they work on various tasks—indicate that spatial and phonological information is processed in different areas. This finding supports the distinction between the phonological loop and visuospatial sketch pad.
Additional findings indicate that the visuospatial sketch pad processes both visual and spatial information and, moreover, that these two kinds of information may be processed at different locations in the brain. Finally, very recent research using brain-imaging techniques has even been able to observe specific regions of the brain in which spatial information (e.g., the location of target stimuli within a visual field) is rehearsed, and so retained, in working memory.
What about the central executive—the component that regulates the other activities of working memory— how do we know that it exists too? One line of evidence supporting the existence of the central executive involves a concurrent task paradigm in which participants work on two tasks at the same time: a primary task such as adding digits and, at the same time, a distracting second task—for example, generating items at random from familiar item sets such as the alphabet or a set of ten numbers, or pushing buttons in a specific sequence.
The reasoning is that the more similar the distracting task is to the primary task, the more it will disrupt the planning and control functions of the central executive, and so the poorer the performance on the primary task will be. This is precisely what happens.
Other evidence for the existence of the central executive is provided by research on individuals who have suffered extensive injury to the frontal lobes—where the executive function is, presumably, centered. Such persons, described as suffering from the dysexecutive syndrome, are unable to make decisions.
They sit for hours choosing a meal in a restaurant, are easily distracted, and show a tendency toward perseveration—they continue to pursue an initial goal instead of switching to other goals once the first one is met. All these effects are consistent with the view that working memory includes a central executive that plays a key role in coordinating a wide range of mental processes.
Memory for Factual Information: Episodic and Semantic Memory:
Now that we’ve examined some of the techniques used by psychologists to study memory, let’s return to our discussion of the different kinds of information stored in memory. One important type involves factual information. Memory for such information is sometimes termed explicit or declarative memory, because we can bring it into consciousness and report it verbally.
It consists of two major types: episodic memory and semantic memory. Episodic memory holds information we acquired at a specific time and place; it is the kind of memory that allows you to go back in time and to remember specific thoughts or experiences you had in the past. This is the kind of memory studied by psychologists in experiments in which participants are presented with lists of words, numbers, and so on, and later are tested for memory of this information.
Semantic memory, in contrast, holds information of a more general nature—information we do not remember acquiring at a specific time or place. Such memory includes the meaning of words, the properties of objects, typical events in everyday life, and the countless facts we all learn during our school years (e.g., E = mc²; Jawaharlal Nehru was the first prime minister of India; there are about 6 billion people in the world). Let's take a closer look at memory for these two important kinds of information.
Episodic Memory: Some Factors that Affect It:
As a student, you have lots of firsthand experience with the functioning of episodic memory. Often, you must commit to memory lists of definitions, terms, or formulas. What can you do to improve such memory? Research on episodic memory suggests that many factors influence it, but that among these the most important are the amount and spacing of practice.
The first finding seems fairly obvious; the more often we practice information, the more of it we can retain. However, the major gains occur at first, and then further improvements in memory slow down. For this reason, spacing (or distribution) of practice is important too. Spreading out your efforts to memorize information over time is helpful.
For instance, two sessions of thirty minutes are often better, in terms of retaining information, than one session of sixty minutes. This suggests that memories somehow consolidate or grow stronger with the passage of time; we’ll examine this idea and evidence relating to it shortly.
Another factor that has a powerful effect on retention is the kind of processing we perform. When we study a list of words, we can simply read them or listen to them; or, alternatively, we can think about them in various ways. As you probably know from your own studying, it is possible to read the same pages in a text over and over again without remembering much of the information they contain. However, if you actively think about the material and try to understand it (e.g., its meaning, its relationship to other information), you stand a better chance of remembering it when the exam booklets are handed out.
Two psychologists, Craik and Lockhart (1972), took careful account of this fact in an influential theory of memory known as the levels of processing view. They suggested that the more deeply information is processed, the more likely it is to be retained. What are these levels of processing like?
Shallow processing involves little mental effort and might consist of repeating a word or making a simple sensory judgment about it—for example, do two words or letters look alike? A deeper level of processing might involve more complex comparisons—for example, do two words rhyme? A much deeper level of processing would include attention to meaning—for instance, do two words have the same meaning? Does a word make sense when used in a specific sentence?
Considerable evidence suggests that the deeper the level of processing that takes place when we encounter new information, the more likely the information is to enter long-term memory. However, important questions still exist with respect to this model.
For example, it is difficult to specify in advance just what constitutes a deep versus a shallow level of processing. Second, it is not clear that a person can read a word over and over again without being aware of, or thinking about, its meaning. In fact, several forms of processing may occur at once. So, because of these potential confusions, it is difficult to speak about discrete levels of processing.
Another, and very important, factor that influences episodic memory involves what are known as retrieval cues—stimuli associated with information stored in memory that can help bring the information to mind at times when it cannot be recalled spontaneously. Many studies suggest that such cues can often help us remember.
Indeed, the more retrieval cues we have, the better our ability to remember information entered into episodic memory, although even a large number of retrieval cues is no guarantee that we'll remember something we should remember! Perhaps the most intriguing research on this topic involves what is known as context-dependent memory: the fact that material learned in one environment or context is easier to remember in a similar environment or context than in a very different one. Many illustrations of this effect exist, but one of the most intriguing—and unusual—is a study conducted by Godden and Baddeley (1975).
In this experiment, participants were experienced deep-sea divers. They learned a list of words either on the beach or beneath fifteen feet of water. Then they tried to recall the words, either in the same environment in which they had learned them or in the other setting. Results offered clear support for the impact of context—in this case, physical setting. Words learned on land were recalled much better in this location than under water, and vice versa.
Interestingly, additional findings suggest that it is not necessary to be in the location or context where information was first entered into long-term memory; merely imagining this setting may be sufficient. In other words, we seem capable of generating our own context-related retrieval cues.
So, if you study for an exam in your room and then take the exam in a very different setting, it may be helpful to imagine yourself back in your room when you try to remember specific information; doing so may provide you with additional, self-generated retrieval cues.
External cues are not the only ones that can serve as aids to memory, however; a growing body of evidence indicates that our own internal states can sometimes play this role, too. The most general term for this kind of effect is state-dependent retrieval, which refers to the fact that it is often easier to recall information stored in long-term memory when our internal state is similar to that which existed when the information was first entered into memory.
For example, suppose that while studying for an exam, you drink lots of coffee. Thus, the effects of caffeine are present while you memorize the information in question. On the day of the test, should you also drink lots of coffee? The answer appears to be yes, and not just for the boost in alertness this may provide.
In addition, being in the same internal state may provide you with retrieval cues that help boost your performance. The basic principle underlying all these effects is sometimes described as the encoding specificity principle, which states that retrieval of information is successful to the extent that the retrieval cues at test match the cues the learner used during the study phase. The more similar these cues are, the more memory is facilitated.
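The encoding specificity principle can be illustrated with a small sketch. The overlap measure and the example cues below are assumptions made for illustration, not a formula from the memory literature: retrieval improves with the fraction of study-time cues that are reinstated at test.

```python
# A hedged sketch of encoding specificity: more overlap between study
# cues and test cues means better odds of retrieval.
def cue_match(study_cues, test_cues):
    """Fraction of study cues reinstated at test (illustrative measure)."""
    study, test = set(study_cues), set(test_cues)
    return len(study & test) / len(study)

study = {"my room", "coffee", "evening"}
print(cue_match(study, {"my room", "coffee", "evening"}))  # 1.0: same context
print(cue_match(study, {"exam hall", "morning"}))          # 0.0: different context
```

On this view, imagining your room during the exam works because it mentally reinstates some of the study cues, raising the overlap even though the physical setting differs.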
Semantic Memory: How Information is Organized in Memory:
Now let’s turn to semantic memory—memory of a general nature that we don’t remember acquiring at a specific time or in a specific place. Because each of us already possesses a very large amount of information in semantic memory, psychologists have focused primarily on how such information is organized, rather than on how it is entered into memory in the first place.
One important element of such organization consists of concepts—mental categories for objects or events that are similar to one another in certain ways. For instance, the words bicycle, airplane, automobile, and elevator are included in the concept for vehicles or means of transportation. The words shoes, shirts, jeans, and jackets are included in the concept clothing. Here we'll just briefly consider their role in semantic memory.
Concepts in semantic memory seem to exist in networks reflecting the relationships between them—semantic networks.
In the network model of semantic memory, the meaning of a concept reflects its links or associations with other, adjoining concepts. Another view is that the meaning of concepts derives from prototypes—abstract, idealized representations that capture an average or typical notion of members of the category. For instance, the prototype for professor in your semantic memory represents all the professors you have encountered and may suggest that professors are, on average, middle-aged, absent-minded, slightly rumpled looking, and so on.
Still another view is that any given concept is represented in memory not in terms of an overall average (a prototype), but in terms of an exemplar—an example of the category that the individual can readily bring to mind. So for example, when you read the word fruit, what comes to mind? Probably an apple, a pear, or an orange. These are exemplars of the concept fruit, and in deciding whether a new object you encounter is a fruit, you may bring one or more of these exemplars to mind and compare the new object to them.
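The contrast between the prototype and exemplar views can be sketched as follows. The feature numbers and the similarity measure are made-up illustrative assumptions: the prototype view compares a new item to a category average, while the exemplar view compares it to specific remembered instances.

```python
# Toy contrast of prototype vs. exemplar categorization.
# Each fruit is a (sweetness, roundness) pair; values are illustrative.
fruit_exemplars = {"apple": (7, 8), "pear": (6, 7), "orange": (7, 9)}

def similarity(a, b):
    """Negative squared distance: closer feature values = more similar."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def prototype_view(item):
    """Compare the item to the category average (the prototype)."""
    n = len(fruit_exemplars)
    proto = tuple(sum(vals) / n for vals in zip(*fruit_exemplars.values()))
    return similarity(item, proto)

def exemplar_view(item):
    """Compare the item to the most similar remembered instance."""
    return max(similarity(item, ex) for ex in fruit_exemplars.values())

plum = (6, 8)
print(prototype_view(plum), exemplar_view(plum))
```

Both views predict that typical items are categorized easily; they differ in whether the comparison standard is an abstract average or specific instances.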
One final question: Is there any concrete evidence that episodic memory and semantic memory, which both store factual information, actually differ? There definitely is. In some medical patients, diseases or operations that have damaged certain parts of the brain leave semantic memory intact while diminishing episodic memory, or vice versa.
In addition, other research using PET scans or recordings from individual brain cells indicate that different brain regions are active when individuals attempt to recall general information (from semantic memory) as opposed to information they acquired in a specific context (from episodic memory). So there do seem to be grounds for the distinction between semantic and episodic memory.
Memory for Skills: Procedural Memory:
Countless everyday experiences show that we often have information in our memories that we can't readily put into words. Our ability to store such information is known as procedural memory, or sometimes as implicit memory. Both terms are informative. We often know how to perform some action but can't describe this knowledge to others (e.g., can Mark McGwire tell me how he hits so many home runs?), and what we can't put into words is, in one sense, implicit.
Given that information stored in procedural memory can’t be described verbally, how can we study it? One way is through the priming effect: the fact that having seen or heard a stimulus once may facilitate our recognizing it on a later occasion, even if we are unaware that this is happening.
Some experts on memory refer to the priming effect as a difference between remembering and knowing. Remembering means being able to report an event and the circumstances under which it occurred; knowing is the familiarity we have with a stimulus even when we can’t remember it explicitly—a familiarity that may strongly influence our behavior.
In an intriguing study, Erdley and D'Agostino (1989) exposed one group of participants to adjectives related to the trait of honesty (honorable, truthful, and sincere). The words were flashed on a screen so quickly that participants were unaware of them—they merely saw a blur. Different words, unrelated to honesty (what, little, many), were flashed on the screen for another group of participants.
Later, both groups read a description of an imaginary person—one that portrayed her in ambiguous terms. Finally, they rated this person on several dimensions, some of which were related to honesty. Results indicated that the participants exposed to the honesty-related words rated her higher on this trait than those exposed to the neutral words. So, even though participants were unaware of the words, they were still affected by them through a process of automatic priming.
Priming is not the only source of evidence for the existence of procedural memory, however. Additional evidence is provided by the way in which many skills are acquired. Initially, as we learn a skill, we think about what we are doing and can describe our actions and what we are learning verbally. As we master the skill, however, this declarative (explicit) knowledge is replaced by procedural knowledge, and we gradually become less and less able to describe precisely how we perform the actions in question.
What about memory itself—can it be viewed as a skill that can be improved? Absolutely; and for now, we should note that memory does indeed improve with practice. For instance, consider the case of J. C., a waiter who was able to remember as many as twenty different orders without writing them down. How did he do this? Two psychologists who studied this individual found that he had devised a scheme of encoding orders in terms of basic categories such as entree (e.g., chicken, steak), temperature (rare, medium), and starches (e.g., rice, fries). He then used the initial letters of these and other food-related categories to form words and phrases that had meaning for him. Then he used these words or phrases as retrieval cues for the orders.
For instance, if someone ordered steak cooked rare, with fries, he might form the phrase “Sue rarely fights.” When he got to the kitchen, he translated this back into the order. The point is that over time, J. C. became so good at using this system that he no longer had to think about making up phrases; he did it automatically as he took orders without writing them down and could concentrate on amusing customers by flaunting his skill before their eyes!
Other people who show what might be termed super memory provide even more dramatic examples of procedural memory in action. For instance, consider this incident: Just before a concert, a musician in the orchestra came to the great conductor Arturo Toscanini and told him that one of the keys on his instrument was broken. Toscanini thought for a moment and then said, “It is all right—that note does not occur in tonight’s concert.”
In a flash, he had somehow examined all of the notes to be played and concluded that the broken key wouldn’t matter! Could Toscanini explain how he did this? We doubt it, but instances like this demonstrate the amazing capacities of a well-developed procedural memory.