Cultural Software » Chapter 11: Metaphor, Metonymy, and Cognitive Models

Earlier I mentioned Clifford Geertz's suggestion that cultural understanding operates through classical rhetorical figures like metaphor and metonymy. Cognitive theorists such as George Lakoff and Mark Johnson have taken this insight one step further. They argue that metaphor and metonymy are special cases of cognitive modeling that people use to understand the world.1 People create more complex cognitive models from existing ones through imaginative extension. This process of imaginative extension also resembles the classical rhetorical tropes of metaphor and metonymy.

Cognitive Models

Like other forms of cultural software, cognitive models like those involved in metaphor and metonymy can be distributed widely through human language. On the other hand, many people share the same cognitive models because they create similar ones independently through their own experiences. George Lakoff and Mark Johnson argue, for example, that the most basic cognitive models derive from our experiences as individuals living within a body.2 Many familiar metaphors are based on models of bodily movement; metaphors of improvement are often based on forward motion. Like good bricoleurs, human beings use their experiences as embodied individuals as models or image schemas to understand other parts of the world around them; these basic models, in turn, are the building blocks of increasingly intricate and complicated cognitive models. Some of the most basic image schemas are those of objects in a container, source-path-goal, linkage, part and whole, center and periphery, up and down, and front and back.3 All of these primitive schemas are originally derived from bodily movements and embodied experience.

The idea that cognitive structures emerge from bodily experience has a long history.4 The approach has interesting similarities to Pierre Bourdieu's theory of habitus. A habitus is a set of generative principles of understanding shared by members of a given culture. Like Lakoff and Johnson, Bourdieu argues that conceptual development of the habitus often involves analogies to bodily experiences and bodily movements.5 A still earlier anticipation of the idea appears in Giambattista Vico's New Science, where Vico states that a universal principle of etymology is that "words are carried over from bodies and from the properties of bodies to signify the institutions of the mind and spirit."6

Although these thinkers stress the role of the human body in shaping very basic conceptual schemas, it is hardly necessary that every human being individually re-create all of her cognitive schemas through bodily experience. Cognitive models can also be reproduced in others through communication and social learning. The theory of cognitive models assumes a historical development from basic sets of perceptual schemas that have a basis in human morphology. Further developments and modifications of these schemas, however, are culturally relative, so we should not expect to find the same cognitive models in every culture. Cognitive models and schemas, like other forms of cultural software, survive and reproduce with different degrees of success in different ecologies. Hence we should expect different cultures to produce different cognitive models and thus to employ different metaphors and metonymies.

Metaphoric Models

Metaphoric understanding operates according to the formula X is Y. The is is not the is of identity. It connotes the modeling of one thing in terms of another, or, more generally, the mapping of one domain of experience onto another.7 An example of such a model is the metaphor "understanding is seeing." This metaphor does not assert the identity of seeing with understanding. Rather, it models the process of understanding on the experience of sight. Other common examples of metaphorical modeling include "time is money," "rational argument is war," and "anger is heat." Like networks of association, metaphorical models are cultural heuristics. They help us understand some things in terms of others that we already understand.

Everyday language offers abundant evidence of metaphorical modeling. Because we model understanding on seeing, we also routinely describe understanding in terms of sight and vision, as is evident in expressions like "I see what you mean," "He needs to make his views clear," or "She saw through his deception." Moreover, we often use multiple models to understand the same phenomenon. For example, there are at least two common metaphorical models for arguments. An argument can be a building ("She constructed an excellent thesis") or a journey ("I don't see where she's going with that line of reasoning").8 Similarly, we understand time as money (when we spend or waste time) or as movement (when time passes).

Because A is modeled on B, properties of B are assumed also to apply to A, or A will be understood or described as having corresponding features. Thus metaphorical models have conceptual or logical entailments. If an argument is a building, for example, then it must be supported by foundations just as a building is. If its foundations are weak, then the argument is shaky. On the other hand, if an argument is a journey, its conclusion is the end of the journey.9 If the argument is deficient, one goes nowhere with it, or one goes astray. If the argument is effective, one reaches the desired conclusion; one goes in the right direction. These conceptual entailments are part of the power that metaphorical models possess as heuristics. When we compare A to B, we see elements within A and their relations to each other because we already understand the elements within B and how they relate to each other. Thus a metaphorical model not only describes but also structures understanding. It does not merely compare one thing with another; it organizes our understanding of the thing compared, and this is its signal value as a heuristic. A metaphorical mapping imports and applies a ready-made structure that is already understood.

Metaphorical models can be combined with other metaphorical models, producing increasingly complex structures of metaphorical entailment.10 An example of a complex metaphorical model is the relation between ideas and minds. It combines two metaphors: "ideas are objects" and "the mind is a container." Taken together these produce the model of "ideas are objects contained in the mind," which has its own entailments: we "grasp" ideas, we "hold" them in our minds. When we understand, we have the right idea "in" our mind: we "get" it.

This model interacts with others in turn. A common metaphor for communication is sending. Under this model, people communicate by sending ideas contained in linguistic expressions (words) from one place to another. When people communicate, the ideas in one person's mind (contained in her words or expressions) travel to and are received into another person's mind. This model combines a number of metaphors: "ideas are objects," "minds are containers," "linguistic expressions are containers," and "communication is sending." The combined model has its own logical entailments. We see this in expressions like "Your reasons came through to me," "I didn't quite catch what you meant," "I wish I could put my ideas into words," or "There's a lot packed into what he is saying."11 We can find evidence of cognitive modeling not only in common expressions but also in etymology. Metaphorical extension is a useful way of creating new words. Tracing the roots of a word often reveals the metaphorical models that allowed new meanings to be created out of old ones.12

Metaphoric models differ from structuralist homologies in two important ways. First, unlike structuralist homologies, metaphorical models or mappings do not necessarily involve relations of conceptual opposition. In the sentence "They camped at the foot of the mountain," for example, the mountain is understood in terms of the human body. The base of the mountain is compared to a human foot, which is the lowest part of the body and supports the body when it stands. This metaphor maps the relation of foot to body (which is a relation of part to whole) onto a physical object. If we were to try to state this mapping in terms of a homology, we would say that body : foot :: mountain : base of mountain. But the relation between the body and foot is not one of conceptual opposition. It involves at least three relations: (1) part to whole (synecdoche), (2) relative position (lowest part), and (3) function (support).

Second, metaphorical mappings make use of schemas or gestalts that cannot always be reduced to relations between two opposed terms. We compare a mountain to a body in the expression "foot of the mountain." Nevertheless, a body has many parts that bear many different relations to each other. The source-path-goal schema underlying metaphors like "life is a journey" obviously has more than two items. The metaphor "ideas are objects in the mind" (which relies on the metaphor that "the mind is a container") involves a complex container schema. The metaphor is made possible by a gestalt or visual model that allows us to employ it--a gestalt that includes the understanding that containers have an inside and an outside, that things can be placed in them and taken out of them, and so on. This gestalt contains many different features, not merely two.

Ideological Effects of Metaphoric Reasoning

Metaphors produce ideological effects because they are selective accounts of experience. Understanding X in terms of Y emphasizes only some features and discounts others. It organizes our imagination about X in one way rather than another. We model X according to the features and relationships between elements found in Y, although we might have modeled it on a completely different set of elements and relations.13 A common way of speaking of rational argument in our culture, for example, is through metaphors of war and combat.14 The metaphorical model is “rational argument is war.” We speak of demolishing an opponent's arguments, of marshaling evidence, and so on. Taken together, these military metaphors form a coherent set of mutually reinforcing entailments. If rational argument is war, then the other person in the argument is an opponent and the goal of rational argument is to win the argument. One does this by preparing the best defenses, attacking the other person's weak points, shooting down her arguments, and forcing her to capitulate. The parties contend with each other until one party is unable to continue and either surrenders (by agreeing, which signifies that she has lost the argument) or retreats (by changing the subject). In this way the metaphor "rational argument is war" paints an entire portrait of human relations and appropriate behaviors in rational argument.

The potential for ideological effects from this metaphorical model flows directly from the ways in which the model is partial and selective. Comparing rational argument to war captures certain features of rational argument: that individuals strive to better each other in rational argument, that argument is a test of a certain type of strength, and that the participants regard each other as opponents or adversaries. At the same time, this metaphor directs us away from other possible features. The combat metaphor is not the only possible way to understand rational argument. Consider the metaphor of rational argument as cooperation or as a joint enterprise: Rational argument is a cooperative enterprise designed to achieve some mutually desired goal--for example, truth, justice, or accommodation of interests. The parties work together, sharing their different perspectives. When agreement is reached, it is not the defeat or overpowering of one's opponent but the satisfactory attainment of a shared goal.

The combat and the cooperation models of rational argument focus on different features of rational argument. Both explain many of the same features of rational argument, but each does so in a different way. Under the combat model, for example, argument can also lead to truth, but the process has quite different ramifications. One example is the adversary system of justice in Anglo-American law, in which a contest of adversaries in adjudication is designed (ideally) to lead to truth. Another is the familiar justification of free speech in terms of a "marketplace of ideas," in which individuals compete with each other to persuade their audience, just as they compete for market share in the economic marketplace.15 In the adversary model truth emerges not from cooperative striving for accuracy and validity but from the conflict of opposing stories, and from the partial, self-interested motives of the opposing sides. Moving from the example of law to that of science or the humanities, the adversarial model in the academy suggests that truth best emerges from individuals who seek to increase their reputational capital by promoting their pet theories and demolishing the competing theories of other academics.

As we might expect, this model of striving toward truth by demolishing the claims of opponents suppresses other aspects of rational argument--for example, the view that truth might best be approached by sharing different perspectives and by attempting to understand perspectives quite different from one's own. The adversarial view also downplays aspects highlighted by a more cooperative view of reason: that progress is cumulative rather than mutually destructive, and that reasoning builds on other people's insights rather than clearing them away to replace them with one's own. In the specific context of legal disputes, the adversarial model deemphasizes the possibility that dispute resolution might best be served by trying to make the parties understand each other's different points of view, with the goal of reaching a mutually acceptable accommodation of their interests.

Thus we can see two different ways in which metaphoric models produce ideological effects. First, metaphoric models selectively describe a situation, and in so doing help to suppress alternative conceptions. By imagining the world one way, we make it more difficult to imagine it in other ways. As Lakoff and Johnson put it, using such a metaphor, "which allows us to focus only on those aspects of our experience that it highlights, leads us to view the entailments of the metaphor as being true," or natural, or what "goes without saying." Metaphors like "rational argument is war," they point out, "have the power to define reality ... through a coherent network of entailments that highlight some features of reality and hide others."16

Second, and perhaps more important, metaphorical description positively produces social reality as much as it suppresses aspects of it. A metaphor like "rational argument is war" defines and "structure[s] ... what we do and how we understand what we are doing when we argue."17 Thus it is important to understand that a metaphor does more than simply allow us to understand the process of argumentation. This metaphorical model also helps to constitute social conventions of argument and hence helps constitute social reality. Metaphors like "rational argument is war" define the parameters of appropriate social conduct. If argument is war rather than cooperation, we are likely to treat the people we argue with differently, and we will expect different treatment as well.

To be a member of a culture that thinks about argument in these terms is precisely what makes it possible for us to win and lose arguments, for the other person to be an adversary or opponent, for one to be able to gain and lose ground in argument, to plan and execute attacks, and so on.18

The constitutive function of metaphor, in short, does not simply distort reality. Rather, it makes reality; like narrative construction, metaphor has the power to "make itself true" in social practice. The prevalent social metaphor that rational argument is war creates a series of real expectations about intellectual activity and appropriate behavior in intellectual life that one neglects at one's peril. Indeed, the metaphor even shapes the possible modes of its denial. Suppose, for example, that a person claims to have won an argument with us. To dispute the claim is already to accept elements of the metaphorical scheme. It becomes quite difficult to avoid talking in terms of winning and losing an argument when others insist upon employing this metaphorical model; to join in the fray means that the metaphor has already worked its power. In politics it is often said that one side has successfully "defined the terms of debate" that the other side must follow. If metaphor were merely a convenient way of describing the world, it is hard to see how this could be so. But if a set of socially enforceable conventions and expectations is being created and maintained through metaphor, it is easy to see how such power could be wielded.

Metaphorical models are classic examples of the ambivalent nature of cultural software. They assist understanding in some respects even as they hinder it in others. Their power stems precisely from their ability to empower understanding by shaping and hence limiting it. To counter this power, we must deconstruct the metaphorical model. We must reveal its metaphorical character by demonstrating that the figural mapping is not logically compelled and showing how it suppresses or downplays important features of a situation.19 Nevertheless, often one cannot demonstrate this without offering a competing metaphorical model. We may not realize how the adversarial model of argument is limited until we think about argument as a cooperative venture. We need a new vantage point from which to see the limitations of our previous vantage point, a vantage point that a contrasting metaphorical model provides.

This shift of heuristics is characteristic of the way cultural software operates. Metaphorical models assist understanding by prefiguring it; without them, understanding may be difficult or even impossible. Thus one often can counter the power of a metaphorical model and its logical entailments only by substituting a competing metaphorical model that can serve as an alternative heuristic. One can attempt to move parties from confrontation to mediation, for example, by redescribing what they are doing as a cooperative venture. One tries to show that the parties actually have a common shared goal (peace, justice, or an accommodation of their interests) and that it is in the interests of both to reach that goal.

Metonymic Models

A second kind of cognitive model is a metonymic model. In classical rhetoric, a metonymy substitutes one thing for another that it bears some relationship to. For example, we often associate institutions with their geographical locations. Thus we speak of "Washington" to mean the U.S. government, the "Pentagon" to refer to the Defense Department, and "Hollywood" to refer to the American movie industry. Note that each of these associations is slightly different. The American movie industry was once actually located in Hollywood; the U.S. government operates in many places, but its capital is Washington. Other implicit conceptual relations in metonymies can be between a piece of clothing and the person who wears it ("Who's the suit?"), a part of the body and a person ("We need a fresh face"), an object and the person who uses it ("There's a new gun in town, sheriff"), a person who controls an institution and the institution controlled ("Bush defeated Hussein"), a producer and a product ("Do you want to listen to some Mozart?"), a place and an event that occurred there ("No more Vietnams"), and a part and the whole ("I've got some new wheels").20 The last example has a special name in classical rhetoric: synecdoche. But for purposes of this discussion I shall treat synecdoche as a special case of metonymy.

A metonymic model generally takes the form "B for A." A, the target concept, is understood in terms of B, the metonym. The relation between A and B is defined by a conceptual schema. This schema may be a gestalt (for example, a body), a familiar form of association (for example, between users and objects they use), a causal relation, or a standard script or narrative. In our first examples, the implicit schema is that institutions are located in places. In the sentence "I got here by hopping on a train," the implicit cognitive model is a script of travel in which embarkation stands for the entire process of traveling.21

Like metaphorical models, metonymic models are heuristics. One reason to understand A as B is that B is more salient or easier to remember. For example, it is easier to think about a typical case than a distribution of differing entities. But like metaphorical models, metonymic models can produce ideological effects because the features of B can be confused with those of A. We may confuse effects with causes, symbols with the things they stand for, parts of a social system with the social system itself, individual actors with the institutions they represent, and so on. Some varieties of what Marxist theory calls reification may be metonymic in character because they confuse products of a social system with the system of social relations and social power.

Some of the most important and pervasive ideological effects of metonymy arise from a special kind of metonymic model, in which an instance of a category stands for the category as a whole. One example of such a metonym is a social stereotype; another is a paragon or exemplary case. These metonymic models are special cases of the many cognitive models employed in human categorization.

Cognitive Models of Categorization

Human categorization is a complicated psychological process. In the past twenty-five years, psychologists and cognitive scientists have discovered that people often categorize in ways that do not conform to commonsense views about categorization and predication. These commonsense views are sometimes called the classical theory of categories. According to this theory, members of each category share common properties that are necessary and sufficient for membership in the category. These defining properties apply equally well to all members. There are no "second-class" members of a category, at least if the common properties they share have been rigorously defined.22

The classical model of categories can be a theory of human mental operations or a theory of the underlying logical structure of the world. As a theory of human mental operations, it has come under increasing attack. Numerous philosophical, linguistic, and psychological studies have suggested that human intelligence categorizes in several different ways, some of which do not fit the classical conception of categorization.

We can think of Wittgenstein's famous notion of family resemblance as an early critique of the classical model. Wittgenstein noted that the word game does not fit the classical description of a category because there is no single feature that all games share. Instead, games have what Wittgenstein called a "family resemblance"; like members of a family, different games have different properties in common but need not share any single property.23

A systematic challenge to the classical theory of categories arose out of psychological studies conducted in the 1970s by Eleanor Rosch.24 Rosch demonstrated that people do not experience all members of a category as equally good examples of the category. Subjects asked to rate particular examples of a category (like "bird" or "chair") judge certain examples as more representative than others. Most subjects, for example, rated a robin as a more representative example of a bird than an ostrich or a chicken and rated a desk chair as more representative of the category "chair" than a rocking chair, barber chair, or beanbag chair. Rosch called these more representative examples prototypes. Her experiments showed that such prototypical examples gave rise to many unexpected psychological phenomena, which she called prototype effects.

Prototype effects occur when one member of the category, the prototype, displays asymmetrical or hierarchical relationships with other members of the category. The prototype may thus be seen as more representative of the category than other members. Subsequent experiments showed that prototypes displayed other interesting features. When subjects were asked to identify whether an example was a member of a category or not, response times were generally shorter for prototypical examples. Subjects were more likely to offer prototypes as examples when asked to list or draw representative members of the category. Subjects applied the notion of similarity asymmetrically when comparing more and less representative examples of categories. Americans who thought of the United States as a highly representative example of a country were asked to give similarity rankings for pairs of countries. These subjects thought that Mexico was more similar to the United States than the United States was to Mexico. Finally, subjects also were more willing to generalize new information about a prototype to a less representative example than to infer that new information about a less representative example was also true of the prototype. Thus subjects were more willing to believe that a disease affecting the robins on an island would affect the ducks than the other way around.25

Usually a representative example of a category has a bundle of different properties; less representative examples share some of these properties but not all of them, and different examples share different groups of features. The result can be analogized to a central example with different links of similarity shooting out from it in different directions to other less representative examples. Hence this type of category is called a radial category.26 Radial categories manifest prototype effects and Wittgensteinian family resemblances.

It is important to distinguish between the claim that categories display prototype effects and a different claim about categories: that categories have fuzzy boundaries. It is difficult to tell, for example, where the concept "tall" begins and ends. In the case of "bird," however, there is no dispute that robins, ostriches, and penguins are all birds, and yet subjects still report one of these examples as more representative than the others. Wittgenstein's famous example of the concept of a game actually involves two different features of categories: fuzziness and the presence of prototype effects. Wittgenstein argued that the boundaries of a concept need not be fixed in advance for the concept to be used effectively, and he also asserted that the concept of "game" had no single property in common but only family resemblances. Prototype effects and family resemblances can occur in concepts that have fuzzy boundaries, but they can also exist with respect to examples where there is no dispute about boundaries.

The psychological research of Rosch and her successors seems to show that human categorization employs several different models, some that appear to behave like classical categories, and many others that display prototype effects and family resemblances. It is by no means clear that these nonclassical models all operate in the same way. Prototype effects are precisely what their name implies--effects of cognitive models--and many different kinds of cognitive models can produce them.27 Psychologists and cognitive scientists are still divided over how many and what different kinds of models are involved in human categorization.28 Nevertheless, even though different cognitive models may create prototype effects, we can refer to prototype effects collectively as examples of metonymic thinking, regardless of the cognitive model that produces them. They are metonymic because, whenever these effects occur, prototypes, exemplars, and subcategories serve as metonyms for the category. Prototypes, exemplars, and subcategories are used either (1) to represent an entire category, (2) to draw inferences about the entire category, or (3) to provide a normative model for the category. Understanding a category in terms of a prototype, exemplar, or subcategory is often a useful aid to understanding and working with the category and its members. Yet like all heuristics, sometimes metonymic thinking can go terribly wrong.

One of the most pervasive examples of metonymic thinking is the tendency to view all human categories as classical categories. This employs the classical category as a metonym--and hence as the normative and descriptive model of all classification. Precisely because this heuristic is so pervasive, it is an inexhaustible source of ideological effects.

In general, metonymic thinking produces ideological effects when nonclassical categories produce prototype effects whose existence and scope are not fully recognized. As a result, prototypes are improperly employed to make inferences and judgments. In these situations reliance on the assumption that all human categorizations conform to classical categories is misplaced.

Here are some examples of the kinds of prototype effects that, in the right circumstances, can produce ideological effects. They generally fall into two categories: prototype effects that involve some form of stereotyping, and prototype effects that involve some form of unspoken norm.

Overgeneralization

People may assume that all members of a category have the same characteristics as prototypes or prototypical examples. From a prototype of woman, for example, people may infer assumptions about all women's behaviors, preferences, and abilities. The classic example of this ideological effect is a social stereotype.29 Stereotypes can be positive or negative, and they can be derived from other forms of cultural software. Stereotypes about men and women, for example, may be produced by social scripts, conceptual homologies, and networks of association. The stereotypical man may be viewed as rational and stable, while the stereotypical woman is viewed as intuitive and emotional. These symmetrical stereotypes, in turn, can lead to equally symmetrical overgeneralizations and inappropriate inferences.

Prototypes as Indicators of Relative Prevalence

People may view prototypes or prototypical examples as the most common version of the category, so that other members of the category are viewed as rare, unusual, or exceptional cases, even though in fact these "exceptions" may be just as common as the prototypical examples. In the media in the United States, for example, one often finds a prototype of young black males as involved with criminals, gangs, or drugs, easily prone to violence, and likely to get in trouble with the law. When one discovers young black men who do not fit this prototype, they are assumed to be rare and exceptional cases. Indeed, such individuals may be assumed to be especially bright, especially hardworking, and so on. Moreover, viewing them as exceptional and special means that the prototype remains unchallenged and may even be reinforced.

Salient Examples

A special kind of stereotyping involves salient examples. This phenomenon is also related to the availability heuristic discussed in Chapter 8. Prototypical examples often are constructed from or identified with familiar, memorable, or salient examples of a phenomenon (as opposed to typical or frequent examples). People then use these salient examples to make judgments about the probability of events or features of an unknown situation, even if salience and probability are not correlated. Amos Tversky and Daniel Kahneman discovered that when a group of people were asked for the probability that an earthquake causing a major flood would occur in California in 1983, they tended to give higher probability estimates than a similar group who were asked to estimate the probability of a major flood in North America in 1983.30 Politicians often use salient examples to suggest inferences that are unwarranted or at the very least contestable. Ronald Reagan's anecdotes about "welfare queens" driving expensive cars to pick up their welfare checks confirmed a set of stereotypes about poor people as undeserving members of minority groups.

Prototypes as Default Characterizations

People use prototypes as "gap fillers"--to flesh out or supply features of unknown or partially known events or members within a category. If we hear that a man is a bachelor but know nothing else about him, we may create a picture of him in which we assume that he eats out often in restaurants, lives in an apartment, spends lots of money on clothes, and so on. We will use the features borrowed from a commonly employed social stereotype of a bachelor to fill in what is unknown about him. These features may, of course, be widely divergent from the bachelor's actual existence; he may love to cook at home or be a monk, for example.31

Asymmetrical Inferences from Prototypes to Nonprototypes

People routinely assume that inferences about prototypical examples apply to all members of the category, but inferences about nonprototypical cases are not routinely assumed to apply either to the category as a whole or to prototypical cases. Under the ideology of white supremacy, for example, blacks are associated with criminality and whites with lawfulness. (This is another example of projection.) If the prototype of the category "black" is associated with criminality, then examples of blacks who commit crimes or black politicians who are corrupt tend to produce or reinforce inferences that all blacks are untrustworthy. By contrast, numerous examples of law-abiding blacks do not produce the inference that blacks are generally trustworthy. Conversely, because noncriminality is projected onto whites, no amount of crime by whites or examples of unscrupulous behavior by white politicians will lead to the conclusion that whites are criminals or that white politicians cannot be trusted. Thus District of Columbia Mayor Marion Barry's conviction for cocaine use reflected badly on all black politicians in ways that Richard Nixon's criminal activities did not reflect on all white politicians.

Prototypes as Reference Points

Prototypical examples are more likely to be used as reference points for comparing or understanding phenomena. Thus if Kansas City is seen as a prototypical American community, then its citizens' tastes and preferences are more likely to be seen as representative of tastes and preferences of Americans in communities like New York, New Orleans, and Miami than the other way around. This heuristic partially explains the often-noted phenomenon of the "white norm" or the "male norm" in everyday understandings of the social world. If people judge males and whites to be more representative examples of human beings than females and blacks, white males become a prototype for the category "human being." Hence the experiences, preferences, and understandings of white males will more naturally be viewed as reference points for evaluating the social world. Conversely, the experiences, preferences, and understandings of women and blacks will be seen as less representative, special, unusual, quirky, different, or even deviant.

Prototypes as Evaluative Norms

Prototypical examples can also be viewed as the "best example" in an evaluative sense. Instead of a stereotype that represents the most common feature of a category, a prototype can be a norm or ideal. A paragon is one example of a normative prototype. George Washington might be viewed as a normative prototype of a politician. A second kind of normative prototype is an ideal type--for example, the economist's conception of a perfectly rational individual with complete information. A third kind of normative prototype involves a set of norms for appropriate behavior. Thus a normative prototype of a father might be someone who spends time with his children and makes sacrifices for their welfare and their betterment. When people employ a normative prototype, members of the category that fail to match the prototype are to that degree viewed as imperfect, deficient, less valuable, or abnormal. Fathers who do not live up to the normative prototype of fatherhood, for example, may be considered poor examples of fathers. Thus there are two different ways that we can say that an individual is not a good example of a category. On the one hand the individual may be atypical, or may lack many of the most common features of the category. On the other hand, the individual may fail to live up to a normative conception of the category. Often people conflate these ways of not being a good example. This confusion may also produce ideological effects.

Although I have discussed these seven effects of prototypes separately, they often overlap or work in tandem. Lakoff offers a good example of the cumulative effects of metonymic reasoning through an analysis of the expression "working mother."32 The qualifier "working" suggests that working mothers are somehow special and different from the most representative examples of mothers. Yet changes in economic and social conditions have led more American women to work outside the home than ever before. Why then aren't working mothers prototypical examples of motherhood in American culture?

Lakoff argues that the expression "working mother" rests on an implicit cognitive model of motherhood in our culture that centers on nurturance. Many other models of motherhood are possible, including being a birth mother, a genetic mother, the wife of the child's father, and so on. The nurturance model of motherhood, in turn, yields a prototypical case of nurturance in our culture--the kind of nurturance that the housewife-mother provides for her children--and thus it offers a prototypical example of motherhood, the housewife-mother. Moreover, there is a prototypical concept of work, which is done away from home and does not include the rearing of children. The term "working mother" is defined in contrast with the prototype of the nurturance-mother and in conformity with the prototype of work as work done away from home. Because the concept "working mother" is informed by these cognitive models--each with its own paradigmatic cases--it signifies more than simply the intersection of working people and mothers. Lakoff offers the example of a birth mother who gives her child up for adoption and then takes a full-time job outside the home. She is working and she is a mother, but most people would not think that she is a working mother.33

Thus, powerful forms of metonymic reasoning are implicit in everyday concepts like "working mother." Instead of classical categories of work and motherhood, we have cognitive models that stand for these categories. The nurturance model stands for the category of motherhood as a whole, and the representative example of this model, the housewife-mother, stands for the category of all mothers. This is confirmed by the following English sentences:

la. "She's a mother, but she doesn't take very good care of her children."
1b. "She's a mother, but she takes very good care of her children."
2a. "She's a mother, but she works away from home."
2b. "She's a mother, but she doesn't work away from home."34

Sentences 1b and 2b seem surprising or unusual, while sentences 1a and 2a seem more conventional, if traditional. That is because "the word but in English is used to mark a situation which is in contrast to some model that serves as a norm."35 The implicit norm is the nurturing housewife-mother.

If the nurturance model serves as the most representative model of motherhood, and the housewife-mother serves as the most representative example within this model, we might expect to see one or more of the following prototype effects:

1. People view non-housewife-mothers as exceptional or deviant cases. (Prototype as most common example of category.)
2. People assume that characteristics, preferences, and other features of housewife-mothers apply to all mothers. (Prototype as "gap filler" or salient example.)
3. People assume that housewife-mothers define the norm of appropriate nurturance and motherhood; conversely, they assume that women who do not correspond to this model are deficient or insufficiently nurturing. (Prototype as paragon, ideal type, or normative model.)

In contemporary U.S. society, a large number of women work outside of the home. Changes in society will gradually undermine the prototype as most common example, although it is also possible that people's estimates of the percentage of women in the workforce lag behind actual numbers precisely because of this prototype effect. Nevertheless, these changes in expectations may not change the tendency to view the housewife-mother as the salient example for inferences about mothers. Equally important, they may not change the tendency to view the housewife-mother as a normative model of appropriate nurturing behavior. Prototypes do not have to be the most common examples to function as paragons and ideals.

The structuralist models that we discussed in Chapter 10 would note similar effects produced by different means. The cultural meaning of "working mother" is produced through a network of cultural oppositions: mother : father :: nurturance : work :: family : market :: private : public. These homologies are both descriptive and evaluative; they distinguish the opposed terms and suggest their appropriate authority and normative significance. For example, work outside the home is differentiated from and privileged over nurturance. At the same time, these homologies "distribute": they assign appropriate roles for which each party or each concept is best suited.

The category "working mother" is a mediation or subcategorization of this homology. Working mothers are opposed both to so-called nonworking moth and to working fathers. This produces the following homologies:

"nonworking" mother : working mother :: mother : father :: nurturance : work :: family : market;
and
working mother : working father :: nurturance : work :: family : market.

In a patriarchal system that privileges women only in roles assigned to the "feminine," working mothers are assessed a double penalty. In both of these homologies working mothers are assigned the inferior associations of each term, and the opposite concepts of nonworking mother and working father receive positive and superior associations. The cultural meanings conveyed are that working mothers are less good at the private responsibilities of family and nurturance than nonworking mothers, and they are less good at the public world of work and exchange than working fathers.

These homologies have ideological power because they implicitly demarcate normal, natural, and privileged associations about mothers and fathers, nurturance and outside work. Many so-called nonworking mothers are less nurturing than many working mothers, and many working fathers are less competent in the world of work than many working mothers. But the system of associations--with its double penalty for working mothers--renders the negative features (nonnurturance, incompetence) of nonworking mothers and working fathers invisible and projects them onto working mothers.

This example shows how different forms of cultural software can produce similar ideological effects. There are two different explanations for this. The first is that conceptual homologies and cognitive models do not really correspond to different tools of understanding. Each of them is simply a model for the same cognitive process, and the two happen to converge to describe the same basic ideological phenomena. Further experience will show which model is the best description of our understanding, or will reveal an even better model. A second possibility is that there are really several different ideological mechanisms and that they tend to be mutually reinforcing in certain situations, though not in others. This would mean that patriarchal or racist attitudes, for example, are overdetermined by many different forms of ideological mechanisms and ideological effects.

My guess is that both these possibilities have a grain of truth in them. We may eventually discover that the structuralist models developed by Lévi-Strauss and others are special cases of the cognitive models that produce metaphor and metonymy; or we may eventually discover that both types of models point obliquely toward a third, as yet undiscovered, cognitive mechanism that subsumes both.

Nevertheless, I do not think that all of the various ideological mechanisms that I have described are ultimately one and the same. The heuristics and biases that Tversky and Kahneman discovered cannot easily be reduced to the mechanisms of ego defense implicit in Festinger's cognitive dissonance model, nor can either be readily assimilated into the narrative, structuralist, or metaphorical models of cultural understanding.

In the past four chapters, for example, we have seen how racist ideological effects can be produced by dissonance reduction among subordinate groups, by conceptual imperialism among dominant groups, by faulty inferences from prototypes and salient examples, by conceptual homologies that oppose black to white through the projection of superior and inferior associations, by social scripts featuring stock characters and expectations about ethnic groups, and by recurrent cultural narratives about the American "savage war." It is highly unlikely that all of these effects are produced by the same mechanism. It is much more likely that racial attitudes are produced by many cross-cutting forms of human understanding, which, taken together, have deep roots in our tools of cultural understanding and hence possess great power over our imaginations. Racism and sexism are motley and variegated, despite their admitted power in our lives. They are produced by many different kinds of cognitive tools, and these tools have repeatedly been used to create new ones, carrying into each innovation their potential ideological effects. Unjust attitudes about race and gender are woven deeply into the fabric of our thought; and in this weaving more than one stitch and more than one thread have been used. Ideological mechanisms are the result of bricolage and circumstance; their heterogeneity and disorder are the best evidence of their historical emergence.
