Fellowship in Museum Practice: Exit Presentation

    Margaret A. Lindauer, PhD


  1. I'm going to do three things in my presentation. First, I'll give a very general and very brief sense of my overall research interests. Second, I'll talk about my historical research on evaluation trends in the museum profession. And third, I'll offer a very preliminary account of the specific approach I'm using to evaluate America on the Move, which recently opened at the National Museum of American History. I should emphasize that anything I say about AOTM is preliminary. I haven't finished transcribing visitor interviews, let alone immersed myself in analysis. So please consider my comments to reflect a work in progress. But first, I'll start with a general sense of my overall research interests. For those of you who heard my presentation at the outset of my fellowship, this part will be familiar. I entered the museum profession in the mid-1980s. The span of my career has paralleled the emergence of new museology AND the increasing institutional significance of museum education. As I've kept up with both developments, I have been struck by the fact that those two bodies of literature typically do not cross-reference one another.
  2. Speaking very generally, museum education literature focuses primarily on techniques and standards for professional practice.
  3. New museology, by contrast, has been driven by moral and political analyses of professional practice.
  4. During my doctoral work in education I discovered a third body of literature—curriculum studies—which, as a whole, addresses both technical standards and moral and political aspects of education.
  5. So it occurred to me that if the three bodies of literature could be reconciled, the field of museum education might explicitly address the moral and political challenges posed by new museology in relationship to teaching techniques and standards of practice.
  6. My overall research interest is in mapping the field of museum education in a way that allows us to reflect upon historical trends and to consider our options. I'm looking at four sub-categories within education.
  7. Practice, considered through the lens of Curriculum Theory, which poses the questions: In what way will we organize and present educational materials? And what kind of relationship between teacher and learner (or museum and visitor) will we foster?
  8. Learning Theory, which asks, “Through what physiological and/or social processes do people learn?”
  9. Educational Philosophy, which challenges us to explicitly articulate, “For what social purpose are we teaching this material?”
  10. And evaluation (or research theory) which addresses the question, “Through what research process will we discern what people learned?”
  11. I'm particularly interested in thinking about how these questions relate to one another. How does the way in which we answer any one of them resonate with the ways in which we answer the others?
  12. During my four-month fellowship, I've focused on evaluation trends within the museum profession,
  13. setting out to map the field of exhibit evaluation. [An aside: while my research has focused on summative evaluation of exhibits, my analysis also can be applied more widely to thinking about ways to evaluate any museum program.]
  14. I began my exploration inside the visitor studies field, reviewing visitor studies journals, museum association conferences and workshops, and technical services reports published by the American Association of Museums. And I found remarkable homogeneity in
  15. the types of exhibit evaluation that are described: front-end, formative, remedial, and summative
  16. the data collection methods that have been used: primarily quantitative but some qualitative,
  17. and the statistical data analysis methods that have been prescribed
  18. I then turned to literature outside the museum field, looking at evaluation studies, a field that includes a conglomerate of researchers from the disciplines of education, sociology, and psychology. Within the publications devoted to evaluation studies, I found far more diversity of evaluation types
  19. This list—quasi-experimental, goals-based, goal-free, naturalistic, responsive, stakeholders', fourth-generation, connoisseurship, hermeneutic—isn't even complete. Needless to say, I was overwhelmed not only by the size of the list but also by the fact that these terms are not part of the visitor studies lexicon.
  20. I set out to reconcile these differences and return to the field of museum education with a better sense of why particular practices historically have been preferred. The first thing I did was to recognize that I had used the phrase “Types of Evaluation” to refer to two different things.
  21. In the visitor studies literature, it refers to when a study is carried out: before, during, or after exhibit development.
  22. In the evaluation studies literature, it refers to research design, which articulates the way in which a research (or evaluative) question relates to
  23. research theory, which generally outlines a researcher's aims and assumptions regarding what knowledge can be known and how that knowledge can be known through prescribed
  24. data collection and analysis methods.
  25. As I became familiar with the similarities and differences among these research designs, in terms of research theory and practice, I was able to contextualize visitor studies within evaluation studies. The remarkable homogeneity that I found within visitor studies represents quasi-experimental and goals-based research designs, which I'll explain very generally in order to then consider the question of WHY these two research designs have been pervasive.
  26. These two approaches to evaluation developed
  27. within a positivist research paradigm. In the 1930s, when the earliest visitor studies were conducted, positivist theory had a huge following in the social sciences. It came from the physical sciences, where the aim of research was to discover universal laws that explained natural phenomena.
  28. As it was adopted among the social sciences, the aim of research was to discover laws that explain social phenomena.
  29. A positivist paradigm accordingly assumes that there is an objective reality that exists outside the human mind. That reality can be objectively known, but doing so depends upon data collection and analysis methods that remove researchers' subjectivity from the process of investigation.
  30. A positivist research design, in turn, is based on cause-and-effect hypothesis statements. The earliest visitor studies (those carried out from the 1930s through the 1960s) posed such cause-and-effect statements as,
  31. Exhibit element (x) will cause museum visitors to do (y). For example, uniformly grouping paintings closely together will cause museum fatigue; obscuring exits will cause visitors to spend more time in the exhibit; or increasing the font size of object labels will cause visitors to read more text.
  32. A positivist research theory dictates particular data collection and analysis methods for assuring trustworthy results.
  33. These include: defining, isolating, and measuring variables,
  34. Developing a reliable instrument for measuring those variables (meaning that the same results will be achieved irrespective of the specific researcher and that the instrument does not impose upon visitors' behavior or responses. For example, in a reliable survey, questions are phrased in a way that does not encourage visitors to respond in one way or another),
  35. And representative sampling (meaning that the visitors selected for evaluation represent the demographic, or other, qualities of the entire population of visitors to an exhibition). After many, many studies focusing on the effects of various exhibit features on visitor behavior, some basic rules of thumb for design developed. But these studies also led to a general consensus that individual exhibit elements cannot really be considered in isolation, that an exhibition is experienced as a gestalt. Beginning in the 1970s, the cause-and-effect hypotheses generally shifted to consider the exhibit as a whole as a cause, with a desired outcome as an effect. For example,
  36. Attending this exhibit will cause visitors to recall specified facts or ideas.
  37. Attending this exhibit will cause visitors to exit with more knowledge than that with which they entered.
  38. Attending this exhibit will cause visitors to receive an intended message. Or
  39. Attending this exhibit will cause visitors to appreciate a particular perspective. These cause-and-effect approaches to evaluation emerged alongside a broad societal focus on accountability, ushered in by the Great Society legislation during Lyndon Johnson's presidential administration. As government money became available for social and educational programs, evaluation of those programs was mandated. Also during the late 1960s and early 1970s, the American Association of Museums lobbied for museums to be officially recognized as educational institutions so that they might qualify for some of this funding. As federal granting agencies, including the National Science Foundation, the national endowments for the arts and humanities, and others, were established, museums were included among the organizations that could compete for funds, and evaluation of grant-funded exhibitions became mandated. The following logic took hold: if you claim that museums are educational institutions, and they receive taxpayers' money for producing educational exhibitions, then you must demonstrate that exhibits effectively educate. [Pause] Insofar as accountability is meant to serve the governmental or managerial task of deciding whether or not (or at what levels) programs or services ought to be funded, the evaluation stakes can be quite high. Unfortunately, the overall findings from cause-and-effect research designs for evaluation have been bleak. Museum consultant Beverly Serrell suggests that the best you can hope for is that 51% of your visitors will receive the intended message, and that in order to accomplish this you have to repeat that intended message several times throughout the display. Should we conclude that the many, many exhibits that score less than 51% are educationally flawed? Museums aren't alone in suffering from dismal evaluative reports that are supposed to address accountability.
In fact, this is why, beginning in the 1970s, the broader field of evaluation studies has explored the possibility that maybe it wasn't the programs themselves but rather the ways of assessing their value that were flawed.
  40. And from that query, diverse research designs developed. So, why haven't some of these approaches influenced the museum field? Looking back on the history of visitor studies, I found that the answer is really rather simple. At a seminal conference on museum evaluation sponsored by the Smithsonian in 1977, most of the educational psychologists around the table subscribed to a positivist research paradigm. And they collectively went on to establish the visitor studies research journals and organizations, and to train graduate students whose work is disseminated through those publications and conferences. So, the field of visitor studies is populated by a large number of positivist-trained researchers participating in well-organized association with one another and generating what has become a mountain of literature. But in my exploration, I also found that the field has not been as homogeneous as I first thought. At the 1977 conference, there was one researcher, Robert Wolf, who endorsed and carried out a number of exhibit evaluations that represented a non-positivist approach,
  41. which he called naturalistic evaluation. He conducted naturalistic evaluations for several Smithsonian museums in the late 1970s and early 1980s. He passed away in 1986, and his work and his methodology largely have been ignored in the visitor studies literature ever since. A handful of others have conducted similar kinds of studies, but those are published few and far between amid the volume of quasi-experimental and goals-based approaches.
  42. Most of these studies follow a responsive or stakeholders' research design, which evolved from a naturalistic approach.
  43. All three of these research designs
  44. subscribe to an interpretivist research theory, which acknowledges that phenomena studied in the physical and life sciences may indeed be governed by universal and discoverable laws.
  45. But the social world is more complex, multi-faceted, and ever-changing. The overarching aim, therefore, is to discover the multiple truths that allow multiple realities to coincide. The interpretivist researcher assumes that the social world cannot be entirely known, partly because it is always seen through a human lens.
  46. Research cannot be separated from the researcher. This does not mean that research findings are less trustworthy, but rather that the rules for
  47. qualitative data collection and interpretive analysis are different from those for positivist research.
  48. First, the researcher looks at a range of data sources, speaking individually to people involved in developing an exhibition, looking at written accounts of the process, examining the exhibition itself, watching visitors go through it, and speaking to a wide range of people—those visiting alone, as family groups, or with friends; people of diverse ages, racial or ethnic groups, and from various geographic locations.
  49. Second, in an interpretivist paradigm, the researcher is the instrument. This means that the evaluator must put stakeholders (exhibit development team members and visitors) at ease talking about their expectations, hopes, opinions, and experiences. It means adjusting the pace and rhythm of a conversation based on the interviewee's apparent comfort level. It means adjusting the content or order of interview questions to address what interviewees say, in a way that appears natural, while also gently re-directing a conversation that goes too far off topic. The interview should feel like a casual conversation rather than a list of questions. Within an interpretivist paradigm the interview is more of a focused art form than a science or a pre-determined survey. As the research instrument, the evaluator also must conscientiously acknowledge her own subjectivity, primarily in order to keep it in check as much as possible. For example, I have my own opinions of America on the Move, which I articulated to myself before starting any interviews so that I wouldn't use those conversations as a way to explore my own thoughts.
  50. And third, an evaluation report written from an interpretivist paradigm includes rich description of the investigative process and findings. This is akin to investigative reporting: the researcher sets out to give the reader enough detail to vicariously experience the evaluation context and process.
  51. Going back to research design, whereas a positivist research design is based on cause-and-effect investigations, an interpretivist research design is driven by more open-ended exploratory questions. For example, in my evaluation of America on the Move, I'm setting out to answer the overarching question,
  52. “What is the range of educational experiences that this exhibit accommodates or elicits?” I don't start with a pre-determined set of categories and ask visitors to select among them; rather, I am interested in knowing their accounts of their visit and the ways in which their accounts characterize the educational value of an exhibit. This overarching question addresses the issue of accountability—assessing the ways in which the exhibit is educationally meritorious.
  53. I also am exploring a secondary question: “What are the similarities/differences among various stakeholders' educational expectations and hopes for (or experiences in) this exhibit?” In this question, both exhibit development team members and visitors are stakeholders, and no one stakeholder's characterization of what makes an exhibit educationally meritorious is held as a sole criterion (or set of criteria) against which to value the success of an exhibit. In fact, some stakeholders' opinions may contradict one another.
  54. And finally, I will ask, “Are there any patterns among those similarities and differences?” As I said at the outset of my presentation, I haven't finished transcribing visitor interviews, let alone become immersed in analysis. I am, however, comfortable giving you a little flavor of the sort of findings generated within an interpretivist paradigm.
  55. My comments address the second question—exploring the similarities and differences among stakeholders' expectations, hopes, and experiences.
  56. Expectations: Visitors come in with a deeply entrenched sense of transportation as a metaphor for progress. This exhibition is not designed to explicitly confront that metaphor but rather to sidestep it by representing complex relationships among the ways in which historical developments within transportation have affected landscapes, lifestyles, communities, and commerce, and vice versa.
  57. No visitor will exit the exhibit and rattle off those themes, but they may have some sense of one or more of those organizational themes.
  58. Many visitors will have nostalgic experiences
  59. Different visitors will leave with different kinds of outcomes. For example, one visitor may gain new knowledge about a place while another may learn something new about a particular vehicle or artifact on display, while another may gain a new perspective on the relationship between transportation and American history
  60. Hopes: That visitors won't be confused or overwhelmed or feel lost.
  61. That the different modes of delivering information accommodate different learning styles.
  62. That visitors make a connection between