Data Collection at the Museum
As an intern at the American Museum of Natural History, my projects all focus on using technology to engage visitors. We use telepresence robots to bring experts to the museum and to send visitors to other museums that may be thousands of miles away. We have also augmented a children's story and coloring sheet with a mobile app. But the tools we use to capture data and observations in order to evaluate these projects are limited to pen and paper.
At first I was somewhat surprised that pen and paper were the primary tools we would use for observation, but I quickly came to appreciate their flexibility in a situation where we have only a general idea of the data we'd like to capture. Because we are interacting with and observing between 100 and 200 people in total, in an environment that hundreds more pass through, it is important to be able to react to the unexpected and easily capture it. Although the medium probably leads to some irregularities and occasional mistakes in our work, I think the ability to capture an unexpected behavior outweighs a visitor count that is off by one or two people.
The observation sheets we use are surveys printed on regular copy paper and attached to clipboards. Although we are collecting data on three different projects, two of them have similar audiences and all of them share goals, which our observation sheets reflect. Each project has its own version, and each version has a similar structure. All versions begin with an area to capture some quantitative data. For every visitor who interacts with the projects, we attempt to capture what time they start and finish, in addition to how they interact with the project. There is a list of project-specific interactions, and when a visitor or group performs one of them, we simply check off that box on the list. This makes it easy to look at the data and understand how visitors are interacting and which features they are using.
The top section of the observation sheet also contains space to make notes about the group: How many people are there? What sort of a group is it? (Family, child and caregiver, etc.) How many children are present? How old are they? What are their genders? Many of these are not necessarily relevant to some projects but are very useful in others. The target audience for one of the projects is children, so it is helpful to know how many participate and how old they are. This lets us know the makeup of our audience.
If the top section of the observation sheet collects the more quantitative data, the bottom section is reserved for interview questions. After a visitor participates in one of the activities we are observing, whenever possible we try to approach them or their group for an interview. There are two types of interview questions: questions about learning and questions about reactions. The projects aimed at adults or older children each have three questions that correspond to the three learning goals for the projects. We want visitors who participate to understand where the culture is from, what the connection between the culture and the exhibit is, and that the culture is still alive today (in other words, that it is not a historical exhibit but an anthropological one).
The questions about the reactions of visitors serve a few different purposes, or at least draw a few different types of answers. We frequently ask why they stopped to participate. This question lets us know what attracted them to the activity, what they noticed before they stopped, or what motivated them to stop. We also ask if they have anything else to share. Sometimes they don't have anything to say or aren't sure how to answer, but frequently visitors use this as an opportunity to tell us what they liked about the exhibit. While visitors occasionally give suggestions, the answers to this question often become the "visitor comments" section of the reports constructed from this data.
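The sheet described above maps naturally onto a simple record type, which is also roughly what the data becomes once it is transcribed for analysis. As a minimal sketch (every field name here is hypothetical, chosen for illustration rather than taken from the actual sheets), the timed quantitative section, the group notes, and the interview answers could be modeled like this:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class ObservationSheet:
    """One transcribed paper sheet. All field names are hypothetical."""
    project: str
    start_time: str                                     # e.g. "14:05"
    end_time: str = ""
    interactions: Set[str] = field(default_factory=set)  # checked-off boxes
    group_size: int = 1
    group_type: Optional[str] = None                     # "family", "child and caregiver", ...
    child_ages: List[int] = field(default_factory=list)
    interview_notes: List[str] = field(default_factory=list)

def feature_count(sheets: List[ObservationSheet], interaction: str) -> int:
    """Count how many observed groups checked off a given interaction."""
    return sum(1 for s in sheets if interaction in s.interactions)
```

Because the interaction list is a set of checked boxes rather than free text, tallying which features visitors used is a one-line count, while the interview notes stay as open-ended strings for later qualitative reading.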
Working with these collection tools has shown me the importance of having well-defined questions while leaving space in the data for processing after it has all been collected. We could give visitors a three-question multiple-choice test after they participate, but we would not be able to build such a nuanced view of the visitor experience and of which elements are working effectively. While a multiple-choice test and a numbered satisfaction survey might be easier to process and more appropriate for large volumes of visitors, at this scale it seems better to interact with the visitors ourselves and collect less quantitative data, gaining in exchange a finer-grained understanding of the visitor experience.
But this is only the first part of the process. After this data is collected comes perhaps the most difficult, but also most interesting task: analysis.