CAEF: Assess This!
Yesterday, I attended the second in a series of events presented by the Chicago Arts Educators Forum, an initiative started by Merissa Shunk and Nicole Losurdo and sponsored by CAPE. This community of teachers, teaching artists, and organizations explores common challenges and opportunities in arts education in the Chicago area.
This day of discussions and workshops centered on assessment, everyone’s favorite part of the process when designing an educational program or residency. Confronting the negativity that surrounds this process head-on, the organizers created a parking garage for frustrations (participants wrote their biggest challenges on sheets of paper taped to toy cars and “parked” them for the day) and an anonymous confessional that also served as the event’s video documentation.
Why so negative? Many artists and organizations view assessment as something they must do for their funders and for the public. So many of us have found ourselves daunted by the task of evaluating the same programs several different ways using the specific criteria presented by those who have provided support. It begins to feel like the process of assessment is about teaching to the test – making sure that the outcomes fit the objectives set forth by the organization and its funders.
But what other purposes can this process serve? A question that became a lightbulb moment for many participants was: “Who is this assessment for?” Of course, we’re responsible to those who provide support, but assessment and evaluation are also meaningful tools for students, teachers, teaching artists, and organizations if done in a way that captures the depth of the work. In this way, we begin to connect our larger objectives and the activities that accomplish them to our assessment tools, rather than putting the cart before the horse by using a standardized method.
Another theme that resurfaced multiple times was the question of how to quantify social and emotional progress, or literacy and cognitive skills that become evident in work samples more clearly than in a multiple-choice test. In the case studies we examined, many organizations found themselves asking students to take pre- and post-residency surveys with questions like “Do you feel a personal connection to these characters?” rated on a scale from 1 to 5. Often, the difference in responses wasn’t meaningful.
Dennie Palmer Wolf’s keynote presentation offered a great start toward answering this question. She displayed pre- and post-residency work samples from the same student, showing the difference in vocabulary and depth after the student worked with the teaching artist. One could feasibly assign a number scale to these factors to chart progress, in addition to having the samples available for review. She also showed day-in-the-life diaries from two students, one of whom was participating in an arts program, with yellow highlights on the parts of the day where the student felt personally and deeply engaged. Having five of those moments instead of one is a measurable and meaningful effect of the program’s influence.
The day really helped me and the rest of our staff think differently about how we assess, evaluate, measure, and document our work, and about how connected those tools must be to our own objectives rather than to a pre-designed template. The funny part is, making these tools authentic in this way yields data that can then be pulled to highlight the factors a funder will want to see, while telling a richer story that is meaningful to our organization and to the students, teachers, parents, and schools we serve.