Tuesday 28 January 2014

Assignment 1 - Evaluation of... an evaluation!!

I found the following program evaluation online for a student transit pass at the University of Wisconsin-Milwaukee (UWM) called... wait for it... the UPASS.

The goal of the UPASS implementation was to increase the number of transit riders, which would decrease the number of students driving to the university and have a positive impact on the environment.  Furthermore, after collecting the data, the evaluators also found that the implementation of the UPASS had a positive impact on student enrollment and retention at UWM.

This evaluation follows Scriven's goal-based evaluation model, as all the data collected helps the reader identify the several ways that the goal was met (or not met).  Each section of the evaluation compares data from before the implementation of the UPASS to afterward.  The evaluators must have known what the goal of the UPASS was before the evaluation began, making this model differ from Scriven's goal-free evaluation model.  In this case, I think it was imperative that the evaluators knew the goals of the UPASS so they could assess the effectiveness of the UPASS itself (comparison of data).

One of the strengths of this evaluation is the number of ways the evaluators were able to compare data before and after implementation.  For example, the evaluators look at obvious comparisons like "increased transit ridership" as well as more specific ones, such as how "freshman students... showed high rates of transit usage compared to other students."  Other pieces of data were also examined, such as the factors that lead students to choose transit, like how close their residence is to a pick-up area.  Considering all of the data presented, I feel the data collection was thorough.

Another strength of this evaluation is that it is simple, concise, and easy to understand.  The program was explained at the beginning of the evaluation and the immediate goals were discussed early on.  As you read through the evaluation, the data becomes more complex yet is still easy to relate back to the initial goals of the program.  This makes the evaluation easy to use for anyone wanting to implement a UPASS at their own school.

One of the weaknesses that I can see is that all of the data is presented as percentages.  This does not give me an idea of how many students are involved.  Therefore, if I were to use this evaluation to determine whether my university could use a UPASS, I would have no frame of reference in that sense.  I am also curious as to whether all of the students were required to be part of this program, or if they had the opportunity to opt out (and how flexible that opt-out process was).

This evaluation, although simple, was concise and to the point.  I am left with an understanding of how data was used to evaluate the UPASS program at UWM.  After reading the evaluation, I get an overwhelming feeling that the program was a success for this university.