Last week, we discussed different ways to describe the return on investment from a particular program. A convincing ROI can go a long way in generating support for a scaling strategy. In today’s post, Sanford C. “Sandy” Shugart, president of Valencia College, reminds us of how important it is to keep testing that ROI. The returns from a pilot might change dramatically when you expand to more people and disciplines. Maintaining fidelity to the proven model is essential if you want to maintain (and continue improving) results.
Valencia College entered DEI with a history of innovation and a clear sense of areas to scale for effect in developmental programs. Specifically, the college planned to:
- expand dramatically the use of supplemental instruction (SI) in developmental mathematics and other gateway courses
- grow the number of paired courses designed to function as learning communities
- expand the offering of student success courses while embedding the success strategies in other first-year courses
- expand a transition program for certain at-risk students called Bridges to Success
In the early going, we believed the initial stages of scaling—clarifying the model, recruiting adopters from the faculty and students, training the implementers, and having the will and support to reallocate scarce resources—would present the greatest challenges. We were wrong. Early stages of implementation were marked by enthusiasm, collaboration, and high expectations for success. The performance data were eagerly anticipated each term and became important levers in the conversations and revisions to the programs that followed.
Scaling went rather well, with each treatment showing positive early results and the general trend in student performance moving toward accelerated success—more credit hours earned in fewer academic terms by each succeeding cohort of students. With this success and the word of mouth among both students and faculty about the work, recruiting adopters became much easier. In fact, demand for SI-supported sections quickly grew beyond the initial disciplines and courses selected. (Valencia served more than 11,000 students in SI-supported course sections last year.) These were generally positive signs. However, as we scaled to greater and greater numbers of course sections, faculty, and students, we began to discover a new set of challenges.
SI will serve as an example, though we faced similar challenges with our other innovations. As we reviewed data from SI sections versus non-SI sections, we began to find significant differences in the effect on indicators like student persistence to the end of the course and grade distribution. Some of these differences seemed almost random, while others clearly showed a pattern (by campus, by discipline, etc.). Deeper inquiry revealed that the effect for students who actually participated in the supplemental activities that make SI work was almost uniformly strong. The discrepancy in the data, then, reflected major differences in implementation: some faculty had found ways to encourage, perhaps even require, active student engagement in supplemental learning activities, while others had managed to achieve very little of this engagement. These differences persisted in spite of aggressive training efforts prior to implementation.
Similarly, we found that while SI was especially effective in many of the mathematics gateway courses, the effect was much less striking in many of the courses to which it was added in the later years of our work (economics, political science, etc.). Again, we inquired more deeply and discovered not only differences in methods but also very different understandings among faculty (and students) about the purposes of SI and the outcomes we were seeking.
We had, as our projects reached maturity, encountered the problem of scaling with fidelity to the model. It seems that our discipline of innovation needs to include ongoing evaluation of the purposes and steps of implementation, continuing staff training, and rigorous data analysis to ensure that the treatment in which we are investing doesn’t take on a life of its own after institutionalization. A useful lesson!