
Thursday, August 23, 2012

R to the O to the I

In March, we released More to Most: Scaling Up Effective Community College Practices, a guidebook that lays out a process for identifying promising practices and developing an expansion plan so that good programs reach as many students as possible. Central to the process is the SCALERS framework, developed at Duke University and discussed extensively on this blog. There are other steps that form the foundation for SCALERS application. We’re going to tell you about some of them in our next few posts. To download a copy of More to Most, visit the website: www.more2most.org.

The first step in the process is determining program value; you can read a rundown of that step here. One part of that rationale, and a rather popular one in these austere days of budget cuts, is demonstrating return on investment (ROI): the sometimes-hard-to-quantify case that program spending generates a financial return (in increased FTE or other revenues) that offsets operating costs. Standardizing this process, where possible, can help you compare different programs and make decisions about which programs should be discontinued and which should be expanded. This kind of information, of course, needs to be combined with qualitative data about program effects, along with considerations of faculty and student engagement and institutional priorities.

Program ROI
There are a number of ways to analyze the connection between a program’s results and its costs. In 2009, Jobs for the Future and the Delta Cost Project released Calculating Cost-Return on Investments in Student Success. The report determines cost-return on investment by assessing student retention for program and non-program students, the resources required for program operation, and the revenue gained by additional retention, using the following data and calculations:
  1. Additional number of students enrolled into the next year because of the program: Calculated using number of students served, one-year retention rates for program participants, number of participating students retained, and one-year retention rates for non-program students.
  2. Direct costs of program operation: Calculated based on expenditures for personnel, supplies and equipment, stipends for students, facilities, etc.
  3. Spending and revenue data: Calculated from average expenditures per student, and additional tuition, state, and federal revenue gained from increased retention.

These simple calculations could be the beginning of a formula that is unique to your institution, one that incorporates the costs, revenue, and priorities that are most relevant to your program goals.
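As a rough illustration, the three calculations above might be combined like this. This is a hypothetical Python sketch: the function names and every figure in the example are our own assumptions, not values from the JFF/Delta Cost report.

```python
# Hypothetical sketch of the cost-return calculation described above.
# Function names and all figures are illustrative assumptions.

def additional_students_retained(served, program_retention, baseline_retention):
    """Extra students retained into the next year attributable to the program
    (students served times the retention-rate difference)."""
    return served * (program_retention - baseline_retention)

def net_return(served, program_retention, baseline_retention,
               program_cost, revenue_per_retained_student):
    """Revenue gained from added retention, minus direct program costs."""
    extra = additional_students_retained(served, program_retention, baseline_retention)
    return extra * revenue_per_retained_student - program_cost

# Illustrative example: 200 students served, 75% vs. 60% one-year retention,
# $50,000 in direct costs, $4,000 in tuition and state revenue per retained student.
extra = additional_students_retained(200, 0.75, 0.60)  # about 30 extra students
roi = net_return(200, 0.75, 0.60, 50_000, 4_000)
print(extra, roi)
```

A formula unique to your institution would swap in the cost lines and revenue sources most relevant to your program goals.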

Local and State ROI
You can also think about ROI beyond the bounds of a program. The various student success programs and policies at a college add up to increasing numbers of students completing credentials. (At least, that’s where we’re headed, right?) And increased completion outcomes can lead to positive returns for communities and states, even the nation as a whole. Being able to make such a case at your college—and across a system—could garner support from policy makers at the state and federal level. The Center for Law and Social Policy (CLASP) partnered with the National Center for Higher Education Management Systems (NCHEMS) to create a Return on Investment Dashboard. The dashboard combines data from the Census Bureau, National Center for Education Statistics, and Department of Education to project short- and long-term effects of maintaining the educational attainment status quo vs. increasing the number of credentialed adults in a particular state.

In addition to credential attainment, state summaries include projections for personal economic gain, as well as returns to the state in the form of state income, property, and sales taxes. The dashboard also has figures for Medicaid savings and correctional system expenditures. The dashboard allows you to manipulate high school completion, college going rates, credential attainment, and enrollment patterns to see how increases or decreases might affect individual and state returns on investment.

ROI and External Funding

The growth of social innovation financing, or social impact bonds, is one way that demonstrating return on investment is tied directly to external funding. Organizations cover initial program costs by borrowing money from foundations and other investors. If the organization meets agreed-upon outcomes, it receives additional funds from the state, and the original investors receive a portion of those funds. While this form of financing is occurring primarily in the nonprofit and social enterprise sectors, it could have implications for the education sector as well. You can read a great summary of social impact bonds on the Nonprofit Finance Fund website and check out how some social programs in Massachusetts are implementing this type of financing.
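The payout logic can be sketched as a simple function. This is purely illustrative: the names, the single pass/fail outcome, and the fixed investor share are our assumptions, not terms from any actual bond.

```python
# Illustrative social-impact-bond payout logic; all names and figures are
# hypothetical assumptions, not terms of a real agreement.

def sib_payout(outcome_met: bool, state_payment: float, investor_share: float):
    """Split the state's outcome payment between the delivery organization
    and its original investors. If outcomes are not met, no one is paid."""
    if not outcome_met:
        return 0.0, 0.0
    to_investors = state_payment * investor_share
    to_organization = state_payment - to_investors
    return to_organization, to_investors

# Example: outcomes met, $100,000 state payment, investors recoup 60%.
org_funds, investor_funds = sib_payout(True, 100_000.0, 0.6)
print(org_funds, investor_funds)
```

The key feature this captures is that the state pays only for results, which is why the model depends so heavily on measurable return on investment.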

Whether you’re trying to quantify program costs, make a statewide case for investing in a particular innovation, or find new ways to fund your efforts, understanding and articulating return on investment is an important piece of the scaling-up puzzle.

Tuesday, August 21, 2012

Is it worth it?


Before you create a plan for scaling up, you need to decide if the program or practice you want to scale is actually effective. The last thing you need is to do more of something that isn’t even worthwhile. (Such logic only applies to things like eating Doritos or watching episodes of Chopped.) Chapter 2 of More to Most, Determining Program Value, lays out a process for assessing the value of a program.

The first stage of this process is defining the problem you are trying to address with your program and identifying your desired outcome. An example of a well-defined problem:
Too many students who test into three developmental education courses never successfully complete a college-level math or English course.
An example of a concrete outcome:
Currently, X percent of students successfully complete gateway math in one year. We will increase this by X percentage points by [YEAR].
You’ve got a program that’s designed to address this problem and help you reach this outcome. During the next stage, you collect evidence that will help you decide how well the program is achieving the desired outcome. What evidence of the program’s impact, both quantitative and qualitative, is available? Possible sources include:
  • Basic demographic profile of student body, a target group, and/or program participants
  • Course completion data: by course; by student cohort
  • Focus group data from students, staff, and/or faculty
  • Data from national surveys like CCSSE, SENSE, Noel-Levitz, etc.
Now, you can determine whether the program or practice meets criteria for effectiveness. We focus on two important criteria:
  • Does the evidence show that the program delivers the desired outcome?
  • Would scaling up the program align with institutional objectives?
It is important to consider both of these questions when determining the value of scaling up the program. Many institutions analyze data disaggregated by race, income, and other demographic factors and identify achievement gaps among student populations. If closing these gaps is an institutional priority for your college and one of the desired outcomes of your program, then make sure you analyze the evidence for how effectively the program accomplishes this goal. When comparing the evidence for several programs, keep in mind that if the program has positive outcomes for a designated population that is generally less successful, it may show a lesser impact on overall student success outcomes, at least in the short term. This does not make the program less valuable, however. The value comes from how well the program matches your desired outcomes and the institution’s priorities — or makes a case for changing those priorities.

Once you’ve collected your evidence, it’s time to make the case for expansion and develop a scaling strategy. We’ll discuss those steps in upcoming posts.

Tuesday, July 3, 2012

Scaling Social Impact: NYC Edition

Earlier this month, we attended the Social Impact Exchange’s annual Symposium on Scaling Social Impact. The Symposium brought together nonprofit organizations, funders, consultants, and evaluators to share knowledge about bringing social solutions to scale. We were there to share what we learned as we created More to Most, a guidebook on scaling up effective community college practices, and to learn from the experiences of others. Here are a few themes from the conference that we’d like to share:
  • If “scaling social impact” sounds like a nebulous phrase, that’s because it is. As the meeting organizers readily admitted, “There are many ways to achieve scaled impact—from replicating programs in new locations to developing breakthrough products and services; from scaling policy initiatives and social movements to online expansions through the use of toolkits and platforms. And there are other types of expansions too that include knowledge sharing, network building and collaborations.” While many of the symposium attendees were focused on replicating a nonprofit’s services across multiple geographies, More to Most is focused on scaling within a system. Our big question: how can community colleges go from serving some students in effective programs, to expanding those programs to more students, and finally to reaching most of those who can benefit from them?
  • Is the ideal funder a thought partner, too? According to a panel on grantmakers and nonprofits working in partnership, yes. One panelist used a food metaphor (always our favorite) to explain: she said funders should help build the kitchen, but they don’t need to be in there cooking with you; in other words, there’s a place for funder input in program design, but implementation should be left to the delivery organization. Another pointed out that when funders are engaged as thought partners, they are usually willing to be more flexible with timelines and shifting plans. 
  • Making the most of opportunities is usually good, but being too opportunistic and losing coherent priorities is bad. Know yourself and your non-negotiables. If you adjust a working solution to meet the preferences of a new funder or for the sake of simply making the program larger, you may risk your effectiveness. 
  • To get results, make talent development a priority. A panel of philanthropic leaders drove the point home at the symposium, reminding us that a key indicator of job satisfaction across sectors is the feeling of continual challenge. The panelists recommended that the social sector pay more attention to cultivating talent and leadership over time. For community colleges, this means that any successful scaling effort should be linked with professional development and faculty engagement.
  • Sustainable funding is crucial to scaling success. A program can’t rely on a continuous stream of grants to operate. While grant funds work well for start-up and proof of concept, a program needs to identify a long-term, sustainable funding stream. Many grantmakers are hesitant to fund an idea that doesn’t have a plan for revenue generation. At a community college, this usually means getting resources reallocated in the general fund. Use grant money to demonstrate the effectiveness of your program, so that college administrators recognize the value of incorporating it into the annual budget.