Organizational Capacity Building: Addressing a Research and Practice Gap
Abstract
The purpose of this article is to address the gap between evaluation research and the practice of capacity building with nonprofits. This study describes a 5-year capacity building initiative with grassroots organizations, including a longitudinal evaluation of its implementation and the outcomes achieved. Formative processes yielded many lessons that were used to improve the capacity building service model. The results show that the majority of groups met a priori expectations for successful participation. Organizational staff valued technology, consultants, and program funding the most. Board membership increased, and perceptions of the organizations' visibility were enhanced. Executive directors reported greater awareness of needs and improved management knowledge. These small organizations fill many unmet needs, and more capacity building evaluation studies are needed to understand the mechanisms that support their efforts and the impact on their sustainability.
Introduction
Although organizational capacity building is promoted as a way to enhance the effectiveness and sustainability of nonprofits (Blumenthal, 2003; Kellogg Foundation, 2001; Light, 2004; US Department of Health and Human Services, 2006), the evaluation of these efforts lags behind. Practitioners and funders may agree it is important to know more about these initiatives, but there are few published studies on the effects of capacity building (Chinman et al., 2005; Florin, Chavis, Rich, & Wandersman, 1992; Joffres et al., 2004; Leviton, Herrera, Pepper, Fishman, & Racine, 2006; Patrizi, Gross, & Freedman, 2006), which means that capacity building will continue to be done by “hunch” (Light, 2004, p. 175). Evaluation studies can clarify the processes that lead to successful capacity building. Also, theories of change on enhancing nonprofit sustainability will be limited until the relationship between the various types of capacity building strategies and outcomes can be better understood. Finally, all of this information will help those who provide funding for capacity building to make better decisions about what to invest in.
This article begins with a review of the literature and the challenges of evaluating organizational capacity building. Next, we describe a capacity building project and its longitudinal evaluation. Following this, we explain the outcomes achieved and conclude with a discussion that offers suggestions for ways practitioners and evaluators can use the information.
Literature review
Capacity building has become an important tool for supporting nonprofit organizations by providing them with training, technical assistance, and other resources to achieve their missions. Indeed, private and public funders have invested millions to build the capacity of nonprofit organizations (Blumenthal, 2003; Lucile and David Packard Foundation, 2006; US Department of Health and Human Services, 2004). These resources complement existing strategies used by nonprofits to improve effectiveness.
Background on capacity building program
New Detroit, a local intermediary, established a program to strengthen the management capacities of small, grassroots organizations. The model of capacity building emerged through years of collaborative work, testing and modification. In 1999 and 2000, New Detroit piloted a model of intensive services. The project was named Strengthening Community Organizations to Promote Effectiveness or SCOPE. New Detroit's stature eased the way for creating a partnership of professionals, including academics
Evaluation design and questions
This evaluation uses a single case study design focused on the SCOPE project. The case concerns the processes and activities of the project and the outcomes achieved over the span of 5 years. Evaluators used a participatory evaluation framework (Cousins & Whitmore, 1998) to guide the process. A participatory model of evaluation was chosen for pragmatic reasons: the intervention model being evaluated was new and untested. For this reason we needed an approach that would yield information to foster decision
Evaluation procedures
Multiple evaluation methods were used. The University's internal review board approved all evaluation materials and informed consent was obtained from participants.
Primary and secondary data were collected over 5 years. First, organizational information, including budget, program size (number of youth served), and number of staff and board members at pre- and posttest, was entered into a database. Second, pre- and posttest face-to-face interviews were conducted with the executive director (ED). The topics included program
Who participated and the extent of participation
A total of 23 organizations were selected for the SCOPE program, which operated in two-year cycles over a 5-year period. Participating organizations ranged in age from 5 to 85 years, with annual budgets ranging from $52,600 to $337,775. They offered a wide range of youth programming, including performing arts, after-school tutoring, recreation, job skills, cultural programming, substance abuse prevention, and mentoring.
Actual participants closely matched our projected target population:
Discussion
The findings from this study address some of the gaps between research and practice in the literature. First, the SCOPE project provided funding for a longitudinal evaluation strategy that created a feedback loop for future iterations of the program. Second, the evaluation emphasizes capacity building with grassroots groups. Small organizations have challenges and needs distinct from those of their larger counterparts that must be taken into consideration when providing capacity building
Acknowledgements
This research was funded by a contract to New Detroit, Inc. from the Skillman Foundation. The authors would like to thank their colleagues at New Detroit and the SCOPE coordinating team as well as the participating organizations.