Introduction
Jennifer Martineau, Kelly Hannum, and Claire Reinelt



This book provides broad and practical information about how to conduct leadership development evaluations using a variety of approaches, many of which have been recently developed. We have intentionally sought authors from a variety of sectors (nonprofit, academic, for-profit, and governmental agencies) to increase the diversity of perspectives, expertise, and experiences represented in these pages. The target populations and program designs covered in this handbook are also diverse; we believe this book represents a powerful opportunity for cross-program and cross-sector learning.

This handbook is divided into three parts, each of which begins with an overview chapter. Part One is devoted to designing leadership development evaluations. The chapters in this section address a variety of approaches and considerations that come into play when designing a method for evaluating leadership development initiatives. Part Two, Leadership Development Evaluation in Context, presents chapters addressing specific environments for designing and implementing leadership development, ranging from a stand-alone leadership program for developing evaluators of color to a change initiative intended to transform school leadership and performance. Finally, in Part Three, Increasing Impact Through Evaluation Use, the book addresses ways in which evaluation can and should be used to maximize impact, rather than serving only to measure and document.

WHO THIS BOOK IS FOR
This book supports the daily work of people responsible for developing, implementing, and evaluating leadership development programs and initiatives: human resource managers, instructional and learning designers, trainers, consultants, funders, evaluators, and others from a wide range of organizations, including for-profit companies, nonprofits, government agencies, educational systems, religious and faith-based organizations, and community organizations. People who study and research evaluation, leadership development, or both (such as students, scholars, and staff at foundations, think tanks, or research organizations) form a secondary audience for this book. While we focus on our intended audiences, we sincerely hope that others will benefit from the knowledge, practices, and resources presented in this handbook.

We invite those of you interested in the field of leadership development evaluation to learn from each other and to broaden the scope of the questions you are asking and the evaluation approaches you are using and testing. Our intent is that this handbook will move the field of leadership development evaluation forward by creating more interaction among practitioners in the for-profit, nonprofit, governmental, and educational fields, pushing their collective thinking ahead by exposing them to areas of practice they might not otherwise encounter in their daily work.

HOW TO USE THIS BOOK

This book is first and foremost a resource for its readers, to be used in whatever manner they see fit. We encourage readers to find and read the chapters that are most immediately valuable to them, given the context of their work and the questions they are asking. For example, if you are wondering how other evaluators have designed evaluations when control groups are not possible, you can find answers in Chapter One. If you ask how to evaluate leadership that is focused on systems transformation, you will find guidance in Chapter Eleven. And if your stakeholders want to know how you plan to share the results of the evaluation, consult Chapter Seventeen’s discussion of communication. While each chapter has a specific focus, you are likely to find relevant information and advice on a variety of topics in many different chapters, especially those that describe how leadership development is implemented in a specific context. For that reason, we encourage readers to make liberal use of the book’s index and to read the introductions to each of the book’s three parts, which supply an overview of the ideas developed in the chapters of that part.

WHAT THIS BOOK IS NOT

This book is not about the programs or initiatives that are being evaluated; rather, it is about the evaluation of those programs or initiatives. Similarly, this book is not about leadership development evaluation findings, other than those that are relevant to evaluation design, implementation, and use. Program and initiative information is provided only to give the reader context for the evaluation.

Second, this is not a basic evaluation text. There are many very good resources available for readers wishing to learn more about how to evaluate in general (for example, Fitzpatrick, Worthen, & Sanders, 2004; Davidson, 2005). We assume that you have a basic understanding of evaluation (see Exhibit I.1) that lets you delve into the specifics of leadership development evaluation.

Third, this book does not promote one evaluation approach over another. We believe that different leadership development programs, different organizational or community contexts, and different evaluation questions demand a broad variety of approaches. The multitude of effective and appropriate evaluation approaches is one of the reasons we felt it was so important to draw this collection of information together.

Fourth, there are contexts for leadership development evaluation that are not adequately covered by this book, including leadership development outside the United States and in many more diverse communities and cultures than we can cover in one book. For example, we concentrate on leadership efforts that target adults. While we have sought diverse perspectives in a variety of contexts, we hope future work will illuminate lessons about leadership and leadership development in diverse cultures and communities worldwide.

Finally, though we hope and believe that the information provided in this book will lead to more effectively and appropriately designed leadership development efforts, the book’s focus is not the design of leadership development initiatives.

KEY CONCEPTS

Because this book explores the intersection of leadership development and evaluation, we think it is useful to orient the reader to key concepts in each field and to consider what leadership development and evaluation offer each other.

Leadership Development
In The Center for Creative Leadership Handbook of Leadership Development (McCauley & Van Velsor, 2004), a key distinction is made between leader development and leadership development. Leader development is directed toward individuals to expand their “capacity to be effective in leadership roles and processes” (McCauley & Van Velsor, 2004, p. 2). In this definition, leadership roles and processes are those that “facilitate setting direction, creating alignment, and maintaining commitment in groups of people who share common work” (McCauley & Van Velsor, 2004, p. 2). Leadership development is the “expansion of the organization’s capacity to enact the basic leadership tasks needed for collective work: setting direction, creating alignment, and maintaining commitment” (McCauley & Van Velsor, 2004, p. 18). Granted, the term tasks can evoke a cold and mechanistic view of leadership; however, in this context the tasks needed for collective work are authentic and relational. The Center for Creative Leadership (CCL) pioneered the study and practice of leader development, particularly through feedback-intensive programs such as 360-degree assessments and developmental coaching. In recent years, its focus has expanded to include team and organizational development and what is being called “connection development”: the interdependency between individuals, groups, teams, and whole organizations. The purpose of connection development is to strengthen relationships so that the collective work of organizations can be carried out more effectively.

In a recent GrantCraft guide for funders of leadership development, two broad categories of approaches to leadership development are identified. One seeks to support greater organizational effectiveness among nonprofit organizations and uses leadership development “as a way to support specific individuals and provide them with skills, experiences, and resources that will make them and their organizations more effective” (GrantCraft, 2003, p. 6). The second approach seeks to strengthen communities and fields by developing leadership “as a way to change what is happening in a particular community or in a field by increasing skills, role models, credentials, resources, and opportunities for people who work in the community or field and by bringing them into contact with new perspectives or approaches to social change” (GrantCraft, 2003, p. 7). These two categories are very similar to those set out by CCL. Because GrantCraft and CCL serve many organizations, their ways of categorizing the work of leadership development indicate broad acceptance of this understanding and practice in the field.

Within the categorizations of leader and leadership development, there are many different types of leadership that are being developed and supported. One of the earliest distinctions was between transactional and transformational leadership (Burns, 1978). Transactional leadership is an exchange of something that has value for both leaders and followers. Transformational leadership is a process that leaders and followers engage in that raises one another’s level of morality and motivation by appealing to ideals and values. Another way in which transactional and transformational leadership are distinguished is between what leaders do and who leaders are (see Chapters Six and Thirteen).

Early understandings of leadership focused almost exclusively on the traits, characteristics, and capacities of individual leaders. Currently, there is growing interest in collective leadership—sometimes called shared leadership, connected leadership, collaborative leadership, or community leadership. Collective leadership focuses on leadership as a process, on the relationships between people, their interdependency, and their ability to act upon a shared vision (see Chapter Nine).

Leadership development can be used to achieve many different goals. Some of the purposes of leadership development may include
• expanding the capacity of individuals to be effective in leadership roles and processes
• developing the pipeline of leaders within an organization or field
• identifying and giving voice to emerging and/or invisible leadership
• strengthening the capacity of teams to improve organizational outcomes
• supporting the creation of new organizations or fresh approaches to leading
• encouraging collaboration across functions, sectors, and industries
• creating a critical mass of leaders that can accelerate change in communities and countries to address key issues and problems.

Evaluation
Evaluation is a process of inquiry for collecting and synthesizing information or evidence. There is considerable variation in how information or evidence is gathered, analyzed, synthesized, and disseminated; and there are different purposes for which these things are done.

In the Encyclopedia of Evaluation, Fournier describes evaluation as a process that “culminates in conclusions about the state of affairs, value, merit, worth, significance, or quality of a program, product, person, policy, proposal, or plan” (2005, p. 139). In this definition, evaluation is primarily about determining value and worth. Other definitions place more emphasis on use. The Innovation Network has articulated a use-focused definition of evaluation as “the systematic collection of information about a program that enables stakeholders to better understand the program, improve its effectiveness, and/or make decisions about future programming” (Innovation Network, 2005, p. 3). For a more in-depth discussion of evaluation use, consult Part Three of this book.

In recent discussions about multicultural evaluation, emphasis is placed on the inquiry process itself and those engaged in it. Because people have different worldviews and value systems, proper data gathering, synthesis, and interpretation requires more than applying a set of tools. To be relevant and valid, data collection, analysis, and dissemination strategies need to “take into account potential cultural and linguistic barriers; include a reexamination of established evaluation measures for cultural appropriateness; and/or incorporate creative strategies for ensuring culturally competent analysis and creative dissemination of findings to diverse audiences” (Inouye, Yu, & Adefuin, 2005, p. 6). Practicing culturally competent evaluation involves understanding how history, culture, and place shape ways of knowing and the ways in which knowledge gets used (see Chapters Four and Nine).

You may wonder how evaluation is different from research. The two activities are multifaceted and at times quite similar. Both research and evaluation depend on empirical inquiry methods and techniques, but they often differ in purpose. Evaluation traditionally focuses on determining the quality, effectiveness, or value of something, whereas research seeks to understand relationships among variables or to describe phenomena in order to develop knowledge that can be generalized and applied. The distinction between evaluation and research blurs when complex initiatives are evaluated. Often, research methods are used to gather data to assess whether an initiative’s strategies are convincingly linked to change, such as a rise in school performance. This information is then used to improve those strategies and to determine how resources will be allocated.

Sometimes calling an inquiry process an evaluation can suggest a methodology to prove or disprove success. The emphasis on proof instead of learning has negatively impacted some programs, initiatives, and communities. As a result, there are contexts in which evaluators may use alternative terms, such as documentation (see Chapter Fourteen) or appreciative inquiry (Preskill and Coghlan, 2004) to describe a process of systematic inquiry. By focusing evaluation on improvement instead of proof, evaluation becomes an asset in these communities rather than a liability.

Leadership Development Evaluation
Leadership development evaluation brings together leadership development and evaluation in a way that expands and deepens the dialogue regarding what constitutes effectiveness in both. Leadership development, when it uses evaluation effectively, accelerates desired changes by being intentional about what is being achieved and why. Evaluation can also be used to better understand and document the unintentional outcomes of leadership development. This knowledge can then be used to improve development programs. Evaluation that embraces leadership development pays more attention to how evaluation gets used, who defines success and for what purpose, and what methods are developed to measure and document changes that result from leadership actions. In this book, we hope to facilitate the transfer of learning about leadership development and evaluation across contexts and to improve practice in both areas.

Purposes of Evaluating Leadership Development
Leadership development is a particularly complex process; it is not something that is fully knowable in a short period of time. Leadership development programs seed changes in and connections among individuals, groups or teams, organizations, and communities, which continue to emerge over time. There are many reasons to evaluate leadership development.
• To demonstrate more fully how participants, their organizations, and their communities do or might benefit from their leadership development program experiences.
• To fine-tune a proposed or existing leadership development intervention so that it has farther reach and might better meet its goals.
• To show how participation in leadership development experiences connects to such visions as improving organizational performance or changing society for the better.
• To promote use of learning-centered reflection as a central evaluation activity.
• To pinpoint what leadership competencies are most appropriate in particular settings.
• To encourage more comprehensive discussion about what works and why.

LEADERSHIP DEVELOPMENT EVALUATION ROLES

Leadership development evaluators have many different and often overlapping roles, depending on the context in which they are evaluating and the purposes for which the evaluation is being done. Some of the roles include
Assessor. Evaluators assess the value and quality of a leadership development program or intervention in order to determine if it has achieved its desired outcomes without doing harm or provided a valuable return on investment.
Planner and designer. Evaluators assist stakeholders in using evaluation findings and processes to improve and sometimes to design a new program or intervention. They also engage designers to identify what outcomes are desired, what will demonstrate success, and what program elements will contribute to or cause these outcomes.
Trainer and capacity builder. Evaluators educate stakeholders so that they might design, implement, and use evaluation effectively. Often this is done by facilitating gatherings where stakeholders participate in the evaluation process and learn how to use evaluation tools.
Translator and boundary spanner. Evaluators cross boundaries to listen to and search for multiple perspectives and interpretations. As they move back and forth across boundaries, evaluators carry perspectives and findings with them and share those with the other groups in ways that those groups can hear and understand.
Advocate. Evaluators present evaluation findings in public forums that are intended to influence decisions about a program, policy direction, or allocation of resources. Evaluators can give voice to ideas, perspectives, and knowledge that normally go unheard or unknown because the groups that espouse them are ignored by groups with more resources and power. Evaluators advocate for taking the time and investing the resources to reflect, inquire, study, and assess programs and interventions because this process increases the likelihood of success and impact. In their role as advocates, evaluators may find that they are asked to modify or couch their findings in ways that will have positive results for a particular audience. Evaluators have an ethical obligation to do their best to maintain the integrity of the evaluation.
Reflective practitioner. Evaluators learn from their own thoughts, reactions, and experiences through a systematic process of interaction, inquiry and reflection (see Chapters Four and Seven).
LEADERSHIP DEVELOPMENT EVALUATION AND ASSUMPTIONS ABOUT CHANGE

It is a common assumption that change is a linear process that happens over time. The program logic model, or pathway model, is a linear diagram that specifies inputs and expected short-, intermediate-, and long-term impacts. These models help stakeholders delineate and agree on the changes they seek over time, how these changes are linked to each other, and how they are linked to the inputs. Evaluations typically test the logic between the parts of these kinds of models.

However, leadership development and its outcomes rarely follow a neat, linear progression, and sometimes profound change happens very quickly (see Chapter Three). Recent leadership development evaluations have therefore drawn on approaches based in systems theories (see Chapters Three and Eleven). These approaches seek to capture the complexity of the changes that are occurring and how leadership development contributes to or is linked with those changes.

WHERE TO LOOK FOR CHANGE

There are many different domains of impact where results from leadership development interventions can be measured or captured.

Individuals. This is by far the most common domain. Evaluators look for results among the individuals who participate in a program: changes in knowledge, skills, values, beliefs, identities, attitudes, behaviors, and capacities.

Groups and teams. In an organizational context, when specific groups or teams are the target of leadership development efforts, evaluators may look for changes in workgroup climate, collaboration, productivity, and so on.

Organizations. Leadership development programs may seek to influence business strategy, organizational sustainability, and the quantity and quality of products or services delivered. Evaluators look for changes in decision making, leadership pipelines, shared vision, alignment of activities and strategy, and key business indicators.

Communities. Leadership development programs may seek changes in geographic communities or communities of practice. Evaluators look for changes in the composition of leaders who are in decision-making positions, in social networks, in partnerships and alliances among organizations, in ways in which emerging leaders are identified and supported, and in the numbers and quality of opportunities for collective learning and reflection.

Fields. Leadership development programs may seek changes in language, paradigms, and how knowledge gets organized and disseminated. Evaluators look for changes in language, shifts in paradigm, the demographics of participants in a field, and the visibility of ideas within a field.

Networks. In a community or field context, when network building is a core focus of the leadership development effort, evaluators may look for changes in the diversity and composition of networks, levels of trust and connectedness, and their capacity for collective action.

Societies/social systems. Leadership development evaluators sometimes seek to measure or capture social or systems change. Because such change typically takes longer to occur, it may be difficult to see within the time frame of most evaluations. Evaluators look for changes in social norms, social networks, policies, the allocation of resources, and quality-of-life indicators.

CONCLUSION

Throughout this book, its authors delve into the concepts we introduce here and into other ideas. We hope that you will find this book useful and thought provoking, and that you will be interested in learning more and in contributing your knowledge and experiences to advancing the field of leadership development evaluation. For more information and resources related to topics in this handbook, visit the book’s companion web site.

Exhibit I.1. Typical Evaluation Activities.

Evaluations rarely follow a linear process, but the steps outlined below represent typical activities an evaluator facilitates during an evaluation.

• Identify stakeholders for the initiative and the evaluation.
• Articulate the initiative and evaluation purposes.
• Specify at what level impact is expected to occur (for example, individual or organizational).
• Specify the type and timing of impact (for example, a change in a specific behavior within six months after the program).
• Determine and prioritize critical evaluation questions.
• Identify or create measures or processes for gathering information about the initiative and its impact.
• Gather and analyze information.
• Share and interpret information from the evaluation.
• Use the information from the evaluation.

REFERENCES

Burns, J. M. Leadership. New York: Harper & Row, 1978.

Davidson, E. J. Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation. Thousand Oaks, CA: Sage Publications, 2005.

Fitzpatrick, J. L., Sanders, J. R., and Worthen, B. R. Program Evaluation: Alternative Approaches and Practical Guidelines (3rd ed.). Boston: Addison-Wesley, 2004.

Fournier, D. M. “Evaluation.” In S. Mathison (ed.), Encyclopedia of Evaluation. Thousand Oaks, CA: Sage Publications, 2005, pp. 139–140.

GrantCraft. “Leadership Development Programs: Investing in Individuals.” 2003.

Innovation Network. Evaluation Plan Workbook. 2005. http://www.innonet.org

Inouye, T., Cao Yu, H., and Adefuin, J. Commissioning Multicultural Evaluation: A Foundation Resource Guide. In partnership with Social Policy Research Associates. Oakland, CA: The California Endowment’s Diversity in Health Education Project, January 2005.

McCauley, C. D., and Van Velsor, E. (eds.). The Center for Creative Leadership Handbook of Leadership Development (2nd ed.). San Francisco: Jossey-Bass, 2004.

Preskill, H., and Coghlan, A. T. (eds.). Using Appreciative Inquiry in Evaluation. New Directions for Evaluation, no. 100. San Francisco: Jossey-Bass, 2004.

Preskill, H., and Torres, R. T. Evaluative Inquiry for Learning in Organizations. Thousand Oaks, CA: Sage Publications, 1999.

 
 

 
