
Generations: a multigenerational approach to evaluation

May 1 to 4, 2011
The Westin Edmonton, Edmonton, Alberta
Email: CES@buksa.com

 


Workshops

These interactive workshops offer participants hands-on opportunities to acquire knowledge and expertise in evaluation methods, concepts and tools. Half-day and full-day workshops are offered.

Full-day workshops - Sunday, May 1, 9:00 a.m. to 4:00 p.m.


1. Handling data: From logic model to final report

Gail Barrington, Barrington Research Group, Inc.

This full-day workshop will discuss ways of identifying, collecting, and analyzing program evaluation data that are feasible for the evaluator and meaningful to the client. Based on her more than twenty-five years of consulting experience, Gail will share some hard-won lessons about how to interact with stakeholders, ask the right questions, collect the right data and analyze and present findings in useful ways.

You will have the opportunity to work together and in small groups to tackle some common data collection and data handling problems, particularly in complex studies. Actual work samples will be provided. At the end of the workshop, you will take away some fresh ideas and a number of useful tools and techniques for application in your program evaluation context.

You will learn how to:

  • surface and deal with underlying issues that can hamper the implementation of an evaluation;
  • use evaluability assessment to develop an accepted logic model;
  • ask the right questions;
  • develop an overall data collection plan and simplify it for participants;
  • link data collection tools;
  • compile data;
  • map themes;
  • triangulate data;
  • prepare and use evidence tables;
  • report findings in comprehensive and effective ways; and
  • think about knowledge transfer after the report is finalized.

2. Evaluation project management 101: Intro to evaluation management practice

Nicole Michaud, Social Sciences and Humanities Research Council
Shannon Townsend, National Research Council

Against a backdrop of demanding technical requirements and a dynamic political environment within the current Canadian context, the goal and challenge of evaluation management is to develop, with available resources and time, valid and useful evaluations. This is certainly true in the evaluation enterprise where recent shifts in the policy and program environments have resulted in significantly more complex projects. This workshop will focus on building and increasing the evaluation management knowledge and skills of those who conduct evaluations, or those who have oversight responsibility, based on an adaptation of project management tools and methodologies in real-world evaluation settings.

Building on a presentation on this topic at the CES 2009 conference, this workshop is intended to increase the evaluation management knowledge and skills of those who conduct evaluations, as well as others who have oversight responsibility. As well, it will introduce participants to the principles of project management (PM) and project portfolio management (PPM) based on the Project Management Institute's Project Management Body of Knowledge (PMBOK), including the benefits of applying well-established and internationally accepted PM methodologies to the field of program evaluation.

Using a combination of short presentations, seminar discussions, and case examples in small-group exercises, participants will have the opportunity to enhance their skills in directly managing or overseeing each phase of an evaluation (i.e., planning, conducting, monitoring and closing). Practical suggestions will focus on how common problems can be avoided or resolved. Key cross-cutting topics will also be addressed, such as the interrelation of evaluation management phases. Other examples of evaluation management challenges and potential solutions using PM methods will be provided by the instructors, as well as drawn from participants' experiences.

This interactive workshop will provide participants with a basic understanding of:

  • the principles of project management (PM) and project portfolio management (PPM), based on the PMBOK;
  • the potential benefits that can be derived by applying PM principles; and
  • ways in which the adaptation of PM methodologies can inform, guide and be used in evaluation practice.

Half-day workshops - Sunday, May 1, 9:00 a.m. to noon


3. Practical approaches to managing ethical risk in evaluation projects

Linda Barrett-Smith, Alberta Innovates - Health Solutions
Birgitta Larsson, BIM Larsson & Associates

Participants
This is a half-day, intermediate-level workshop for experienced evaluators who wish to enhance their ability to identify and manage ethical issues in their projects. It presents a practical approach to enhancing essential skills for ethical evaluation practice and introduces two tools to help evaluators integrate ethical concepts into their projects. The content is applicable to any locale in which a project is carried out, whether in Canada or in another country. For the online portion of the workshop, participants will require a laptop (which can be shared with a colleague). A maximum of 25 participants can be accepted.

The Issues
Evaluation projects have become increasingly important in health and human service delivery to demonstrate public benefit. Growing in both number and complexity, these projects often deal with at-risk or otherwise vulnerable populations in intrusive ways. Often, participants are not protected through ethical scrutiny, nor are there guidelines or processes in place to assist in addressing ethical risks to participants. Complicating the issue is the common assumption that simply following professional codes of conduct is enough to protect people, or that ethics review does not need to be considered in projects that do not involve research.

To address these gaps, the Alberta Research Ethics Community Consensus Initiative (ARECCI), with the assistance of the CES Alberta Chapter, developed a practical, "on the ground" course that prepares evaluation and quality improvement leaders to consistently manage ethical risks in projects that involve people or their information.

The Project Ethics Workshop Description
This interactive workshop introduces the Project Ethics Course Level 1, titled "How to Integrate an Ethical Approach in Quality Improvement and Evaluation Projects." Using a "hands-on" approach to learning, the workshop will introduce project ethics concepts and provide opportunities to apply online ethics tools to identify and manage risk in evaluation projects.

Some of the concepts addressed in the workshop are:

  • Evaluation projects do raise ethical concerns.
  • Evaluation projects would benefit from an assessment that identifies ethical issues and provides appropriate oversight to ensure respect for people and their information.
  • Evaluation projects are embedded as part of ongoing management strategies to improve quality of service. Thus, accountability for their ethics oversight and management of ethical risk should rest with human service providers or their organizations.

The workshop features two tested ethics decision-support tools:

  1. The ARECCI Guidelines for Quality Improvement and Evaluation Projects include six ethical considerations that can be applied to assist integration of ethics when planning or reviewing a project.
  2. The ARECCI Ethics Screening Tool helps determine the primary purpose of the project (research versus non-research), the level of risk to participants, and the level of ethics review required (if any).

Workshop Learning Objectives

  1. Accept that ethics screening and ethics review (when appropriate) should be performed to protect and respect people.
  2. Apply ethics screening and review principles to evaluation projects.
  3. Complete a preliminary ethics assessment (which includes assessing the level of risk) of an evaluation project using the ARECCI ethics decision-support tools.

4. A wide-angle view of program success: When are performance indicators meaningful, what are their limitations, and how can we create alternative approaches?

Tammy Horne, WellQuest Consulting Ltd.

Reporting progress on indicators has become central to evaluators' work over the past 20 years, as funders' emphasis on performance measurement for accountability has grown. However, simply monitoring a program's performance indicators can lead to describing change without exploring the hows, whys, and implications of change.

Over-emphasis on, or misuse of, pre-determined performance indicators has been questioned by prominent evaluation theorists such as Robert Stake and Michael Patton, in their writings on responsive evaluation and developmental evaluation, respectively. Canadians Burt Perrin and Sarah Bowen have shared concerns around use and misuse of indicators in practice. Critiques include: (a) indicators may be poorly developed, (b) indicators may be chosen based on convenience rather than on evidence and relevance to evaluation questions, and (c) overemphasis on a few pre-determined indicators misses program context and complexity, stakeholder perceptions/experiences, and issues that emerge during evaluation.

Workshop learning objectives: Participants will learn to (1) distinguish between well-developed and poorly developed indicators, (2) write well-constructed indicators (both quantitative and qualitative), (3) distinguish between performance measurement/monitoring and program evaluation, (4) determine when indicators are most likely to provide useful data for decision-making, or not, and (5) develop alternative approaches that could complement or replace pre-determined indicators, depending on an evaluation's focus.

This workshop will be extensively hands-on, as follows:

  1. Participants will critique examples of indicators
  2. Participants will work in small groups to develop indicators relevant to important evaluation questions (process or outcome-related) from their own programs
  3. Participants will first discuss, in pairs, different stakeholder views on the importance of focusing on indicators, and complementary or alternative approaches. The 'report-back' to the larger group will be an interactive, fun, solution-seeking process. One pair will begin to role play stakeholders with differing views on indicators. 'Audience' members can jump into the 'scene' as they wish, and take a role to move the scene toward a resolution that balances focusing on indicators with other approaches to evaluating a program's success/progress. This exercise is an informal adaptation of 'popular theatre' -- a way of learning sometimes used in adult education (as well as in participatory research and community/social change). People can participate according to their comfort level; no acting experience is needed.
  4. The workshop will conclude with reflections on what participants have learned from each other through exercises 1-3, including 'take-away' messages they can use in their work.

By sharing experiences and perspectives with their peers, participants can return to their work with skills/ideas for (a) improving performance indicators when they are required/useful, and (b) developing complementary or alternative approaches. They will be able to contribute to evaluation capacity building among others with whom they work by further sharing their workshop learning.

Improving indicator development skills may help participants better align their work with the Program Evaluation Standards, particularly A3-A7 on monitoring and data gathering. Moving beyond over-emphasizing indicators may further enhance alignment with the Standards (e.g., U3, U7, P1, A2).

Though the workshop will be paperless, I will e-mail presentation materials to participants following the workshop, for future reference.

5. Waawiyeyaa (Circular) evaluation tool certification

Andrea L. K. Johnston, Johnston Research Inc.

Learning Objective 1: To demonstrate the Waawiyeyaa (Circular) Evaluation Tool video and manual in order to educate participants on the options beyond the logic model for evaluation.

Learning Objective 2: To demonstrate the power of oral and visual education via video and, by example, to showcase how this tool can not only serve to evaluate participant outcomes, but also further participants' healing through the sharing of stories.

Learning Objective 3: To certify workshop participants in the use and implementation of the tool. Participants leave with a DVD and a Facilitator's Manual.

Developed by Johnston Research Inc., this holistic evaluation tool, grounded in Anishnawbe traditional knowledge, was created for program providers. It is a self-evaluation tool that allows programs to document both meaningful processes and outcomes over time. It is also a learning tool that promotes growth and self-development among program participants. By applying the tool at various program milestones, a full picture of participants' personal journeys can be documented in a systematic manner. The traditional knowledge tool provides a framework to which program participants can easily relate. Participants like the tool because the storytelling is driven by them, through their own eyes and at their own pace. We will review the manual, see the 20-minute video, complete the paper and crayon exercise, and incorporate our stories into an evaluation report. You will take home your story, as well as certification in the use of the Tool. The DVD and Manual are for sale; however, they are free to interested First Nations.

Methods:
PowerPoint presentation; full training on and knowledge of the Training Book; group discussion; sharing of learnings and inspiring insights.

Johnston Research Inc. is a forerunner in utilizing new technology and media to develop culture-based evaluation tools that can properly assess and improve culture-based social programming. Our latest releases include the Waawiyeyaa Evaluation Tool, which uses a video to show program providers and participants how they can use the tool to document their stories, and an electronic survey tool that uses an easy question builder to launch a survey whose results are instantly integrated into a database over the Internet. Other methods can be viewed on our website.

6. Evaluating environmental, resource and conservation programs

Andy Rowe, ARCeconomics Inc.

Demand for evaluations of resource, conservation and environmental interventions is growing rapidly. However, evaluation thinking and the capacity to work in these settings are still nascent. Currently, the majority of these evaluations are undertaken by domain specialists from the natural and physical sciences with no evaluation training or experience.

The distinguishing characteristic of evaluation in environmental settings is an evaluand that involves two systems, human and natural. A two-system evaluand stretches contemporary evaluation approaches and requires evaluators to modify their approaches and methods. Thus, in providing a frame and approaches for evaluating environmental interventions, the workshop also addresses important issues in systems thinking, as well as evaluation in complex settings where developmental evaluation has an important role.

This workshop introduces the key concepts and techniques for evaluating environmental, conservation and resource interventions. Participants will also be introduced to approaches addressing the unique methodological challenges and the difficulty of achieving use in science settings. The workshop will include small-group and interactive sessions interspersed with tutorials based on the presenter's extensive experience in this area, and is most appropriate for those with basic evaluation knowledge.

7. Conducting complex evaluations (French session, maximum 20 participants)

Simon Roy, Goss Gilroy Inc.

An evaluation can prove complex for several reasons. The complexity may stem from the type of program under study, the program's structure, its governance, or the sources of information. Many unfortunate consequences can follow, such as additional delays or inappropriate results, conclusions or recommendations. In this workshop, Simon Roy will present strategies and advice that can be used for four types of complex evaluations: horizontal or multi-program evaluations; evaluations of complex programs or subjects; evaluations with tight timelines; and evaluations of sensitive topics. Participants will learn strategies for overcoming the obstacles associated with these difficulties, particularly those related to evaluation governance, the use of experts, stakeholder participation, methodology, and the analysis and presentation of results. The workshop will be presented in an interactive format that will also allow participants to learn from one another.

8. Writing for action

Rochelle Zorzi, Cathexis Consulting Inc.
Cameron Hauseman, Cathexis Consulting Inc.

Do you consider yourself to be a good writer, but still wish your reports would have more of an impact than they do? Then join us! This hands-on workshop provides a set of tools that you can use to make every report a springboard for action.

This will be a hands-on workshop, including large-group discussion, paired activities, and individual activities. Participants will consider a report or other communication that they have written or are about to write. With guidance from the facilitators, they will:

  • Identify their communication goals,
  • Analyse a target audience for the report, and
  • Select strategies that will help them convey their messages.

This workshop addresses the following Competencies for Canadian Evaluators:

  • Competency 3.5: Serves the information needs of intended users
  • Competency 5.1: Uses written communication skills and technologies

Half-day workshops - Sunday, May 1, 1:00 p.m. to 4:00 p.m.


9. Developmental evaluation: The experience and reflections of early adopters

Mark Cabaj, Tamarack Institute

This half-day session is a presentation and discussion style workshop aimed at people interested in learning more about their colleagues' experience in using developmental evaluation, an approach designed for situations of social innovation, high complexity, program replication, crisis situations and ongoing/radical program design. Michael Quinn Patton, a well-known evaluation expert and the person credited with "developing" the approach, first mentioned the concept in 1994 and released his comprehensive book on developmental evaluation in July 2010. This workshop will explore the results of a six-month investigation into the experience of 18 "early adopters" of developmental evaluation, the first known research of its kind on this topic. The findings are organized into the following areas: distinguishing developmental evaluation from formative and summative evaluation; the 'niches' in which developmental evaluation is employed; the conditions for effective developmental evaluations; coping with 'imperfect' conditions for developmental evaluations; planning and budgeting for emergent interventions; methodological issues in developmental evaluations; accountability in developmental evaluation; expertise and competencies of evaluators in developmental evaluation; and implications and recommendations for evaluation theory and practice. It is recommended that participants have a basic understanding of developmental evaluation.

10. Evaluating development results in peace-precarious situations

Catherine Elkins, RTI International and Duke University

Peace-precarious situations (Elkins 2006; 2008) are characterized by recent, ongoing, or chronic violent power disputes, yet increasingly also host major international development interventions. Contemporary projects and programs are undertaken where wars have barely ended or have yet truly to conclude, and where other security or crime concerns dominate community development priorities. Intervening amidst intermittent or ongoing friction that unpredictably yields further sporadic violence, with core institutions not obliterated but significantly destabilized, complicates all of our implementation and learning challenges.

Our theory and expectations for programs expected to work in peace-precarious situations require critical examination, from design assumptions to empirical results. While undertaking development in these unusually stressed circumstances generates vast amounts of new and needed information, much of it tends to dissipate unused. Political sensitivities or humanitarian urgency can inhibit complete and accurate documentation; turnover further undermines accountability; and goals, scope, and resource levels can shift dramatically and repeatedly. In a volatile context, the theories of change effectively in play multiply throughout program implementation. Successfully navigating peace-precarious environments to produce measurable development results requires innovative strategies, analysis, and tools.

This half-day workshop consists of participatory presentation including individual and group exercises on case studies. The workshop uses a theoretical framework with field examples to explore the state of the art in development and M&E/evaluation related to peace-precarious situations, critiquing methods and approaches within a framework for participants to assess strengths, weaknesses, and persisting gaps in the field.

Learning Objectives: Participants will increase their professional knowledge of:

  1. Similarities and differences between peace-precarious and other contingent settings
  2. Critical dimensions for strategic consideration when designing development and M&E/evaluation systems for projects in peace-precarious situations
  3. Tools and strategies for producing useful evaluations and actionable findings in peace-precarious situations

Presentations, discussion, and cases cover M&E/evaluation in this complex landscape. We will examine results-relevant characteristics of these settings and their development projects from an evaluation perspective, exploring analytical and M&E/evaluation strategies to deliver sound and useful findings. Evaluation and M&E practitioners interested in development, especially in conflict/post-conflict/fragile or analogous settings, will participate in small-group exercises using case studies, and contribute to collaborative learning.

11. Designing and advancing evaluation quality

Cheryl Poth, University of Alberta
Michelle Searle, Queen's University
Lyn Shulha, Queen's University

The Program Evaluation Standards, Third Edition (2010), developed by the Joint Committee on Standards for Educational Evaluation (JCSEE), has been approved by the American National Standards Institute and adopted by the Canadian Evaluation Society (CES). Six years in development, The Standards document is a product of needs assessments, reviews of the scholarship and practice literature from the last four decades, and suggestions from numerous American, Canadian and international meetings on what the third edition should accomplish. As drafts were prepared, more than 75 national and international reviewers provided comments and suggestions in an open review process. More than 50 practitioners as well as program users tried out these standards and provided comments prior to final editing and approval.

The Program Evaluation Standards, Third Edition, is an integrated guide for evaluating programs designed to foster human learning and performance. These standards apply to a wide variety of settings, such as schools, communities, and governmental, health-care, private and non-profit organizations. In the 16 years since the publication of the second edition, much has been learned about

  • whether and when to evaluate,
  • how to select evaluators and other experts,
  • the impact of cultures, contexts, and politics on evaluation decision making,
  • communication and stakeholder engagement,
  • technical issues in planning, designing, and managing evaluations,
  • uses and misuses of evaluations,
  • evaluation quality, improvement, and accountability.

A feature of the third edition is the intention to have the standards also serve the needs of evaluation users.

This half-day workshop will introduce participants to the five dimensions and 30 standards that, as a set, define quality evaluation. The session will begin by applying one or more dimensions of evaluation quality to a practical problem in evaluation. Participants will situate themselves within a team of evaluators and program administrators negotiating 'next steps' in an evaluation. This experience will form the foundation for examining how standards can work both independently and interactively to guide evaluation decision-making. The workshop will also deal explicitly with metaevaluation and its role in evaluation quality improvement and accountability. Attendees will also have the opportunity to report on their own evaluation dilemmas and explore, in small and large groups, how the application of program evaluation standards may serve to increase and balance dimensions of evaluation quality, such as utility, feasibility, propriety, accuracy, and evaluation accountability.

Attendees will receive handouts to support reflective practice in their future evaluations and evaluation-related work.

Summary

Intended Outcomes

  1. Participants will know the dimensions of quality and the specific standards to be considered in designing and conducting an evaluation (knowledge)
  2. Participants will understand the complexity inherent in using standards as a decision making tool (knowledge)
  3. Participants will participate in an evaluation decision making exercise (experience)
  4. Participants will be able to select and apply standards in ways that can contribute to the overall quality of the evaluation (skill).
  5. Participants will appreciate the potential contribution of the standards to decision making from both an evaluator and an evaluation user perspective (attitude)

12. Social return on investment: An emerging tool for the evaluator's toolkit

Stephanie Robertson, SiMPACT Strategy Group

Learning Goals & Objectives:
To introduce the concept of social return on investment (SROI) and to share with participants how SROI builds upon outcome-based evaluation frameworks to further emphasize the value of outcomes achieved.

In addition to methodology-specific skill development, participants will learn about the development of The SROI Network (an international body) and the SROI Canada Network, opportunities to become accredited as an SROI practitioner, and how to access and share research on financial proxies.

Session Overview:
Social Return on Investment (SROI) is an innovative and increasingly sought-after approach to valuing the impact of a project, program, organization or policy. An SROI analysis includes the value of the most important impacts upon stakeholders and expresses, in financial terms, value that typically has no known market value. SROI is a tool for investors, evaluators and project managers. While increasingly known as an informative evaluation and forecasting/planning tool for organizations seeking to optimize the value of their activities, SROI also improves communication with key stakeholders.

The SROI methodology is a principles-based approach that values change for people and the environment, changes that are at risk of being undervalued, misunderstood or ignored altogether because their value is not universally understood. As a result, an SROI analysis assigns financial proxies to social and environmental change wherever possible. Most importantly, the monetized value presented is then combined with non-monetizable indicators of change, so that a complete expression of value creation is communicated.
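
As a minimal illustration of the kind of result such an analysis produces (the figures below are hypothetical and not drawn from the workshop materials): if a program requires an investment of $200,000 and its social outcomes, once assigned financial proxies, are valued at $500,000, the resulting ratio would be

\[
\text{SROI ratio} = \frac{\text{present value of monetized outcomes}}{\text{value of investment}} = \frac{\$500{,}000}{\$200{,}000} = 2.5:1
\]

In practice, as noted above, this monetized ratio is reported alongside the non-monetizable indicators of change rather than on its own.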

This session will draw upon three years of case development in Calgary and a recent province-wide exercise to train 107 people, representing 55 Alberta-based organizations and their evaluators, on how to use the SROI framework to forecast value created and thereby map out their value-creation evaluation strategy. It will illustrate how an effective approach to SROI analysis presents the monetizable and non-monetizable sources of value together, so that the SROI result tells the entire value-creation story.

How the workshop will engage participants:
This workshop will provide opportunities for participants to develop aspects of their own SROI. They will also think through financial proxies and apply them to their own case examples.

13. Program evaluation and organizational development: An integrative framework

Wendy Rowe, Royal Roads University

Integrating evaluation methods and OD processes (E-OD) into an organizational or program change framework leverages the power of these two disciplines to help organizations achieve outcomes and sustain continuous learning and development. This session will describe the stages of the E-OD change model, beginning with mobilizing commitment to a desired change, using visioning, planning and needs assessment processes, and finally using evaluation results within the organization and for all stakeholder groups to direct and sustain continued change and development.

Using small discussion groups, participants will be engaged in applying these evaluation concepts to a change initiative in their organization or an organization they are working with. Participants will be introduced to group process techniques, such as the world café and the appreciative inquiry process, techniques that generate evaluation data while also creating the conditions for change to occur.

By the end of the workshop, participants will have acquired greater understanding of the role that evaluation can play in facilitating organizational development and change, as well as developed some familiarity with methods and tools for carrying out this kind of integrative evaluation and OD function.

14. Conducting complex evaluations (English session, maximum 20 participants)

Simon Roy, Goss Gilroy Inc.

Evaluations can be highly complex for a number of reasons. They can be complex because of the subject matter (or program type), the program structure, the evaluation governance structure and/or the sources of information. This complexity can lead to a number of undesired outcomes, including delayed timelines and inappropriate findings, conclusions or recommendations. In this workshop, Dr. Roy will present key strategies and tips that can be used for four types of challenging evaluations: horizontal and multi-program evaluations, evaluations of highly complex subject matters/programs, evaluations with tight timelines, and evaluations of sensitive programs. Participants will learn about practical strategies and tools to help them deal with these complexities. Strategies will relate to the governance of evaluations, the use of experts, stakeholder participation, methodology, and the analysis and presentation of results. The workshop will be presented in an interactive format that will allow participants to learn from the experiences of other participants.

Proposal Submissions

Workshops

The deadline for submission of workshop proposals is October 25, 2010. Please note that no proposals will be accepted after this date.

Presentations

The deadline for submission of presentation abstracts is November 15, 2010. Please note that no abstracts will be accepted after this date.

 

 

Generations: a multigenerational approach to evaluation
c/o BUKSA Strategic Conference Services
Suite 307, 10328 - 81 Avenue NW, Edmonton, AB T6E 1X2
Telephone: (780) 436-0983 x234 Fax: (780) 437-5984 Email: CES@buksa.com