Theme 3 Promote Equity and Fairness

3.1 Theme Overview

Description: Assessment and evaluation are inextricably linked to equity and fairness. Often, assessment and evaluation are used to highlight or uncover issues of inequity. We asked panelists to consider the inverse: how might advances in assessment and evaluation come to promote equity and fairness?

The following questions were intended to inspire and generate ideas. The speakers did not need to address the questions directly.

  • What can assessment and evaluation do to promote equity and fairness?

  • What can assessment and evaluation scholars do to promote equity and fairness?

  • How do equity, diversity, and inclusion influence the ways in which you approach assessment and evaluation?

  • How can we work with local partnerships as assessment and evaluation experts to promote equity and fairness?

3.2 Understanding a Trajectory of Equity in Evaluation to Imagine Action to Advance Equity

Author: Michelle Searle, PhD, CE, OCT

Institution: Queen’s University

Recommended Citation:

Searle, M. (2022, January 27-28). Understanding a Trajectory of Equity in Evaluation to Imagine Action to Advance Equity [Paper presentation]. Advancing Assessment and Evaluation Virtual Conference: Queen’s University Assessment and Evaluation Group (AEG) and Educational Testing Service (ETS), Kingston, Ontario, Canada.

Introduction and Positioning of the Author

Advancing assessment and evaluation to promote equity and fairness is an important aspect of evaluators’ theoretical and applied practice (Yarbrough, 2011). As an academic, researcher, and practitioner interested in the theory and practice of promoting the usefulness of evaluation, my values and the work that I do position me at the crux of embracing and embedding equity and fairness into both my scholarly and professional contributions to the field. Earlier in my career I lived internationally and worked as an educator. It was at this point that my perspective shifted, and I started thinking about equity and fairness as key elements in professional relationships and evaluative collaborations. Hall’s (1976) iceberg model of culture, which recognizes the visible surface of culture and how much of culture lies below, is a good starting point for newcomers to this way of seeing the world. In thinking about the cultural iceberg, I hoped that by better understanding learners while engaging in integrative teaching, assessment, and evaluation, I could create a welcoming environment that prioritized taking risks and being courageous in our classroom community (Blackburn & Niedzwiedz, 1980). In carrying these hopes and refining my ideas over the years, I have become more aware that the links between culture and equity are rooted in systemic practices worthy of exploration (Zhou & Fink, 2003). I believe evaluation is one way practitioners and researchers can deepen their learning, increase their self-awareness, and hone their sensitivity.

To emphasize the value of the voices and experiences of others in my evaluation practice, research, and teaching, I have prioritized collaborative (Roy & Searle, 2020; Searle & Poth, 2021; Searle et al., 2020; Shulha et al., 2019) and methodologically diverse evaluations (Searle, in press), which regularly lead me to the inclusion of arts-informed inquiry (Cooper et al., in review; Searle, 2020; Searle & Fels, 2018; Searle et al., 2016). In recent years, I have discovered that my efforts are insufficient. I am learning that we promote equity and fairness by questioning, surfacing assumptions, and nourishing explicit communication. Including voices is different from intentionally centering the voices and experiences of others. Difficult conversations cannot be avoided (Katz & Dack, 2012). In fact, few, if any, aspects of evaluation are neutral, meaning that we cannot proceed as if they are (AEA, 2011). Thankfully, evaluation emphasizes the asking of questions, such as: Who is being served by this program? Who is experiencing barriers to access? What stories do people tell about the program? What biases do I hold as a listener? What do we need to learn/unlearn? What data contributes to decision-making? (Alkin, 2013). In asking these questions, evaluators, including myself, continue to unsettle evaluative assumptions in the pursuit of equity.

Many people are ready to commit to the idea of equity in evaluation, yet some may wonder why we need to think about equity and fairness in evaluation now. Three reasons answer why now:

  1. Barriers to education and social programs affect groups based on gender, race, poverty, and many other factors (Hansman et al., 1999). Identifying and removing barriers supports equity (Schuelka et al., 2020). Equity is connected to cultural competence, making an understanding of culture necessary for understanding programs, because programs and “evaluations cannot be culture free” (AEA, 2011). The field of evaluation considers cultural competence a requirement of a practicing evaluator (AEA, 2011; CES, 2018). By emphasizing cultural competence in the professionalization of the field, steps are being taken to overcome barriers.

  2. In an equitable society all can prosper; the quality of education and access to social programs directly correlate with the development of children, youth, and adults (South et al., 2020). Evaluations help us understand and determine the quality of programming and make recommendations for improvement (Donaldson & Picciotto, 2016). To promote an equitable society, we require equity-focused evaluations (Bamberger et al., 2016) and a willingness among evaluators to learn, change, and act.

  3. Finally, if not now, when? We are in the Decade of Evaluation for Action (Eval4Action, 2021). Evaluative thinking is in great demand globally from governments, non-profits, and the private sector to determine the effectiveness of, and contribute to ongoing learning related to, programs, policies, and practices. Now is the time to prioritize learning about and conducting equitable evaluations.

When considering why the field of evaluation needs to focus on equity and fairness, the following section is framed by four questions that reveal the trajectory of evaluation:

  1. How is the field of evaluation advancing with regard for equity and fairness?

  2. In what ways can/does the practice of evaluation contribute to resolving issues of equity?

  3. How do we strengthen the capacities of governments, organizations, and communities to evaluate with equity and fairness in mind?

  4. What are considerations for moving forward with equity in evaluation?

How is the field of evaluation advancing with regard for equity and fairness?

Shirin Ebadi wrote a story about rising female leadership that featured the quote, “It is not just about hope and ideas. It’s about action.” Evaluators often embark on their journey in the field of evaluation with hopes and some ideas about equity and fairness (Robinson, 2011). But action is the strength of evaluation. Program evaluation and research share many similarities, often summarized by Patton’s (1997) adage, “Research seeks to prove; evaluation seeks to improve.” Beyond this adage, evaluation is a human and contextual endeavour that acknowledges and incorporates differences in values and perspectives, focuses on questions stakeholders care about, and aims to produce results for varied audiences, including funders, program champions, participants, and the broader community or public (Alkin & King, 2016; Ottoson & Hawe, 2009; Rey et al., 2014). Evaluation, then, is about action that can influence positive outcomes in society (Gamble, 2008). The pandemic has brought racial and social unrest to the forefront, challenged ideas about capitalism, and awakened many to the need to act for a more equitable and fair society; a kind of social solidarity (Mishra & Rath, 2020; Ray & Rojas, 2020). Equity is related to the concept of being fair and impartial, promoting diversity and inclusion so that all people can participate and prosper (Bolino et al., 2008). Thinking about equity and evaluation is an invitation to consider what kinds of programs we have in education and society, and to examine the explicit ways these programs are pro-equity, prioritizing improved conditions for groups with the greatest needs. Equity in evaluation includes a focus on the context in which the program operates and on the power and people involved in programs (Donnelly, 2010; Bamberger & Segone, 2011).

Promoting and advancing equity and fairness in program evaluation poses both challenges and opportunities (Patton, 2019). One challenge is understanding what evaluation is and whether there are specific evaluation theories, approaches, and methods better suited to advancing equity and fairness (Greene, 2005). Not surprisingly, program evaluation can be defined in many ways (Alkin & King, 2016). Early definitions of evaluation focused on the “examination of worth, merit, or significance of an object” (Scriven, 1998). This understanding of evaluation has expanded over time to include evaluations focused on establishing readiness, developing understanding, examining implementation, measuring effectiveness, and examining intended and unintended outcomes of programs (Mark, 2011). All evaluations focus on “the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgements about the program, improve program effectiveness, and/or inform decisions about future program development” (Patton, 1997). Opportunity in evaluation lies in the expansions the field has undergone and its continued evolution (Hogan, 2007; Shulha & Cousins, 1997). The field of evaluation is known for its flexibility; in fact, we might consider evaluation as aligned with a growth-oriented mindset, because it regularly incorporates new approaches and methods for examining programs and contributing to transformation (Mertens, 2007). Many evaluators embrace their roles as capacity builders, innovators, and change-makers who work alongside social champions to address complex challenges and use evidence for decision-making (Mertens, 2008).

In what ways can/does the practice of evaluation contribute to resolving issues of equity?

Evaluation is a relatively young field that over recent decades has seen a growing emphasis on the role of stakeholders in evaluation (Hart et al., 2009). Greene (2005) defines stakeholders as people who have a stake or vested interest in the program, policy, or product being evaluated, and who therefore also have a stake in the evaluation. Stakeholder engagement was initiated to increase the utilization of evaluation (Greene, 1988). Ideas about stakeholder involvement in evaluation are captured in many evaluation approaches, including participatory (Cousins & Earl, 1992), collaborative (Cousins & Shulha, 2008; Shulha et al., 2016), democratic (House, 2000; Ryan, 2004), utilization-focused (Patton, 2008), empowerment (Fetterman, 2005), responsive (Stake, 2003), and transformative (Mertens, 2008) evaluations.

Stakeholder-engaged thinking and approaches to how, when, and whom to include in evaluation have evolved over time (Herremans et al., 2016). There is evidence that stakeholder approaches share advantages related to “understanding, involvement, ownership, access, development, implementation, and improvement” (Rodríguez-Campos, 2012). When thinking about equity, one important concept to remember is that there are many processes for engaging stakeholders; some might even consider them a continuum of stakeholder engagement (Donnelly et al., 2016; Preskill & Torres, 2000). An enduring message from the rise of stakeholder approaches is that stakeholders and communities are the experts in understanding and proposing solutions, and they are therefore critical to promoting equity and fairness in evaluation (Brandon & Fukunaga, 2014). Innovation in programs and evaluation is advanced through collaboration that enables co-learning (Matsumoto, 2018; Nsangi et al., 2020). Stakeholder-engaged approaches require time, commitment, and skill to work together across differences and to envision unimagined possibilities (Svensson & Cousins, 2015). Embracing equity and fairness through stakeholder-involved evaluation and research on evaluation is about working together as catalysts for change (Greene, 2005; Patton, 2019). Stakeholder-involved approaches are methodologically agnostic, meaning that they use the methods best suited to answering the questions stakeholders care about (Better Evaluation, 2021). This may include going beyond conventional quantitative or qualitative data collection to gather rich and robust data that can provide insight into complex social processes, attitudes, and experiences, both from those who were involved in the program and from those who were not, or who may be hard to reach (Fetterman, 2017).

Innovative evaluation methodology that may support greater equity and fairness through social change includes arts-informed inquiry using collage, poetic techniques, image-elicitation, and storytelling (Searle & Shulha, 2016; Searle, 2020). Artful forms have a history of contributing to social change; just think of the work of Paulo Freire (1968), legendary poets such as Maya Angelou or Shane Koyczan, and so many other artists who compel people to think, feel, and see differently. There are also evaluators using the arts and researching the arts: recent CES members Jennica and Maya from ANDimplementation and past contributions from Eisner (1979), Goodyear (2007), Simons and McCormack (2007), and MacNeil (2000), to name just a few. Artful approaches can contribute valuable perspectives that use multiple forms to create empathy and engage others in an experience (Barone & Eisner, 2012). Providing access to these embodied experiences may evoke new understandings leading to improvement and change. The arts have the capacity to represent visibly and viscerally; they can uplift or challenge us, offering pleasure, inspiration, and provocation (Searle & Fels, 2018). The arts are already being used in evaluation; considering equity and fairness in evaluation is not about new techniques, it is about refocusing and refining the skills many evaluators already possess while enhancing the capacity of others.

How do we strengthen the capacities of governments, organizations, and communities to evaluate with equity and fairness in mind?

If we are to promote equity and fairness in our evaluations given the current demand for evaluators, then we must prioritize examining the practices used to teach evaluation, the partnerships formed to support evaluation, and the collaborations that nourish evaluation mentorship (Christie et al., 2014; El Hassar et al., 2021; Poth & Searle, 2021; Searle & Poth, 2021). Higher education lacks sufficient evidence-based evaluation learning opportunities, particularly those centering experiential and community partnerships (Trevisan, 2002). We can strengthen both capacity and equity and fairness by developing interdisciplinary, cross-institutional, intersectoral, and community-engaged projects (e.g., Armitage & Levac, 2015).

One way to actively strengthen the capacity of evaluation education and the reach of evaluation courses is to engage in teaching collaboratives. I am a partner in a teaching collaborative, now entering its second year, with Dr. Gokiert at the University of Alberta. The catalyst for this partnership between us as scholars and evaluators, as well as between our academic institutions, was developing an intensive experiential evaluation course. This course was established by Mignone et al. (2018) and later adapted as UEval by Gokiert and colleagues (Gokiert et al., 2021). The inaugural joint version of the course, UEval/QEval, was offered in Spring 2021. This course leverages our skills, perspectives, and networks by addressing and investigating evaluation learning while building student and participant evaluation capacity through experiential and community-engaged research (Davis & MacKay, 2014; Gokiert et al., 2021; Preskill & Boyle, 2008; Reed, 2015). In this course, research and teaching are used to link students, staff, and scholars to issues of policy and practice across sectors by partnering with community organizations with a social imperative. The equity-seeking aims of these community partner groups might include a focus on sexual health programming, community–police relationships, serving newcomers, offering programs for students at risk, or addressing food and housing insecurities. Recently, the idea of launching this program at Queen’s was recognized and received funding as an education-leadership opportunity (https://www.queensu.ca/gazette/stories/announcing-first-education-leaders-residence). By continuing the first interdisciplinary co-learning partnership with students and communities across institutions and examining the efficacy of this model, the Assessment and Evaluation Group at Queen’s University contributes to “reimagining our relationship” (Deane, 2020, p. 5) and places equity as a central goal.

What are considerations for moving forward with equity in evaluation?

The pathway to a more just society is strengthened when the capacity for evaluation and research on evaluation is furthered through collaboration, embracing diverse knowledge, and working reciprocally with communities (Gokiert et al., 2017). Positioning evaluation and research on evaluation as promoting a more just society involves harnessing processes and tools to better understand impact-driven practices for advancing understanding and community change (Reed et al., 2015).

It’s inspiring to be an evaluation scholar right now, with access to strong mentors, fantastic collaborators, and dedicated students (Gullickson et al., 2019). As poet Frost (1922) proclaimed, there are “promises to keep, and miles to go before I sleep.” This quote might prompt evaluators to pause and recognize that we have a lot to be grateful for, especially as we are on this road together, interested in understanding and committing to action for advancing equity in evaluation. The list below helps translate some of the ideas from this paper into actions for evaluation researchers and practitioners alike:

  1. Examine intentionality of self, in others as well as in programs, policies, and practices.

  2. Engagement matters. Prioritize developing collaborations and nourish inclusion.

  3. Promote interdisciplinarity. Come prepared to learn/unlearn as each other’s teachers.

  4. Embrace complexity with a creative mindset that enables you to stay engaged through difficulties.

  5. Imagine and act through a lens of methodological humility and flexibility.

This list consolidates some of my ideas and hopes for how evaluators can begin to take action toward advancing equity and fairness in our work. Remembering the Ebadi quote from earlier that urged us to action, I must clarify: my action, in evaluation and in research on evaluation, is through learning and promoting the learning of others. Not surprisingly, my learning is a work in progress, as I continue thinking about theories and practices that support equity and about practical suggestions or tools that I can adopt. For now, I offer Table 1, a not-so-exhaustive draft of my evaluation commitments and practical actions. As you review my commitments and actions, I ask you to consider: what are your own commitments and considerations for advancing equity?

Commitments and Considerations for Advancing Equity and Fairness in Evaluation

Commitments as an evaluator and researcher of evaluation who is advancing equity:

  • Continue to be knowledgeable about emerging evaluation theory, approaches, and methods

  • Stay informed about program evaluation standards, guiding principles and competencies

  • Engage often in reflexive practice to recognize power, privilege, and bias (e.g., build reflexivity into planning, establish prompts, use multiple processes)

  • When considering a project, engage in community-scoping to gain a more in-depth understanding of a community, its social diversity, history, networks, and characteristics

Considerations for evaluation partners/client/collaborators to act for advancing equity:

  • Examine intentionality to see intended and unintended consequences; use a wide lens for looking at influences and impacts of projects, not just the specific project objectives.

  • Explicitly frame evaluation questions to allow for or promote the exploration of equity issues that may have arisen in a context.

  • Assess readiness for evaluation to promote equity by engaging in context/environmental scanning, document analysis and/or asset inventory.

  • Use logic modelling or theory of change processes to be explicit about values and assumptions, and question the logic through courageous conversations.

  • Conduct a power analysis by asking: whose knowledge and evidence is likely to be overlooked unless we make an explicit effort to include them? Hear from those not included and ask why they did not access, use, or participate in the program.

  • Disseminate learning broadly and engage in reflection systematically.

References

Alkin, M. C. (Ed.). (2013). Evaluation roots. Thousand Oaks, CA: Sage.

Alkin, M. C., & King, J. A. (2016). The historical development of evaluation use. American Journal of Evaluation, 37(4), 568-579.

American Evaluation Association. (2011, April 22). American Evaluation Association statement on cultural competence in evaluation. https://www.eval.org/Community/Volunteer/Statement-on-Cultural-Competence-in-Evaluation

Armitage, T., & Levac, L. (2015). The development of community-engaged scholars through course-based learning: A student perspective. Engaged Scholar Journal: Community Engaged Research, Teaching and Learning, 7(1), 148–163.

Barone, T., & Eisner, E. W. (2012). Arts based research. Thousand Oaks, CA: SAGE Publications.

Bamberger, M., Raftree, L., & Olazabal, V. (2016). The role of new information and communication technologies in equity-focused evaluation: Opportunities and challenges. Evaluation, 22(2), 228-244.

Bamberger, M., & Segone, M. (2011). How to design and manage equity-focused evaluations. New York: UNICEF Evaluation Office.

Better Evaluation. (2022, January 24). Understand and engage stakeholders. https://www.betterevaluation.org/en/rainbow_framework/manage/understand_engage_stakeholders

Blackburn, J. D., & Niedzwiedz, E. (1980). Do Teaching Methods Matter: A Field Study of an Integrative Teaching Technique. Am. Bus. LJ, 18, 525.

Bolino, M. C., & Turnley, W. H. (2008). Old faces, new places: Equity theory in cross‐cultural contexts. Journal of Organizational Behavior: The International Journal of Industrial, Occupational and Organizational Psychology and Behavior, 29(1), 29-50.

Brandon, P. R., & Fukunaga, L. L. (2014). The state of the empirical research literature on stakeholder involvement in program evaluation. American Journal of Evaluation, 35(1), 26-44.

Canadian Evaluation Society. (2018). Competencies for Canadian evaluation practice. Retrieved from http://www.evaluationcanada.ca/txt/2_competencies_cdn_evaluation_practice.pdf

Christie, C. A., Quiñones, P., & Fierro, L. (2014). Informing the discussion on evaluator training: A look at evaluators’ course taking and professional practice. American Journal of Evaluation, 35(2), 274-290.

Cousins, J. B., & Earl, L. M. (1992). The case for participatory evaluation. Educational Evaluation and Policy Analysis, 14(4), 397-418.

Davis, R. S., & MacKay, K. (2014). Evaluator training: Content and topic valuation in university evaluation courses. American Journal of Evaluation, 35 (3), 419–429.

Donaldson, S. I., & Picciotto, R. (Eds.). (2016). Evaluation for an equitable society. IAP.

Donnelly, J. (2010). Maximising participation in international community-level project evaluation: A strength-based approach. Evaluation Journal of Australasia, 10(2), 43-50.

Donnelly, C., Shulha, L., Klinger, D., & Letts, L. (2016). Using program evaluation to support knowledge translation in an interprofessional primary care team: A case study. BMC Family Practice, 17(1), 1-14.

Eisner, E. W. (1979). The use of qualitative forms of evaluation for improving educational practice. Educational Evaluation and Policy Analysis, 1(6), 11-19.

El Hassar, B., Poth, C., Gokiert, R., & Bulut, O. (2021). Toward an evidence-based approach to building evaluation capacity. Canadian Journal of Program Evaluation, 36(1).

Eval4Action. (2022, January 24). Decade of Evaluation for Action. https://www.eval4action.org

Fetterman, D. M., & Wandersman, A. (Eds.). (2005). Empowerment evaluation principles in practice. Guilford Press.

Fetterman, D. M., Rodríguez-Campos, L., & Zukoski, A. P. (2017). Collaborative, participatory, and empowerment evaluation: Stakeholder involvement approaches. Guilford Publications.

Freire, P. (1968). Plan de trabajo. Rio de Janeiro: Paz e Terra. p. 25.

Frost, R. (1922). The Witch of Coos. Poetry, 19(4), 175-181.

Gamble, J. A. A. (2008). A developmental evaluation primer. JW McConnell Family Foundation, Canada.

Gokiert, R. J., Kingsley, B. C., Poth, C., Edwards, K., El Hassar, B., Tink, L. N., Tremblay, M., Cor, K., Springett, J., & Hopkins, S. (2017). Developing an evaluation capacity building network in the field of early childhood development. Engaged Scholar Journal: Community-Engaged Research, Teaching, and Learning, 3(2), 59–79.

Gokiert, R. J., Daniels, J., Brazil, J., Pittman, S., Poth, C., Karbabian, A., … & Jun, S. (2021). UEval: Bringing Community-Based Experiential Learning to the Evaluation Classroom. Canadian Journal of Program Evaluation, 35(3).

Goodyear, L. K. (2007). Poetry, performance and pathos in evaluation reporting. In Dilemmas of engagement: Evaluation and the new public management. Emerald Group Publishing Limited.

Greene, J. C. (1988). Stakeholder participation and utilization in program evaluation. Evaluation Review, 12(2), 91-116.

Greene, J. C. (2005). A value‐engaged approach for evaluating the Bunche–Da Vinci Learning Academy. New Directions for Evaluation, 2005(106), 27-45.

Gullickson, A. M., King, J. A., LaVelle, J. M., & Clinton, J. M. (2019). The current state of evaluator education: A situation analysis and call to action. Evaluation and Program Planning, 75, 20–30.

Hall, E. T. (1976). Beyond Culture. Garden City, NY: Anchor Press.

Hart, D., Diercks-O’Brien, G., & Powell, A. (2009). Exploring stakeholder engagement in impact evaluation planning in educational development work. Evaluation, 15(3), 285-306.

Hansman, C. A., Spencer, L., Grant, D., & Jackson, M. (1999). Beyond diversity: Dismantling barriers in education. Journal of instructional psychology, 26(1), 16.

Herremans, I. M., Nazari, J. A., & Mahmoudian, F. (2016). Stakeholder relationships, engagement, and sustainability reporting. Journal of business ethics, 138(3), 417-435.

Hogan, R. L. (2007). The historical development of program evaluation: Exploring past and present. Online Journal for Workforce Education and Development, 2(4), 5.

House, E. R., & Howe, K. R. (2000). Deliberative democratic evaluation. New directions for evaluation, 85, 3-12.

Katz, S., & Dack, L. A. (2012). Intentional interruption: Breaking down learning barriers to transform professional practice. Corwin Press.

MacNeil, C. (2000). The prose and cons of poetic representation in evaluation reporting. American Journal of Evaluation, 21(3), 359-367.

Mark, M. M. (2011). Toward better research on—and thinking about—evaluation influence, especially in multisite evaluations. New Directions for Evaluation, 2011(129), 107-119.

Matsumoto, C., (2018). Advantages of increasing evaluation capacity in nonprofits. (Unpublished master’s thesis). The University of Minnesota.

Mertens, D. M. (2007). Transformative paradigm: Mixed methods and social justice. Journal of mixed methods research, 1(3), 212-225.

Mertens, D. M. (2008). Transformative research and evaluation. Guilford press.

Mishra, C., & Rath, N. (2020). Social solidarity during a pandemic: Through and beyond Durkheimian Lens. Social Sciences & Humanities Open, 2(1), 100079.

Mignone, J., Hinds, A., Migliardi, P., Krawchuk, M., Kinasevych, B., & Duncan, K. A. (2018). One room school: The Summer Institute in Program Evaluation. The Canadian Journal of Program Evaluation, 33 (2), 268–278.

Nsangi, A., Oxman, A. D., Oxman, M., Rosenbaum, S. E., Semakula, D., Ssenyonga, R., … & Sewankambo, N. K. (2020). Protocol for assessing stakeholder engagement in the development and evaluation of the Informed Health Choices resources teaching secondary school students to think critically about health claims and choices. PloS One, 15(10), 1–15.

Ottoson, J. M., & Hawe, P. (2009). Editors’ notes. New Directions for Evaluation, 2009(124), 3-5.

Patton, M. Q. (1997). Utilization-focused evaluation: The new century text. Sage Publications.

Patton, M. Q. (2008). Future trends in evaluation. From policies to results: Developing capacities for country monitoring and evaluation systems. Paris: UNICEF and IPEN, 44-56.

Patton, M. Q. (2019). Blue marble evaluation: Premises and principles. Guilford Publications.

Poth, C., & Searle, M. (2021). Competency-Based Evaluation Education: Four Essential Things to Know and Do. Canadian Journal of Program Evaluation, 35(3).

Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29(4), 443-459

Preskill, H., & Torres, R. T. (2000). The learning dimension of evaluation use. New Directions for Evaluation, 2000(88), 25-37.

Ray, R., & Rojas, F. (2020). Inequality during the coronavirus pandemic. Contexts blog.

Reed, R., King, A., & Whiteford, G. (2015). Re-conceptualising sustainable widening participation: evaluation, collaboration, and evolution. Higher Education Research & Development, 34(2), 383-396.

Rey, L., Tremblay, M. C., & Brousselle, A. (2014). Managing tensions between evaluation and research: illustrative cases of developmental evaluation in the context of research. American Journal of Evaluation, 35(1), 45-60.

Robinson, S. B. (2011). Inside, outside, upside down: Challenges and opportunities that frame the future of a novice evaluator. New Directions for Evaluation, 2011(131), 65-70.

Rodríguez-Campos, L. (2012). Advances in collaborative evaluation. Evaluation and program planning, 35(4), 523-528.

Roy, S., & Searle, M. (2020). Scope Creep and Purposeful Pivots in Developmental Evaluation. Canadian Journal of Program Evaluation, 35(1).

Ryan, K. E. (2004). Serving public interests in educational accountability: Alternative approaches to democratic evaluation. American Journal of Evaluation, 25(4), 443-460.

Searle, M. J., & Shulha, L. M. (2016). Capturing the imagination: Arts-informed inquiry as a method in program evaluation. Canadian Journal of Program Evaluation, 31(1).

Searle, M., & Fels, L. M. (2018). An artistic contemplative inquiry: What arrives in co-contemplating assessment and evaluation. Artizein: Arts and Teaching Journal, 3(1), 12.

Searle, M. J., Kirkpatrick, L. C., Smyth, R. E., Paolini, M., & Brown, H. M. (2020). Promoting Learning Through a Collaborative Approach to Evaluation: A Retrospective Examination of the Process and Principles. In Collaborative Approaches to Evaluation: Principles in Use. Sage Publishing

Searle, M. (2020). Enhancing Educational Decision-Making: Arts-Informed Inquiry as a Feature of Collaborative Evaluations. In Perspectives on Arts Education Research in Canada, Volume 2 (pp. 76-96). Brill.

Searle, M., & Poth, C. (2021). Collaborative evaluation designs as an authentic course assessment. Canadian Journal of Program Evaluation, 35(3).

Schuelka, M. J., Braun, A. M., & Johnstone, C. J. (2020, January). Beyond access and barriers: Inclusive education and systems change. In FIRE: Forum for International Research in Education (Vol. 6, No. 1).

Shulha, L., & Cousins, J. B. (1997). Evaluation use: Theory, research, and practice since 1986. Evaluation Practice, 18, 195–208.

Shulha, L. M., Whitmore, E., Cousins, J. B., Gilbert, N., & al Hudib, H. (2016). Introducing evidence-based principles to guide collaborative approaches to evaluation: Results of an empirical process. American Journal of Evaluation, 37(2), 1-23.

Shulha, L. M., Searle, M., Poth, C. N., & Chalas, A. (2019). Stakeholders weigh in on collaborative approaches to evaluation. In Growing the Knowledge Base in Evaluation: The Contributions of J. Bradley Cousins (p. 99).

Simons, H., & McCormack, B. (2007). Integrating arts-based inquiry in evaluation methodology: Opportunities and challenges. Qualitative Inquiry, 13(2), 292-311.

South, E. C., Butler, P. D., & Merchant, R. M. (2020). Toward an equitable society: building a culture of antiracism in health care. The Journal of Clinical Investigation, 130(10), 5039-5041.

Stake, R. (2003). Responsive evaluation. In International handbook of educational evaluation (pp. 63-68). Springer, Dordrecht.

Svensson, K., & Cousins, J. B. (2015). Meeting at the crossroads: Interactivity, technology, and evaluation utilization. Canadian Journal of Program Evaluation, 30(2), 143-159.

Trevisan, M. S. (2002). Evaluation capacity in K-12 school counseling programs. American Journal of Evaluation, 23(3), 291-305.

Yarbrough, D.B., Shulha, L.M., Hopson, R.K., & Caruthers, F.A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd. ed). Corwin Press.

Zhou, A. Z., & Fink, D. (2003). The intellectual capital web: A systematic linking of intellectual capital and knowledge management. Journal of Intellectual Capital.

3.3 Without Recognition of Student (Children’s) Rights, Equity in Assessment Falls Flat and Short

Author: Jacqueline P. Leighton, PhD

Institution: University of Alberta

Recommended Citation:

Leighton, J. (2022, January 27-28). Without Recognition of Student (Children’s) Rights, Equity in Assessment Falls Flat and Short [Paper presentation]. Advancing Assessment and Evaluation Virtual Conference: Queen’s University Assessment and Evaluation Group (AEG) and Educational Testing Services (ETS), Kingston, Ontario, Canada.

Introduction

Twenty-five years ago, my attention centered on measurement models and tools, with significantly less attention paid to student learning. Over the course of those 25 years, my focus has flipped. My research and advocacy in student assessment over the past decade now focus on children’s human rights within the classroom; this includes not only assessment but all pedagogical activities associated with learning in the classroom. Two broad approaches guide my research: (1) the United Nations Convention on the Rights of the Child (UN, 1989) and (2) the third force of psychology, humanistic psychology (Rogers, 1969).

Pragmatic Perspective

My entry point into children’s human rights came from observations I made while working within K-12 schools; in particular, my observation that children often have very little say or voice in matters relevant to their best interests (Leighton, 2020, 2021; UN, 2009). This relative absence of participation may be rooted in outdated conceptions of children, their socio-emotional development, and their positions as rights holders. Outdated conceptions of children may undergird the types of relationships teachers develop with students. Two of the largest teacher education programs in Canada – in Alberta and Ontario – include no formal instruction in the UNCRC (Leighton, 2022). In other words, many teachers are relatively unaware of their responsibilities as duty bearers vis-à-vis children. Thus, the student-teacher relationship, which I label the pedagogical alliance, plays a pivotal role in this research and, as such, there can no longer be any separation of assessment from teaching and learning in the research I conduct. The economic, educational, and psychological research base showing the influence of teachers on the experiences and outcomes of students is simply too robust to ignore.

The perspective that the UNCRC provides is pragmatic since it is a legally binding international agreement. All countries have ratified the UNCRC except the United States. The UNCRC involves four core principles (Lundy & Byrne, 2017):

  • No child shall be subjected to discrimination.

  • Practices taken shall be in the best interest of the child.

  • Children shall freely express their views and participate in matters that pertain to them.

  • All children shall be supported in their healthy development and survival.

Although the UNCRC is not rooted in any one theoretical perspective, it has the backing of significant research in children’s mental and physical health. Thus, the UNCRC is primarily a data-driven document, the upshot being that use of the UNCRC requires a strong commitment to high-quality data. Any and all interventions developed and administered to support children generally, and their learning in particular, must be rigorously implemented and evaluated using high-quality research designs and samples.

Research Focus

A focus on children’s human rights cannot be a partisan political undertaking if the best interest of all students is the goal. It is for this reason that I avoid the term equity in my research and writings. The term is, in my view, overused, lacks a clear behavioral definition, and has become more of a dogmatic statement than an evidence-based pathway to better student assessment and learning. Although a simple definition of equity is the practice of being fair and impartial, many educational institutions (including universities) have adopted practices under the term ‘equity’ that may be characterized as prejudicial (e.g., eliminating programs for certain groups of students); such practices can harm more than benefit students. Indeed, the effects of equity practices should be the basis of rigorous study and debate, but this has not occurred, probably because the practices adopted are more political than educational.

Consequently, in an effort to bring rigor and avoid dogma, my research involves children’s human rights and how assessments and student-teacher relationships can be improved to benefit all students. As per the UNCRC, the research aims not to privilege any one group of children based on gender, race, ethnicity, socioeconomic background, ability, religion, or gender orientation. This means that in studies of how assessments ought to be designed, administered, and interpreted, what guides the work is what is in the best interest of all children, irrespective of their particular backgrounds, identities, or injustices previously suffered.

Food for Thought

This research program makes particular demands on those who undertake it. At the outset is the need to avoid simple solutions to complex problems in an effort to correct previous wrongs in the lives of children. This means putting dogma aside and doing what we, as researchers, have been trained to do: gather data and engage in the best interpretation of the results. Moreover, the focus has to be on children and students and less so on the political leanings of adults. For example, in a book I recently finished, titled Leveraging Socio-Emotional Assessment to Foster Children’s Human Rights (2022), the entire last chapter is devoted to the collective failure of many governments, public health officials, academics, and teachers’ unions to understand and rectify the effects of COVID-19 policies on children’s development and learning.

References

Leighton, J.P. (2022). Leveraging Socio-Emotional Assessment to Foster Children’s Human Rights. Student Assessment for Educators book series (Series Editor: J. McMillan). Routledge (Taylor & Francis Group).

Leighton, J.P. (2021). Not all that counts is safe for counting: Barriers to collecting learning data for assessment purposes. In R. Lissitz and H. Jiao (Eds.), Enhancing effective instruction and learning using assessment data. Information Age Publishing.

Leighton, J.P. (2022). A study of teacher candidates’ views on children’s human rights in Canada. International Journal of Children’s Rights.

Leighton, J.P. (2020). On barriers to accessing children’s voices in school-based research. Canadian Journal of Children’s Rights, 7(1), 164-193.

Leighton, J.P. (2020). Cognitive diagnosis is not enough: The challenge of measuring learning with classroom assessments. In S.M. Brookhart & J.H. McMillan (Eds.), Classroom assessment and educational measurement (pp. 27-45). NCME Book Series. Routledge.

Lundy, L., & Byrne, B. (2017). The four principles of the United Nations convention on the rights of the child: The potential value of the approach in other areas of human rights law. In Brems, E., Desmet, E., & Vandenhole, W. (Eds.), Children’s rights law in the global human rights landscape (pp. 52-70). Taylor & Francis Group.

Rogers, C. (1969). Freedom to learn: A view of what education might become. Charles E. Merrill.

The United Nations. (1989). Convention on the Rights of the Child. Treaty Series, 1577, 3.

UN Committee on the Rights of the Child. (2009, July 20). General comment No. 12: The right of the child to be heard. Retrieved December 1, 2019, from https://www2.ohchr.org/english/bodies/crc/docs/AdvanceVersions/CRC-C-GC-12.pdf

3.4 Fairness and Inclusion on Canada’s Large-scale Assessments

Authors: Tess Miller, PhD & Elizabeth Blake

Institution: University of Prince Edward Island

Recommended Citation:

Miller, T. & Blake, E. (2022, January 27-28). Fairness and Inclusion on Canada’s Large-scale Assessments [Paper presentation]. Advancing Assessment and Evaluation Virtual Conference: Queen’s University Assessment and Evaluation Group (AEG) and Educational Testing Services (ETS), Kingston, Ontario, Canada.

3.4.1 Abstract

The purpose of this thought paper was to highlight the interplay of inclusive education and large-scale assessment exclusion rates, calling into question the fairness of having students with diverse learning needs participate in large-scale assessments. The ideas and data for this paper were drawn from three studies examining exclusion rates in Canada and a literature review of issues related to inclusive education. Melding these two issues revealed an explanation for high exclusion rates.

3.4.2 Thought Paper

A review of national and international large-scale assessment (LSA) scores for Canadian students over the past four cycles of administration revealed inconsistent and high rates of non-participants (Miller & Yan, n.d.). The percentage of students not participating in LSAs exceeds the 5% cap articulated in the PISA technical standards (CMEC, 2016). For example, on the national Pan-Canadian Assessment Program (PCAP) assessment of mathematics, non-participants include students who were excluded because of a disability or limited language, students who were absent on the day of the assessment, and students who were excluded under a category PCAP describes as “other” (i.e., a category that gives school administrators discretion not to include students in the assessment). The number of non-participants on the mathematics assessment exceeded 25% in the provinces of Prince Edward Island (PE) and Nova Scotia (NS). On the next administration of the PCAP mathematics assessment in 2016, the number of non-participants for British Columbia (BC) and NS was over 20% (Miller & Yan, n.d.). Patterns of non-participation on the Programme for International Student Assessment (PISA) were similar: over 27% of students from PE could be classified as non-participants on the 2015 assessment of mathematics (CMEC, 2016). The number of non-participants is so alarming that Anders et al. (2021) questioned whether Canada’s education system was worthy of the laurels it has been bestowed. It is important to note that this surge in the percentage of non-participants is not just a Canadian phenomenon, as similar findings have been reported in the US and UK (Anderson, Lai, Alonzo, & Tindal, 2011; Braun, Zhang, & Vezzu, 2010; Jerrim, 2021). Hence the purpose of this thought paper is to pose an explanation for what has been called a selection bias on LSAs.

We posit that the high rate of non-participants may indeed reflect a selection bias, but the root of the problem is connected to theories of inclusive education. Prior to the 1990s, children with intellectual and physical disabilities were segregated from mainstream classrooms. One family challenged this, and the Supreme Court of Canada ruled that disabled children had every right to attend their local school with their same-age peers (Moore v. British Columbia, 2012). Exclusion rates from LSAs during the early 2000s were still low and below the 5% cap. Following the court ruling in 2012, the practice of inclusive education became more widespread, resulting in classrooms with a more diverse set of learners. At the same time, definitions of inclusive education continued to broaden, which resulted in even greater diversity in the classroom. A typical classroom has evolved to include students with many special needs, including behavioural (e.g., inattention, impulsivity), academic (e.g., dyslexia, dyscalculia), and socio-emotional (e.g., anxiety) needs. This created a greater demand for learning and accommodation resources, which, unfortunately, did not accompany the policy on inclusion. Thus, it is likely that students requiring additional support fall through the gaps due to a lack of resources and are left to flounder in the public education system. It is highly possible that these students are not operating at grade level. As students are passed from one grade to the next through a practice of social promotion, whereby students are promoted despite not having met grade-level outcomes (Robertson, 2021), they fall further and further behind.

When students reach the age to write the PCAP (i.e., 13-year-olds) and PISA (15-year-olds), school administrators ponder the fairness of deciding who should write the LSA and who should not (Miller & FitzGerald, 2022). Although the definition of disability is not entirely clear (Anderson et al., 2011; McGill et al., 2016), there is a distinction between students who have a psychological disability (e.g., schizophrenia, manic depression) and those who have a learning disability. Based on the current PCAP and PISA exclusion guidelines, students with intellectual disabilities can be excluded; however, given the diversity of learners in the inclusive classroom, it is likely that students with learning needs (not disabilities) are also being exempted, as school administrators claim it is not fair to require that a student operating a grade or more below grade level complete an LSA they have not been prepared to write (Miller & FitzGerald, 2022).

A closer look at the composition of classrooms and the learning needs of the children in them is needed to affirm this theory. In the event this theory is proven correct, LSA administrators may need to revisit their policies in light of inclusive education practices and prepare guidelines clearly articulating who should write the LSA and who should not, to ensure LSA scores reflect the student population being measured.

References

American Psychiatric Association. (2020). What is intellectual disability? Retrieved from https://www.psychiatry.org/patients-families/intellectual-disability/what-is-intellectual-disability

Anders, J., Has, S., Jerrim, J., Shure, N., & Zieger, L. (2021). Is Canada really an education superpower? The impact of non-participation on results from PISA 2015. Educational Assessment, Evaluation and Accountability, 33, 229–249. https://link.springer.com/content/pdf/10.1007/s11092-020-09329-5.pdf

Anderson, D., Lai, C. F., Alonzo, J., & Tindal, G. (2011). Examining a grade-level math CBM designed for persistently low-performing students. Educational Assessment, 16(1), 15–34.

Braun, H., Zhang, J., & Vezzu, S. (2010). An investigation of bias in reports of the National Assessment of Educational Progress. Educational Evaluation and Policy Analysis, 32 (1), 24–43.

CMEC. (2016). Measuring up: Canadian results of the OECD PISA 2015 study. Retrieved from https://www.cmec.ca/Publications/Lists/Publications/Attachments/389/PISA2015_CPS_EN.pdf

Jerrim, J. (2021). PISA 2018: How representative is the data for England and Wales? Retrieved from https://ffteducationdatalab.org.uk/2021/04/pisa-2018-how-representative-is-the-data-for-england-and-wales/

Miller, T., & Yan, Y. (n.d.). Factors influencing the accuracy of Canada’s large-scale assessment data: Policies and practices of exclusion, absenteeism, and social promotion.

Miller, T., & FitzGerald, A. M. (2022). The practice of excluding students from large-scale assessments: Interviews with principals. Alberta Journal of Educational Research.

Miller, T. & Yankey, A. (n.d.). The practice of excluding students from large-scale assessments: The perspectives of mathematics teachers [Unpublished manuscript]. University of Prince Edward Island

OECD. (2015a). PISA 2018 technical standards. Retrieved from https://www.oecd.org/pisa/pisaproducts/PISA-2018-Technical-Standards.pdf

OECD. (2015b). Sample Design. Retrieved from https://www.oecd.org/pisa/sitedocument/PISA-2015-Technical-Report-Chapter-4-Sample-Design.pdf

OECD. (2017). PISA 2015 Technical Report. Retrieved from http://www.oecd.org/pisa/sitedocument/PISA-2015-technical-report-final.pdf

OECD. (2019). PISA 2018: Insights and Interpretations. Retrieved from https://www.oecd.org/pisa/PISA%202018%20Insights%20and%20Interpretations%20FINAL%20PDF.pdf

Roberts, R. M. (2021). To retain or not retain: A review of literature related to kindergarten retention. Retrieved from https://eric.ed.gov/?id=ED611006

Supreme Court of Canada (2012), Moore v. British Columbia (Education), SCC 61 (CanLII).

3.5 Assessing Non-Cognitive Skills to Promote Equity and Academic Resilience

Authors: Louis Volante, PhD 1 & Don A. Klinger, PhD 2

Institutions: Brock University 1 ; University of Waikato 2

Recommended Citation:

Volante, L. & Klinger, D. (2022, January 27-28). Assessing non-cognitive skills to promote equity and academic resilience [Paper presentation]. Advancing Assessment and Evaluation Virtual Conference: Queen’s University Assessment and Evaluation Group (AEG) and Educational Testing Services (ETS), Kingston, Ontario, Canada.

Acknowledgement:

This research is supported by the Social Sciences and Humanities Research Council of Canada (SSHRC)

Introduction

The global pandemic has created undeniable hardships for school-aged children, teachers, and parents around the world. Understandably, governments and policymakers are concerned about the impact of COVID-19 on student learning and achievement. This concern is mirrored in the emerging research, which has used large-scale assessment measures to attempt to quantify the degree of ‘learning losses’ that students have experienced as a result of school closures, shifts towards online and hybrid learning, and other impacts associated with successive waves of this deadly virus. Not surprisingly, this body of research suggests that learning stalled during the pandemic, with the greatest impacts felt by at-risk student populations, such as those with lower socio-economic status (SES) and migrant backgrounds (see Engzell et al., 2021; Kaffenberger, 2021; Maldonado & De Witte, 2021).

Discussions of student progress, or lack thereof, have also been examined alongside the construct of academic resilience. While defined in numerous ways, academic resilience is typically considered to be present in those children who obtain positive achievement outcomes despite being disadvantaged due to factors such as lower SES or facing adversity. As one example, for the purposes of cross-national comparisons, the Organisation for Economic Cooperation and Development (OECD) defines disadvantaged youth as those in the lowest one-third of its socioeconomic indicator within each country, based on information about parents’ occupations along with measures of household possessions (OECD, 2018a). Countries with smaller achievement gaps between high- and low-SES student populations are said to be more equitable and more successful in promoting academic resilience (Agasisti et al., 2018).

Cognitive vs. Non-Cognitive Skills

Notwithstanding concerns about critical learning losses, the COVID-19 pandemic has also forced us to reconsider the relative importance ascribed to cognitive versus non-cognitive skills, an admittedly problematic albeit common distinction. International standardized tests such as the OECD’s Programme for International Student Assessment (PISA), as well as national large-scale assessment programs, have been fairly adept at measuring cognitive skills such as reading, mathematics, and science literacy. The OECD has also begun to measure and report on non-cognitive skills such as growth mindset and socio-emotional issues (Kautz et al., 2014; OECD, 2018b). Yet it is worth noting that measures such as PISA occur on a triennial basis and are based on cross-sectional data. Additionally, PISA focuses only on students who are approximately 15 years of age. Cross-sectional data based on single age cohorts are limited in that they do not allow for the examination and comparison of different age groups and stages of development over time (Volante & Klinger, 2022).

One might naturally query how governments are positioned to respond to shocks such as COVID-19 in the absence of more timely and robust data – particularly in light of the troubling mental health and general wellness patterns that have recently emerged. For example, a recent study conducted by the Hospital for Sick Children in Ontario, Canada found approximately 70 percent of children/adolescents experienced deterioration in at least one of six mental health domains during the pandemic: depression, anxiety, irritability, attention, hyperactivity, or obsessions/compulsions (Cost et al., 2021). Similarly, a survey of the impact of COVID-19 found less than 5 percent of Canadian children 5–11 years old and 0.6 percent of youth 12–17 years old were meeting required physical activity guidelines (Moore et al., 2020). Collectively, these findings underscore the deleterious effects of the pandemic and the need to consider broader notions of resilience for school-aged populations.

Toward a Broader Conceptualization of Academic Resilience

Contemporary studies suggest a broader conceptualization of academic resilience is required that extends beyond mere measures of learning losses in student populations. Our proposed triarchic model (see Volante et al., 2021) expands the nature and scope of considerations policymakers should examine when gauging academic resilience, both during and after COVID-19, by focusing on three key areas:

  • Academic Supports: Policies, procedures, and/or guidelines that support students’ academic success.
  • Physical Health Supports: Policies, procedures, and/or guidelines that support students’ physical health and well-being.
  • Mental Health Supports: Policies, procedures, and/or guidelines that support students’ mental health.

Innovative Assessment Designs

Along with the need for a broader conception of academic resilience, there is also a need for more innovative and timely large-scale measures that can assess the three key areas of resilience identified above: academic supports, physical health supports, and mental health supports. The foundations for such measures are already in place at international and national levels. The OECD has long been interested in health and well-being outcomes, and the International Association for the Evaluation of Educational Achievement (IEA) has recently developed the Responses to Educational Disruption Survey (REDS), which is focused on the impact of the pandemic. Further, the Health Behaviour in School-aged Children (HBSC) survey (e.g., Craig, King, & Pickett, 2020; Inchley et al., 2020) has over 30 years of data on children’s health. There was a previous effort by the OECD to incorporate the HBSC survey into its PISA cycle. There are also examples of large-scale surveys focused on various aspects of children’s health and well-being used by provincial governments in Canada.

Nevertheless, these measures are typically independent of each other, resulting in potential duplication of surveys but with survey designs that are not fully compatible. Using Canada as an example, provincial surveys can often be linked to individual students through student identification numbers. In contrast, international assessments do not link their data to individual students. On the other hand, international survey instruments tend to be larger, and their items and constructs have been carefully researched and validated, something national and provincial organisations are unable to duplicate given their resources. Lastly, international assessments tend to follow a multiyear cycle, providing a broad estimate of shifts over time, while the annual nature of provincial assessments and surveys makes them more responsive to recent shifts in learning or health impacts.

The current pandemic has highlighted the challenges of measuring the impact of major events on the educational and health outcomes of our children. While governments desire reliable, valid, and timely data to guide their decisions regarding the education and health of our children, our current measures are largely unable to do so. As a result, we need to create more interdependent assessments that incorporate the sampling methods of provincial assessments and surveys along with the survey tools developed by international organisations such as the OECD, the IEA, and the HBSC. This would enable us to obtain longitudinal data about specific cohorts of students on well-defined measures of educational outcomes related to health, well-being, and resilience.

To reiterate, there are already examples of this in place. The Canadian team of HBSC researchers have worked closely with the Public Health Agency of Canada and with provincial and territorial governments to produce specific reports focused on the interests of these provinces and territories. It may already be too late to provide accurate information about the initial impact of COVID-19 on our children. Yet the ongoing challenges associated with the pandemic continue to highlight the necessity of focusing our efforts on monitoring its effects on children’s education and health. Ultimately, the proposed assessment reforms are meant to assist governments and policymakers in their efforts to promote equity and academic resilience.

References

Agasisti, T., et al. (2018). Academic resilience: What schools and countries do to help disadvantaged students succeed in PISA. OECD Education Working Papers, No. 167. OECD Publishing. https://doi.org/10.1787/e22490ac-en

Cost et al. (2021). Mostly worse, occasionally better: Impact of COVID-19 pandemic on the mental health of Canadian children and adolescents. European Child & Adolescent Psychiatry. https://pubmed.ncbi.nlm.nih.gov/33638005/

Engzell, P., Frey, A., & Verhagen, M. D. (2021). Learning loss due to school closures during the COVID-19 pandemic. Proceedings of the National Academy of Sciences of the United States of America, 118(17), 1-7. https://www.pnas.org/content/pnas/118/17/e2022376118.full.pdf.

Craig, W. M., King, M., & Pickett, W. (2020). The Health of Canadian Youth: Findings from the Health Behaviour in School-aged Children Study. Public Health Agency of Canada. https://www.canada.ca/content/dam/phac-aspc/documents/services/publications/science-research-data/hbsc/health-behaviour-school-aged-children-study-report.pdf

Inchley, J., Currie, D., Budisavljevic, S., Torsheim, T., Jåstad, A., Cosma, A., et al. (Eds.). (2020). Spotlight on adolescent health and well-being. Findings from the 2017/2018 Health Behaviour in School-Aged Children (HBSC) survey in Europe and Canada. International report. Volume 1. Key findings. Copenhagen: WHO Regional Office for Europe. https://www.euro.who.int/en/health-topics/Life-stages/child-and-adolescent-health/health-behaviour-in-school-aged-children-hbsc/publications/2020/spotlight-on-adolescent-health-and-well-being.-findings-from-the-20172018-health-behaviour-in-school-aged-children-hbsc-survey-in-europe-and-canada.-international-report.-volume-1.-key-findings

Kaffenberger, M. (2021). Modelling the long-run learning impact of the Covid-19 learning shock: actions to (more than) mitigate loss. International Journal of Development, 81. https://www.sciencedirect.com/science/article/pii/S0738059320304855#

Kautz, T., et al. (2014). Fostering and Measuring Skills: Improving Cognitive and Non-Cognitive Skills to Promote Lifetime Success. OECD Publishing. www.oecd.org/education/ceri/Fostering-and-Measuring-Skills-Improving-Cognitive-and-Non-Cognitive-Skills-to-Promote-Lifetime-Success.pdf

Maldonado, J. E., & De Witte, K. (2021). The effect of school closures on standardised student test outcomes. British Educational Research Journal. https://doi.org/10.1002/berj.3754

Moore et al. (2020). Impact of the COVID-19 virus outbreak on movement and play behaviours of Canadian children and youth: A national survey. International Journal of Behavioral Nutrition and Physical Activity, 17. https://doi.org/10.1186/s12966-020-00987-8

Organisation for Economic Cooperation and Development. (2018a). Equity in Education: Breaking Down Barriers to Social Mobility. OECD Publishing. www.oecd.org/education/equity-in-education-9789264073234-en.htm

Organisation for Economic Cooperation and Development. (2018b). PISA 2018 Results (Volume III). OECD Publishing. www.oecd-ilibrary.org/education/pisa-2018-results-volume-iii_bd69f805-en

Volante, L., Klinger, D., & Barrett, J. (2021). Academic resilience in a post-COVID world: A multi-level approach to capacity building. Education Canada, 61(3), 32-34. https://www.edcan.ca/articles/academic-resilience-in-a-post-covid-world/

Volante, L., & Klinger, D. A. (2022). PISA, global reference societies, and policy borrowing: The promises and pitfalls of academic resilience. Policy Futures in Education. https://journals.sagepub.com/doi/10.1177/14782103211069002

3.6 Discussant Summary

Author: Chi Yan Lam, PhD, CE

Independent Scholar

Recommended Citation:

Lam, C. Y. (2022, January 27-28). Discussant Summary [Discussant Remarks]. Advancing Assessment and Evaluation Virtual Conference: Queen’s University Assessment and Evaluation Group (AEG) and Educational Testing Services (ETS), Kingston, Ontario, Canada.

Good afternoon! Thank you for having me here.

Let me start off by expressing my sincere thanks and appreciation to the panelists for their illuminating and inspiring thoughts on advancing assessment and evaluation to promote equity and fairness.

At no point in our recent history have issues of equity and fairness been more prominent in both academic and public discourse than now. The pandemic has laid bare inequities in our social institutions, from education and health to our justice system and our economy (e.g., Chen & Bougie, 2020; Ismail, Tunis, Zhao, & Quach, 2021; Khare, Shroff, Nkennor, & Mukhopadhyay, 2020; Perry, Aronson, & Pescosolido, 2021; Whitley, Beauchamp, & Brown, 2021). And increasingly, we’re realizing that many of these inequities are not only a condition of the present but also built into the design of our institutions of the past (e.g., Holton, 2021). For these reasons, I find this session timely, needed, and essential as we envision a tomorrow that could be different and better than the one we have today.

My name is Chi Yan Lam. I am a public sector evaluator. In that role, I work with both internal and external evaluators to design, commission, and implement large-scale evaluations of government programs. In turn, I work with policy advisors and policymakers on improving programs and designing new policy interventions. But I’m not here in my capacity as a public servant; I am here as a member of the academy. For the past ten years, I have been working with students at the Faculty of Education and the Faculty of Health Sciences, Queen’s University, honing their assessment and evaluation knowledge and skills as a term adjunct. So, it is through this dual lens of policy and scholarship that I will discuss these papers.

Let me begin by offering brief observations about each of these papers before concluding with remarks.

Professor Searle. I really enjoyed your paper for its personal, reflective, and reflexive stance. I was moved by your commitment to examining the self in relation to evaluation theory/practice. It is this critical stance that I wish more practitioners and scholars would adopt.

Professor Leighton. Your paper stood out to me for its strong commitment to inviting scholars to adopt explicit frameworks in their attempts to promote equity and fairness. You reminded us that, in the debate over equality versus equity, individual needs ought to be at the forefront of policy considerations— “persons, not boxes”.

Professor Miller. Your paper with Professor Blake on exclusion highlighted that the stories that we tell of our educational systems are a function of who gets to participate in the construction of those stories.

And Professor Volante. Your paper with Professor Klinger on broadening the conceptualization of academic resilience is noteworthy and, of course, timely for its explicit intent to stimulate further dialogue among policymakers in their efforts to promote equity and academic resilience.

Now, let me offer some remarks.

One of the consistent through lines across this set of papers was their desire to speak to policymakers and seek influence in the public sphere. As someone who straddles both worlds—that of policy and the academy—I can tell you that policymakers do listen, and ideas do percolate… but that process can be slow, and policymakers’ bandwidth is often consumed by political priorities and the issue of the day. So, my two invitations to you are to: (1) continue to find ways to mobilize your ideas in a digestible format, even at the risk of being reductive. (One way to accomplish that might be to post the key ideas of your paper in under five bullets, on one slide, on your website, so that a policy advisor could more readily draw on these ideas when preparing policy materials.); and (2) continue building the rationale for promoting equity and fairness—but in a way that relates it to the goals of our educational systems in both pragmatic terms (the here and now) and fundamental ones (the society we wish to build collectively). It is through these levers that I could see academics having greater influence over policy discourse.

For my second remark, I’d like to turn to the notion of allyship. Allyship occurs when a person of privilege works in solidarity and partnership with those holding less power to help dismantle the systems that disadvantage and oppress them (Washington & Evans, 1991). So, in the context of our current focus, I would invite all of us to consider expanding how we think about the audiences of our work, building in ideas for how allies could support and further the work of equity and fairness in assessment and evaluation. Put simply: what could allies take away from your work? On this note, I tip my hat to Prof. Leighton, whose paper introduced the notion of a pedagogical alliance between student and teacher.

And finally, it strikes me as important to point out that one of the common roots underpinning both assessment and evaluation is the construction of merit: what counts as meritorious in learning, programs, curriculum, and policies—and, more importantly, who gets to say what counts as meritorious in any given study. So, my invitation to all is to look beneath the surface of what we study and to question the value systems that underpin our inquiry.

In closing, I thank our panelists for their papers on this very important topic. They’ve highlighted that while we have much to celebrate in our educational systems, much work remains. And today, we’ve taken another important step towards shaping a society that could be more equitable, fair, and just.

Thank you.

References

Chen, I., & Bougie, O. (2020). Women’s issues in pandemic times: How COVID-19 has exacerbated gender inequities for women in Canada and around the world. Journal of Obstetrics and Gynaecology Canada, 42(12), 1458.

Holton, W. (2021). Liberty is sweet: The hidden history of the American Revolution. Simon & Schuster.

Ismail, S. J., Tunis, M. C., Zhao, L., & Quach, C. (2021). Navigating inequities: a roadmap out of the pandemic. BMJ Global Health, 6(1), e004087.

Khare, N., Shroff, F., Nkennor, B., & Mukhopadhyay, B. (2020). Reimagining safety in a pandemic: the imperative to dismantle structural oppression in Canada. CMAJ, 192(41), E1218-E1220.

Perry, B. L., Aronson, B., & Pescosolido, B. A. (2021). Pandemic precarity: COVID-19 is exposing and exacerbating inequalities in the American heartland. Proceedings of the National Academy of Sciences, 118(8).

Washington, J., & Evans, N. J. (1991). Becoming an ally. In N. J. Evans & V. A. Wall (Eds.), Beyond tolerance: Gays, lesbians and bisexuals on campus (pp. 195–204). American Association for Counseling and Development.

Whitley, J., Beauchamp, M. H., & Brown, C. (2021). The impact of COVID-19 on the learning and achievement of vulnerable Canadian children and youth. Facets, 6(1), 1693-1713.