KEYNOTE SPEAKERS
Prof. Christopher DeLuca
Queen’s University
Title: From Artificial Intelligence to Assessment Innovation: Toward a More Human Future for Assessment Research and Practice
Abstract: This keynote address considers some of the most pressing challenges before us (the rapid onset and evolution of artificial intelligence, the global sustainability crisis, and the paramount need to restore health and wellbeing in our societies) as impetus for assessment innovation. Drawing on studies of teachers’ innovative assessment practices, this talk envisions a brighter, more human future for assessment in schools and classrooms.
Bio: Dr. Christopher DeLuca is an Associate Dean at the School of Graduate Studies and Postdoctoral Affairs and Professor in Educational Assessment at the Faculty of Education, Queen’s University. Dr. DeLuca leads the Classroom Assessment Research Team and is Director of the Queen’s Assessment and Evaluation Group.
Dr. Yiasemina Karagiorgi
Cyprus Ministry of Education, Sports and Youth
Title: The Implementation of International and Local Assessment Programmes in Cyprus: Reflecting on the Experiences Behind and the Challenges Ahead
Abstract: Traditional student testing at the national level has been challenged by the evolution of technology and its impact on educational assessment. This presentation reflects on the development of the international and national large-scale assessments implemented in Cyprus over the last 10 years by the Center of Educational Research and Evaluation of the Cyprus Pedagogical Institute. These assessment practices differ in terms of intended outcomes and testing processes. The extent and the ways in which these programmes have changed in response to advancements in assessment worldwide have also differed: while some programmes still reflect what is known as ‘traditional testing’, others have moved on to integrate, for example, computerised adaptive testing and process data. The presentation highlights lessons learned from past experiences and charts the way forward.
Bio: Dr Yiasemina Karagiorgi [karagiorgi.y@cyearn.pi.ac.cy] has been Head of the Center of Educational Research and Evaluation (CERE) of the Cyprus Ministry of Education, Sports and Youth since September 2011. As Head of the CERE, she currently supervises the implementation of national research programmes (e.g. the programme for functional literacy) and evaluation studies of programmes and innovations of the Ministry of Education. She is also the national project manager for Cyprus for international surveys such as TALIS, PISA, TIMSS, ICCS, ICILS, PIRLS, and HBSC. She has participated in several funded European projects and has published her work in international and local educational journals.
Heather Kayton
University of Oxford
Winner of the Kathleen Tattersall New Researcher Award
Title: Evaluating the validity and comparability of PIRLS 2016 in South Africa
Abstract: Large-scale assessments of reading, such as the Progress in International Reading Literacy Study (PIRLS), are useful for understanding reading development. PIRLS aims to provide valid, reliable, and fair assessments of reading across multiple countries. However, the extreme range in reading achievement outcomes, from the very high performance of some countries to the very low performance of others, creates substantial disparities across countries that threaten the validity, reliability, and fairness of PIRLS results. South Africa participates in PIRLS to better understand and inform interventions that can help improve reading in the country. But consistently low performance in PIRLS, coupled with intricate multilingualism, historical racial discrimination, and extreme inequality, presents further challenges to the already complicated task of assessing reading.
Bio: Heather Leigh Kayton is a Senior Education Specialist at the What Works Hub for Global Education at the Blavatnik School of Government. Her work focuses on improving educational outcomes in developing countries, with a particular focus on the relationships between diverse, multilingual classrooms and foundational literacy development.
Heather has over a decade of experience as a teacher and teacher trainer in South African schools. This inspired her passion for finding pragmatic, equitable solutions to ensure all students learn to read successfully. Heather holds an MA in Applied Linguistics from the University of Johannesburg and a DPhil in Education from the University of Oxford.
Prof. Joshua McGrane
University of Melbourne
Title: Educational assessment and Generative AI: Less talk, more evidence
Abstract: The rapid emergence of AI, particularly through large language models (LLMs) and other generative AI tools, marks a profound disruption to education — arguably the most significant in decades. Initial responses from educational institutions often involved bans on AI for both teachers and students, at all levels of education, with fears about cheating, misinformation, and bias driving much of the conversation. While crucial, this ethical discourse has overshadowed an equally urgent question: How can we use these tools to enhance educational assessment? With bans being lifted, many institutions now offer little guidance beyond vague directives to “use responsibly” and “fact-check.” In this keynote, I will present two years of collaborative research that directly tackles this challenge, emphasising evidence-based strategies for integrating AI into assessment practices. The talk will explore key areas where AI can potentially add value to educational assessment while maintaining ethical rigour, and will propose critical research directions for the future. The aim is not just to debate AI’s place in educational assessment, but to provide practical, actionable insights into how we can leverage these tools to improve assessment outcomes for learners and educators alike. It’s time to shift from discussion to evidence-based implementation.
Bio: Joshua McGrane is Associate Professor of Measurement Analytics and Deputy Director of the Assessment and Evaluation Research Centre at the University of Melbourne. He is an Executive Editor for the journal Assessment in Education: Principles, Policy and Practice. He has been an expert advisor to AQA, Qualifications Wales, and the International Baccalaureate. His research cuts across disciplines and paradigms, with a specific focus on the philosophy of measurement, Rasch modelling, and the use of comparative judgement in educational assessment. Most recently, he has been working with colleagues to evaluate the ethical, conceptual and empirical implications of the new wave of AI for educational assessment and measurement.
Dr. DeLuca’s research examines the intersection of assessment, curriculum, and pedagogy from socio-cultural frameworks. His work focuses on supporting in-service and pre-service teacher learning in assessment to enhance student learning for all. His latest thinking in this area is presented in a newly co-authored book entitled Learning to Assess: Cultivating Assessment Capacity in Teacher Education. A recent recipient of the American Educational Research Association (AERA) Outstanding Paper in Classroom Assessment Award and the Queen’s Education Research Excellence Award, Dr. DeLuca has published his research in national and international journals and has received continuous funding from the Social Sciences and Humanities Research Council of Canada. He has served as Chair of AERA’s Classroom Assessment SIG, President of the Canadian Educational Researchers’ Association, and Editor of the Canadian Journal of Education, and is currently an Executive Editor for Assessment in Education: Principles, Policy and Practice.
Prior to joining WWHGE, Heather worked as a research assistant on the national research team for England for the Progress in International Reading Literacy Study (PIRLS) and the Programme for International Student Assessment (PISA). Her PhD research investigated ways to enhance equity in large-scale educational assessments in multilingual contexts. She is passionate about ensuring that the assessments used to evaluate student achievement are fair, unbiased, and effective for diverse student populations across different contexts.
This presentation discusses an evaluation of the validity and comparability of PIRLS 2016 in the unique local context of South Africa. It considers how factors such as low overall performance, performance differences across groups, the fit of the analytical models used, and the comparability of item difficulty across language versions affect the validity of the test and its items.
The presentation will discuss three key areas of concern regarding PIRLS 2016 in South Africa:
1. The relationship between test language and reading achievement
2. The validity of the PIRLS 2016 test instrument for South African students
3. Comparability of the items, passages and overall test across language versions
The discussion will highlight the implications that factors such as low performance and language diversity have for the valid interpretation of PIRLS scores, as well as for the use of PIRLS to inform education policy and practice in South Africa. By highlighting key challenges in the South African context, this research contributes to a deeper understanding of the importance of addressing the needs of unique local contexts to ensure large-scale assessments can provide valid, reliable, and fair information for all participating countries.
Shaping the Future of Assessment: Dr. Christopher DeLuca’s Vision for a Human-Centred Approach in the Age of AI
As we approach the 25th AEA-Europe Annual Conference in Cyprus in November this year, excitement is building around the theme “Advances in Educational Assessment Practices: Considering the use of Technology, Artificial Intelligence, and Process Data for Assessment in the 21st Century.” Among the highlights of the event is the keynote speech “From Artificial Intelligence to Assessment Innovation: Toward a More Human Future for Assessment Research and Practice” by Dr. Christopher DeLuca, a leading expert in educational assessment from Queen’s University in Canada. Dr. DeLuca’s work centres on innovative classroom assessment practices, with a particular emphasis on supporting the diverse needs of students and empowering teachers to cultivate assessment innovation in their classrooms. His research focuses on teacher learning and the critical role of assessment in enhancing educational outcomes.
In our conversation, Dr. DeLuca shared insights into the pressing challenges of today’s rapidly evolving educational landscape. His keynote will delve into the intersection of AI and educational assessment, offering a forward-thinking perspective on how these technological advancements can transform the way we evaluate student learning. But rather than simply advocating for more technology, Dr. DeLuca emphasizes the importance of maintaining the human elements that are crucial to meaningful assessment—creativity, collaboration, and community engagement. His argument is clear: while AI presents powerful tools, the future of assessment lies in enhancing these uniquely human capacities.
Reflecting on the conference theme, Dr. DeLuca’s keynote will delve into how assessment practices can evolve to address broader global challenges, such as sustainability and well-being. He suggests that the future of assessment should be grounded in care and compassion, considering not just academic outcomes but also the overall well-being of students and their connection to the broader community and environment. This holistic approach is essential in responding to the pressing issues of our time and creating a more humane educational experience.
One of the key takeaways from Dr. DeLuca’s address will be the role of educators in this rapidly changing landscape, especially as the integration of AI into educational settings can evoke mixed reactions, from excitement to anxiety. Dr. DeLuca understands these challenges. He says these reactions are all healthy and productive, as we cannot simply uproot well-grounded core assessment values and tenets overnight. He stresses the importance of supporting teachers through this transition: by providing practical models and frameworks, he aims to empower educators to embrace innovation while preserving the core values of effective teaching and assessment.
This year’s conference is not just about presenting solutions, but also about fostering dialogue and collaboration among educators, researchers, and policymakers. Dr. DeLuca sees the AEA-Europe 2024 conference as a unique opportunity for you to engage with cutting-edge ideas, share your experiences, and explore the future of assessment together. His keynote will serve as a starting point for these important conversations, encouraging you to think differently about the role of technology and AI in education and how we can collectively shape the future of assessment.
Whether you are an academic, a practitioner, or a policymaker, this is an opportunity to gain fresh perspectives on the future of educational assessment. Make sure to mark your calendar for Dr. DeLuca’s keynote: you will gain valuable insights into how AI can be integrated into educational assessment in ways that enhance, rather than replace, the human touch. His talk will consider new approaches that prioritize students’ well-being and holistic development, not only illuminating the possibilities but also inspiring actionable steps that can be implemented in classrooms worldwide.
Interviews carried out by Amina Afif, a member of the Communications Committee.