1.
Yoon Jeon Kim Mariah A. Knowles Jennifer Scianna Grace Lin José A. Ruipérez-Valiente 《British journal of educational technology : journal of the Council for Educational Technology》2023,54(1):355-372
Game-based assessment (GBA), a specific application of games for learning, has been recognized as an alternative form of assessment. While there is a substantive body of literature that supports the educational benefits of GBA, limited work investigates the validity and generalizability of such systems. In this paper, we describe applications of learning analytics methods to provide evidence for psychometric qualities of a digital GBA called Shadowspect, particularly to what extent Shadowspect is a robust assessment tool for middle school students' spatial reasoning skills. Our findings indicate that Shadowspect is a valid assessment for spatial reasoning skills, and it has comparable precision for both male and female students. In addition, students' enjoyment of the game is positively related to their overall competency as measured by the game regardless of the level of their existing spatial reasoning skills.
Practitioner notes
What is already known about this topic:
- Digital games can be a powerful context to support and assess student learning.
- Games as assessments need to meet certain psychometric qualities such as validity and generalizability.
- Learning analytics provide useful ways to establish assessment models for educational games, as well as to investigate their psychometric qualities.
What this paper adds:
- How a digital game can be coupled with learning analytics practices to assess spatial reasoning skills.
- How to evaluate psychometric qualities of game-based assessment using learning analytics techniques.
- Investigation of validity and generalizability of game-based assessment for spatial reasoning skills and the interplay of the game-based assessment with enjoyment.
Implications for practice and/or policy:
- Game-based assessments that incorporate learning analytics can be used as an alternative to pencil-and-paper tests to measure cognitive skills such as spatial reasoning.
- More training and assessment of spatial reasoning embedded in games can motivate students who might not be on the STEM tracks, thus broadening participation in STEM.
- Game-based learning and assessment researchers should consider possible factors that affect how certain populations of students enjoy educational games, so that such games do not further marginalize specific student populations.
2.
Andy Nguyen Sanna Järvelä Carolyn Rosé Hanna Järvenoja Jonna Malmberg 《British journal of educational technology : journal of the Council for Educational Technology》2023,54(1):293-312
Socially shared regulation contributes to the success of collaborative learning. However, the assessment of socially shared regulation of learning (SSRL) faces several challenges because the related cognitive and emotional processes are not directly observable, limiting efforts to understand and support collaborative learning. The recent development of trace-based assessment has opened innovative opportunities to overcome this problem. Despite the potential of a trace-based approach to study SSRL, there remains a paucity of evidence on how such traces could be captured and utilised to assess and promote SSRL. This study investigates the use of electrodermal activity (EDA) data to understand and support SSRL in collaborative learning, and hence enhance learning outcomes. The data collection involved secondary school students (N = 94) working collaboratively in groups through five science lessons. A multimodal data set of EDA and video data was examined to assess the relationship between shared arousal and interactions for SSRL. The results reveal patterns linking students' physiological activities and their SSRL interactions, providing trace-based evidence for adaptive and maladaptive patterns of collaborative learning. Furthermore, our findings provide evidence about how trace-based data could be utilised to predict learning outcomes in collaborative learning.
Practitioner notes
What is already known about this topic:
- Socially shared regulation has been recognised as an essential aspect of collaborative learning success.
- It is challenging to make the processes of learning regulation ‘visible’ to better understand and support student learning, especially in dynamic collaborative settings.
- Multimodal learning analytics are showing promise for being a powerful tool to reveal new insights into the temporal and sequential aspects of regulation in collaborative learning.
What this paper adds:
- Utilising multimodal big data analytics to reveal the regulatory patterns of shared physiological arousal events (SPAEs) and regulatory activities in collaborative learning.
- Providing evidence of using multimodal data including physiological signals to indicate trigger events in socially shared regulation.
- Examining the differences of regulatory patterns between successful and less successful collaborative learning sessions.
- Demonstrating the potential use of artificial intelligence (AI) techniques to predict collaborative learning success by examining regulatory patterns.
Implications for practice and/or policy:
- Our findings offer insights into how students regulate their learning during collaborative learning, which can be used to design adaptive supports that can foster students' learning regulation.
- This study could encourage researchers and practitioners to consider the methodological development incorporating advanced techniques such as AI machine learning for capturing, processing and analysing multimodal data to examine and support learning regulation.
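The shared-arousal analysis described in this entry can be sketched in code. The following is a minimal, illustrative sketch only: the z-score threshold, window size and "minimum members" rule are invented assumptions, not the authors' actual SPAE detection method.

```python
# Hedged sketch: one way to flag "shared physiological arousal events" (SPAEs)
# from group EDA traces. Threshold, window size and the min_members rule are
# illustrative assumptions, not the study's exact procedure.

def arousal_mask(eda, threshold=1.5):
    """Mark samples where EDA exceeds its own mean by `threshold` std devs."""
    n = len(eda)
    mean = sum(eda) / n
    std = (sum((x - mean) ** 2 for x in eda) / n) ** 0.5
    return [x > mean + threshold * std for x in eda]

def shared_arousal_events(group_eda, window=5, min_members=2):
    """Return window start indices where >= min_members students show
    an arousal peak within the same window of samples."""
    masks = [arousal_mask(trace) for trace in group_eda]
    n = min(len(m) for m in masks)
    events = []
    for t in range(0, n - window + 1):
        aroused = sum(any(m[t:t + window]) for m in masks)
        if aroused >= min_members:
            events.append(t)
    return events
```

Windows flagged this way could then be aligned with the video data to see which SSRL interactions co-occur with shared arousal.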
3.
Michail Giannakos Mutlu Cukurova 《British journal of educational technology : journal of the Council for Educational Technology》2023,54(5):1246-1267
This study presents the outcomes of a semi-systematic literature review on the role of learning theory in multimodal learning analytics (MMLA) research. Based on previous systematic literature reviews in MMLA and an additional new search, 35 MMLA works were identified that use theory. The results show that MMLA studies do not always discuss their findings within an established theoretical framework. Most of the theory-driven MMLA studies are positioned in the cognitive and affective domains, and the three most frequently used theories are embodied cognition, cognitive load theory and control–value theory of achievement emotions. Often, the theories are only used to inform the study design, but there is a relationship between the most frequently used theories and the data modalities used to operationalize those theories. Although studies such as these are rare, the findings indicate that MMLA affordances can, indeed, lead to theoretical contributions to learning sciences. In this work, we discuss methods of accelerating theory-driven MMLA research and how this acceleration can extend or even create new theoretical knowledge.
Practitioner notes
What is already known about this topic:
- Multimodal learning analytics (MMLA) is an emerging field of research with inherent connections to advanced computational analyses of social phenomena.
- MMLA can help us monitor learning activity at the micro-level and model cognitive, affective and social factors associated with learning using data from both physical and digital spaces.
- MMLA provide new opportunities to support students' learning.
What this paper adds:
- Some MMLA works use theory, but, overall, the role of theory is currently limited.
- The three theories dominating MMLA research are embodied cognition, control–value theory of achievement emotions and cognitive load theory.
- Most of the theory-driven MMLA papers use theory ‘as is’ and do not consider the analytical and synthetic role of theory or aim to contribute to it.
Implications for practice and/or policy:
- If the ultimate goal of MMLA research, and of AI in education research more broadly, is to understand and support human learning, then such studies should be expected to align their findings (or not) with established relevant theories.
- MMLA research is mature enough to contribute to learning theory, and more research should aim to do so.
- MMLA researchers and practitioners, including technology designers, developers, educators and policy-makers, can use this review as an overview of the current state of theory-driven MMLA.
4.
5.
Judith Stanja Wolfgang Gritz Johannes Krugel Anett Hoppe Sarah Dannemann 《British journal of educational technology : journal of the Council for Educational Technology》2023,54(1):58-75
Formative assessment is considered helpful for supporting students' learning and for teaching design. Following Aufschnaiter and Alonzo's framework, teachers' formative assessment practices can be subdivided into three practices: eliciting evidence, interpreting evidence and responding. Since students' conceptions are judged to be important for meaningful learning across disciplines, teachers are required to assess their students' conceptions. This article focuses on how learning analytics can support the assessment of students' conceptions in class. The existing and potential contributions of learning analytics are discussed in relation to the named formative assessment framework in order to enhance teachers' options to consider individual students' conceptions. We refer to findings from biology and computer science education on existing assessment tools and identify limitations and potentials with respect to the assessment of students' conceptions.
Practitioner notes
What is already known about this topic:
- Students' conceptions are considered to be important for learning processes, but interpreting evidence for learning with respect to students' conceptions is challenging for teachers.
- Assessment tools have been developed in different educational domains for teaching practice.
- Techniques from artificial intelligence and machine learning have been applied for automated assessment of specific aspects of learning.
What this paper adds:
- Findings on existing assessment tools from two educational domains are summarised and limitations with respect to assessment of students' conceptions are identified.
- Relevant data that needs to be analysed for insights into students' conceptions is identified from an educational perspective.
- Potential contributions of learning analytics to support the challenging task to elicit students' conceptions are discussed.
Implications for practice and/or policy:
- Learning analytics can enhance the eliciting of students' conceptions.
- Based on the analysis of existing works, further exploration and developments of analysis techniques for unstructured text and multimodal data are desirable to support the eliciting of students' conceptions.
6.
Qinyi Liu Mohammad Khalil 《British journal of educational technology : journal of the Council for Educational Technology》2023,54(6):1715-1747
The field of learning analytics has advanced from infancy stages into a more practical domain, where tangible solutions are being implemented. Nevertheless, the field has encountered numerous privacy and data protection issues that have garnered significant and growing attention. In this systematic review, four databases were searched concerning privacy and data protection issues of learning analytics. A final corpus of 47 papers published in top educational technology journals was selected after running an eligibility check. An analysis of the final corpus was carried out to answer the following three research questions: (1) What are the privacy and data protection issues in learning analytics? (2) What are the similarities and differences between the views of stakeholders from different backgrounds on privacy and data protection issues in learning analytics? (3) How have previous approaches attempted to address privacy and data protection issues? The results of the systematic review show that there are eight distinct, intertwined privacy and data protection issues that cut across the learning analytics cycle. There are both cross-regional similarities and three sets of differences in stakeholder perceptions towards privacy and data protection in learning analytics. With regard to previous attempts to approach privacy and data protection issues in learning analytics, there is a notable dearth of applied evidence, which impedes the assessment of their effectiveness. The findings of our paper suggest that attention to privacy and data protection should not be relaxed at any point in the implementation of learning analytics, as these issues persist throughout the learning analytics development cycle. One key implication of this review is that solutions to privacy and data protection issues in learning analytics should be more evidence-based, thereby increasing the trustworthiness and usefulness of learning analytics.
Practitioner notes
What is already known about this topic:
- Research on privacy and data protection in learning analytics has become a recognised challenge that hinders the further expansion of learning analytics.
- Proposals to counter the privacy and data protection issues in learning analytics are blurry; there is a lack of a summary of previously proposed solutions.
What this paper adds:
- Establishment of what privacy and data protection issues exist at different phases of the learning analytics cycle.
- Identification of how different stakeholders view privacy, similarities and differences, and what factors influence their views.
- Evaluation and comparison of previously proposed solutions that attempt to address privacy and data protection in learning analytics.
Implications for practice and/or policy:
- Privacy and data protection issues need to be viewed in the context of the entire cycle of learning analytics.
- Stakeholder views on privacy and data protection in learning analytics have commonalities across contexts and differences that can arise within the same context. Before implementing learning analytics, targeted research should be conducted with stakeholders.
- Solutions that attempt to address privacy and data protection issues in learning analytics should be put into practice as far as possible to better test their usefulness.
7.
José Antonio Rodríguez-Martínez José Antonio González-Calero Javier del Olmo-Muñoz David Arnau Sergio Tirado-Olivares 《British journal of educational technology : journal of the Council for Educational Technology》2023,54(1):76-97
This study analyses the potential of a learning analytics (LA) based formative assessment to construct personalised teaching sequences in Mathematics for 5th-grade primary school students. A total of 127 students from Spanish public schools participated in the study. The quasi-experimental study was conducted over the course of six sessions, in which both control and experimental groups participated in a teaching sequence based on mathematical problems. In each session, both groups used audience response systems to record their responses to mathematical tasks about fractions. After each session, students from the control group were given generic homework on fractions—the same activities for all the participants—while students from the experimental group were given a personalised set of activities. The provision of personalised homework was based on the students' errors detected from the use of the LA-based formative assessment. After the intervention, the results indicate a higher student level of understanding of the concept of fractions in the experimental group compared to the control group. Related to motivational dimensions, results indicated that instruction using audience response systems has a positive effect compared to regular mathematics classes.
Practitioner notes
What is already known about this topic:
- Developing an understanding of fractions is one of the most challenging concepts in elementary mathematics and a solid predictor of future achievements in mathematics.
- Learning analytics (LA) has the potential to provide quality, functional data for assessing and supporting learners' difficulties.
- Audience response systems (ARS) are one of the most practical ways to collect data for LA in classroom environments.
- There is a scarcity of field research implementations on LA mediated by ARS in real contexts of elementary school classrooms.
What this paper adds:
- Empirical evidence about how LA-based formative assessments can enable personalised homework to support student understanding of fractions.
- Personalised homework based on an LA-based formative assessment improves the students' comprehension of fractions.
- Using ARS for the teaching of fractions has a positive effect in terms of student motivation.
Implications for practice and/or policy:
- Teachers should be given LA/ARS tools that allow them to quickly provide students with personalised mathematical instruction.
- Researchers should continue exploring these potentially beneficial educational implementations in other areas.
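The personalisation step this entry describes, assigning homework based on errors detected through the formative assessment, can be sketched as follows. The error taxonomy and activity names are invented for illustration; the abstract does not specify the study's actual categories.

```python
# Hedged sketch of LA-based homework personalisation: detected error
# categories on fraction tasks are mapped to remedial activities.
# ACTIVITY_BANK and the category names are hypothetical.

ACTIVITY_BANK = {
    "equivalence": ["simplify-pairs", "match-equivalent-fractions"],
    "ordering": ["number-line-placement", "compare-with-benchmarks"],
    "operations": ["common-denominator-drill", "add-mixed-numbers"],
}
GENERIC_SET = ["mixed-review-1", "mixed-review-2"]  # control-group homework

def personalised_homework(error_log, max_items=3):
    """Pick activities targeting the student's most frequent error types.

    error_log: list of error-category strings recorded during the session.
    Falls back to the generic set when no errors were recorded.
    """
    if not error_log:
        return GENERIC_SET[:max_items]
    counts = {}
    for err in error_log:
        counts[err] = counts.get(err, 0) + 1
    ranked = sorted(counts, key=lambda e: (-counts[e], e))
    homework = []
    for category in ranked:
        for activity in ACTIVITY_BANK.get(category, []):
            if len(homework) < max_items:
                homework.append(activity)
    return homework
```

In the study's design, the control group would receive the generic set regardless of their recorded errors.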
8.
Jonathan C. Hilpert Jeffrey A. Greene Matthew Bernacki 《British journal of educational technology : journal of the Council for Educational Technology》2023,54(5):1204-1221
Capturing evidence for dynamic changes in self-regulated learning (SRL) behaviours resulting from interventions is challenging for researchers. In the current study, we identified students who were likely to do poorly in a biology course and those who were likely to do well. Then, we randomly assigned a portion of the students predicted to perform poorly to a science of learning to learn intervention where they were taught SRL study strategies. Learning outcome and log data (257 K events) were collected from n = 226 students. We used a complex systems framework to model the differences in SRL including the amount, interrelatedness, density and regularity of engagement captured in digital trace data (ie, logs). Differences were compared between students who were predicted to (1) perform poorly (control, n = 48), (2) perform poorly and received intervention (treatment, n = 95) and (3) perform well (not flagged, n = 83). Results indicated that the regularity of students' engagement was predictive of course grade, and that the intervention group exhibited increased regularity in engagement over the control group immediately after the intervention and maintained that increase over the course of the semester. We discuss the implications of these findings in relation to the future of artificial intelligence and potential uses for monitoring student learning in online environments.
Practitioner notes
What is already known about this topic:
- Self-regulated learning (SRL) knowledge and skills are strong predictors of postsecondary STEM student success.
- SRL is a dynamic, temporal process that leads to purposeful student engagement.
- Methods and metrics for measuring dynamic SRL behaviours in learning contexts are needed.
What this paper adds:
- A Markov process for measuring dynamic SRL processes using log data.
- Evidence that dynamic, interaction-dominant aspects of SRL predict student achievement.
- Evidence that SRL processes can be meaningfully impacted through educational intervention.
Implications for practice and/or policy:
- Complexity approaches inform theory and measurement of dynamic SRL processes.
- Static representations of dynamic SRL processes are promising learning analytics metrics.
- Engineered features of LMS usage are valuable contributions to AI models.
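The Markov-process idea mentioned in the notes above can be sketched: estimate transition probabilities between logged event types, then summarise them with a single number. Using mean transition entropy as the "regularity" metric (lower entropy = more regular behaviour) is an illustrative assumption on my part; the paper's complex-systems metrics may be defined differently.

```python
# Hedged sketch: first-order Markov model over logged learning events,
# with mean transition entropy as one plausible regularity metric.
import math
from collections import Counter, defaultdict

def transition_probs(events):
    """Estimate P(next_event | current_event) from one student's log."""
    counts = defaultdict(Counter)
    for a, b in zip(events, events[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def transition_entropy(events):
    """Mean Shannon entropy (bits) of the transition distributions.

    A student who always follows event X with the same event Y scores 0
    (perfectly regular); erratic switching scores higher.
    """
    probs = transition_probs(events)
    if not probs:
        return 0.0
    ents = [-sum(p * math.log2(p) for p in dist.values())
            for dist in probs.values()]
    return sum(ents) / len(ents)
```

Computed per student and per time slice, such a metric could be related to course grade or to before/after-intervention change, in the spirit of the analysis described above.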
9.
Karen D. Wang Jade Maï Cock Tanja Käser Engin Bumbacher 《British journal of educational technology : journal of the Council for Educational Technology》2023,54(1):192-221
Technology-based, open-ended learning environments (OELEs) can capture detailed information of students' interactions as they work through a task or solve a problem embedded in the environment. This information, in the form of log data, has the potential to provide important insights about the practices adopted by students for scientific inquiry and problem solving. How to parse and analyse the log data to reveal evidence of multifaceted constructs like inquiry and problem solving holds the key to making interactive learning environments useful for assessing students' higher-order competencies. In this paper, we present a systematic review of studies that used log data generated in OELEs to describe, model and assess scientific inquiry and problem solving. We identify and analyse 70 conference proceedings and journal papers published between 2012 and 2021. Our results reveal large variations in OELE and task characteristics, approaches used to extract features from log data and interpretation models used to link features to target constructs. While the educational data mining and learning analytics communities have made progress in leveraging log data to model inquiry and problem solving, multiple barriers still exist to hamper the production of representative, reproducible and generalizable results. Based on the trends identified, we lay out a set of recommendations pertaining to key aspects of the workflow that we believe will help the field develop more systematic approaches to designing and using OELEs for studying how students engage in inquiry and problem-solving practices.
Practitioner notes
What is already known about this topic:
- Research has shown that technology-based, open-ended learning environments (OELEs) that collect users' interaction data are potentially useful tools for engaging students in practice-based STEM learning.
- More work is needed to identify generalizable principles of how to design OELE tasks to support student learning and how to analyse the log data to assess student performance.
What this paper adds:
- We identified multiple barriers to the production of sufficiently generalizable and robust results to inform practice, with respect to: (1) the design characteristics of the OELE-based tasks, (2) the target competencies measured, (3) the approaches and techniques used to extract features from log files and (4) the models used to link features to the competencies.
- Based on this analysis, we can provide a series of specific recommendations to inform future research and facilitate the generalizability and interpretability of results:
- Making the data available in open-access repositories, similar to the PISA tasks, for easy access and sharing.
- Defining target practices more precisely to better align task design with target practices and to facilitate between-study comparisons.
- More systematic evaluation of OELE and task designs to improve the psychometric properties of OELE-based measurement tasks and analysis processes.
- Focusing more on internal and external validation of both feature generation processes and statistical models, for example with data from different samples or by systematically varying the analysis methods.
Implications for practice and/or policy:
- Using the framework of evidence-centered assessment design, we have identified relevant criteria for organizing and evaluating the diverse body of empirical studies on the topic and that policy makers and practitioners can use for their own further examinations.
- This paper identifies promising research and development areas on the measurement and assessment of higher-order constructs with process data from OELE-based tasks that government agencies and foundations can support.
- Researchers, technologists and assessment designers might find useful the insights and recommendations for how OELEs can enhance science assessment through thoughtful integration of learning theories, task design and data mining techniques.
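The "features extracted from log data" that this review surveys can be illustrated with a toy example: turning a timestamped event stream into per-student summary features. The event schema and feature names below are invented for illustration, not drawn from any reviewed study.

```python
# Hedged sketch of basic feature extraction from an OELE interaction log.
# A log entry is assumed to be a (timestamp_seconds, action) tuple.

def extract_features(log):
    """Summarise one student's time-sorted event stream as features."""
    if not log:
        return {"n_actions": 0, "duration": 0.0,
                "mean_pause": 0.0, "n_distinct_actions": 0}
    times = [t for t, _ in log]
    actions = [a for _, a in log]
    pauses = [b - a for a, b in zip(times, times[1:])]
    return {
        "n_actions": len(log),                      # amount of activity
        "duration": times[-1] - times[0],           # total time on task
        "mean_pause": sum(pauses) / len(pauses) if pauses else 0.0,
        "n_distinct_actions": len(set(actions)),    # breadth of actions
    }
```

Feature vectors like this are what interpretation models then link to inquiry or problem-solving constructs, which is exactly where the review finds the largest variation between studies.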
10.
Andrew Emerson Wookhee Min Roger Azevedo James Lester 《British journal of educational technology : journal of the Council for Educational Technology》2023,54(1):40-57
Game-based learning environments hold significant promise for facilitating learning experiences that are both effective and engaging. To support individualised learning and proactive scaffolding when students are struggling, game-based learning environments should be able to accurately predict student knowledge at early points in students' gameplay. Student knowledge is traditionally assessed with conventional methods, such as multiple-choice content knowledge assessments, before and after each student interacts with the learning environment. While previous student modelling approaches have leveraged machine learning to automatically infer students' knowledge, there is limited work that incorporates the fine-grained content from each question in these types of tests into student models that predict student performance at early junctures in gameplay episodes. This work investigates a predictive student modelling approach that leverages the natural language text of the post-gameplay content knowledge questions and the text of the possible answer choices for early prediction of fine-grained individual student performance in game-based learning environments. With data from a study involving 66 undergraduate students from a large public university interacting with a game-based learning environment for microbiology, Crystal Island, we investigate the accuracy and early prediction capacity of student models that use a combination of gameplay features extracted from student log files as well as distributed representations of post-test content assessment questions. The results demonstrate that by incorporating knowledge about assessment questions, early prediction models are able to outperform competing baselines that only use student game trace data with no question-related information. Furthermore, this approach achieves high generalisation, including predicting the performance of students on unseen questions.
Practitioner notes
What is already known about this topic:
- A distinctive characteristic of game-based learning environments is their capacity to enable fine-grained student assessment.
- Adaptive game-based learning environments offer individualisation based on specific student needs and should be able to assess student competencies using early prediction models of those competencies.
- Word embedding approaches from the field of natural language processing show great promise in the ability to encode semantic information that can be leveraged by predictive student models.
What this paper adds:
- Investigates word embeddings of assessment question content for reliable early prediction of student performance.
- Demonstrates the efficacy of distributed word embeddings of assessment questions when used by early prediction models compared to models that use either no assessment information or discrete representations of the questions.
- Demonstrates the efficacy and generalisability of word embeddings of assessment questions for predicting the performance of both new students on existing questions and existing students on new questions.
Implications for practice and/or policy:
- Word embeddings of assessment questions can enhance early prediction models of student knowledge, which can drive adaptive feedback to students who interact with game-based learning environments.
- Practitioners should determine if new assessment questions will be developed for their game-based learning environment, and if so, consider using our student modelling framework that incorporates early prediction models pretrained with existing student responses to previous assessment questions and is generalisable to the new assessment questions by leveraging distributed word embedding techniques.
- Researchers should consider the most appropriate way to encode the assessment questions in ways that early prediction models are able to infer relationships between the questions and gameplay behaviour to make accurate predictions of student competencies.
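The feature construction this entry describes, combining gameplay trace features with a distributed representation of the question text, can be sketched as below. The tiny 3-dimensional embedding table is a stand-in for real pretrained word vectors, and every name here is hypothetical.

```python
# Hedged sketch: encode a question as the mean of its word embeddings and
# concatenate it with a student's gameplay features before classification.
# EMBEDDINGS is a toy stand-in for a pretrained embedding table.

EMBEDDINGS = {
    "bacteria": [0.9, 0.1, 0.0],
    "cause": [0.2, 0.7, 0.1],
    "disease": [0.8, 0.2, 0.1],
}

def embed_question(text, dim=3):
    """Mean of known word vectors; zero vector if no word is known."""
    vecs = [EMBEDDINGS[w] for w in text.lower().split() if w in EMBEDDINGS]
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def student_question_features(gameplay_features, question_text):
    """Concatenate gameplay trace features with the question embedding."""
    return list(gameplay_features) + embed_question(question_text)
```

Because the question is represented by its text rather than by a discrete question ID, a model trained on such vectors can, in principle, score unseen questions, which is the generalisation property the abstract reports.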
11.
Yuqin Yang Zhizi Zheng Gaoxia Zhu Sdenka Zobeida Salas-Pilco 《British journal of educational technology : journal of the Council for Educational Technology》2023,54(4):1025-1045
Preparing data-literate citizens and supporting future generations to effectively work with data is challenging. Engaging students in Knowledge Building (KB) may be a promising way to respond to this challenge because it requires students to reflect on and direct their inquiry with the support of data. Informed by previous studies, this research explored how an analytics-supported reflective assessment (AsRA)-enhanced KB design influenced 6th graders' KB and data science practices in a science education setting. One intact class with 56 students participated in this study. The analysis of students' Knowledge Forum discourse showed the positive influences of the AsRA-enhanced KB design on students' development of KB and data science practices. Further analysis of different-performing groups revealed that the AsRA-enhanced KB design was accessible to all performing groups. These findings have important implications for teachers and researchers who aim to develop students' KB and data science practices, and general high-level collaborative inquiry skills.
Practitioner notes
What is already known about this topic:
- Data use becomes increasingly important in the K-12 educational context.
- Little is known about how to scaffold students to develop data science practices.
- Knowledge Building (KB) and learning analytics-supported reflective assessment (AsRA) show promise in developing these practices.
What this paper adds:
- AsRA-enhanced KB can help students improve KB and data science practices over time.
- AsRA-enhanced KB design benefits students of different-performing groups.
- AsRA-enhanced KB is accessible to elementary school students in science education.
Implications for practice and/or policy:
- Developing a collaborative and reflective culture helps students engage in collaborative inquiry.
- Pedagogical approaches and analytic tools can be developed to support students' data-driven decision-making in inquiry learning.
12.
Hatim Lahza Tammy G. Smith Hassan Khosravi 《British journal of educational technology : journal of the Council for Educational Technology》2023,54(1):335-354
Traditional item analyses such as classical test theory (CTT) use exam-taker responses to assessment items to approximate their difficulty and discrimination. The increased adoption by educational institutions of electronic assessment platforms (EAPs) provides new avenues for assessment analytics by capturing detailed logs of an exam-taker's journey through their exam. This paper explores how logs created by EAPs can be employed alongside exam-taker responses and CTT to gain deeper insights into exam items. In particular, we propose an approach for deriving features from exam logs for approximating item difficulty and discrimination based on exam-taker behaviour during an exam. Items for which difficulty and discrimination differ significantly between CTT analysis and our approach are flagged through outlier detection for independent academic review. We demonstrate our approach by analysing de-identified exam logs and responses to assessment items of 463 medical students enrolled in a first-year biomedical sciences course. The analysis shows that the number of times an exam-taker visits an item before selecting a final response is a strong indicator of an item's difficulty and discrimination. Scrutiny by the course instructor of the seven items identified as outliers suggests our log-based analysis can provide insights beyond what is captured by traditional item analyses.
Practitioner notes
What is already known about this topic:
- Traditional item analysis is based on exam-taker responses to the items using mathematical and statistical models from classical test theory (CTT). The difficulty and discrimination indices thus calculated can be used to determine the effectiveness of each item and consequently the reliability of the entire exam.
What this paper adds:
- Data extracted from exam logs can be used to identify exam-taker behaviours which complement classical test theory in approximating the difficulty and discrimination of an item and identifying items that may require instructor review.
Implications for practice and/or policy:
- Identifying the behaviours of successful exam-takers may allow us to develop effective exam-taking strategies and personal recommendations for students.
- Analysing exam logs may also provide an additional tool for identifying struggling students and items in need of revision.
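The classical test theory indices this entry builds on are easy to state concretely: item difficulty is the proportion of correct responses, and item discrimination can be computed as the point-biserial correlation between an item score and the rest-of-exam total. The sketch below shows these two standard formulas; the log-based features the paper adds (such as revisit counts) would be computed separately and compared against them.

```python
# Sketch of the two standard CTT item indices the paper starts from.

def difficulty(item_scores):
    """Proportion of exam-takers answering the item correctly (0..1)."""
    return sum(item_scores) / len(item_scores)

def _pearson(xs, ys):
    """Plain Pearson correlation; returns 0.0 on zero variance."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def discrimination(responses, item):
    """Point-biserial: item score vs. total score on the remaining items.

    responses: matrix of 0/1 scores, one row per exam-taker.
    """
    item_scores = [row[item] for row in responses]
    rest_totals = [sum(row) - row[item] for row in responses]
    return _pearson(item_scores, rest_totals)
```

Items where these response-based indices disagree sharply with the log-based estimates are the ones the paper flags, via outlier detection, for independent academic review.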
13.
Chantal Mutimukwe Olga Viberg Lena-Maria Oberg Teresa Cerratto-Pargman 《British journal of educational technology : journal of the Council for Educational Technology》2022,53(4):932-951
Understanding students' privacy concerns is an essential first step toward effective privacy-enhancing practices in learning analytics (LA). In this study, we develop and validate a model to explore students' privacy concerns (SPICE) regarding LA practice in higher education. The SPICE model considers privacy concerns as a central construct between two antecedents—perceived privacy risk and perceived privacy control—and two outcomes—trusting beliefs and non-self-disclosure behaviours. To validate the model, data were collected through an online survey of 132 students from three Swedish universities. Partial least squares results show that the model accounts for a high proportion of the variance in privacy concerns, trusting beliefs and non-self-disclosure behaviours. They also show that students' perceived privacy risk is a firm predictor of their privacy concerns. Both the students' privacy concerns and their perceived privacy risk were found to affect their non-self-disclosure behaviours. Finally, the results show that students' perceptions of privacy control and privacy risk determine their trusting beliefs. The study results contribute to understanding the relationships between students' privacy concerns, trust and non-self-disclosure behaviours in the LA context. A set of relevant implications for the design of LA systems and the development of privacy-enhancing practices in higher education is offered.
Practitioner notes
What is already known about this topic
- Addressing students' privacy is critical for large-scale learning analytics (LA) implementation.
- Understanding students' privacy concerns is an essential first step to developing effective privacy-enhancing practices in LA.
- Several conceptual, not empirically validated frameworks focus on ethics and privacy in LA.
What this paper adds
- The paper offers a validated model to explore the nature of students' privacy concerns in LA in higher education.
- It provides an enhanced theoretical understanding of the relationship between privacy concerns, trust and self-disclosure behaviour in the LA context of higher education.
- It offers a set of relevant implications for LA researchers and practitioners.
Implications for practice and/or policy
- Students' perceptions of privacy risks and privacy control are antecedents of students' privacy concerns, trust in the higher education institution and the willingness to share personal information.
- Enhancing students' perceptions of privacy control and reducing perceptions of privacy risks are essential for LA adoption and success.
- Contextual factors that may influence students' privacy concerns should be considered.
14.
Judit Serra Roger Gilabert 《British journal of educational technology : journal of the Council for Educational Technology》2021,52(5):1898-1916
Practitioner notes
What is already known about this topic
- Serious games have the potential to aid learning, but empirical research is needed.
- Findings about the efficiency of serious games are mixed.
- Current and reviewed versions of the Simple View of Reading constitute a suitable framework to measure reading acquisition.
What this paper adds
- It contributes to the growing corpus of research on digital serious games.
- It provides empirical evidence on the use of an adaptive system in formal education.
- Comparing a teacher-led sequence to an algorithmic adaptive sequence on the same digital serious game has never been done before.
- The paper shows the need to obtain both system-internal and system-external data in order to capture the impact of gameplay on the development of L2 reading skills.
Implications for practice and/or policy
- It sheds some light on how certain game designs can support practice with different degrees of teacher intervention.
- Teachers may find it valuable to use an adaptive sequence that they can monitor and intervene in if needed.
15.
Kirsty Kitto Ben Hicks Simon Buckingham Shum 《British journal of educational technology : journal of the Council for Educational Technology》2023,54(5):1095-1124
An extraordinary amount of data is becoming available in educational settings, collected from a wide range of Educational Technology tools and services. This creates opportunities for using methods from Artificial Intelligence and Learning Analytics (LA) to improve learning and the environments in which it occurs. And yet, analytics results produced using these methods often fail to link to theoretical concepts from the learning sciences, making them difficult for educators to trust, interpret and act upon. At the same time, many of our educational theories are difficult to formalise into testable models that link to educational data. New methodologies are required to formalise the bridge between big data and educational theory. This paper demonstrates how causal modelling can help to close this gap. It introduces the apparatus of causal modelling, and shows how it can be applied to well-known problems in LA to yield new insights. We conclude with a consideration of what causal modelling adds to the theory-versus-data debate in education, and extend an invitation to other investigators to join this exciting programme of research.
Practitioner notes
What is already known about this topic
- ‘Correlation does not equal causation’ is a familiar claim in many fields of research, but increasingly we see the need for a causal understanding of our educational systems.
- Big data bring many opportunities for analysis in education, but also a risk that results will fail to replicate in new contexts.
- Causal inference is a well-developed approach for extracting causal relationships from data, but is yet to become widely used in the learning sciences.
What this paper adds
- An overview of causal modelling to support educational data scientists interested in adopting this promising approach.
- A demonstration of how constructing causal models forces us to more explicitly specify the claims of educational theories.
- An understanding of how we can link educational datasets to theoretical constructs represented as causal models, thereby formulating empirical tests of the educational theories that they represent.
Implications for practice and/or policy
- Causal models can help us to explicitly specify educational theories in a testable format.
- It is sometimes possible to make causal inferences from educational data if we understand our system well enough to construct a sufficiently explicit theoretical model.
- Learning Analysts should work to specify more causal models and test their predictions, as this would advance our theoretical understanding of many educational systems.
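The distinction the entry draws between correlation and causation can be made concrete with a small simulation. This is an invented sketch, not an example from the paper: a confounder Z (say, prior ability) drives both X and Y, so the naive X–Y correlation is large even though X has no causal effect on Y; adjusting for Z by stratifying on the backdoor variable makes the association vanish.

```python
# Confounding demo: Z causes both X and Y; X does not cause Y.
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

rng = random.Random(0)
z = [rng.randint(0, 1) for _ in range(4000)]   # confounder, eg prior ability
x = [2 * zi + rng.gauss(0, 1) for zi in z]     # 'treatment', caused by Z only
y = [3 * zi + rng.gauss(0, 1) for zi in z]     # outcome, caused by Z only

naive = pearson(x, y)                          # spurious association (~0.6)
within = [pearson([xi for xi, zi in zip(x, z) if zi == g],
                  [yi for yi, zi in zip(y, z) if zi == g]) for g in (0, 1)]
adjusted = sum(within) / 2                     # near zero after adjusting for Z
```

An explicit causal model tells us that Z sits on a backdoor path between X and Y, which is what licenses the stratified estimate over the naive one.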
16.
Kate Talsma Andrew Chapman Allison Matthews 《British journal of educational technology : journal of the Council for Educational Technology》2023,54(6):1917-1938
Predictors of academic success at university are of great interest to educators, researchers and policymakers. With more students studying online, it is important to understand whether traditional predictors of academic outcomes in face-to-face settings are relevant to online learning. This study modelled self-regulatory and demographic predictors of subject grades in 84 online and 80 face-to-face undergraduate students. Predictors were effort regulation, grade goal, academic self-efficacy, performance self-efficacy, age, sex, socio-economic status (SES) and first-in-family status. A multi-group path analysis indicated that the models were significantly different across learning modalities. For face-to-face students, none of the model variables significantly predicted grades. For online students, only performance self-efficacy significantly predicted grades (small effect). Findings suggest that learner characteristics may not function in the same way across learning modes. Further factor analytic and hierarchical research is needed to determine whether self-regulatory predictors of academic success continue to be relevant to modern student cohorts.
Practitioner notes
What is already known about this topic
- Self-regulatory and demographic variables are important predictors of university outcomes like grades.
- It is unclear whether the relationships between predictor variables and outcomes are the same across learning modalities, as research findings are mixed.
What this paper adds
- Models predicting university students' grades by demographic and self-regulatory predictors differed significantly between face-to-face and online learning modalities.
- Performance self-efficacy significantly predicted grades for online students.
- No self-regulatory variables significantly predicted grades for face-to-face students, and no demographic variables significantly predicted grades in either cohort.
- Overall, traditional predictors of grades showed no or small unique effects in both cohorts.
Implications for practice and/or policy
- The learner characteristics that predict success may not be the same across learning modalities.
- Approaches to enhancing success in face-to-face settings are not automatically applicable to online settings.
- Self-regulatory variables may not predict university outcomes as strongly as previously believed, and more research is needed.
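A minimal sketch of the multi-group idea behind this study: fit the same simple regression separately in each cohort and compare the coefficients. This is not the authors' path model, and the data below (performance self-efficacy predicting grades in each modality) are invented purely to illustrate how a predictor can matter in one group and not the other.

```python
# Ordinary least-squares slope for a single predictor, fit per group.
def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical data: self-efficacy rating (x) vs subject grade (y).
online = ([1, 2, 3, 4, 5], [52, 58, 61, 66, 73])        # efficacy predicts grades
face_to_face = ([1, 2, 3, 4, 5], [60, 63, 59, 62, 61])  # essentially flat

slope_online = ols_slope(*online)
slope_f2f = ols_slope(*face_to_face)
```

A multi-group path analysis generalises this comparison to many predictors at once and tests whether the full set of coefficients differs significantly across groups.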
17.
Oscar Blessed Deho Chen Zhan Jiuyong Li Jixue Liu Lin Liu Thuc Duy Le 《British journal of educational technology : journal of the Council for Educational Technology》2022,53(4):822-843
Practitioner notes
What is already known about this topic
- LA is increasingly being used to leverage actionable insights about students and drive student success.
- LA models have been found to make discriminatory decisions against certain student demographic subgroups, thereby raising ethical concerns.
- Research on fairness in education is nascent. Only a few works have examined fairness in LA and consequently followed up with ensuring fair LA models.
What this paper adds
- A juxtaposition of unfairness mitigation algorithms across the entire LA pipeline showing how they compare and how each of them contributes to fair LA.
- Ensuring ethical LA does not always lead to a dip in performance. Sometimes, it actually improves performance as well.
- Fairness in LA has so far focused only on some form of outcome equality; however, equality of outcome may be possible only when the playing field is levelled.
Implications for practice and/or policy
- Based on the desired notion of fairness and which segment of the LA pipeline is accessible, a fairness-minded decision maker may be able to decide which algorithm to use in order to achieve their ethical goals.
- LA practitioners can carefully aim for more ethical LA models without trading significant utility by selecting algorithms that find the right balance between the two objectives.
- Fairness enhancing technologies should be cautiously used as guides—not final decision makers. Human domain experts must be kept in the loop to handle the dynamics of transcending fair LA beyond equality to equitable LA.
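One common group-fairness check in the spirit of the audits this entry describes is demographic parity: compare a model's positive-prediction rate (eg, 'at risk of failing') across demographic subgroups. This is a hedged sketch with invented function names and toy data, not the paper's own metric suite.

```python
# Demographic-parity check over model predictions, grouped by demographic label.
from collections import defaultdict

def positive_rates(preds, groups):
    counts, positives = defaultdict(int), defaultdict(int)
    for p, g in zip(preds, groups):
        counts[g] += 1
        positives[g] += int(p)
    return {g: positives[g] / counts[g] for g in counts}

def parity_gap(preds, groups):
    # Largest difference in positive-prediction rate between any two groups;
    # 0 would mean perfect demographic parity.
    rates = positive_rates(preds, groups).values()
    return max(rates) - min(rates)

gap = parity_gap([1, 1, 0, 1, 1, 0, 0, 0], ['a'] * 4 + ['b'] * 4)
```

As the final bullet above stresses, a metric like this should guide, not replace, human review: a zero parity gap is equality of outcome, not necessarily equity.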
18.
Laura Benton Nelly Joye Emma Sumner Andrea Gauthier Seray Ibrahim Asimina Vasalou 《British journal of educational technology : journal of the Council for Educational Technology》2023,54(5):1314-1331
Digital literacy games can be beneficial for children with reading difficulties as a supplement to classroom instruction, and an important feature of these games is the instructional support they provide, such as feedback. To be effective, feedback needs to build on prior instruction and match a learner's level of prior knowledge. However, there is limited research around the relationship between prior knowledge, instruction and feedback in the context of learning games. This paper presents an empirical study exploring the influence of prior knowledge on response to feedback, in two conditions: with or without instruction. Thirty-six primary children (age 8–11) with reading difficulties participated: each child was assessed for their prior knowledge of two suffix types—noun and adjective suffixes. They subsequently received additional instruction for one suffix type and then played two rounds of a literacy game—one round for each suffix type. Our analysis shows that prior knowledge predicted initial success rates and performance after a verbal hint differently, depending on whether instruction was provided. These results are discussed with regard to learning game feedback design and the impact on different types of knowledge involved in gameplay, as well as other game design elements that might support knowledge building during gameplay.
Practitioner notes
What is already known about this topic
- Instructional supports, such as elaborative feedback, are a key feature of learning games.
- To be effective, feedback needs to build on prior instruction and match a learner's level of prior knowledge.
- Prior knowledge is an important moderator to consider in the context of elaborative feedback.
What this paper adds
- Providing additional instruction (eg, pre-training) may act as a knowledge enhancer building on children's existing disciplinary expertise, whereas the inclusion of elaborative feedback (eg, a hint) could be seen as a knowledge equaliser enabling children regardless of their prior knowledge to use the pre-training within their gameplay.
- Highlights the importance of children's preferred learning strategies within the design of pre-training and feedback to ensure children are able to use the instructional support provided within the game.
- Possible implications for pre-training and feedback design within literacy games, as well as highlighting areas for further research.
Implications for practice and/or policy
- Pre-training for literacy games should highlight key features of the learning content and explicitly make connections with the target learning objective as well as elaborative feedback.
- Pre-training should be combined with different types of in-game feedback for different types of learners (eg, level of prior knowledge) or depending on the type of knowledge that designers want to build (eg, metalinguistic vs. epilinguistic).
- Modality, content and timing of the feedback should be considered carefully to match the specific needs of the intended target audience and the interaction between them given the primary goal of the game.
19.
Bethany Huntington James Goulding Nicola J. Pitchford 《British journal of educational technology : journal of the Council for Educational Technology》2023,54(5):1273-1291
Interactive apps are commonly used to support the acquisition of foundational skills. Yet little is known about how pedagogical features of such apps affect learning outcomes, attainment and motivation—particularly when deployed in lower-income contexts, where educational gains are most needed. In this study, we analyse which app features are most effective in supporting the acquisition of foundational literacy and numeracy skills. We compare five apps developed for the Global Learning XPRIZE and deployed to 2041 out-of-school children in 172 remote Tanzanian villages. A total of 41 non-expert participants each provided 165 comparative judgements of the five apps from the competition, across 15 pedagogical features. Analysis and modelling of these 6765 comparisons indicate that the apps created by the joint winners of the XPRIZE, who produced the greatest learning outcomes over the 15-month field trial, shared six pedagogical features—autonomous learning, motor skills, task structure, engagement, language demand and personalisation. Results demonstrate that this combination of features is effective at supporting learning of foundational skills and has a positive impact on educational outcomes. To maximise learning potential in environments with both limited resources and deployment opportunities, developers should focus attention on this combination of features, especially for out-of-school children in low- and middle-income countries.
Practitioner notes
What is already known about this topic
- Interactive apps are becoming common to support foundational learning for children both in and out of school settings.
- The Global Learning XPRIZE competition demonstrates that learning apps can facilitate learning improvements in out-of-school children living in sub-Saharan Africa.
- To understand which app features are most important in supporting learning in these contexts, we need to establish which pedagogical features were shared by the winning apps.
What this paper adds
- Effective learning of foundational skills can be achieved with a range of pedagogical features.
- To maximise learning, apps should focus on combining elements of autonomous learning, motor skills, task structure, engagement, language demand and personalisation.
- Free Play is not a key pedagogical feature to facilitate learning within this context.
Implications for practice and/or policy
- When developing learning apps with primary-aged, out-of-school children in low-income contexts, app developers should try to incorporate the six key features associated with improving learning outcomes.
- Governments, school leaders and parents should use these findings to inform their decisions when choosing an appropriate learning app for children.
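The 6765 comparative judgements in the study above are the kind of data typically turned into a ranking with a Bradley–Terry model; assuming that style of model (the abstract does not name the authors' exact method), a minimal sketch with invented win counts looks like this.

```python
# Bradley–Terry strengths fitted by Zermelo's iterative algorithm.
# wins[i][j] = how often item i was preferred to item j (toy counts).
def bradley_terry(wins, iters=200):
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])  # total comparisons won by item i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom if denom else p[i])
        total = sum(new_p)
        p = [v / total for v in new_p]  # renormalise each round
    return p

wins = [[0, 8, 9],   # hypothetical app A
        [2, 0, 7],   # hypothetical app B
        [1, 3, 0]]   # hypothetical app C
strengths = bradley_terry(wins)
```

The fitted strengths order the apps by how often judges preferred them, which is how pairwise comparisons across many judges can be reduced to a single scale per pedagogical feature.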
20.
Colin Conrad Qi Deng Isabelle Caron Oksana Shkurska Paulette Skerrett Binod Sundararajan 《British journal of educational technology : journal of the Council for Educational Technology》2022,53(3):534-557
Practitioner notes
What is already known about this topic
- University transitions to online learning during the Covid-19 pandemic were undertaken by faculty and students who had little online learning experience.
- The transition to online learning was often described as having a negative influence on students' learning experience and mental health.
- Varieties of cognitive load are known predictors of effective online learning experiences and satisfaction.
What this paper adds
- Information overload and perceptions of technical abilities are demonstrated to predict students' difficulty and satisfaction with online learning.
- Students express negative attitudes towards factors that influence information overload, technical factors, and asynchronous course formats.
- Communication quantity was not found to be a significant factor in predicting either perceived difficulty or negative attitudes.
Implications for practice and/or policy
- We identify ways that educators in higher education can improve their online offerings and implementations during future disruptions.
- We offer insights into student experience concerning online learning environments during an abrupt transition.
- By identifying design factors that contribute to effective online delivery, we show how educators in higher education can improve students' learning experiences during difficult periods and abrupt transitions to online learning.