Similar documents (20 results)
1.
Game-based assessment (GBA), a specific application of games for learning, has been recognized as an alternative form of assessment. While a substantive body of literature supports the educational benefits of GBA, limited work has investigated the validity and generalizability of such systems. In this paper, we describe applications of learning analytics methods to provide evidence for the psychometric qualities of a digital GBA called Shadowspect, in particular the extent to which Shadowspect is a robust assessment tool for middle school students' spatial reasoning skills. Our findings indicate that Shadowspect is a valid assessment of spatial reasoning skills and that it has comparable precision for male and female students. In addition, students' enjoyment of the game is positively related to their overall competency as measured by the game, regardless of their existing level of spatial reasoning skill.
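As an illustration of how validity and group-comparable precision evidence might be computed from game data, the sketch below correlates game-based competency estimates with an external spatial reasoning test, overall and per gender group. The data frame and column names are invented for illustration; this is not the paper's actual psychometric model.

```python
import pandas as pd
from scipy.stats import pearsonr

# Toy data standing in for a Shadowspect export: game-based competency estimates,
# scores on an external spatial reasoning test, and gender (all values invented).
df = pd.DataFrame({
    "game_score": [0.62, 0.81, 0.44, 0.73, 0.55, 0.90, 0.35, 0.68],
    "ext_test":   [21,   27,   16,   24,   19,   29,   14,   23],
    "gender":     ["f",  "m",  "f",  "m",  "f",  "m",  "m",  "f"],
})

# Concurrent-validity evidence: association between the game-based estimate
# and the external measure.
r_all, p_all = pearsonr(df["game_score"], df["ext_test"])
print(f"overall: r = {r_all:.2f} (p = {p_all:.3f})")

# Comparable precision across groups: the same coefficient per gender subgroup.
for gender, grp in df.groupby("gender"):
    r, _ = pearsonr(grp["game_score"], grp["ext_test"])
    print(f"{gender}: r = {r:.2f} (n = {len(grp)})")
```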

Practitioner notes

What is already known about this topic:
  • Digital games can be a powerful context to support and assess student learning.
  • Games as assessments need to meet certain psychometric qualities such as validity and generalizability.
  • Learning analytics provide useful ways to establish assessment models for educational games, as well as to investigate their psychometric qualities.
What this paper adds:
  • How a digital game can be coupled with learning analytics practices to assess spatial reasoning skills.
  • How to evaluate psychometric qualities of game-based assessment using learning analytics techniques.
  • Investigation of validity and generalizability of game-based assessment for spatial reasoning skills and the interplay of the game-based assessment with enjoyment.
Implications for practice and/or policy:
  • Game-based assessments that incorporate learning analytics can be used as an alternative to pencil-and-paper tests to measure cognitive skills such as spatial reasoning.
  • More training and assessment of spatial reasoning embedded in games can motivate students who might not be on the STEM tracks, thus broadening participation in STEM.
  • Game-based learning and assessment researchers should consider factors that affect how certain populations of students enjoy educational games, so that these games do not further marginalize specific student populations.

2.
Preparing data-literate citizens and supporting future generations to effectively work with data is challenging. Engaging students in Knowledge Building (KB) may be a promising way to respond to this challenge because it requires students to reflect on and direct their inquiry with the support of data. Informed by previous studies, this research explored how an analytics-supported reflective assessment (AsRA)-enhanced KB design influenced 6th graders' KB and data science practices in a science education setting. One intact class with 56 students participated in this study. The analysis of students' Knowledge Forum discourse showed the positive influences of the AsRA-enhanced KB design on students' development of KB and data science practices. Further analysis of different-performing groups revealed that the AsRA-enhanced KB design was accessible to all performing groups. These findings have important implications for teachers and researchers who aim to develop students' KB and data science practices, and general high-level collaborative inquiry skills.

Practitioner notes

What is already known about this topic
  • Data use becomes increasingly important in the K-12 educational context.
  • Little is known about how to scaffold students to develop data science practices.
  • Knowledge Building (KB) and learning analytics-supported reflective assessment (AsRA) show promise in developing these practices.
What this paper adds
  • AsRA-enhanced KB can help students improve KB and data science practices over time.
  • AsRA-enhanced KB design benefits students of different-performing groups.
  • AsRA-enhanced KB is accessible to elementary school students in science education.
Implications for practice and/or policy
  • Developing a collaborative and reflective culture helps students engage in collaborative inquiry.
  • Pedagogical approaches and analytic tools can be developed to support students' data-driven decision-making in inquiry learning.

3.
4.
Formative assessment is considered helpful for supporting students' learning and for teaching design. Following Aufschnaiter's and Alonzo's framework, teachers' formative assessment practices can be subdivided into three practices: eliciting evidence, interpreting evidence and responding. Since students' conceptions are considered important for meaningful learning across disciplines, teachers are required to assess them. This article focuses on how learning analytics can support the assessment of students' conceptions in class. The existing and potential contributions of learning analytics are discussed in relation to this formative assessment framework, with the aim of enhancing teachers' options for considering individual students' conceptions. We draw on findings from biology and computer science education on existing assessment tools and identify limitations and potentials with respect to the assessment of students' conceptions.
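One way learning analytics could support the eliciting and interpreting of evidence is to classify short free-text answers for a known (mis)conception. The sketch below is a generic illustration with a TF-IDF bag-of-words classifier on invented answers; it is not one of the assessment tools reviewed in the paper, and real use would require substantial labelled training data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented labelled answers: 1 = answer reflects the target misconception.
answers = [
    "plants get their food from the soil",
    "plants build sugar from carbon dioxide and water using light",
    "the soil is eaten by the roots as food",
    "photosynthesis in the leaves produces the plant's food",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(answers, labels)

# Flag new answers so the teacher can respond to the conception behind them.
new = ["my plant grows because it eats the soil"]
print(clf.predict_proba(new))  # probability that the misconception is present
```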

Practitioner notes

What is already known about this topic
  • Students' conceptions are considered to be important for learning processes, but interpreting evidence for learning with respect to students' conceptions is challenging for teachers.
  • Assessment tools have been developed in different educational domains for teaching practice.
  • Techniques from artificial intelligence and machine learning have been applied for automated assessment of specific aspects of learning.
What this paper adds
  • Findings on existing assessment tools from two educational domains are summarised and limitations with respect to assessment of students' conceptions are identified.
  • Relevant data that need to be analysed for insights into students' conceptions are identified from an educational perspective.
  • Potential contributions of learning analytics to support the challenging task of eliciting students' conceptions are discussed.
Implications for practice and/or policy
  • Learning analytics can enhance the eliciting of students' conceptions.
  • Based on the analysis of existing works, further exploration and developments of analysis techniques for unstructured text and multimodal data are desirable to support the eliciting of students' conceptions.

5.
This study analyses the potential of a learning analytics (LA) based formative assessment to construct personalised teaching sequences in Mathematics for 5th-grade primary school students. A total of 127 students from Spanish public schools participated in the study. The quasi-experimental study was conducted over the course of six sessions, in which both control and experimental groups participated in a teaching sequence based on mathematical problems. In each session, both groups used audience response systems to record their responses to mathematical tasks about fractions. After each session, students from the control group were given generic homework on fractions—the same activities for all the participants—while students from the experimental group were given a personalised set of activities. The personalised homework was based on the errors detected through the LA-based formative assessment. After the intervention, the results indicate a higher level of understanding of the concept of fractions among students in the experimental group than in the control group. Regarding motivational dimensions, the results indicate that instruction using audience response systems has a positive effect compared to regular mathematics classes.
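The personalisation step can be pictured as a simple mapping from the error types detected in a session to a remedial activity set. The sketch below uses invented error codes and activity identifiers and is only a schematic stand-in for the study's actual system.

```python
# Hypothetical mapping from detected fraction-error types to remedial activities.
ACTIVITY_BANK = {
    "equivalence":        ["task_eq_1", "task_eq_2"],
    "common_denominator": ["task_cd_1", "task_cd_2"],
    "improper_fractions": ["task_if_1"],
}

def personalised_homework(session_responses):
    """session_responses: list with one detected error type (or None) per ARS item."""
    errors = {e for e in session_responses if e is not None}
    homework = []
    for error in errors:
        homework.extend(ACTIVITY_BANK.get(error, []))
    # Fall back to a generic review set when no specific errors were detected.
    return homework or ["task_review_1"]

print(personalised_homework([None, "equivalence", "equivalence", "common_denominator"]))
```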

Practitioner notes

What is already known about this topic
  • Developing an understanding of fractions is one of the most challenging concepts in elementary mathematics and a solid predictor of future achievements in mathematics.
  • Learning analytics (LA) has the potential to provide quality, functional data for assessing and supporting learners' difficulties.
  • Audience response systems (ARS) are one of the most practical ways to collect data for LA in classroom environments.
  • There is a scarcity of field research implementations on LA mediated by ARS in real contexts of elementary school classrooms.
What this paper adds
  • Empirical evidence about how LA-based formative assessments can enable personalised homework to support student understanding of fractions.
  • Personalised homework based on an LA-based formative assessment improves the students' comprehension of fractions.
  • Using ARS for the teaching of fractions has a positive effect in terms of student motivation.
Implications for practice and/or policy
  • Teachers should be given LA/ARS tools that allow them to quickly provide students with personalised mathematical instruction.
  • Researchers should continue exploring these potentially beneficial educational implementations in other areas.

6.
This article reports on a trace-based assessment of approaches to learning used by middle-school-aged children who interacted with NASA Mars Mission science, technology, engineering and mathematics (STEM) games in Whyville, an online game environment with 8 million registered young learners. The learning objectives of the two games included awareness and knowledge of NASA missions, developing knowledge and skills of measurement and scaling, and applying measurement to planetary comparisons in the solar system. Trace data from 1361 interactions were analysed with nonparametric multidimensional scaling methods, which permitted visual examination and statistical validation, and provided an example and proof of concept for the multidimensional scaling approach to the analysis of time-based behavioural data from a game or simulation. Differences in approach to learning were found, illustrating the potential value of the methodology to curriculum and game-based learning designers as well as other creators of online STEM content for pre-college youth. The theoretical framework of the method and analysis makes use of the Epistemic Network Analysis toolkit as a post hoc data exploration platform, and the discussion centres on issues of semantic interpretation of interaction end-states and the application of evidence-centred design in post hoc analysis.
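A minimal sketch of the nonmetric (nonparametric) multidimensional scaling step, assuming pairwise dissimilarities between learners' traces have already been derived (here, a toy Jaccard distance between the sets of actions each learner performed). This is illustrative only and does not reproduce the paper's analysis pipeline.

```python
import numpy as np
from sklearn.manifold import MDS

# Toy trace data: the set of game actions each learner performed (invented).
traces = [
    {"measure", "scale", "compare"},
    {"measure", "scale"},
    {"compare", "guess"},
    {"measure", "compare", "guess"},
]

def jaccard_distance(a, b):
    return 1 - len(a & b) / len(a | b)

n = len(traces)
D = np.array([[jaccard_distance(traces[i], traces[j]) for j in range(n)] for i in range(n)])

# Nonmetric (nonparametric) MDS on the precomputed dissimilarities.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
print(coords)       # 2-D layout for visual examination
print(mds.stress_)  # badness of fit of the configuration
```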

Practitioner notes

What is already known about this topic
  • Educational game play has been demonstrated to positively affect learning performance and learning persistence.
  • Trace-based assessment from digital learning environments can focus on learning outcomes and processes drawn from user behaviour and contextual data.
  • Existing approaches used in learning analytics do not (fully) meet criteria commonly used in psychometrics or for different forms of validity in assessment, even though some consider learning analytics a form of assessment in the broadest sense.
  • Frameworks of knowledge representation in trace-based research often include concepts from cognitive psychology, education and cognitive science.
What this paper adds
  • To assess skills-in-action, stronger connections of learning analytics with educational measurement can include parametric and nonparametric statistics integrated with theory-driven modelling and semantic network analysis approaches widening the basis for inferences, validity, meaning and understanding from digital traces.
  • An expanded methodological foundation is offered for analysis in which nonparametric multidimensional scaling, multimodal analysis, epistemic network analysis and evidence-centred design are combined.
Implications for practice and policy
  • The new foundations are suggested as a principled, theory-driven, embedded data collection and analysis framework that provides structure for reverse engineering of semantics as well as pre-planning frameworks that support creative freedom in the processes of creation of digital learning environments.

7.
Capturing evidence of dynamic changes in self-regulated learning (SRL) behaviours resulting from interventions is challenging for researchers. In the current study, we identified students who were likely to do poorly in a biology course and those who were likely to do well. Then, we randomly assigned a portion of the students predicted to perform poorly to a 'science of learning to learn' intervention in which they were taught SRL study strategies. Learning outcome and log data (257 K events) were collected from n = 226 students. We used a complex systems framework to model differences in SRL, including the amount, interrelatedness, density and regularity of engagement captured in digital trace data (ie, logs). Differences were compared between students who were predicted to (1) perform poorly (control, n = 48), (2) perform poorly and received the intervention (treatment, n = 95) and (3) perform well (not flagged, n = 83). Results indicated that the regularity of students' engagement was predictive of course grade, and that the intervention group exhibited increased regularity of engagement over the control group immediately after the intervention and maintained that increase over the course of the semester. We discuss the implications of these findings in relation to the future of artificial intelligence and potential uses for monitoring student learning in online environments.
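'Regularity of engagement' can be operationalised in several ways; one common learning analytics approach, shown in the sketch below on invented timestamps, is based on the entropy of a student's activity across weekday-hour bins, so that activity concentrated in a recurring weekly rhythm scores as more regular. This is an assumption for illustration, not the paper's exact metric.

```python
import numpy as np
import pandas as pd

def regularity_of_engagement(timestamps):
    """Regularity score in [0, 1] from event timestamps (1 = perfectly regular rhythm)."""
    ts = pd.to_datetime(pd.Series(timestamps))
    # Bin events by weekday x hour-of-day (168 possible bins).
    bins = ts.dt.dayofweek * 24 + ts.dt.hour
    p = bins.value_counts(normalize=True).to_numpy()
    entropy = -(p * np.log(p)).sum()
    max_entropy = np.log(168)           # uniform activity over all bins
    return 1 - entropy / max_entropy    # higher = more concentrated weekly rhythm

# Invented log timestamps: a student who studies Monday and Tuesday evenings.
logs = ["2023-09-04 19:05", "2023-09-05 19:20", "2023-09-11 19:10", "2023-09-12 19:45"]
print(round(regularity_of_engagement(logs), 2))
```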

Practitioner notes

What is already known about this topic
  • Self-regulated learning (SRL) knowledge and skills are strong predictors of postsecondary STEM student success.
  • SRL is a dynamic, temporal process that leads to purposeful student engagement.
  • Methods and metrics for measuring dynamic SRL behaviours in learning contexts are needed.
What this paper adds
  • A Markov process for measuring dynamic SRL processes using log data.
  • Evidence that dynamic, interaction-dominant aspects of SRL predict student achievement.
  • Evidence that SRL processes can be meaningfully impacted through educational intervention.
Implications for theory and practice
  • Complexity approaches inform theory and measurement of dynamic SRL processes.
  • Static representations of dynamic SRL processes are promising learning analytics metrics.
  • Engineered features of LMS usage are valuable contributions to AI models.

8.
This study presents the outcomes of a semi-systematic literature review on the role of learning theory in multimodal learning analytics (MMLA) research. Based on previous systematic literature reviews in MMLA and an additional new search, 35 MMLA works were identified that use theory. The results show that MMLA studies do not always discuss their findings within an established theoretical framework. Most of the theory-driven MMLA studies are positioned in the cognitive and affective domains, and the three most frequently used theories are embodied cognition, cognitive load theory and control–value theory of achievement emotions. Often, the theories are only used to inform the study design, but there is a relationship between the most frequently used theories and the data modalities used to operationalize those theories. Although studies such as these are rare, the findings indicate that MMLA affordances can, indeed, lead to theoretical contributions to learning sciences. In this work, we discuss methods of accelerating theory-driven MMLA research and how this acceleration can extend or even create new theoretical knowledge.

Practitioner notes

What is already known about this topic
  • Multimodal learning analytics (MMLA) is an emerging field of research with inherent connections to advanced computational analyses of social phenomena.
  • MMLA can help us monitor learning activity at the micro-level and model cognitive, affective and social factors associated with learning using data from both physical and digital spaces.
  • MMLA provide new opportunities to support students' learning.
What this paper adds
  • Some MMLA works use theory, but, overall, the role of theory is currently limited.
  • The three theories dominating MMLA research are embodied cognition, control–value theory of achievement emotions and cognitive load theory.
  • Most of the theory-driven MMLA papers use theory ‘as is’ and do not consider the analytical and synthetic role of theory or aim to contribute to it.
Implications for practice and/or policy
  • If the ultimate goal of MMLA, and AI in Education in general, research is to understand and support human learning, these studies should be expected to align their findings (or not) with established relevant theories.
  • MMLA research is mature enough to contribute to learning theory, and more research should aim to do so.
  • MMLA researchers and practitioners, including technology designers, developers, educators and policy-makers, can use this review as an overview of the current state of theory-driven MMLA.

9.
Learning analytics is a fast-growing discipline. Institutions and countries alike are racing to harness the power of using data to support students, teachers and stakeholders. Research in the field has shown that predicting and supporting underachieving students is worthwhile. Nonetheless, challenges remain unresolved, for example, a lack of generalizability and portability, and a failure to advance our understanding of students' behaviour. Recently, interest has grown in modelling individual or within-person behaviour, that is, understanding person-specific changes. This study applies a novel method that combines within-person with between-person variance to better understand how changes unfolding at the individual level can explain students' final grades. By modelling the within-person variance, we directly model where the process takes place, that is, the student. Our study finds that combining within- and between-person variance offers better explanatory power and better guidance about which variables could be targeted for intervention at the personal and group levels. Furthermore, using within-person variance opens the door to person-specific idiographic models that work on individual student data and offer students support based on insights from their own data.
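The between-within (hybrid) decomposition can be illustrated by person-mean centring: each student's repeated measure is split into their personal mean (between-person) and the deviation from that mean (within-person), and both parts enter a mixed-effects model. The sketch below uses simulated weekly data and statsmodels to show the mechanics only; it is not the study's model or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
students, weeks = 40, 8

# Simulated long-format data: one row per student per week (purely illustrative).
df = pd.DataFrame({
    "student": np.repeat(np.arange(students), weeks),
    "week": np.tile(np.arange(weeks), students),
})
baseline = rng.normal(size=students)[df["student"]]
df["activity"] = 5 + baseline + rng.normal(size=len(df))
df["quiz_score"] = 50 + 4 * baseline + 2 * (df["activity"] - 5 - baseline) + rng.normal(size=len(df))

# Decompose the time-varying predictor into between- and within-person parts.
df["activity_between"] = df.groupby("student")["activity"].transform("mean")
df["activity_within"] = df["activity"] - df["activity_between"]

# Hybrid (between-within) mixed model with a random intercept per student.
fit = smf.mixedlm("quiz_score ~ activity_within + activity_between",
                  data=df, groups=df["student"]).fit()
print(fit.params)
```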

Practitioner notes

What is already known about this topic
  • Predicting students' performance has commonly been implemented using cross-sectional data at the group level.
  • Predictive models help predict and explain student performance in individual courses but are hard to generalize.
  • Heterogeneity has been a major factor in hindering cross-course or context generalization.
What this paper adds
  • Intra-individual (within-person) variations can be modelled using repeated measures data.
  • Hybrid between–within-person models offer more explanatory and predictive power of students' performance.
  • Intra-individual variations do not mirror interindividual variations, and thus, generalization is not warranted.
  • Regularity is a robust predictor of student performance at both the individual and the group levels.
Implications for practice
  • The study offers a method for teachers to better understand and predict students' performance.
  • The study offers a method of identifying what works on a group or personal level.
  • Intervention at the personal level can be more effective when using within-person predictors and at the group level when using between-person predictors.

10.
Understanding students' privacy concerns is an essential first step toward effective privacy-enhancing practices in learning analytics (LA). In this study, we develop and validate a model to explore students' privacy concerns (SPICE) regarding LA practice in higher education. The SPICE model considers privacy concerns as a central construct between two antecedents (perceived privacy risk and perceived privacy control) and two outcomes (trusting beliefs and non-self-disclosure behaviours). To validate the model, data were collected through an online survey in which 132 students from three Swedish universities participated. Partial least squares results show that the model accounts for a high proportion of the variance in privacy concerns, trusting beliefs and non-self-disclosure behaviours. They also show that students' perceived privacy risk is a firm predictor of their privacy concerns. Students' privacy concerns and perceived privacy risk were found to affect their non-self-disclosure behaviours. Finally, the results show that students' perceptions of privacy control and privacy risk determine their trusting beliefs. The results contribute to understanding the relationships between students' privacy concerns, trust and non-self-disclosure behaviours in the LA context. A set of implications for the design of LA systems and the development of privacy-enhancing practices in higher education is offered.
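As a rough, simplified proxy for the reported partial least squares analysis, one can average each construct's items into composite scores and estimate the hypothesised SPICE paths with ordinary regressions. The item names and simulated responses below are assumptions for illustration, not the study's instrument or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 132  # same sample size as the study, but responses are simulated here

# Invented 5-point Likert items: three items per construct.
survey = pd.DataFrame(
    rng.integers(1, 6, size=(n, 15)),
    columns=[f"{c}{i}" for c in ["risk", "ctrl", "conc", "trust", "nd"] for i in (1, 2, 3)],
)

# Composite construct scores: mean of the items assumed to measure each construct.
for construct, prefix in [("risk", "risk"), ("control", "ctrl"), ("concerns", "conc"),
                          ("trust", "trust"), ("nondisclose", "nd")]:
    survey[construct] = survey[[f"{prefix}{i}" for i in (1, 2, 3)]].mean(axis=1)

# Structural paths of the SPICE model, estimated here as ordinary regressions.
print(smf.ols("concerns ~ risk + control", data=survey).fit().params)
print(smf.ols("trust ~ risk + control", data=survey).fit().params)
print(smf.ols("nondisclose ~ concerns + risk", data=survey).fit().params)
```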

Practitioner notes

What is already known about this topic
  • Addressing students' privacy is critical for large-scale learning analytics (LA) implementation.
  • Understanding students' privacy concerns is an essential first step to developing effective privacy-enhancing practices in LA.
  • Several conceptual, not empirically validated frameworks focus on ethics and privacy in LA.
What this paper adds
  • The paper offers a validated model to explore the nature of students' privacy concerns in LA in higher education.
  • It provides an enhanced theoretical understanding of the relationship between privacy concerns, trust and self-disclosure behaviour in the LA context of higher education.
  • It offers a set of relevant implications for LA researchers and practitioners.
Implications for practice and/or policy
  • Students' perceptions of privacy risks and privacy control are antecedents of students' privacy concerns, trust in the higher education institution and the willingness to share personal information.
  • Enhancing students' perceptions of privacy control and reducing perceptions of privacy risks are essential for LA adoption and success.
  • Contextual factors that may influence students' privacy concerns should be considered.

11.
The field of learning analytics has advanced from its infancy into a more practical domain where tangible solutions are being implemented. Nevertheless, the field has encountered numerous privacy and data protection issues that have garnered significant and growing attention. In this systematic review, four databases were searched for work concerning privacy and data protection issues in learning analytics. A final corpus of 47 papers published in top educational technology journals was selected after an eligibility check. An analysis of the final corpus was carried out to answer the following three research questions: (1) What are the privacy and data protection issues in learning analytics? (2) What are the similarities and differences between the views of stakeholders from different backgrounds on privacy and data protection issues in learning analytics? (3) How have previous approaches attempted to address privacy and data protection issues? The results of the systematic review show that there are eight distinct, intertwined privacy and data protection issues that cut across the learning analytics cycle. There are both cross-regional similarities and three sets of differences in stakeholder perceptions of privacy and data protection in learning analytics. With regard to previous attempts to address privacy and data protection issues in learning analytics, there is a notable dearth of applied evidence, which impedes the assessment of their effectiveness. The findings of our paper suggest that privacy and data protection issues should not be relaxed at any point in the implementation of learning analytics, as these issues persist throughout the learning analytics development cycle. One key implication of this review is that solutions to privacy and data protection issues in learning analytics should be more evidence-based, thereby increasing the trustworthiness and usefulness of learning analytics.

Practitioner notes

What is already known about this topic
  • Research on privacy and data protection in learning analytics has become a recognised challenge that hinders the further expansion of learning analytics.
  • Proposals to counter the privacy and data protection issues in learning analytics are blurry; there is a lack of a summary of previously proposed solutions.
What this study contributes
  • Establishment of what privacy and data protection issues exist at different phases of the learning analytics cycle.
  • Identification of how different stakeholders view privacy, similarities and differences, and what factors influence their views.
  • Evaluation and comparison of previously proposed solutions that attempt to address privacy and data protection in learning analytics.
Implications for practice and/or policy
  • Privacy and data protection issues need to be viewed in the context of the entire cycle of learning analytics.
  • Stakeholder views on privacy and data protection in learning analytics have commonalities across contexts and differences that can arise within the same context. Before implementing learning analytics, targeted research should be conducted with stakeholders.
  • Solutions that attempt to address privacy and data protection issues in learning analytics should be put into practice as far as possible to better test their usefulness.

12.
13.
With the widespread use of learning analytics (LA), ethical concerns about fairness have been raised. Research shows that LA models may be biased against students of certain demographic subgroups. Although fairness has gained significant attention in the broader machine learning (ML) community in the last decade, attention has only recently been paid to fairness in LA. Furthermore, guidance on which unfairness mitigation algorithm or metric to use in a particular context remains largely lacking. On this premise, we performed a comparative evaluation of selected unfairness mitigation algorithms regarded in the fair ML community as having shown promising results. Using three years of program dropout data from an Australian university, we comparatively evaluated how these unfairness mitigation algorithms contribute to ethical LA by testing hypotheses across fairness and performance metrics. Interestingly, our results show that data bias does not necessarily result in predictive bias. Perhaps not surprisingly, our test of the fairness-utility tradeoff shows that ensuring fairness does not always lead to a drop in utility; indeed, ensuring fairness might lead to enhanced utility under specific circumstances. Our findings may, to some extent, guide fairness algorithm and metric selection for a given context.
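A minimal sketch of one such comparison using the open-source fairlearn library: an unconstrained classifier versus a reductions-based mitigation, compared on a utility metric and a fairness metric. The simulated data, the choice of demographic parity and the logistic regression base learner are illustrative assumptions, not the paper's exact algorithms, metrics or data.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import demographic_parity_difference
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(3)
n = 1000

# Simulated (not the study's) dropout data with a protected group attribute.
group = rng.integers(0, 2, size=n)
gpa = rng.normal(loc=3.0 - 0.3 * group, scale=0.5, size=n)
X = pd.DataFrame({"gpa": gpa, "credits": rng.normal(60, 15, size=n)})
y = (gpa + rng.normal(scale=0.5, size=n) < 2.8).astype(int)   # dropout label

# Baseline model without any unfairness mitigation.
base = LogisticRegression(max_iter=1000).fit(X, y)

# In-processing mitigation: exponentiated gradient with a demographic parity constraint.
mitigated = ExponentiatedGradient(LogisticRegression(max_iter=1000),
                                  constraints=DemographicParity())
mitigated.fit(X, y, sensitive_features=group)

# Fairness-utility comparison (on the training data, for brevity).
for name, pred in [("baseline", base.predict(X)), ("mitigated", mitigated.predict(X))]:
    acc = accuracy_score(y, pred)
    dpd = demographic_parity_difference(y, pred, sensitive_features=group)
    print(f"{name}: accuracy = {acc:.3f}, demographic parity difference = {dpd:.3f}")
```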

Practitioner notes

What is already known about this topic
  • LA is increasingly being used to leverage actionable insights about students and drive student success.
  • LA models have been found to make discriminatory decisions against certain student demographic subgroups—therefore, raising ethical concerns.
  • Fairness in education is nascent. Only a few works have examined fairness in LA and consequently followed up with ensuring fair LA models.
What this paper adds
  • A juxtaposition of unfairness mitigation algorithms across the entire LA pipeline showing how they compare and how each of them contributes to fair LA.
  • Ensuring ethical LA does not always lead to a dip in performance. Sometimes, it actually improves performance as well.
  • Fairness in LA has only focused on some form of outcome equality; however, equality of outcome may be possible only when the playing field is levelled.
Implications for practice and/or policy
  • Based on desired notion of fairness and which segment of the LA pipeline is accessible, a fairness-minded decision maker may be able to decide which algorithm to use in order to achieve their ethical goals.
  • LA practitioners can carefully aim for more ethical LA models without trading significant utility by selecting algorithms that find the right balance between the two objectives.
  • Fairness enhancing technologies should be cautiously used as guides—not final decision makers. Human domain experts must be kept in the loop to handle the dynamics of transcending fair LA beyond equality to equitable LA.

14.
An extraordinary amount of data is becoming available in educational settings, collected from a wide range of Educational Technology tools and services. This creates opportunities for using methods from Artificial Intelligence and Learning Analytics (LA) to improve learning and the environments in which it occurs. And yet, analytics results produced using these methods often fail to link to theoretical concepts from the learning sciences, making them difficult for educators to trust, interpret and act upon. At the same time, many of our educational theories are difficult to formalise into testable models that link to educational data. New methodologies are required to formalise the bridge between big data and educational theory. This paper demonstrates how causal modelling can help to close this gap. It introduces the apparatus of causal modelling, and shows how it can be applied to well-known problems in LA to yield new insights. We conclude with a consideration of what causal modelling adds to the theory-versus-data debate in education, and extend an invitation to other investigators to join this exciting programme of research.
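To make the idea concrete, a toy backdoor adjustment: if a causal model says that prior ability confounds the relation between using a dashboard and exam score, the causal effect can be estimated by comparing within strata of the confounder rather than comparing raw group means. The variables and numbers below are invented; the paper's own examples are not reproduced.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000

# Simulated system: prior ability (confounder) influences both dashboard use and score.
ability = rng.normal(size=n)
dashboard = (rng.normal(size=n) + ability > 0).astype(int)   # stronger students use it more
score = 60 + 5 * dashboard + 8 * ability + rng.normal(scale=5, size=n)
df = pd.DataFrame({"ability": ability, "dashboard": dashboard, "score": score})

# Naive comparison of raw means conflates the effect with the confounder.
naive = df.groupby("dashboard")["score"].mean().diff().iloc[-1]

# Backdoor adjustment: compare within strata of the confounder, then average.
df["stratum"] = pd.qcut(df["ability"], 10)
adjusted = (
    df.groupby("stratum", observed=True)
      .apply(lambda g: g.groupby("dashboard")["score"].mean().diff().iloc[-1])
      .mean()
)
print(f"naive difference: {naive:.1f}, backdoor-adjusted: {adjusted:.1f} (true simulated effect = 5)")
```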

Practitioner notes

What is already known about this topic

  • ‘Correlation does not equal causation’ is a familiar claim in many fields of research but increasingly we see the need for a causal understanding of our educational systems.
  • Big data bring many opportunities for analysis in education, but also a risk that results will fail to replicate in new contexts.
  • Causal inference is a well-developed approach for extracting causal relationships from data, but is yet to become widely used in the learning sciences.

What this paper adds

  • An overview of causal modelling to support educational data scientists interested in adopting this promising approach.
  • A demonstration of how constructing causal models forces us to more explicitly specify the claims of educational theories.
  • An understanding of how we can link educational datasets to theoretical constructs represented as causal models, thereby formulating empirical tests of the educational theories that they represent.

Implications for practice and/or policy

  • Causal models can help us to explicitly specify educational theories in a testable format.
  • It is sometimes possible to make causal inferences from educational data if we understand our system well enough to construct a sufficiently explicit theoretical model.
  • Learning Analysts should work to specify more causal models and test their predictions, as this would advance our theoretical understanding of many educational systems.

15.
The current study digitalised an assessment instrument of receptive vocabulary knowledge, GraWo-KiGa, for use in Austrian kindergartens. Using a mixed-methods approach, this study looks at 85 kindergarteners in their last year (age M = 5.79 years, 51.8% male, 71.8% L1 German) to find out (a) whether the form of digital assessment employed meets the required quality criteria and is comparable to the print version and (b) how instructors and children perceive its practicality and comprehensibility, as well as which version kindergarteners prefer. The results reveal that the digital assessment tool is both reliable (α = 0.85) and valid (convergent validity: r = 0.43; discriminant validity: r = 0.31). Results of the digital and print versions were comparable (r = 0.83). Although children found both versions easy to use, most of them found the digital version easier and preferred it. In light of the numerous benefits that digital assessments offer in terms of administration, evaluation, feedback and motivation, the digital version of GraWo-KiGa has great potential to ease kindergarten teachers' assessment procedures. However, due to the limited availability of digital resources, the print version will remain highly relevant in the future.
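Cronbach's alpha, the reliability statistic reported above, can be computed directly from an item-response matrix using the standard formula, as in the sketch below on invented item scores; this is not the study's analysis script.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for an item matrix (rows = children, columns = test items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Invented scored responses (1 = correct) for five vocabulary items.
responses = pd.DataFrame(
    [[1, 1, 1, 0, 1],
     [0, 0, 1, 0, 0],
     [1, 1, 1, 1, 1],
     [0, 1, 0, 0, 1],
     [1, 1, 1, 1, 0]],
    columns=[f"item{i}" for i in range(1, 6)],
)
print(round(cronbach_alpha(responses), 2))
```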

Practitioner notes

What is already known about this topic
  • Proper assessment is the basis of individualised support.
  • Digital assessment procedures can ease the assessment process and motivate children, even in kindergarten.
  • In German-speaking countries, digitalisation has barely reached kindergarten.
  • The print version of GraWo-KiGa reliably and validly assesses receptive vocabulary in kindergarteners in their last year.
What this paper adds
  • GraWo-KiGa digital meets the necessary quality criteria in terms of reliability and validity and is comparable to its print version.
  • GraWo-KiGa is practical in use for both children and kindergarten teachers.
  • Most kindergarteners preferred the digital version over the print version. Teachers benefit from easy administration and evaluation, quick results, and a pleasant screening procedure for the kids.
Implications for practice and/or policy
  • Digital assessment tools in kindergarten have the potential to support kindergarten teachers in their regular assessment processes.
  • In kindergarten, the use of GraWo-KiGa digital allows children at risk of developing reading comprehension difficulties to be identified quickly and economically.
  • Digital assessments enable rapid and targeted allocation of children to support programmes.

16.
Online peer assessment (OPA) has been increasingly adopted to develop students' higher-order thinking (HOT). However, there has not been a synthesis of research findings on its effects. To fill this gap, 17 papers (published from 2000 to 2022) that reported either a comparison between a group using OPA (n = 7; k = 22) and a control group or a pre–post comparison (n = 10; k = 17) were reviewed in this meta-analysis. The overall effect of OPA on HOT was significant (g = 0.76). Furthermore, OPA exerted more significant effects on convergent HOT (eg, critical thinking, reasoning and reflective thinking; g = 0.97) than on divergent HOT (eg, creativity and problem-solving; g = 0.38). Reciprocal roles and anonymity were found to positively moderate the impacts of OPA on HOT, although their moderating effects were not statistically significant because of the small number of studies in the analysis. The results of the meta-analysis reinforce the arguments for regarding OPA as a powerful learning tool to facilitate students' HOT development and reveal important factors that should be considered when adopting OPA to enhance students' HOT.
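The pooling step of such a meta-analysis can be sketched with a standard random-effects (DerSimonian-Laird) model over per-study Hedges' g values and their variances. The effect sizes below are invented to show the computation only; they are not the reviewed studies' values.

```python
import numpy as np

# Invented per-study effect sizes (Hedges' g) and their sampling variances.
g = np.array([0.9, 0.6, 1.1, 0.4, 0.8])
v = np.array([0.05, 0.08, 0.06, 0.10, 0.07])

# Fixed-effect weights and Q statistic for heterogeneity.
w = 1 / v
q = np.sum(w * (g - np.sum(w * g) / np.sum(w)) ** 2)
df = len(g) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)          # between-study variance (DerSimonian-Laird)

# Random-effects pooled estimate and its standard error.
w_re = 1 / (v + tau2)
g_pooled = np.sum(w_re * g) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled g = {g_pooled:.2f}, 95% CI [{g_pooled - 1.96*se:.2f}, {g_pooled + 1.96*se:.2f}]")
```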

Practitioner notes

What is already known about this topic
  • Online peer assessment (OPA) has significant positive impacts on learning achievement.
  • OPA has been regarded as a potential approach to cultivating students' higher-order thinking (HOT), but its effects had not previously been confirmed by meta-analysis.
  • OPA should be carefully designed to maximise its effectiveness on learning.
What this paper adds
  • This meta-analysis confirms that OPA significantly and positively influences students' HOT.
  • OPA exerted more significant effects on convergent HOT than on divergent HOT.
  • The potential of reciprocal roles and anonymity for moderating the impacts of OPA on HOT should not be underestimated.
Implications for practice and/or policy
  • OPA could be a wise choice for practitioners when they help students to achieve a balanced development of HOT dispositions and skills.
  • Students' divergent HOT can be encouraged in their uptake of peer feedback and by allowing them autonomy in deciding assessment criteria.
  • OPA with design elements of reciprocal roles and anonymity has great potential to promote students' HOT.

17.
Participation in educational activities is an important prerequisite for academic success, yet often proves to be particularly challenging in digital settings. Therefore, this study set out to increase participation in an online proctored formative statistics exam through digital nudging. We deployed targeted nudges based on the Fogg Behaviour Model, which highlights the relevance of acknowledging differences in motivation and ability when allocating nudges to elicit target behaviour. First, we assessed whether pre-existing levels of motivation and perceived ability to participate are effective in identifying different propensities of responsiveness to plain, untailored nudges. Next, we evaluated whether tailoring nudges to students' motivation and perceived ability levels increases target behaviour by means of a randomized field experiment in which 579 first-year university students received six consecutive emails over the course of three weeks to nudge behaviour regarding successful participation in the online exam. First, the results indicate that motivation explains differences in engagement as indicated by student responsiveness and participation, whereas the perceived ability to participate does not. Second, the results from the randomized field experiment indicate that tailored nudging did not improve observed engagement. Implications for the potential of providing motivational information to improve participation in online educational activities are discussed, as are alternatives for capturing perceived ability more effectively.

Practitioner notes

What is already known about this topic
  • Participation in educational activities is an important prerequisite for academic success, yet often proves to be particularly challenging in digital settings.
  • Students' internal barriers to online participation and persistence in higher education are lack of motivation and perceived ability.
  • Nudging interventions tackle students' behavioural barriers, and are particularly effective when guided by a theory of behaviour change, and when targeting students who suffer most from those barriers.
What this paper adds
  • This study examines whether the Fogg Behaviour Model is suited to guide a nudging intervention with the aim to increase student engagement in online higher education.
  • This study examines whether students with different levels of motivation and perceived ability vary in their online behaviour in response to nudges.
  • This study experimentally evaluates whether targeted nudges—targeted at students' motivation and perceived ability—are more effective than plain (not-targeted) nudges.
Implications for practice and/or policy
  • The results indicate the importance of motivation for performing nudged behaviours regarding successful participation in an online educational activity.
  • The results do not provide evidence for the role of perceived digital ability, yet do show prior performance on a similar educational activity can effectively distinguish between students' responsiveness.
  • Targeted nudges were not more effective than plain nudges, but the potential of other motivational nudges and how to increase perceived performance are discussed.

18.
The COVID-19 pandemic posed a significant challenge to higher education and forced academic institutions across the globe to shift abruptly to remote teaching. Because of this emergent transition, higher education institutions continue to face difficulties in creating satisfactory online learning experiences that adhere to the new norms. This study investigates the transition to online learning during COVID-19 to identify factors that influenced students' satisfaction with the online learning environment. Adopting a mixed-methods design, we find that students' experience with online learning can be negatively affected by information overload and perceived technical skill requirements, and we describe qualitative evidence suggesting that a lack of social interaction, class format and ambiguous communication also affected perceived learning. This study suggests that to digitalize higher education successfully, institutions need to systematically redesign students' learning experience and re-evaluate traditional pedagogical approaches in the online context.

Practitioner notes

What is already known about this topic
  • University transitions to online learning during the COVID-19 pandemic were undertaken by faculty and students who had little online learning experience.
  • The transition to online learning was often described as having a negative influence on students' learning experience and mental health.
  • Varieties of cognitive load are known predictors of effective online learning experiences and satisfaction.
What this paper adds
  • Information overload and perceptions of technical abilities are demonstrated to predict students' difficulty and satisfaction with online learning.
  • Students express negative attitudes towards factors that influence information overload, technical factors, and asynchronous course formats.
  • Communication quantity was not found to be a significant factor in predicting either perceived difficulty or negative attitudes.
Implications for practice and/or policy
  • We identify ways that educators in higher education can improve their online offerings and implementations during future disruptions.
  • We offer insights into student experience concerning online learning environments during an abrupt transition.
  • By identifying design factors that contribute to effective online delivery, we show how educators in higher education can improve students' learning experiences during difficult periods and abrupt transitions to online learning.

19.
Video is a widely used medium in teacher training for situating student teachers in classroom scenarios. Although the emerging technology of virtual reality (VR) provides similar, and arguably more powerful, capabilities for immersing teachers in lifelike situations, its benefits and risks relative to video formats have received little attention in the research to date. The current study used a randomized pretest–posttest experimental design to examine the influence of a video- versus VR-based task on changing situational interest and self-efficacy in classroom management. Results from 49 student teachers revealed that the VR simulation led to higher increments in self-reported triggered interest and self-efficacy in classroom management, but also invoked higher extraneous cognitive load than a video viewing task. We discussed the implications of these results for pre-service teacher education and the design of VR environments for professional training purposes.

Practitioner notes

What is already known about this topic
  • Video is a popular teacher training medium given its ability to display classroom situations.
  • Virtual reality (VR) also immerses users in lifelike situations and has gained popularity in recent years.
  • Situational interest and self-efficacy in classroom management are vital for student teachers' professional development.
What this paper adds
  • VR outperforms video in promoting student teachers' triggered interest in classroom management.
  • Student teachers felt more efficacious in classroom management after participating in VR.
  • VR also invoked higher extraneous cognitive load than the video.
Implications for practice and/or policy
  • VR provides an authentic teacher training environment for classroom management.
  • The design of the VR training environment needs to ensure a low extraneous cognitive load.

20.
Technology-based, open-ended learning environments (OELEs) can capture detailed information of students' interactions as they work through a task or solve a problem embedded in the environment. This information, in the form of log data, has the potential to provide important insights about the practices adopted by students for scientific inquiry and problem solving. How to parse and analyse the log data to reveal evidence of multifaceted constructs like inquiry and problem solving holds the key to making interactive learning environments useful for assessing students' higher-order competencies. In this paper, we present a systematic review of studies that used log data generated in OELEs to describe, model and assess scientific inquiry and problem solving. We identify and analyse 70 conference proceedings and journal papers published between 2012 and 2021. Our results reveal large variations in OELE and task characteristics, approaches used to extract features from log data and interpretation models used to link features to target constructs. While the educational data mining and learning analytics communities have made progress in leveraging log data to model inquiry and problem solving, multiple barriers still exist to hamper the production of representative, reproducible and generalizable results. Based on the trends identified, we lay out a set of recommendations pertaining to key aspects of the workflow that we believe will help the field develop more systematic approaches to designing and using OELEs for studying how students engage in inquiry and problem-solving practices.
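At the centre of most of the reviewed workflows is a feature-extraction step that turns raw event logs into per-student indicators. The sketch below is a generic, invented example of that step with pandas; actual features and log schemas vary widely across the reviewed studies.

```python
import pandas as pd

# Hypothetical OELE log: one row per interaction event (toy data for illustration).
log = pd.DataFrame({
    "student_id": ["s1", "s1", "s1", "s2", "s2", "s3"],
    "timestamp": pd.to_datetime(["2023-03-01 09:00", "2023-03-01 09:04", "2023-03-01 09:20",
                                 "2023-03-01 09:01", "2023-03-01 09:02", "2023-03-01 09:05"]),
    "action": ["open_sim", "change_variable", "run_experiment",
               "open_sim", "run_experiment", "open_sim"],
})

# Per-student features commonly derived from such logs.
features = log.groupby("student_id").agg(
    n_events=("action", "size"),
    n_distinct_actions=("action", "nunique"),
    time_on_task_min=("timestamp", lambda t: (t.max() - t.min()).total_seconds() / 60),
    n_experiment_runs=("action", lambda a: (a == "run_experiment").sum()),
)
print(features)
```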

Practitioner notes

What is already known about this topic
  • Research has shown that technology-based, open-ended learning environments (OELEs) that collect users' interaction data are potentially useful tools for engaging students in practice-based STEM learning.
  • More work is needed to identify generalizable principles of how to design OELE tasks to support student learning and how to analyse the log data to assess student performance.
What this paper adds
  • We identified multiple barriers to the production of sufficiently generalizable and robust results to inform practice, with respect to: (1) the design characteristics of the OELE-based tasks, (2) the target competencies measured, (3) the approaches and techniques used to extract features from log files and (4) the models used to link features to the competencies.
  • Based on this analysis, we can provide a series of specific recommendations to inform future research and facilitate the generalizability and interpretability of results:
    • Making the data available in open-access repositories, similar to the PISA tasks, for easy access and sharing.
    • Defining target practices more precisely to better align task design with target practices and to facilitate between-study comparisons.
    • More systematic evaluation of OELE and task designs to improve the psychometric properties of OELE-based measurement tasks and analysis processes.
    • Focusing more on internal and external validation of both feature generation processes and statistical models, for example with data from different samples or by systematically varying the analysis methods.
Implications for practice and/or policy
  • Using the framework of evidence-centered assessment design, we have identified relevant criteria for organizing and evaluating the diverse body of empirical studies on the topic, which policy makers and practitioners can also use for their own further examinations.
  • This paper identifies promising research and development areas on the measurement and assessment of higher-order constructs with process data from OELE-based tasks that government agencies and foundations can support.
  • Researchers, technologists and assessment designers might find useful the insights and recommendations for how OELEs can enhance science assessment through thoughtful integration of learning theories, task design and data mining techniques.
