Full-text access type
Paid full text | 178 papers |
Free | 8 papers |
Free (domestic) | 7 papers |
Subject classification
Education | 140 papers |
Scientific research | 25 papers |
Sports | 10 papers |
General | 14 papers |
Information & communication | 4 papers |
Publication year
2023 | 3 papers |
2022 | 2 papers |
2021 | 6 papers |
2020 | 10 papers |
2019 | 7 papers |
2018 | 3 papers |
2017 | 6 papers |
2016 | 6 papers |
2015 | 5 papers |
2014 | 8 papers |
2013 | 14 papers |
2012 | 8 papers |
2011 | 13 papers |
2010 | 6 papers |
2009 | 10 papers |
2008 | 10 papers |
2007 | 9 papers |
2006 | 16 papers |
2005 | 8 papers |
2004 | 11 papers |
2003 | 9 papers |
2002 | 6 papers |
2001 | 6 papers |
2000 | 3 papers |
1999 | 1 paper |
1998 | 4 papers |
1996 | 1 paper |
1994 | 1 paper |
1992 | 1 paper |
Sort by: 193 results found, search time 15 ms
191.
Mohammed Saqr, British Journal of Educational Technology: Journal of the Council for Educational Technology, 2023, 54(5): 1077-1094
Learning analytics is a fast-growing discipline. Institutions and countries alike are racing to harness the power of using data to support students, teachers and stakeholders. Research in the field has proven that predicting and supporting underachieving students is worthwhile. Nonetheless, challenges remain unresolved, for example, lack of generalizability, portability and failure to advance our understanding of students' behaviour. Recently, interest has grown in modelling individual or within-person behaviour, that is, understanding person-specific changes. This study applies a novel method that combines within-person with between-person variance to better understand how changes unfolding at the individual level can explain students' final grades. By modelling the within-person variance, we directly model where the process takes place, that is, the student. Our study finds that combining within- and between-person variance offers better explanatory power and better guidance on which variables could be targeted for intervention at the personal and group levels. Furthermore, using within-person variance opens the door for person-specific idiographic models that work on individual student data and offer students support based on their own insights.
Practitioner notes
What is already known about this topic
- Predicting students' performance has commonly been implemented using cross-sectional data at the group level.
- Predictive models help predict and explain student performance in individual courses but are hard to generalize.
- Heterogeneity has been a major factor in hindering cross-course or context generalization.
- Intra-individual (within-person) variations can be modelled using repeated measures data.
What this paper adds
- Hybrid between–within-person models offer more explanatory and predictive power of students' performance.
- Intra-individual variations do not mirror interindividual variations, and thus, generalization is not warranted.
- Regularity is a robust predictor of student performance at both the individual and the group levels.
Implications for practice and/or policy
- The study offers a method for teachers to better understand and predict students' performance.
- The study offers a method of identifying what works on a group or personal level.
- Intervention at the personal level can be more effective when using within-person predictors and at the group level when using between-person predictors.
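The hybrid between–within decomposition the abstract describes is often implemented by person-mean centering: each student's repeated measurements of a predictor are split into a stable person-level mean (between-person) and occasion-level deviations from that mean (within-person). A minimal sketch under assumed names and toy data, not the study's actual implementation:

```python
from statistics import mean

def decompose(measurements):
    """Split repeated measures per student into between- and within-person parts.

    measurements: dict mapping student id -> list of repeated scores.
    Returns dict mapping student id -> (between_mean, [within_deviations]).
    """
    out = {}
    for student, scores in measurements.items():
        between = mean(scores)                  # stable, person-level component
        within = [s - between for s in scores]  # occasion-level deviations
        out[student] = (between, within)
    return out

# Hypothetical weekly "regularity" scores for two students (illustrative only).
weekly_regularity = {
    "s1": [0.2, 0.4, 0.6],  # becoming more regular over the course
    "s2": [0.8, 0.8, 0.8],  # consistently regular
}
parts = decompose(weekly_regularity)
```

The two components can then enter a mixed-effects model as separate predictors, which is what allows within-person change and between-person differences to be estimated side by side.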
192.
Andrew Emerson, Wookhee Min, Roger Azevedo, James Lester, British Journal of Educational Technology: Journal of the Council for Educational Technology, 2023, 54(1): 40-57
Game-based learning environments hold significant promise for facilitating learning experiences that are both effective and engaging. To support individualised learning and provide proactive scaffolding when students are struggling, game-based learning environments should be able to accurately predict student knowledge at early points in students' gameplay. Student knowledge is traditionally assessed before and after each student interacts with the learning environment using conventional methods, such as multiple choice content knowledge assessments. While previous student modelling approaches have leveraged machine learning to automatically infer students' knowledge, there is limited work that incorporates the fine-grained content of each question in these tests into student models that predict student performance at early junctures in gameplay episodes. This work investigates a predictive student modelling approach that leverages the natural language text of the post-gameplay content knowledge questions and the text of the possible answer choices for early prediction of fine-grained individual student performance in game-based learning environments. With data from a study involving 66 undergraduate students from a large public university interacting with a game-based learning environment for microbiology, Crystal Island, we investigate the accuracy and early prediction capacity of student models that use a combination of gameplay features extracted from student log files as well as distributed representations of post-test content assessment questions. The results demonstrate that by incorporating knowledge about assessment questions, early prediction models are able to outperform competing baselines that only use student game trace data with no question-related information. Furthermore, this approach achieves high generalisation, including predicting the performance of students on unseen questions.
Practitioner notes
What is already known about this topic
- A distinctive characteristic of game-based learning environments is their capacity to enable fine-grained student assessment.
- Adaptive game-based learning environments offer individualisation based on specific student needs and should be able to assess student competencies using early prediction models of those competencies.
- Word embedding approaches from the field of natural language processing show great promise in the ability to encode semantic information that can be leveraged by predictive student models.
What this paper adds
- Investigates word embeddings of assessment question content for reliable early prediction of student performance.
- Demonstrates the efficacy of distributed word embeddings of assessment questions when used by early prediction models compared to models that use either no assessment information or discrete representations of the questions.
- Demonstrates the efficacy and generalisability of word embeddings of assessment questions for predicting the performance of both new students on existing questions and existing students on new questions.
Implications for practice and/or policy
- Word embeddings of assessment questions can enhance early prediction models of student knowledge, which can drive adaptive feedback to students who interact with game-based learning environments.
- Practitioners should determine if new assessment questions will be developed for their game-based learning environment, and if so, consider using our student modelling framework that incorporates early prediction models pretrained with existing student responses to previous assessment questions and is generalisable to the new assessment questions by leveraging distributed word embedding techniques.
- Researchers should consider the most appropriate way to encode the assessment questions in ways that early prediction models are able to infer relationships between the questions and gameplay behaviour to make accurate predictions of student competencies.
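The core idea the abstract describes, combining a distributed representation of an assessment question with early gameplay features, can be sketched as follows. This is an illustrative assumption-laden toy, not the authors' implementation: the three-dimensional embedding table, the averaging scheme, and the feature values are all made up for demonstration; the study used learned word embeddings over real question and answer text.

```python
from statistics import mean

# Toy 3-dimensional "word embeddings" (illustrative values, not learned).
EMBEDDINGS = {
    "bacteria": [0.9, 0.1, 0.0],
    "divide":   [0.2, 0.8, 0.1],
    "how":      [0.1, 0.1, 0.1],
}

def embed_question(text, dim=3):
    """Represent a question as the average of its word vectors.

    Out-of-vocabulary words map to zero vectors in this sketch.
    """
    vecs = [EMBEDDINGS.get(w, [0.0] * dim) for w in text.lower().split()]
    return [mean(col) for col in zip(*vecs)]

def feature_vector(gameplay_features, question_text):
    """Concatenate early gameplay features with the question embedding,
    yielding one input vector for a per-question performance predictor."""
    return list(gameplay_features) + embed_question(question_text)

# Hypothetical early gameplay features (e.g. books read, seconds elapsed).
x = feature_vector([3.0, 120.5], "how bacteria divide")
```

Because the question enters the model through its embedding rather than a discrete question ID, the same predictor can be applied to assessment questions it has never seen, which is the generalisation property the abstract highlights.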