Found 20 similar articles; search took 93 ms.
1.
Graduate teaching assistants (GTAs) in science, technology, engineering, and mathematics (STEM) have a large impact on undergraduate instruction but are often poorly prepared to teach. Teaching self-efficacy, an instructor's belief in his or her ability to teach specific student populations a specific subject, is an important predictor of teaching skill and student achievement. A model of sources of teaching self-efficacy is developed from the GTA literature. This model indicates that teaching experience, departmental teaching climate (including peer and supervisor relationships), and GTA professional development (PD) can act as sources of teaching self-efficacy. The model is pilot tested with 128 GTAs from nine different STEM departments at a midsized research university. Structural equation modeling reveals that K–12 teaching experience, hours and perceived quality of GTA PD, and perception of the departmental facilitating environment are significant factors that explain 32% of the variance in the teaching self-efficacy of STEM GTAs. This model highlights the important contributions of the departmental environment and GTA PD in the development of teaching self-efficacy for STEM GTAs.

Science, technology, engineering, and mathematics (STEM) graduate teaching assistants (GTAs) play a significant role in the learning environment of undergraduate students. They are heavily involved in the instruction of undergraduate students at master's- and doctoral-granting universities (Nyquist et al., 1991; Johnson and McCarthy, 2000; Sundberg et al., 2005; Gardner and Jones, 2011).
GTAs are commonly in charge of laboratory or recitation sections, in which they often have more contact and interaction with the students than the professor who is teaching the course (Abraham et al., 1997; Sundberg et al., 2005; Prieto and Scheel, 2008; Gardner and Jones, 2011).

Despite the heavy reliance on GTAs for instruction and the large potential for them to influence student learning, there is evidence that many GTAs are completely unprepared or at best poorly prepared for their role as instructors (Abraham et al., 1997; Rushin et al., 1997; Shannon et al., 1998; Golde and Dore, 2001; Fagen and Wells, 2004; Luft et al., 2004; Sundberg et al., 2005; Prieto and Scheel, 2008). For example, in molecular biology, 71% of doctoral students are GTAs, but only 30% have had an opportunity to take a GTA professional development (PD) course that lasted at least one semester (Golde and Dore, 2001). GTAs often teach in a primarily directive manner and have intuitive notions about student learning, motivation, and abilities (Luft et al., 2004). For those who experience PD, university-wide PD is often too general (e.g., covering university policies and procedures, resources for students), and departmental PD does not address GTAs' specific teaching needs; instead, departmental PD repeats the university PD (Jones, 1993; Golde and Dore, 2001; Luft et al., 2004). Nor do graduate experiences prepare GTAs to become faculty and teach lecture courses (Golde and Dore, 2001).

While there is ample evidence that many GTAs are poorly prepared, as well as studies of effective GTA PD programs (biology examples include Schussler et al., 2008; Miller et al., 2014; Wyse et al., 2014), the preparation of a graduate student as an instructor does not occur in a vacuum.
GTAs are also integral members of their departments and interact with faculty and other GTAs in many different ways, including around teaching (Bomotti, 1994; Notarianni-Girard, 1999; Belnap, 2005; Calkins and Kelly, 2005). It is important to build good working relationships among the GTAs and between the GTAs and their supervisors (Gardner and Jones, 2011). However, there are few studies that examine the development of GTAs as integral members of their departments and determine how departmental teaching climate, GTA PD, and prior teaching experiences can impact GTAs.

To guide our understanding of the development of GTAs as instructors, a theoretical framework is important. Social cognitive theory is a well-developed theoretical framework for describing behavior and can be applied specifically to teaching (Bandura, 1977, 1986, 1997, 2001). A key concept in social cognitive theory is self-efficacy, which is a person's belief in his or her ability to perform a specific task in a specific context (Bandura, 1997). High self-efficacy correlates with strong performance in a task such as teaching (Bandura, 1997; Tschannen-Moran and Hoy, 2007). Teaching self-efficacy focuses on teachers' perceptions of their ability to "organize and execute courses of action required to successfully accomplish a specific teaching task in a particular context" (Tschannen-Moran et al., 1998, p. 233). High teaching self-efficacy has been shown to predict a variety of types of student achievement among K–12 teachers (Ashton and Webb, 1986; Anderson et al., 1988; Ross, 1992; Dellinger et al., 2008; Klassen et al., 2011). In GTAs, teaching self-efficacy has been shown to be related to persistence in academia (Elkins, 2005) and student achievement in mathematics (Johnson, 1998).
High teaching self-efficacy is evidenced by classroom behaviors such as efficient classroom management, organization and planning, and enthusiasm (Guskey, 1984; Allinder, 1994; Dellinger et al., 2008). Instructors with high teaching self-efficacy work continually with students to help them learn the material (Gibson and Dembo, 1984). These instructors are also willing to try a variety of teaching methods to improve their teaching (Stein and Wang, 1988; Allinder, 1994). Instructors with high teaching self-efficacy perform better as teachers, are persistent in difficult teaching tasks, and can positively affect their students' achievement.

These behaviors of successful instructors, which can contribute to student success, are important to foster in STEM GTAs. An understanding of what influences the development of teaching self-efficacy in STEM GTAs can be used to improve their teaching self-efficacy and ultimately their teaching. Therefore, it is important to understand what impacts teaching self-efficacy in STEM GTAs. Current research into factors that influence GTA teaching self-efficacy is generally limited to one or two factors per study (Heppner, 1994; Prieto and Altmaier, 1994; Prieto and Meyers, 1999; Prieto et al., 2007; Liaw, 2004; Meyers et al., 2007). Studying these factors in isolation does not allow us to understand how they work together to influence GTA teaching self-efficacy. Additionally, most studies of GTA teaching self-efficacy are not conducted with STEM GTAs. STEM instructors teach in a different environment and with different responsibilities than instructors in the social sciences and liberal arts (Lindbloom-Ylanne et al., 2006). These differences could impact the development of teaching self-efficacy of STEM GTAs compared with social science and liberal arts GTAs.
To further our understanding of the development of STEM GTA teaching self-efficacy, this paper aims to 1) describe a model of factors that could influence GTA teaching self-efficacy, and 2) pilot test the model using structural equation modeling (SEM) on data gathered from STEM GTAs. The model is developed from social cognitive theory and GTA teaching literature, with support from the K–12 teaching self-efficacy literature. This study is an essential first step in improving our understanding of the important factors impacting STEM GTA teaching self-efficacy, which can then be used to inform and support the preparation of effective STEM GTAs.
2.
Course-based undergraduate research experiences (CUREs) may be a more inclusive entry point to scientific research than independent research experiences, and the implementation of CUREs at the introductory level may therefore be a way to improve the diversity of the scientific community.

The U.S. scientific research community does not reflect America's diversity. Hispanics, African Americans, and Native Americans made up 31% of the general population in 2010, but they represented only 18 and 7% of science, technology, engineering, and mathematics (STEM) bachelor's and doctoral degrees, respectively, and 6% of STEM faculty members (National Science Foundation [NSF], 2013). Equity in the scientific research community is important for a variety of reasons; a diverse community of researchers can minimize the negative influence of bias in scientific reasoning, because people from different backgrounds approach a problem from different perspectives and can raise awareness regarding biases (Intemann, 2009). Additionally, by failing to be attentive to equity, we may exclude some of the best and brightest scientific minds and limit the pool of possible scientists (Intemann, 2009). Given this need for equity, how can our scientific research community become more inclusive?

Current approaches to improving diversity in scientific research focus on graduating more STEM majors, but graduation with a STEM undergraduate degree alone is not sufficient for entry into graduate school.
Undergraduate independent research experiences are becoming a de facto prerequisite for admission into graduate school and, eventually, a career in academia; a quick look at the recommendations of any top graduate program in biology or at science career–related websites reveals an expectation of undergraduate research and a perceived handicap if recommendation letters for graduate school do not discuss the applicant's research experience (Webb, 2007; Harvard University, 2013).

Independent undergraduate research experiences have been shown to improve the retention of students in scientific research (National Research Council, 2003; Laursen et al., 2010; American Association for the Advancement of Science, 2011; Eagan et al., 2013). Participation in independent research experiences has been shown to increase interest in pursuing a PhD (Seymour et al., 2004; Russell et al., 2007) and seems to be particularly beneficial for students from historically underrepresented backgrounds (Villarejo et al., 2008; Jones et al., 2010; Espinosa, 2011; Hernandez et al., 2013). However, the limited number of undergraduate research opportunities available and the structure of how students are selected for these independent research lab positions exclude many students and can perpetuate inequities in the research community. In this essay, we highlight barriers faced by students interested in pursuing an undergraduate independent research experience and factors that impact how faculty members select students for these limited positions. We examine how bringing research experiences into the required course work for students could mitigate these issues and ultimately make research more inclusive.
3.
4.
G. H. Roehrig, M. Michlin, L. Schmitt, C. MacNabb, J. M. Dubinsky. CBE Life Sciences Education, 2012, 11(4): 413–424
In science education, inquiry-based approaches to teaching and learning provide a framework for students to build critical-thinking and problem-solving skills. Teacher professional development has been an ongoing focus for promoting such educational reforms. However, despite a strong consensus regarding best practices for professional development, relatively little systematic research has documented classroom changes consequent to these experiences. This paper reports on the impact of sustained, multiyear professional development in a program that combined neuroscience content and knowledge of the neurobiology of learning with inquiry-based pedagogy on teachers' inquiry-based practices. Classroom observations demonstrated the value of multiyear professional development in solidifying adoption of inquiry-based practices and cultivating progressive yearly growth in the cognitive environment of impacted classrooms.

Current discussion about educational reform among business leaders, politicians, and educators revolves around the idea that students need "21st-century skills" to be successful today (Rotherham and Willingham, 2009). Proponents argue that to be prepared for college and to be competitive in the 21st-century workplace, students need to be able to identify issues, acquire and use new information, understand complex systems, use technologies, and apply critical and creative thinking skills (US Department of Labor, 1991; Bybee et al., 2007; Conley, 2007). Advocates of 21st-century skills favor student-centered methods—for example, problem-based learning and project-based learning.
In science education, inquiry-based approaches to teaching and learning provide one framework for students to build these critical-thinking and problem-solving skills (American Association for the Advancement of Science [AAAS], 1993; National Research Council [NRC], 2000; Capps et al., 2012).

Unfortunately, in spite of the central role of inquiry in the national and state science standards, inquiry-based instruction is rarely implemented in secondary classrooms (Weiss et al., 1994; Bybee, 1997; Hudson et al., 2002; Smith et al., 2002; Capps et al., 2012). Guiding a classroom through planning, executing, analyzing, and evaluating open-ended investigations requires teachers to have sufficient expertise, content knowledge, and self-confidence to be able to maneuver through multiple potential roadblocks. Researchers cite myriad reasons for the lack of widespread inquiry-based instruction in schools: traditional beliefs about teaching and learning (Roehrig and Luft, 2004; Saad and BouJaoude, 2012), lack of pedagogical skills (Shulman, 1986; Adams and Krockover, 1997; Crawford, 2007), lack of time (Loughran, 1994), inadequate knowledge of the practice of science (Duschl, 1987; DeBoer, 2004; Saad and BouJaoude, 2012), perceived time constraints due to high-stakes testing, and inadequate preparation in science (Krajcik et al., 2000). Yet teachers are necessarily at the center of reform, as they make instructional and pedagogical decisions within their own classrooms (Cuban, 1990).
Given that the effectiveness of teachers' classroom practices is critical to the success of current science education reforms, teacher professional development has been an ongoing focus for promoting educational reform (Corcoran, 1995; Corcoran et al., 1998).

A review of the education research literature yields an extensive knowledge base in "best practices" for professional development (Corcoran, 1995; NRC, 1996; Loucks-Horsley and Matsumoto, 1999; Loucks-Horsley et al., 2009; Haslam and Fabiano, 2001; Wei et al., 2010). However, in spite of a strong consensus on what constitutes best practices for professional development (Desimone, 2009; Wei et al., 2010), relatively little systematic research has been conducted to support this consensus (Garet et al., 2001). Similarly, when specifically considering the science education literature, several studies have been published on the impact of teacher professional development on inquiry-based practices (e.g., Supovitz and Turner, 2000; Banilower et al., 2007; Capps et al., 2012). Unfortunately, these studies usually rely on teacher self-report data; few studies have reported empirical evidence of what actually occurs in the classroom following a professional development experience.

Thus, in this study, we set out to determine through observational empirical data whether documented effective professional development does indeed change classroom practices. In this paper, we describe an extensive professional development experience for middle school biology teachers designed to develop teachers' neuroscience content knowledge and inquiry-based pedagogical practices. We investigate the impact of professional development delivered collaboratively by experts in science and pedagogy on promoting inquiry-based instruction and an investigative classroom culture. The study was guided by the following research questions:
- Were teachers able to increase their neuroscience content knowledge?
- Were teachers able to effectively implement student-centered reform or inquiry-based pedagogy?
- Would multiple years of professional development result in greater changes in teacher practices?
5.
6.
Although we agree with Theobald and Freeman (2014) that linear models are the most appropriate way to analyze assessment data, we show the importance of testing for interactions between covariates and factors.

To the Editor:

Recently, Theobald and Freeman (2014) reviewed approaches for measuring student learning gains in science, technology, engineering, and mathematics (STEM) education research. In their article, they highlighted the shortcomings of approaches such as raw change scores, normalized gain scores, normalized change scores, and effect sizes when students are not randomly assigned to classes based on the different pedagogies that are being compared. As an alternative, they propose using linear regression models in which characteristics of students, such as pretest scores, are included as independent variables in addition to treatments. Linear models that include both continuous and categorical independent variables are often termed analysis of covariance (ANCOVA) models. The approach of using ANCOVA to control for differences in students among treatment groups has been suggested previously by Weber (2009). We largely agree with Theobald and Freeman (2014) and Weber (2009) that ANCOVA models are an appropriate method for situations in which students cannot be randomly assigned to treatments and controls. However, in describing how to implement linear regression models to examine student learning gains, Theobald and Freeman (2014) ignore a fundamental assumption of ANCOVA.

ANCOVA assumes homogeneity of slopes (McDonald, 2009; Sokal and Rohlf, 2011). In other words, the slope of the relationship between the covariate (e.g., pretest score) and the dependent variable (e.g., posttest score) is the same for the treatment group and the control. This is a strict assumption of ANCOVA in that violations of it can result in incorrect conclusions (Engqvist, 2005).
For example, in Figure 1, both pretest score and treatment have statistically significant main effects in a linear model with only pretest score (F(1, 97) = 25.6, p < 0.001) and treatment (F(1, 97) = 42.6, p < 0.01) as independent variables. Therefore, we would conclude that all students in the class with the pedagogical innovation had significantly greater posttest scores than the students in the control class for a given pretest score. Furthermore, we would conclude that the pedagogical innovation led to the same increase in score for all students in the treatment class, independent of their pretest scores. Clearly, neither of these conclusions would be justified.

Researchers must first test the assumption of the homogeneity of slopes by including an interaction term (covariate × treatment) in their linear model (McDonald, 2009; Weber, 2009; Sokal and Rohlf, 2011). For example, if we measured student achievement in two courses with different instructional approaches in a typical pretest/posttest design, then the interaction between students' pretest scores and the type of instruction must be considered, because the instruction may have a different effect for high- versus low-achieving students. If multiple covariates are included in the linear model (see Equation 1 in Theobald and Freeman, 2014), then interaction terms need to be included for each of the covariates in the model. If the interaction term is statistically significant, this suggests that the relationship between the covariate and the dependent variable is different for each treatment group (F(1, 96) = 25.1, p < 0.001; Figure 1). As a result, the effect of the treatment will depend on the value of the covariate, and universal statements about the effect of the treatment are not appropriate (Engqvist, 2005). If the interaction term is not statistically significant, it should be removed from the model and the analysis rerun without the interaction term.
Failure to remove an interaction term that was not statistically significant also can lead to an incorrect conclusion (Engqvist, 2005). Whether there are statistically significant interactions between the "treatment" and the covariates in the data set used by Theobald and Freeman (2014) is unclear.

Figure 1. Simulated data to demonstrate heterogeneity of slopes. Pretest values were generated from random normal distributions with mean = 59.8 (SD = 18.1) for the treatment course and mean = 59.3 (SD = 17.0) for the control course, based on values given in Theobald and Freeman (2014). For the treatment course, posttest values were calculated using the formula posttest_i = 80 + 0.1 × pretest_i + ε_i, where ε_i was selected from a random normal distribution with mean = 0 (SD = 10). For the control course, posttest values were calculated using the formula posttest_i = 42 + 0.5 × pretest_i + ε_i, where ε_i was selected from a random normal distribution with mean = 0 (SD = 10). n = 50 for both courses.

In addition to being a strict assumption of ANCOVA, testing for homogeneity of slopes in a linear model is important in STEM education research, as slopes are likely heterogeneous for several reasons. First, for many instruments used in STEM education research, high-achieving students score high on the pretest. As a result, their ability to improve is limited due to the ceiling effect, and differences between treatment and control groups in posttest scores are likely to be minimal (Figure 1). In contrast, low-achieving students have a greater opportunity to change their scores between their pretest and posttest. Second, pedagogical innovations are more likely to have a greater impact on the learning of lower-performing students than higher-performing students.
For example, Beck and Blumer (2012) found statistically greater gains in student confidence and scientific reasoning skills for students in the lowest quartile as compared with students in the highest quartile on pretest assessments in inquiry-based laboratory courses.

Theobald and Freeman (2014, p. 47) note that "regression models can also include interaction terms that test whether the intervention has a differential impact on different types of students." Yet we argue that these terms must be included and should be excluded only if they are not statistically significant.
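The heterogeneity of slopes described above can be reproduced with a short simulation. The sketch below is ours, not the letter's code: it draws data using the parameters stated in the Figure 1 caption and fits a separate least-squares slope to each course. Because a full linear model with a treatment dummy and a covariate × treatment interaction reproduces the two per-group fits exactly, the difference between the two slopes equals the interaction coefficient. All function names are our own.

```python
import random
import statistics


def simulate_course(n, pre_mean, pre_sd, intercept, slope, noise_sd, rng):
    """Draw pretest scores from a normal distribution and generate
    posttest scores as intercept + slope * pretest + normal noise."""
    pre = [rng.gauss(pre_mean, pre_sd) for _ in range(n)]
    post = [intercept + slope * p + rng.gauss(0, noise_sd) for p in pre]
    return pre, post


def ols_slope(x, y):
    """Closed-form least-squares slope: cov(x, y) / var(x)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx


rng = random.Random(42)
# Parameters as given in the Figure 1 caption: n = 50 per course,
# treatment: posttest = 80 + 0.1 * pretest + N(0, 10)
# control:   posttest = 42 + 0.5 * pretest + N(0, 10)
pre_t, post_t = simulate_course(50, 59.8, 18.1, 80, 0.1, 10, rng)
pre_c, post_c = simulate_course(50, 59.3, 17.0, 42, 0.5, 10, rng)

slope_t = ols_slope(pre_t, post_t)
slope_c = ols_slope(pre_c, post_c)
# Equals the covariate × treatment coefficient in the combined model
interaction = slope_t - slope_c

print(f"treatment slope ≈ {slope_t:.2f}, control slope ≈ {slope_c:.2f}, "
      f"interaction ≈ {interaction:.2f}")
```

With these parameters the fitted slopes land near 0.1 and 0.5, so the interaction estimate is strongly negative; a model that drops the interaction term would report a single treatment effect that overstates the benefit for high-scoring students and understates it for low-scoring students.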
7.
Howard Garrison. CBE Life Sciences Education, 2013, 12(3): 357–363
Blacks, Hispanics, and American Indians/Alaskan Natives are underrepresented in science and engineering fields. A comparison of race–ethnic differences at key transition points was undertaken to better inform education policy. National data on high school graduation, college enrollment, choice of major, college graduation, graduate school enrollment, and doctoral degrees were used to quantify the degree of underrepresentation at each level of education and the rate of transition to the next stage. Disparities are found at every level, and their impact is cumulative. For the most part, differences in graduation rates, rather than differential matriculation rates, make the largest contribution to the underrepresentation. The size, scope, and persistence of the disparities suggest that small-scale, narrowly targeted remediation will be insufficient.

Most scientists and engineers take great pride in their reliance on logic and empirical evidence in decision making, and they reject the use of emotional, parochial, and irrational criteria. Prejudices of any sort are abjured. The prevalence of laboratory personnel and research collaborators from diverse national origins is often cited as an example of this meritocratic ideal. Therefore, the U.S. biomedical research community was shocked when a study revealed that Black Americans and other groups were substantially underrepresented in the receipt of grants from the National Institutes of Health (NIH), even after other correlates of success were controlled (Ginther et al., 2011). This picture clashed dramatically with the standards the community claimed. In the wake of this revelation, NIH created a high-level advisory group to examine the situation and make recommendations to address it (NIH, 2012).

Concern about underrepresentation of Black Americans and other race–ethnic groups in science is not new (Melnick and Hamilton, 1977), and many attempts have been made to ameliorate or eliminate the gaps.
While there have been some gains—underrepresented racial minority (URM) students rose from 2% of biomedical graduate students to more than 11% since 1980 (National Research Council, 2011)—disparities remain in all fields of science and engineering at all education levels and career stages (National Academy of Sciences, 2011).

Given the limited progress in correcting this situation, it is essential to have a better understanding of the origin and extent of the problem. Especially in the current fiscal climate, with insufficient funding for education programs, interventions must be accurately targeted and appropriate to reach their goals. How large are the race–ethnic differences in science enrollments at each level of education? Are there general patterns that can help guide policy? Using data from 2008 and 2009, a recent National Science Foundation (NSF) report illustrates the underrepresentation of Blacks, Hispanics, and American Indians/Alaskan Natives at various education levels (NSF, 2011a). While informative and illustrative of the extent of the problem, this single-year, cross-sectional perspective does not capture the conditions encountered by recent doctorate earners as they progressed through earlier stages in their education. Looking at graduation rates in the life sciences, Ginther et al. (2009) found that minority participation is increasing in biology, but minority students are not transitioning between milestones in the same proportions as Whites.
8.
A response to Maskiewicz and Lineback's essay in the September 2013 issue of CBE—Life Sciences Education.

Dear Editor:

Maskiewicz and Lineback (2013) have written a provocative essay about how the term misconceptions is used in biology education and the learning sciences in general. Their historical perspective highlights the logic and utility of the constructivist theory of learning. They emphasize that students' preliminary ideas are resources to be built upon, not errors to be eradicated. Furthermore, Maskiewicz and Lineback argue that the term misconception has been largely abandoned by educational researchers, because it is not consistent with constructivist theory. Instead, they conclude, members of the biology education community should speak of preconceptions, naïve conceptions, commonsense conceptions, or alternative conceptions.

We respectfully disagree. Our objections encompass both the semantics of the term misconception and the more general issue of constructivist theory and practice. We now address each of these in turn. (For additional discussion, please see Leonard, Andrews, and Kalinowski, "Misconceptions Yesterday, Today, and Tomorrow," CBE—Life Sciences Education [LSE], in press, 2014.)

Is misconception suitable for use in scholarly discussions? The answer depends partly on the intended audience. We avoid using the term misconception with students, because it could be perceived as pejorative. However, connotations of disapproval are less of a concern for the primary audience of LSE and similar journals, that is, learning scientists, discipline-based education researchers, and classroom teachers.

An additional consideration is whether misconception is still used in learning sciences outside biology education.
Maskiewicz and Lineback claim that misconception is rarely used in journals such as Cognition and Instruction, Journal of the Learning Sciences, Journal of Research in Science Teaching, and Science Education, yet the term appears in about a quarter of the articles published by these journals in 2013 (Table 1).
a As of November 25, 2013. Does not include very short editorials, commentaries, corrections, or prepublication online versions.

A final consideration is whether any of the possible alternatives to misconception are preferable. We feel that the alternatives suggested by Maskiewicz and Lineback are problematic in their own ways. For example, naïve conception sounds more strongly pejorative to us than misconception. Naïve conception and preconception also imply that conceptual challenges occur only at the very beginning stages of learning, even though multiple rounds of conceptual revisions are sometimes necessary (e.g., see figure 1 of Andrews et al., 2012) as students move through learning progressions. Moreover, the terms preferred by Maskiewicz and Lineback are used infrequently. A further aspect of Maskiewicz and Lineback's critique is that they object to statements that misconceptions should be actively confronted, challenged, overcome, corrected, and/or replaced. Smith et al. (1993) argue on theoretical grounds that confrontation does not allow refinement of students' pre-existing, imperfect ideas; instead, the students must simply choose among discrete prepackaged ideas. From Maskiewicz and Lineback's perspective, the papers they cite use outdated views of misconceptions.
a While these papers do not adhere to Smith et al.'s (1993) version of constructivism, they do adhere to the constructivist approach that advocates cognitive dissonance.

Our own stance differs from that of Maskiewicz and Lineback, reflecting a lack of consensus within constructivist theory. We agree with those who argue that, not only are confrontations compatible with constructivist learning, they are a central part of it (e.g., Gilbert and Watts, 1983; Hammer, 1996). We note that Baviskar et al. (2009) list "creating cognitive dissonance" as one of the four main tenets of constructivist teaching. Their work is consistent with research showing that focusing students on conflicting ideas improves understanding more than approaches that do not highlight conflicts (e.g., Kowalski and Taylor, 2009; Gadgil et al., 2012). Similarly, the Discipline-Based Education Research report (National Research Council, 2012, p. 70) advocates "bridging analogies," a form of confrontation, to guide students toward more accurate ways of thinking. Therefore, we do not share Maskiewicz and Lineback's concerns about the papers in question (see also Price, 2012). We embrace collegial disagreement.

Maskiewicz and Lineback imply that labeling students' ideas as misconceptions essentially classifies these ideas as either right or wrong, with no intermediate stages for constructivist refinement. In fact, a primary goal of creating concept inventories, which use the term misconception profusely (e.g., Morris et al., 2012; Prince et al., 2012), is to demonstrate that learning is a complex composite of scientifically valid and invalid ideas (e.g., Andrews et al., 2012). A researcher or instructor who uses the word misconceptions can agree wholeheartedly with Maskiewicz and Lineback's point that misconceptions can be a good starting point from which to develop expertise.

As we have seen, misconception is itself fraught with misconceptions.
The term now embodies the evolution of our understanding of how people learn. We support the continued use of the term, agreeing with Maskiewicz and Lineback that authors should define it carefully. For example, in our own work, we define misconceptions as inaccurate ideas that can predate or emerge from instruction (e.g., Andrews et al., 2012). We encourage instructors to view misconceptions as opportunities for cognitive dissonance that students encounter as they progress in their learning.
Table 1. Use of the term misconception in selected education research journals in 2013

Journal (total articles published in 2013) | Articles using misconception ("nondisapproving" articles/total articles) | Articles using other terms |
---|---|---|
LSE (59) | 23/24 | Alternative conception (4), Commonsense conception (2), Naïve conception (1), Preconception (4) |
Cognition and Instruction (16) | 3/3 | None |
Journal of the Learning Sciences (17) | 4/4 | Commonsense science knowledge (1), Naïve conception (1), Prior conception (1) |
Journal of Research in Science Teaching (49) | 11/13 | Commonsense idea (1), Naïve conception (1), Preconception (5) |
Science Education (36) | 10/11 | Naïve conception (1) |
Article | Example of constructivist language | Example of language suggesting confrontation |
---|---|---|
Andrews et al., 2011 | “Constructivist theory argues that individuals construct new understanding based on what they already know and believe.… We can expect students to retain serious misconceptions if instruction is not specifically designed to elicit and address the prior knowledge students bring to class” (p. 400). | Instructors were scored for “explaining to students why misconceptions were incorrect” and “making a substantial effort toward correcting misconceptions” (p. 399). “Misconceptions must be confronted before students can learn natural selection” (p. 399). “Instructors need to elicit misconceptions, create situations that challenge misconceptions.” (p. 403). |
Baumler et al., 2012 | "The last pair [of students]'s response invoked introns, an informative answer, in that it revealed a misconception grounded in a basic understanding of the Central Dogma" (p. 89; acknowledges students’ useful prior knowledge). | No relevant text found |
Cox-Paulson et al., 2012 | No relevant text found | This paper barely mentions misconceptions, but cites sources (Phillips et al., 2008 ; Robertson and Phillips, 2008 ) that refer to “exposing,” “uncovering,” and “correcting” misconceptions. |
Crowther, 2012 | “Prewritten songs may explain concepts in new ways that clash with students’ mental models and force revision of those models” (p. 28; emphasis added). | “Songs can be particularly useful for countering … conceptual misunderstandings.… Prewritten songs may explain concepts in new ways that clash with students’ mental models and force revision of those models” (p. 28). |
Kalinowski et al., 2010 | “Several different instructional approaches for helping students to change misconceptions … agree that instructors must take students’ prior knowledge into account and help students integrate new knowledge with their existing knowledge” (p. 88). | “One strategy for correcting misconceptions is to challenge them directly by ‘creating cognitive conflict,’ presenting students with new ideas that conflict with their pre-existing ideas about a phenomenon… In addition, study of multiple examples increases the chance of students identifying and overcoming persistent misconceptions” (p. 89). |
9.
Helen L. Vasaly, Jose Herrera, Charles H. Sullivan, Katherine J. Denniston. CBE—Life Sciences Education 2013, 12(1):1–4
Many life sciences faculty and administrators are unaware of existing funding programs and of the strategies needed for writing an educationally related proposal. We hope to remedy this problem by making the life sciences audience aware of two National Science Foundation programs underutilized by the biology community.

This column has been a welcome opportunity to keep the CBE—Life Sciences Education readership aware of national efforts to improve undergraduate education in the life sciences and of ways to become a part of that effort (Woodin et al., 2009, 2010, 2012; Wei and Woodin, 2011). Throughout the years of engagement in the Vision and Change initiative, from the summer of 2007 to the present, the three primary agencies involved, the National Science Foundation (NSF), the National Institutes of Health (NIH), and the Howard Hughes Medical Institute (HHMI), have continually maintained a dialogue with participants through formal and informal conversations, workshops, and meetings. Our shared focus has been on how the life sciences community itself can change biology undergraduate education in order to better reflect and respond to the current educational environment, including the
- rapid advances in the discipline,
- new educational technologies and platforms becoming available,
- evidence developed through research on effective practices in undergraduate education, and
- challenges of accomplishing the necessary changes with the resources available.
Two NSF programs that remain underutilized by the biology community are the:

- Transforming Undergraduate Education in Science, Technology, Engineering, and Mathematics (TUES) program (anticipated Spring of 2013 release), and
- Research Coordination Networks–Undergraduate Biology Education (RCN-UBE) program (next deadline is June 14, 2013).
10.
Testing within the science classroom is commonly used for both formative and summative assessment purposes to let the student and the instructor gauge progress toward learning goals. Research within cognitive science suggests, however, that testing can also be a learning event. We present summaries of studies that suggest that repeated retrieval can enhance long-term learning in a laboratory setting; various testing formats can promote learning; feedback enhances the benefits of testing; testing can potentiate further study; and benefits of testing are not limited to rote memory. Most of these studies were performed in a laboratory environment, so we also present summaries of experiments suggesting that the benefits of testing can extend to the classroom. Finally, we suggest opportunities that these observations raise for the classroom and for further research.

Almost all science classes incorporate testing. Tests are most commonly used as summative assessment tools meant to gauge whether students have achieved the learning objectives of the course. They are sometimes also used as formative assessment tools—often in the form of low-stakes weekly or daily quizzes—to give students and faculty members a sense of students’ progression toward those learning objectives. Occasionally, tests are also used as diagnostic tools, to determine students’ preexisting conceptions or skills relevant to an upcoming subject. Rarely, however, do we think of tests as learning tools. We may acknowledge that testing promotes student learning, but we often attribute this effect to the studying students do to prepare for the test. And yet, one of the most consistent findings in cognitive psychology is that testing leads to increased retention more than studying alone does (Roediger and Butler, 2011; Roediger and Pyc, 2012). This effect can be enhanced when students receive feedback for failed tests and can be observed for both short-term and long-term retention.
There is some evidence that testing not only improves students’ memory of the tested information but also their ability to remember related information. Finally, testing appears to potentiate further study, allowing students to gain more from study periods that follow a test. Given the potential power of testing as a tool to promote learning, we should consider how to incorporate tests into our courses not only to gauge students’ learning, but also to promote that learning (Klionsky, 2008).

We provide six observations about the effects of testing from the cognitive psychology literature, summarizing key studies that led to these conclusions in the table below.

Study | Research question(s) | Conclusion | Length of delay before final test | Study participants |
---|---|---|---|---|
**Repeated retrieval enhances long-term retention in a laboratory setting** | | | | |
"Test-enhanced learning: taking memory tests improves long-term retention" (Roediger and Karpicke, 2006a) | Is a testing effect observed in educationally relevant conditions? Is the benefit of testing greater than the benefit of restudy? Do multiple tests produce a greater effect than a single test? | Testing improved retention significantly more than restudy in delayed tests. Multiple tests provided greater benefit than a single test. | Experiment 1: 2 d; 1 wk. Experiment 2: 1 wk | Undergraduates ages 18–24, Washington University |
"Retrieval practice with short-answer, multiple-choice, and hybrid tests" (Smith and Karpicke, 2014) | What effect does the type of question presented in retrieval practice have on long-term retention? | Retrieval practice with multiple-choice, free-response, and hybrid formats improved students’ performance on a final, delayed test taken 1 wk later when compared with a no-retrieval control. The effect was observed for both questions that required only recall and those that required inference. Hybrid questions provided an advantage when the final test had a short-answer format. | 1 wk | Undergraduates, Purdue University |
"Retrieval practice produces more learning than elaborative studying with concept mapping" (Karpicke and Blunt, 2011) | What is the effect of retrieval practice on learning relative to elaborative study using a concept map? | Students in the retrieval-practice condition had greater gains in meaningful learning compared with those who used elaborative concept mapping as a learning tool. | 1 wk | Undergraduates |
**Various testing formats can enhance learning** | | | | |
"Retrieval practice with short-answer, multiple-choice, and hybrid tests" (Smith and Karpicke, 2014) | See above. | See above. | See above. | See above. |
"Test format and corrective feedback modify the effect of testing on long-term retention" (Kang et al., 2007) | What effect does the type of question used for retrieval practice have on retention? Does feedback have an effect on retention for different types of questions? | When no feedback was given, the difference in long-term retention between short-answer and multiple-choice questions was insignificant. When feedback was provided, short-answer questions were slightly more beneficial. | 3 d | Undergraduates, Washington University psychology subjects’ pool |
"The persisting benefits of using multiple-choice tests as learning events" (Little and Bjork, 2012) | What effect does question format have on retention of information previously tested and related information not included in retrieval practice? | Both cued-recall and multiple-choice questions improved recall compared with the no-test control. However, multiple-choice questions improved recall more than cued-recall questions for information not included in the retrieval practice, both after a 5-min and a 48-h delay. | 48 h | Undergraduates, University of California, Los Angeles |
**Feedback enhances benefits of testing** | | | | |
"Feedback enhances positive effects and reduces the negative effects of multiple-choice testing" (Butler and Roediger, 2008) | What effect does feedback on multiple-choice tests have on long-term retention of information? | Feedback improved retention on a final cued-recall test. Delayed feedback resulted in better final performance than immediate feedback, though both showed benefits compared with no feedback. The final test occurred 1 wk after the initial test. | 1 wk | Undergraduate psychology students, Washington University |
"Correcting a metacognitive error: feedback increases retention of low-confidence responses" (Butler et al., 2008) | What role does feedback play in retrieval practice? Can it correct metacognitive errors as well as memory errors? | Both initially correct and incorrect answers were benefited by feedback, but low-confidence answers were most benefited by feedback. | 5 min | Undergraduate psychology students, Washington University |
**Learning is not limited to rote memory** | | | | |
"Retrieval practice produces more learning than elaborative study with concept mapping" (Karpicke and Blunt, 2011) | What is the effect of retrieval practice on learning relative to elaborative study using a concept map? Does retrieval practice improve students’ ability to perform higher-order cognitive activities (i.e., building a concept map) as well as simple recall tasks? | Compared with elaborative study using concept mapping, retrieval practice improved students’ performance both on final tests that required short answers and final tests that required concept map production. See also earlier entry for this study. | 1 wk | Undergraduates |
"Retrieval practice with short-answer, multiple-choice, and hybrid tests" (Smith and Karpicke, 2014) | See above. | See above. | See above. | See above. |
"Repeated testing produces superior transfer of learning relative to repeated studying" (Butler, 2010) | Does test-enhanced learning promote transfer of facts and concepts from one domain to another? | Testing improved retention and increased transfer of information from one domain to another through test questions that required factual or conceptual recall and inferential questions that required transfer. | 1 wk | Undergraduate psychology students, Washington University |
**Testing potentiates further study** | | | | |
"Pretesting with multiple-choice questions facilitates learning" (Little and Bjork, 2011) | Does pretesting using multiple-choice questions improve performance on a later test? Is an effect observed only for pretested information or also for related, previously untested information? | A multiple-choice pretest improved performance on a final test, both for information that was included on the pretest and related information. | 1 wk | Undergraduates, University of California, Los Angeles |
"The interim test effect: testing prior material can facilitate the learning of new material" (Wissman et al., 2011) | Does an interim test over previously learned material improve retention of subsequently learned material? | Interim testing improves recall on a final test for information taught before and after the interim test. | No delay | Undergraduates, Kent State University |
**The benefits of testing appear to extend to the classroom** | | | | |
"The exam-a-day procedure improves performance in psychology classes" (Leeming, 2002) | What effect does a daily exam have on retention at the end of the semester? | Students who took a daily exam in an undergraduate psychology class scored higher on a retention test at the end of the course and had higher average grades than students who only took unit tests. | One semester | Undergraduates enrolled in Summer term of Introductory Psychology, University of Memphis |
"Repeated testing improves long-term retention relative to repeated study: a randomized controlled trial" (Larsen et al., 2009) | Does repeated testing improve long-term retention in a real learning environment? | In a study with medical residents, repeated testing with feedback improved retention more than repeated study for a final recall test 6 mo later. | 6 mo | Residents from Pediatrics and Emergency Medicine programs, Washington University |
"Retrieving essential material at the end of lectures improves performance on statistics exams" (Lyle and Crawford, 2011) | What effect does daily recall practice using the PUREMEM method have on course exam scores? | In an undergraduate psychology course, students using the PUREMEM method had higher exam scores than students taught with traditional lectures, assessed by four noncumulative exams spaced evenly throughout the semester. | ∼3.5 wk | Undergraduates enrolled in either of two consecutive years of Statistics for Psychology, University of Louisville |
"Using quizzes to enhance summative-assessment performance in a web-based class: an experimental study" (McDaniel et al., 2012) | What effects do online testing resources have on retention of information in an online undergraduate neuroscience course? | Both multiple-choice and short-answer quiz questions improved retention and improved scores on the final exam for questions identical to those on the weekly quizzes and those that were related but not identical. | 15 wk | Undergraduates enrolled in Web-based brain and behavior course |
"Increasing student success using online quizzing in introductory (majors) biology" (Orr and Foster, 2013) | What effect do required pre-exam quizzes have on final exam scores for students in an introductory (majors) biology course? | Students were required to complete 10 pre-exam quizzes throughout the semester. The scores of students who completed all of the quizzes or none of the quizzes were compared. Students of all abilities who completed all of the pre-exam quizzes had higher average exam scores than those who completed none. | One semester | Community college students enrolled in an introductory biology course for majors |
"Teaching students how to study: a workshop on information processing and self-testing helps students learn" (Stanger-Hall et al., 2011) | What effect does a self-testing exercise done in a workshop have on final exam questions covering the same topic used in the workshop? | Students who participated in the retrieval-practice workshop performed better on the exam questions related to the material covered in the workshop activity. However, there was no difference in overall performance on the exam between the two groups. | 10 wk | Undergraduate students in an introductory biology class |