1.
Course-based undergraduate research experiences (CUREs) may be a more inclusive entry point to scientific research than independent research experiences, and implementing CUREs at the introductory level may therefore be a way to improve the diversity of the scientific community.

The U.S. scientific research community does not reflect America's diversity. Hispanics, African Americans, and Native Americans made up 31% of the general population in 2010, but they earned only 18% of science, technology, engineering, and mathematics (STEM) bachelor's degrees and 7% of STEM doctoral degrees, and they held only 6% of STEM faculty positions (National Science Foundation [NSF], 2013). Equity in the scientific research community matters for several reasons: a diverse community of researchers can minimize the negative influence of bias in scientific reasoning, because people from different backgrounds approach a problem from different perspectives and can raise awareness of biases (Intemann, 2009). Additionally, by failing to attend to equity, we may exclude some of the best and brightest scientific minds and limit the pool of possible scientists (Intemann, 2009). Given this need for equity, how can our scientific research community become more inclusive?

Current approaches to improving diversity in scientific research focus on graduating more STEM majors, but graduating with a STEM undergraduate degree alone is not sufficient for entry into graduate school.
Undergraduate independent research experiences are becoming a de facto prerequisite for admission into graduate school and, eventually, a career in academia; the recommendations of top graduate programs in biology and of science career–related websites state an expectation of undergraduate research and note a perceived handicap when recommendation letters for graduate school do not discuss the applicant's research experience (Webb, 2007; Harvard University, 2013).

Independent undergraduate research experiences have been shown to improve the retention of students in scientific research (National Research Council, 2003; Laursen et al., 2010; American Association for the Advancement of Science, 2011; Eagan et al., 2013). Participation in independent research experiences increases interest in pursuing a PhD (Seymour et al., 2004; Russell et al., 2007) and seems to be particularly beneficial for students from historically underrepresented backgrounds (Villarejo et al., 2008; Jones et al., 2010; Espinosa, 2011; Hernandez et al., 2013). However, the limited number of undergraduate research opportunities and the way students are selected for these independent research lab positions exclude many students and can perpetuate inequities in the research community. In this essay, we highlight barriers faced by students interested in pursuing an undergraduate independent research experience and factors that shape how faculty members select students for these limited positions. We examine how bringing research experiences into students' required course work could mitigate these issues and ultimately make research more inclusive.
4.
G. H. Roehrig M. Michlin L. Schmitt C. MacNabb J. M. Dubinsky 《CBE life sciences education》2012,11(4):413-424
In science education, inquiry-based approaches to teaching and learning provide a framework for students to build critical-thinking and problem-solving skills. Teacher professional development has been an ongoing focus for promoting such educational reforms. However, despite a strong consensus regarding best practices for professional development, relatively little systematic research has documented classroom changes consequent to these experiences. This paper reports how sustained, multiyear professional development in a program that combined neuroscience content and knowledge of the neurobiology of learning with inquiry-based pedagogy affected teachers' inquiry-based practices. Classroom observations demonstrated the value of multiyear professional development in solidifying adoption of inquiry-based practices and cultivating progressive yearly growth in the cognitive environment of the affected classrooms.

Current discussion about educational reform among business leaders, politicians, and educators revolves around the idea that students need "21st-century skills" to be successful today (Rotherham and Willingham, 2009). Proponents argue that, to be prepared for college and competitive in the 21st-century workplace, students need to be able to identify issues, acquire and use new information, understand complex systems, use technologies, and apply critical and creative thinking skills (US Department of Labor, 1991; Bybee et al., 2007; Conley, 2007). Advocates of 21st-century skills favor student-centered methods such as problem-based and project-based learning.
In science education, inquiry-based approaches to teaching and learning provide one framework for students to build these critical-thinking and problem-solving skills (American Association for the Advancement of Science [AAAS], 1993; National Research Council [NRC], 2000; Capps et al., 2012).

Unfortunately, in spite of the central role of inquiry in the national and state science standards, inquiry-based instruction is rarely implemented in secondary classrooms (Weiss et al., 1994; Bybee, 1997; Hudson et al., 2002; Smith et al., 2002; Capps et al., 2012). Guiding a classroom through planning, executing, analyzing, and evaluating open-ended investigations requires teachers to have sufficient expertise, content knowledge, and self-confidence to maneuver through multiple potential roadblocks. Researchers cite myriad reasons for the lack of widespread inquiry-based instruction in schools: traditional beliefs about teaching and learning (Roehrig and Luft, 2004; Saad and BouJaoude, 2012), lack of pedagogical skills (Shulman, 1986; Adams and Krockover, 1997; Crawford, 2007), lack of time (Loughran, 1994), inadequate knowledge of the practice of science (Duschl, 1987; DeBoer, 2004; Saad and BouJaoude, 2012), perceived time constraints due to high-stakes testing, and inadequate preparation in science (Krajcik et al., 2000). Yet teachers are necessarily at the center of reform, as they make the instructional and pedagogical decisions within their own classrooms (Cuban, 1990).
Given that the effectiveness of teachers' classroom practices is critical to the success of current science education reforms, teacher professional development has been an ongoing focus for promoting educational reform (Corcoran, 1995; Corcoran et al., 1998). A review of the education research literature yields an extensive knowledge base of "best practices" for professional development (Corcoran, 1995; NRC, 1996; Loucks-Horsley and Matsumoto, 1999; Haslam and Fabiano, 2001; Loucks-Horsley et al., 2009; Wei et al., 2010). However, in spite of a strong consensus on what constitutes best practice (Desimone, 2009; Wei et al., 2010), relatively little systematic research has been conducted to support this consensus (Garet et al., 2001). Within the science education literature specifically, several studies have examined the impact of teacher professional development on inquiry-based practices (e.g., Supovitz and Turner, 2000; Banilower et al., 2007; Capps et al., 2012). Unfortunately, these studies usually rely on teacher self-report data; few have reported empirical evidence of what actually occurs in the classroom following a professional development experience.

Thus, in this study, we set out to determine through observational empirical data whether documented effective professional development does indeed change classroom practices. We describe an extensive professional development experience for middle school biology teachers designed to develop teachers' neuroscience content knowledge and inquiry-based pedagogical practices, and we investigate the impact of professional development delivered collaboratively by experts in science and pedagogy on promoting inquiry-based instruction and an investigative classroom culture. The study was guided by the following research questions:
- Were teachers able to increase their neuroscience content knowledge?
- Were teachers able to effectively implement student-centered reform or inquiry-based pedagogy?
- Would multiple years of professional development result in greater changes in teacher practices?
5.
The Undergraduate Research Student Self-Assessment (URSSA): Validation for Use in Program Evaluation
This article examines the validity of the Undergraduate Research Student Self-Assessment (URSSA), a survey used to evaluate undergraduate research (UR) programs. The survey's underlying structure was assessed with confirmatory factor analysis; correlations between different average scores, score reliability, and matches between numerical and textual item responses were also examined. The study found that four components of the survey represent separate but related constructs for cognitive skills and affective learning gains derived from the UR experience. Average scores from item blocks formed reliable but moderately to highly correlated composite measures. Additionally, some questions about student learning gains (meant to assess individual learning) correlated with ratings of satisfaction with external aspects of the research experience. The pattern of correlation among individual items suggests that items asking students to rate external aspects of their environment behaved more like satisfaction ratings than items that directly ask about skill attainment. Finally, survey items asking about student aspirations to attend graduate school in science yielded inflated estimates of the proportion of students who had actually decided on graduate education after their UR experiences. Recommendations for revising the survey include clarifying item wording and increasing discrimination between item blocks through reorganization.

Undergraduate research (UR) experiences have long been an important component of science education at universities and colleges but have received greater attention in recent years, as they have been identified as important ways to strengthen preparation for advanced study and work in the science fields, especially among students from underrepresented minority groups (Tsui, 2007; Kuh, 2008).
UR internships provide students with the opportunity to conduct authentic research in laboratories with scientist mentors, as students help design projects, gather and analyze data, and write up and present findings (Laursen et al., 2010). The promised benefits of UR experiences include both increased skills and greater familiarity with how science is practiced (Russell et al., 2007). While students learn the basics of scientific methods and laboratory skills, they are also exposed to the culture and norms of science (Carlone and Johnson, 2007; Hunter et al., 2007; Lopatto, 2010). Students learn about the day-to-day world of practicing science and are introduced to how scientists design studies, collect and analyze data, and communicate their research. After participating in UR, students may make more informed decisions about their future, and some may be more likely to pursue graduate education in science, technology, engineering, and mathematics (STEM) disciplines (Bauer and Bennett, 2003; Russell et al., 2007; Eagan et al., 2013).

While UR experiences potentially have many benefits for undergraduate students, assessing these benefits is challenging (Laursen, 2015). Large-scale research-based evaluation of the effects of UR is limited by a range of methodological problems (Eagan et al., 2013). True experimental studies are almost impossible to implement, since random assignment of students into UR programs is both logistically and ethically impractical, while many simple comparisons between UR and non-UR groups of students suffer from noncomparable groups and limited generalizability (Maton and Hrabowski, 2004). Survey studies often rely on poorly developed measures and nonrepresentative samples, and large-scale survey research usually requires complex statistical models to control for student self-selection into UR programs (Eagan et al., 2013).
For smaller-scale program evaluation, evaluators encounter a number of measurement problems. Because of the wide range of disciplines, research topics, and methods, standardized tests assessing laboratory skills and understanding across these disciplines are difficult to find. While faculty at individual sites may directly assess products, presentations, and behavior using authentic assessments such as portfolios, rubrics, and performance assessments, these assessments can be time-consuming and are not easily comparable with similar efforts at other laboratories (Stokking et al., 2004; Kuh et al., 2014). Additionally, the affective outcomes of UR are not readily tapped by direct academic assessment: many of the benefits found for students in UR, such as motivation, enculturation, and self-efficacy, are not measured by tests or other assessments (Carlone and Johnson, 2007). Other instruments for assessing UR outcomes, such as Lopatto's SURE (Lopatto, 2010), focus on these affective outcomes rather than on direct assessment of skills and cognitive gains.

The size of most UR programs also makes assessment difficult. Research Experiences for Undergraduates (REUs), one mechanism by which UR programs may be organized within an institution, are funded by the National Science Foundation (NSF); but unlike many other NSF educational programs (e.g., TUES) that require fully funded evaluations with multiple sources of evidence (Frechtling, 2010), REUs are generally so small that they cannot support this type of evaluation unless multiple programs pool their resources. Informal UR experiences, offered to students by individual faculty within their own laboratories, are often more common but are typically not coordinated across departments or institutions or accountable to a central office or agency for assessment.
Partly toward this end, the Undergraduate Research Student Self-Assessment (URSSA) was developed as a common assessment instrument whose results can be compared across multiple UR sites within or across institutions. It is meant to be one source of assessment information about UR sites and their students.

The current research examines the validity of the URSSA in the context of its use as a self-report survey for UR programs and laboratories. Because the survey has been taken by more than 3400 students, we can test some aspects of how it is structured and how it functions. Assessing the validity of the URSSA for its intended use is a process of testing hypotheses about how well the survey represents its intended content. This ongoing process (Messick, 1993; Kane, 2001) involves gathering evidence from a range of sources to learn whether validity claims are supported and whether survey results can be used confidently in specific contexts. For the URSSA, our inquiry focuses on how the survey is used to assess consortia of REU sites. In this context, survey results are used for quality assurance, for comparisons of average ratings across years, and as general indicators of program success in encouraging students to pursue graduate science education and scientific careers. Our research questions focus on the meaning and reliability of "core indicators" used to track self-reported learning gains in four areas, and on the ability of numerical items to capture student aspirations to attend graduate school in the sciences.
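The composite-reliability analysis described above can be illustrated with a short sketch. This is not the URSSA data or the authors' analysis: the block size, sample size, and Likert response model below are invented for illustration; the sketch only shows how Cronbach's alpha for one item block would be computed.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the composite
    return k / (k - 1) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(1)
n, k = 200, 4  # hypothetical: 200 respondents, one 4-item block

# Simulate 5-point Likert responses driven by a single latent trait, so the
# items within the block are correlated (as items forming a composite should be)
latent = rng.normal(0.0, 1.0, n)
noise = rng.normal(0.0, 0.8, (n, k))
block = np.clip(np.round(3.0 + latent[:, None] + noise), 1, 5)

alpha = cronbach_alpha(block)
print(f"alpha = {alpha:.2f}")
```

An alpha near or above 0.8, as this simulation produces, is the kind of value that supports averaging a block's items into a single "core indicator," while high correlations *between* blocks would signal the discrimination problem the article describes.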
6.
A response to Maskiewicz and Lineback's essay in the September 2013 issue of CBE—Life Sciences Education.

Dear Editor:

Maskiewicz and Lineback (2013) have written a provocative essay about how the term misconceptions is used in biology education and the learning sciences in general. Their historical perspective highlights the logic and utility of the constructivist theory of learning. They emphasize that students' preliminary ideas are resources to be built upon, not errors to be eradicated. Furthermore, Maskiewicz and Lineback argue that the term misconception has been largely abandoned by educational researchers because it is not consistent with constructivist theory. Instead, they conclude, members of the biology education community should speak of preconceptions, naïve conceptions, commonsense conceptions, or alternative conceptions.

We respectfully disagree. Our objections encompass both the semantics of the term misconception and the more general issue of constructivist theory and practice. We now address each of these in turn. (For additional discussion, please see Leonard, Andrews, and Kalinowski, "Misconceptions Yesterday, Today, and Tomorrow," CBE—Life Sciences Education [LSE], in press, 2014.)

Is misconception suitable for use in scholarly discussions? The answer depends partly on the intended audience. We avoid using the term misconception with students, because it could be perceived as pejorative. However, connotations of disapproval are less of a concern for the primary audience of LSE and similar journals, that is, learning scientists, discipline-based education researchers, and classroom teachers.

An additional consideration is whether misconception is still used in learning sciences outside biology education.
Maskiewicz and Lineback claim that misconception is rarely used in journals such as Cognition and Instruction, Journal of the Learning Sciences, Journal of Research in Science Teaching, and Science Education, yet the term appears in about a quarter of the articles published by these journals in 2013 (Table 1).
(Table 1, note a: As of November 25, 2013. Does not include very short editorials, commentaries, corrections, or prepublication online versions.)

A final consideration is whether any of the possible alternatives to misconception are preferable. We feel that the alternatives suggested by Maskiewicz and Lineback are problematic in their own ways. For example, naïve conception sounds more strongly pejorative to us than misconception. Naïve conception and preconception also imply that conceptual challenges occur only at the very beginning stages of learning, even though multiple rounds of conceptual revision are sometimes necessary (e.g., see figure 1 of Andrews et al., 2012) as students move through learning progressions. Moreover, the terms preferred by Maskiewicz and Lineback are used infrequently (Table 1).

It is apparent from Maskiewicz and Lineback's alignment with Smith et al. (1993) that they object to statements that misconceptions should be actively confronted, challenged, overcome, corrected, and/or replaced. Smith et al. (1993) argue on theoretical grounds that confrontation does not allow refinement of students' pre-existing, imperfect ideas; instead, the students must simply choose among discrete prepackaged ideas. From Maskiewicz and Lineback's perspective, the papers listed in Table 2 exemplify these outdated views of misconceptions.
(Table 2, note a: While these papers do not adhere to Smith et al.'s (1993) version of constructivism, they do adhere to the constructivist approach that advocates cognitive dissonance.)

Our own stance differs from that of Maskiewicz and Lineback, reflecting a lack of consensus within constructivist theory. We agree with those who argue that confrontations are not only compatible with constructivist learning but a central part of it (e.g., Gilbert and Watts, 1983; Hammer, 1996). We note that Baviskar et al. (2009) list "creating cognitive dissonance" as one of the four main tenets of constructivist teaching. Their work is consistent with research showing that focusing students on conflicting ideas improves understanding more than approaches that do not highlight conflicts (e.g., Kowalski and Taylor, 2009; Gadgil et al., 2012). Similarly, the Discipline-Based Education Research report (National Research Council, 2012, p. 70) advocates "bridging analogies," a form of confrontation, to guide students toward more accurate ways of thinking. Therefore, we do not share Maskiewicz and Lineback's concerns about the papers listed in Table 2 (cf. Price, 2012). We embrace collegial disagreement.

Maskiewicz and Lineback imply that labeling students' ideas as misconceptions essentially classifies these ideas as either right or wrong, with no intermediate stages for constructivist refinement. In fact, a primary goal of creating concept inventories, which use the term misconception profusely (e.g., Morris et al., 2012; Prince et al., 2012), is to demonstrate that learning is a complex composite of scientifically valid and invalid ideas (e.g., Andrews et al., 2012). A researcher or instructor who uses the word misconceptions can agree wholeheartedly with Maskiewicz and Lineback's point that misconceptions can be a good starting point from which to develop expertise.

As we have seen, misconception is itself fraught with misconceptions.
The term now embodies the evolution of our understanding of how people learn. We support its continued use, agreeing with Maskiewicz and Lineback that authors should define it carefully. For example, in our own work, we define misconceptions as inaccurate ideas that can predate or emerge from instruction (e.g., Andrews et al., 2012). We encourage instructors to view misconceptions as opportunities for the cognitive dissonance that students encounter as they progress in their learning.
Table 1. Use of the term misconception in selected education research journals in 2013

| Journal (total articles published in 2013^a) | Articles using misconception ("nondisapproving" articles/total articles) | Articles using other terms |
| --- | --- | --- |
| LSE (59) | 23/24 | Alternative conception (4); Commonsense conception (2); Naïve conception (1); Preconception (4) |
| Cognition and Instruction (16) | 3/3 | None |
| Journal of the Learning Sciences (17) | 4/4 | Commonsense science knowledge (1); Naïve conception (1); Prior conception (1) |
| Journal of Research in Science Teaching (49) | 11/13 | Commonsense idea (1); Naïve conception (1); Preconception (5) |
| Science Education (36) | 10/11 | Naïve conception (1) |
Table 2. Papers listed in Maskiewicz and Lineback (2013) as using outdated views of misconceptions^a

| Article | Example of constructivist language | Example of language suggesting confrontation |
| --- | --- | --- |
| Andrews et al., 2011 | "Constructivist theory argues that individuals construct new understanding based on what they already know and believe.… We can expect students to retain serious misconceptions if instruction is not specifically designed to elicit and address the prior knowledge students bring to class" (p. 400). | Instructors were scored for "explaining to students why misconceptions were incorrect" and "making a substantial effort toward correcting misconceptions" (p. 399). "Misconceptions must be confronted before students can learn natural selection" (p. 399). "Instructors need to elicit misconceptions, create situations that challenge misconceptions" (p. 403). |
| Baumler et al., 2012 | "The last pair [of students]'s response invoked introns, an informative answer, in that it revealed a misconception grounded in a basic understanding of the Central Dogma" (p. 89; acknowledges students' useful prior knowledge). | No relevant text found |
| Cox-Paulson et al., 2012 | No relevant text found | This paper barely mentions misconceptions, but cites sources (Phillips et al., 2008; Robertson and Phillips, 2008) that refer to "exposing," "uncovering," and "correcting" misconceptions. |
| Crowther, 2012 | "Prewritten songs may explain concepts in new ways that clash with students' mental models and force revision of those models" (p. 28; emphasis added). | "Songs can be particularly useful for countering … conceptual misunderstandings.… Prewritten songs may explain concepts in new ways that clash with students' mental models and force revision of those models" (p. 28). |
| Kalinowski et al., 2010 | "Several different instructional approaches for helping students to change misconceptions … agree that instructors must take students' prior knowledge into account and help students integrate new knowledge with their existing knowledge" (p. 88). | "One strategy for correcting misconceptions is to challenge them directly by 'creating cognitive conflict,' presenting students with new ideas that conflict with their pre-existing ideas about a phenomenon.… In addition, study of multiple examples increases the chance of students identifying and overcoming persistent misconceptions" (p. 89). |
7.
Although we agree with Theobald and Freeman (2014) that linear models are the most appropriate way to analyze assessment data, we show the importance of testing for interactions between covariates and factors.

To the Editor:

Recently, Theobald and Freeman (2014) reviewed approaches for measuring student learning gains in science, technology, engineering, and mathematics (STEM) education research. In their article, they highlighted the shortcomings of approaches such as raw change scores, normalized gain scores, normalized change scores, and effect sizes when students are not randomly assigned to classes based on the different pedagogies being compared. As an alternative, they propose linear regression models in which characteristics of students, such as pretest scores, are included as independent variables in addition to treatments. Linear models that include both continuous and categorical independent variables are often termed analysis of covariance (ANCOVA) models. Using ANCOVA to control for differences in students among treatment groups has been suggested previously by Weber (2009). We largely agree with Theobald and Freeman (2014) and Weber (2009) that ANCOVA models are an appropriate method for situations in which students cannot be randomly assigned to treatments and controls. However, in describing how to implement linear regression models to examine student learning gains, Theobald and Freeman (2014) ignore a fundamental assumption of ANCOVA.

ANCOVA assumes homogeneity of slopes (McDonald, 2009; Sokal and Rohlf, 2011). In other words, the slope of the relationship between the covariate (e.g., pretest score) and the dependent variable (e.g., posttest score) is assumed to be the same for the treatment group and the control. This is a strict assumption of ANCOVA, in that violations can result in incorrect conclusions (Engqvist, 2005).
For example, in Figure 1, both pretest score and treatment have statistically significant main effects in a linear model with only pretest score (F(1, 97) = 25.6, p < 0.001) and treatment (F(1, 97) = 42.6, p < 0.01) as independent variables. We would therefore conclude that all students in the class with the pedagogical innovation had significantly greater posttest scores than students in the control class for a given pretest score, and that the innovation led to the same increase in score for all students in the treatment class, independent of their pretest scores. Clearly, neither of these conclusions would be justified.

Researchers must first test the assumption of homogeneity of slopes by including an interaction term (covariate × treatment) in their linear model (McDonald, 2009; Weber, 2009; Sokal and Rohlf, 2011). For example, if we measured student achievement in two courses with different instructional approaches in a typical pretest/posttest design, the interaction between students' pretest scores and the type of instruction must be considered, because the instruction may have a different effect for high- versus low-achieving students. If multiple covariates are included in the linear model (see Equation 1 in Theobald and Freeman, 2014), then an interaction term must be included for each covariate in the model. If the interaction term is statistically significant, the relationship between the covariate and the dependent variable differs between treatment groups (F(1, 96) = 25.1, p < 0.001; Figure 1). As a result, the effect of the treatment depends on the value of the covariate, and universal statements about the effect of the treatment are not appropriate (Engqvist, 2005). If the interaction term is not statistically significant, it should be removed from the model and the analysis rerun without it.
Failure to remove an interaction term that is not statistically significant can also lead to an incorrect conclusion (Engqvist, 2005). Whether there are statistically significant interactions between the "treatment" and the covariates in the data set used by Theobald and Freeman (2014) is unclear.

Figure 1. Simulated data to demonstrate heterogeneity of slopes. Pretest values were generated from random normal distributions with mean = 59.8 (SD = 18.1) for the treatment course and mean = 59.3 (SD = 17.0) for the control course, based on values given in Theobald and Freeman (2014). For the treatment course, posttest values were calculated as posttest_i = 80 + 0.1 × pretest_i + ε_i, where ε_i was drawn from a random normal distribution with mean = 0 (SD = 10). For the control course, posttest values were calculated as posttest_i = 42 + 0.5 × pretest_i + ε_i, with ε_i drawn from the same distribution. n = 50 for both courses.

In addition to being a strict assumption of ANCOVA, testing for homogeneity of slopes is important in STEM education research because slopes are likely to be heterogeneous for several reasons. First, for many instruments used in STEM education research, high-achieving students score high on the pretest. As a result, their ability to improve is limited by the ceiling effect, and differences between treatment and control groups in posttest scores are likely to be minimal (Figure 1). In contrast, low-achieving students have greater room to change their scores between pretest and posttest. Second, pedagogical innovations are more likely to have a greater impact on the learning of lower-performing students than of higher-performing students.
For example, Beck and Blumer (2012) found statistically greater gains in student confidence and scientific reasoning skills for students in the lowest quartile than for students in the highest quartile on pretest assessments in inquiry-based laboratory courses.

Theobald and Freeman (2014, p. 47) note that "regression models can also include interaction terms that test whether the intervention has a differential impact on different types of students." We argue that these terms must be included initially and should be excluded only if they are not statistically significant.
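The two-step procedure the letter prescribes (fit the interaction, keep it only if significant) can be sketched in a few lines. The sketch below is ours, not the authors': it simulates data along the lines of the Figure 1 recipe (with a larger, hypothetical n per course for a stable illustration) and uses the statsmodels formula API; all variable names are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # hypothetical; larger than the figure's n = 50 per course

# Simulate pretest/posttest scores following the Figure 1 recipe
pre_t = rng.normal(59.8, 18.1, n)                  # treatment pretests
pre_c = rng.normal(59.3, 17.0, n)                  # control pretests
post_t = 80 + 0.1 * pre_t + rng.normal(0, 10, n)   # shallow slope (ceiling effect)
post_c = 42 + 0.5 * pre_c + rng.normal(0, 10, n)   # steeper slope

df = pd.DataFrame({
    "pretest": np.concatenate([pre_t, pre_c]),
    "posttest": np.concatenate([post_t, post_c]),
    "group": ["treatment"] * n + ["control"] * n,
})

# Step 1: fit the full model with the covariate x treatment interaction
full = smf.ols("posttest ~ pretest * group", data=df).fit()
p_int = full.pvalues["pretest:group[T.treatment]"]

# Step 2: if the interaction is significant, slopes are heterogeneous and the
# treatment effect must be interpreted conditional on pretest score; otherwise
# drop the interaction and rerun the plain ANCOVA model
if p_int < 0.05:
    model = full
else:
    model = smf.ols("posttest ~ pretest + group", data=df).fit()

print(f"interaction p-value: {p_int:.4g}")
```

With slopes of 0.1 versus 0.5, the interaction term is significant, so the treatment effect cannot be summarized by a single coefficient, which is exactly the heterogeneity Figure 1 illustrates.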
10.
Charlene D’Avanzo 《CBE life sciences education》2013,12(3):373-382
The scale and importance of Vision and Change in Undergraduate Biology Education: A Call to Action challenges us to ask fundamental questions about widespread transformation of college biology instruction. I propose that we have clarified the “vision” but lack research-based models and evidence needed to guide the “change.” To support this claim, I focus on several key topics, including evidence about effective use of active-teaching pedagogy by typical faculty and whether certain programs improve students’ understanding of the Vision and Change core concepts. Program evaluation is especially problematic. While current education research and theory should inform evaluation, several prominent biology faculty–development programs continue to rely on self-reporting by faculty and students. Science, technology, engineering, and mathematics (STEM) faculty-development overviews can guide program design. Such studies highlight viewing faculty members as collaborators, embedding rewards faculty value, and characteristics of effective faculty-development learning communities. A recent National Research Council report on discipline-based STEM education research emphasizes the need for long-term faculty development and deep conceptual change in teaching and learning as the basis for genuine transformation of college instruction. Despite the progress evident in Vision and Change, forward momentum will likely be limited, because we lack evidence-based, reliable models for actually realizing the desired “change.”
All members of the biology academic community should be committed to creating, using, assessing, and disseminating effective practices in teaching and learning and in building a true community of scholars. (American Association for the Advancement of Science [AAAS], 2011, p. 49) Realizing the “vision” in Vision and Change in Undergraduate Biology Education (Vision and Change; AAAS, 2011) is an enormous undertaking for the biology education community, and the scale and critical importance of this challenge prompt us to ask fundamental questions about widespread transformation of college biology teaching and learning. For example, Vision and Change reflects the consensus that active teaching enhances the learning of biology. However, what is known about widespread application of effective active-teaching pedagogy, and how it may differ across institutional and classroom settings or with the depth of pedagogical understanding a biology faculty member may have? More broadly, what is the research base concerning higher education biology faculty–development programs, especially designs that lead to real change in classroom teaching? Has the develop-and-disseminate approach favored by the National Science Foundation's (NSF) Division of Undergraduate Education (Dancy and Henderson, 2007) been generally effective? Can we directly apply outcomes from faculty-development programs in other science, technology, engineering, and mathematics (STEM) disciplines, or is teaching college biology unique in important ways? In other words, if we intend to use Vision and Change as the basis for widespread transformation of biology instruction, is there a good deal of scholarly literature about how to help faculty make the endorsed changes, or is this research base lacking? In the context of Vision and Change, in this essay I focus on a few key topics relevant to broad-scale faculty development, highlighting the extent and quality of the research base for it.
My intention is to reveal numerous issues that may well inhibit forward momentum toward real transformation of college-level biology teaching and learning. Some are quite fundamental, such as the ongoing dependence on less reliable assessment approaches in professional-development programs and the mixed success of active-learning pedagogy among broad populations of biology faculty. I also offer specific suggestions to improve and build on identified issues. At the center of my inquiry is the faculty member. Following the definition used by the Professional and Organizational Development Network in Higher Education (www.podnetwork.org), I use “faculty development” to indicate programs that emphasize the individual faculty member as teacher (e.g., his or her skill in the classroom), scholar/professional (publishing, college/university service), and person (time constraints, self-confidence). Of course, faculty members work within particular departments and institutions, and these environments are clearly critical as well (Stark et al., 2002). Consequently, in addition to focusing on the individual, faculty-development programs may also consider organizational structure (such as administrators and criteria for reappointment and tenure) and instructional development (the overall curriculum, who teaches particular courses). In fact, Diamond (2002) emphasizes that the three areas of effort (individual, organizational, instructional) should complement one another in faculty-development programs. The scope of the numerous factors impacting higher education biology instruction is a realistic reminder about the complexity and challenge of the second half of the Vision and Change endeavor. This essay is organized around specific topics meant to be representative and to illustrate the state of the art of widespread (beyond a limited number of courses and institutions) professional development for biology faculty.
The first two sections focus on active teaching and biology students’ conceptual understanding, respectively. The third section concerns important elements that have been identified as critical for effective STEM faculty-development programs.
11.
Howard Garrison, CBE—Life Sciences Education, 2013, 12(3): 357–363
Blacks, Hispanics, and American Indians/Alaskan Natives are underrepresented in science and engineering fields. A comparison of race–ethnic differences at key transition points was undertaken to better inform education policy. National data on high school graduation, college enrollment, choice of major, college graduation, graduate school enrollment, and doctoral degrees were used to quantify the degree of underrepresentation at each level of education and the rate of transition to the next stage. Disparities are found at every level, and their impact is cumulative. For the most part, differences in graduation rates, rather than differential matriculation rates, make the largest contribution to the underrepresentation. The size, scope, and persistence of the disparities suggest that small-scale, narrowly targeted remediation will be insufficient. Most scientists and engineers take great pride in their reliance on logic and empirical evidence in decision making, and they reject the use of emotional, parochial, and irrational criteria. Prejudices of any sort are abjured. The prevalence of laboratory personnel and research collaborators from diverse national origins is often cited as an example of this meritocratic ideal. Therefore, the U.S. biomedical research community was shocked when a study revealed that Black Americans and other groups were substantially underrepresented in the receipt of grants from the National Institutes of Health (NIH), even after other correlates of success were controlled (Ginther et al., 2011). This picture clashed dramatically with the standards the community claimed. In the wake of this revelation, NIH created a high-level advisory group to examine the situation and make recommendations to address it (NIH, 2012). Concern about underrepresentation of Black Americans and other race–ethnic groups in science is not new (Melnick and Hamilton, 1977), and many attempts have been made to ameliorate or eliminate the gaps.
While there have been some gains—underrepresented racial minority (URM) students rose from 2% of biomedical graduate students in 1980 to more than 11% (National Research Council, 2011)—disparities remain in all fields of science and engineering at all education levels and career stages (National Academy of Sciences, 2011). Given the limited progress in correcting this situation, it is essential to have a better understanding of the origin and extent of the problem. Especially in the current fiscal climate, with insufficient funding for education programs, interventions must be accurately targeted and appropriate to reach their goals. How large are the race–ethnic differences in science enrollments at each level of education? Are there general patterns that can help guide policy? Using data from 2008 and 2009, a recent National Science Foundation (NSF) report illustrates the underrepresentation of Blacks, Hispanics, and American Indians/Alaskan Natives at various education levels (NSF, 2011a). While informative and illustrative of the extent of the problem, this single-year, cross-sectional perspective does not capture the conditions encountered by recent doctorate earners as they progressed through earlier stages in their education. Looking at graduation rates in the life sciences, Ginther et al. (2009) found that minority participation is increasing in biology, but minority students are not transitioning between milestones in the same proportions as Whites.
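The cumulative nature of these disparities can be sketched arithmetically (a minimal illustration; the continuation rates below are hypothetical placeholders, not figures from Garrison's analysis): multiplying stage-to-stage rates shows how modest per-transition gaps compound into a much larger end-to-end disparity.

```python
# Hypothetical stage-to-stage continuation rates (illustrative only).
stages = ["HS graduation", "college enrollment", "college graduation",
          "grad-school enrollment", "doctorate"]
rates_ref = [0.90, 0.70, 0.60, 0.30, 0.50]  # reference group
rates_urm = [0.80, 0.65, 0.45, 0.25, 0.45]  # underrepresented group

def cumulative(rates):
    """Running product: fraction of the starting cohort reaching each stage."""
    out, p = [], 1.0
    for r in rates:
        p *= r
        out.append(p)
    return out

cum_ref = cumulative(rates_ref)
cum_urm = cumulative(rates_urm)
for stage, a, b in zip(stages, cum_ref, cum_urm):
    print(f"{stage:24s} reference={a:.4f} urm={b:.4f}")
# Per-stage gaps of 5-15 percentage points leave the second cohort reaching
# the doctorate at less than half the rate of the first.
```

This is why the abstract's conclusion follows: when the shortfall is multiplicative across stages, remediation targeted at any single transition cannot by itself close the end-to-end gap.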
12.
Helen L. Vasaly, Jose Herrera, Charles H. Sullivan, Katherine J. Denniston, CBE—Life Sciences Education, 2013, 12(1): 1–4
Many life sciences faculty and administrators are unaware of existing funding programs and of the strategies needed for writing an educationally related proposal. We hope to remedy this problem by making the life sciences audience aware of two National Science Foundation programs underutilized by the biology community. This column has been a welcome opportunity to keep the CBE—Life Sciences Education readership aware of national efforts to improve undergraduate education in the life sciences and of ways to become a part of that effort (Woodin et al., 2009, 2010, 2012; Wei and Woodin, 2011). Throughout the years of engagement in the Vision and Change initiative, from the summer of 2007 to the present, the three primary agencies involved, the National Science Foundation (NSF), the National Institutes of Health (NIH), and the Howard Hughes Medical Institute (HHMI), have continually maintained a dialogue with participants through formal and informal conversations, workshops, and meetings. Our shared focus has been on how the life sciences community itself can change biology undergraduate education in order to better reflect and respond to the current educational environment, including the
- rapid advances in the discipline,
- new educational technologies and platforms becoming available,
- evidence developed through research on effective practices in undergraduate education, and
- challenges of accomplishing the necessary changes with the resources available.
The two underutilized NSF programs are the
- Transforming Undergraduate Education in Science, Technology, Engineering, and Mathematics (TUES) program (anticipated Spring 2013 release), and
- Undergraduate Research Coordination Networks–Undergraduate Biology Education (RCN-UBE) program (next deadline is June 14, 2013).
13.
Testing within the science classroom is commonly used for both formative and summative assessment purposes to let the student and the instructor gauge progress toward learning goals. Research within cognitive science suggests, however, that testing can also be a learning event. We present summaries of studies that suggest that repeated retrieval can enhance long-term learning in a laboratory setting; various testing formats can promote learning; feedback enhances the benefits of testing; testing can potentiate further study; and benefits of testing are not limited to rote memory. Most of these studies were performed in a laboratory environment, so we also present summaries of experiments suggesting that the benefits of testing can extend to the classroom. Finally, we suggest opportunities that these observations raise for the classroom and for further research.Almost all science classes incorporate testing. Tests are most commonly used as summative assessment tools meant to gauge whether students have achieved the learning objectives of the course. They are sometimes also used as formative assessment tools—often in the form of low-stakes weekly or daily quizzes—to give students and faculty members a sense of students’ progression toward those learning objectives. Occasionally, tests are also used as diagnostic tools, to determine students’ preexisting conceptions or skills relevant to an upcoming subject. Rarely, however, do we think of tests as learning tools. We may acknowledge that testing promotes student learning, but we often attribute this effect to the studying students do to prepare for the test. And yet, one of the most consistent findings in cognitive psychology is that testing leads to increased retention more than studying alone does (Roediger and Butler, 2011 ; Roediger and Pyc, 2012 ). This effect can be enhanced when students receive feedback for failed tests and can be observed for both short-term and long-term retention. 
There is some evidence that testing not only improves student memory of the tested information but also the ability to remember related information. Finally, testing appears to potentiate further study, allowing students to gain more from study periods that follow a test. Given the potential power of testing as a tool to promote learning, we should consider how to incorporate tests into our courses not only to gauge students’ learning, but also to promote that learning (Klionsky, 2008). We provide six observations about the effects of testing from the cognitive psychology literature, summarizing the key studies that led to these conclusions below (for each study: research question(s), conclusion, length of delay before final test, and study participants).

Repeated retrieval enhances long-term retention in a laboratory setting
- “Test-enhanced learning: taking memory tests improves long-term retention” (Roediger and Karpicke, 2006a). Research questions: Is a testing effect observed in educationally relevant conditions? Is the benefit of testing greater than the benefit of restudy? Do multiple tests produce a greater effect than a single test? Conclusion: Testing improved retention significantly more than restudy in delayed tests; multiple tests provided greater benefit than a single test. Delay: Experiment 1: 2 d and 1 wk; Experiment 2: 1 wk. Participants: Undergraduates ages 18–24, Washington University.
- “Retrieval practice with short-answer, multiple-choice, and hybrid tests” (Smith and Karpicke, 2014). Research question: What effect does the type of question presented in retrieval practice have on long-term retention? Conclusion: Retrieval practice with multiple-choice, free-response, and hybrid formats improved students’ performance on a final, delayed test taken 1 wk later when compared with a no-retrieval control; the effect was observed both for questions that required only recall and for those that required inference; hybrid questions provided an advantage when the final test had a short-answer format. Delay: 1 wk. Participants: Undergraduates, Purdue University.
- “Retrieval practice produces more learning than elaborative studying with concept mapping” (Karpicke and Blunt, 2011). Research question: What is the effect of retrieval practice on learning relative to elaborative study using a concept map? Conclusion: Students in the retrieval-practice condition had greater gains in meaningful learning compared with those who used elaborative concept mapping as a learning tool. Delay: 1 wk. Participants: Undergraduates.

Various testing formats can enhance learning
- “Retrieval practice with short-answer, multiple-choice, and hybrid tests” (Smith and Karpicke, 2014). See above.
- “Test format and corrective feedback modify the effect of testing on long-term retention” (Kang et al., 2007). Research questions: What effect does the type of question used for retrieval practice have on retention? Does feedback have an effect on retention for different types of questions? Conclusion: When no feedback was given, the difference in long-term retention between short-answer and multiple-choice questions was insignificant; when feedback was provided, short-answer questions were slightly more beneficial. Delay: 3 d. Participants: Undergraduates, Washington University psychology subjects’ pool.
- “The persisting benefits of using multiple-choice tests as learning events” (Little and Bjork, 2012). Research question: What effect does question format have on retention of information previously tested and of related information not included in retrieval practice? Conclusion: Both cued-recall and multiple-choice questions improved recall compared with the no-test control; however, multiple-choice questions improved recall more than cued-recall questions for information not included in the retrieval practice, both after a 5-min and a 48-h delay. Delay: 48 h. Participants: Undergraduates, University of California, Los Angeles.

Feedback enhances the benefits of testing
- “Feedback enhances positive effects and reduces the negative effects of multiple-choice testing” (Butler and Roediger, 2008). Research question: What effect does feedback on multiple-choice tests have on long-term retention of information? Conclusion: Feedback improved retention on a final cued-recall test; delayed feedback resulted in better final performance than immediate feedback, though both showed benefits compared with no feedback; the final test occurred 1 wk after the initial test. Delay: 1 wk. Participants: Undergraduate psychology students, Washington University.
- “Correcting a metacognitive error: feedback increases retention of low-confidence responses” (Butler et al., 2008). Research questions: What role does feedback play in retrieval practice? Can it correct metacognitive errors as well as memory errors? Conclusion: Both initially correct and incorrect answers benefited from feedback, but low-confidence answers benefited most. Delay: 5 min. Participants: Undergraduate psychology students, Washington University.

Learning is not limited to rote memory
- “Retrieval practice produces more learning than elaborative study with concept mapping” (Karpicke and Blunt, 2011). Research questions: What is the effect of retrieval practice on learning relative to elaborative study using a concept map? Does retrieval practice improve students’ ability to perform higher-order cognitive activities (i.e., building a concept map) as well as simple recall tasks? Conclusion: Compared with elaborative study using concept mapping, retrieval practice improved students’ performance both on final tests that required short answers and on final tests that required concept map production (see also the earlier entry for this study). Delay: 1 wk. Participants: Undergraduates.
- “Retrieval practice with short-answer, multiple-choice, and hybrid tests” (Smith and Karpicke, 2014). See above.
- “Repeated testing produces superior transfer of learning relative to repeated studying” (Butler, 2010). Research question: Does test-enhanced learning promote transfer of facts and concepts from one domain to another? Conclusion: Testing improved retention and increased transfer of information from one domain to another, through test questions that required factual or conceptual recall and inferential questions that required transfer. Delay: 1 wk. Participants: Undergraduate psychology students, Washington University.

Testing potentiates further study
- “Pretesting with multiple-choice questions facilitates learning” (Little and Bjork, 2011). Research questions: Does pretesting using multiple-choice questions improve performance on a later test? Is an effect observed only for pretested information or also for related, previously untested information? Conclusion: A multiple-choice pretest improved performance on a final test, both for information that was included on the pretest and for related information. Delay: 1 wk. Participants: Undergraduates, University of California, Los Angeles.
- “The interim test effect: testing prior material can facilitate the learning of new material” (Wissman et al., 2011). Research question: Does an interim test over previously learned material improve retention of subsequently learned material? Conclusion: Interim testing improves recall on a final test for information taught both before and after the interim test. Delay: No delay. Participants: Undergraduates, Kent State University.

The benefits of testing appear to extend to the classroom
- “The exam-a-day procedure improves performance in psychology classes” (Leeming, 2002). Research question: What effect does a daily exam have on retention at the end of the semester? Conclusion: Students who took a daily exam in an undergraduate psychology class scored higher on a retention test at the end of the course and had higher average grades than students who took only unit tests. Delay: One semester. Participants: Undergraduates enrolled in the Summer term of Introductory Psychology, University of Memphis.
- “Repeated testing improves long-term retention relative to repeated study: a randomized controlled trial” (Larsen et al., 2009). Research question: Does repeated testing improve long-term retention in a real learning environment? Conclusion: In a study with medical residents, repeated testing with feedback improved retention more than repeated study, measured on a final recall test 6 mo later. Delay: 6 mo. Participants: Residents from Pediatrics and Emergency Medicine programs, Washington University.
- “Retrieving essential material at the end of lectures improves performance on statistics exams” (Lyle and Crawford, 2011). Research question: What effect does daily recall practice using the PUREMEM method have on course exam scores? Conclusion: In an undergraduate psychology course, students using the PUREMEM method had higher exam scores than students taught with traditional lectures, as assessed by four noncumulative exams spaced evenly throughout the semester. Delay: ∼3.5 wk. Participants: Undergraduates enrolled in either of two consecutive years of Statistics for Psychology, University of Louisville.
- “Using quizzes to enhance summative-assessment performance in a web-based class: an experimental study” (McDaniel et al., 2012). Research question: What effects do online testing resources have on retention of information in an online undergraduate neuroscience course? Conclusion: Both multiple-choice and short-answer quiz questions improved retention and improved scores on the final exam, for questions identical to those on the weekly quizzes and for questions that were related but not identical. Delay: 15 wk. Participants: Undergraduates enrolled in a Web-based brain and behavior course.
- “Increasing student success using online quizzing in introductory (majors) biology” (Orr and Foster, 2013). Research question: What effect do required pre-exam quizzes have on final exam scores for students in an introductory (majors) biology course? Conclusion: Students were required to complete 10 pre-exam quizzes throughout the semester; comparing students who completed all of the quizzes with those who completed none, students of all abilities who completed all of the pre-exam quizzes had higher average exam scores. Delay: One semester. Participants: Community college students enrolled in an introductory biology course for majors.
- “Teaching students how to study: a workshop on information processing and self-testing helps students learn” (Stanger-Hall et al., 2011). Research question: What effect does a self-testing exercise done in a workshop have on final exam questions covering the same topic used in the workshop? Conclusion: Students who participated in the retrieval-practice workshop performed better on the exam questions related to the material covered in the workshop activity; however, there was no difference in overall exam performance between the two groups. Delay: 10 wk. Participants: Undergraduate students in an introductory biology class.