Similar articles
20 similar articles found
1.
A response to Maskiewicz and Lineback's essay in the September 2013 issue of CBE—Life Sciences Education.

Dear Editor:

Maskiewicz and Lineback (2013) have written a provocative essay about how the term misconceptions is used in biology education and the learning sciences in general. Their historical perspective highlights the logic and utility of the constructivist theory of learning. They emphasize that students' preliminary ideas are resources to be built upon, not errors to be eradicated. Furthermore, Maskiewicz and Lineback argue that the term misconception has been largely abandoned by educational researchers, because it is not consistent with constructivist theory. Instead, they conclude, members of the biology education community should speak of preconceptions, naïve conceptions, commonsense conceptions, or alternative conceptions.

We respectfully disagree. Our objections encompass both the semantics of the term misconception and the more general issue of constructivist theory and practice. We now address each of these in turn. (For additional discussion, please see Leonard, Andrews, and Kalinowski, "Misconceptions Yesterday, Today, and Tomorrow," CBE—Life Sciences Education [LSE], in press, 2014.)

Is misconception suitable for use in scholarly discussions? The answer depends partly on the intended audience. We avoid using the term misconception with students, because it could be perceived as pejorative. However, connotations of disapproval are less of a concern for the primary audience of LSE and similar journals, that is, learning scientists, discipline-based education researchers, and classroom teachers.

An additional consideration is whether misconception is still used in the learning sciences outside biology education.
Maskiewicz and Lineback claim that misconception is rarely used in journals such as Cognition and Instruction, Journal of the Learning Sciences, Journal of Research in Science Teaching, and Science Education, yet the term appears in about a quarter of the articles published by these journals in 2013 (Table 1).
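The "about a quarter" figure can be checked directly against the counts in Table 1 below. A minimal sketch (counts hand-copied from the table; LSE is excluded because the claim concerns the four journals named above):

```python
# Counts hand-copied from Table 1:
# journal -> (articles using "misconception" in 2013, total articles published).
counts = {
    "Cognition and Instruction": (3, 16),
    "Journal of the Learning Sciences": (4, 17),
    "Journal of Research in Science Teaching": (13, 49),
    "Science Education": (11, 36),
}

using = sum(u for u, _ in counts.values())   # articles using the term
total = sum(t for _, t in counts.values())   # all articles published
print(f"{using}/{total} = {using / total:.0%}")  # 31/118 = 26%, i.e., about a quarter
```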

Table 1.

Use of the term misconception in selected education research journals in 2013
Journal (total articles published in 2013a) | Articles using misconception ("nondisapproving" articles/total articles) | Articles using other terms
LSE (59) | 23/24 | Alternative conception (4); Commonsense conception (2); Naïve conception (1); Preconception (4)
Cognition and Instruction (16) | 3/3 | None
Journal of the Learning Sciences (17) | 4/4 | Commonsense science knowledge (1); Naïve conception (1); Prior conception (1)
Journal of Research in Science Teaching (49) | 11/13 | Commonsense idea (1); Naïve conception (1); Preconception (5)
Science Education (36) | 10/11 | Naïve conception (1)
a As of November 25, 2013. Does not include very short editorials, commentaries, corrections, or prepublication online versions.

A final consideration is whether any of the possible alternatives to misconception are preferable. We feel that the alternatives suggested by Maskiewicz and Lineback are problematic in their own ways. For example, naïve conception sounds more strongly pejorative to us than misconception. Naïve conception and preconception also imply that conceptual challenges occur only at the very beginning stages of learning, even though multiple rounds of conceptual revisions are sometimes necessary (e.g., see figure 1 of Andrews et al., 2012) as students move through learning progressions. Moreover, the terms preferred by Maskiewicz and Lineback are used infrequently (Table 1).

It is apparent from Maskiewicz and Lineback's discussion of Smith et al. (1993) that they object to statements that misconceptions should be actively confronted, challenged, overcome, corrected, and/or replaced. Smith et al. (1993) argue on theoretical grounds that confrontation does not allow refinement of students' pre-existing, imperfect ideas; instead, the students must simply choose among discrete prepackaged ideas. From Maskiewicz and Lineback's perspective, the papers listed in Table 2 exemplify this outdated view.

Table 2.

Papers listed in Maskiewicz and Lineback (2013) as using outdated views of misconceptionsa
Article | Example of constructivist language | Example of language suggesting confrontation
Andrews et al., 2011 | "Constructivist theory argues that individuals construct new understanding based on what they already know and believe.… We can expect students to retain serious misconceptions if instruction is not specifically designed to elicit and address the prior knowledge students bring to class" (p. 400). | Instructors were scored for "explaining to students why misconceptions were incorrect" and "making a substantial effort toward correcting misconceptions" (p. 399). "Misconceptions must be confronted before students can learn natural selection" (p. 399). "Instructors need to elicit misconceptions, create situations that challenge misconceptions" (p. 403).
Baumler et al., 2012 | "The last pair [of students]'s response invoked introns, an informative answer, in that it revealed a misconception grounded in a basic understanding of the Central Dogma" (p. 89; acknowledges students' useful prior knowledge). | No relevant text found
Cox-Paulson et al., 2012 | No relevant text found | This paper barely mentions misconceptions, but cites sources (Phillips et al., 2008; Robertson and Phillips, 2008) that refer to "exposing," "uncovering," and "correcting" misconceptions.
Crowther, 2012 | "Prewritten songs may explain concepts in new ways that clash with students' mental models and force revision of those models" (p. 28; emphasis added). | "Songs can be particularly useful for countering … conceptual misunderstandings.… Prewritten songs may explain concepts in new ways that clash with students' mental models and force revision of those models" (p. 28).
Kalinowski et al., 2010 | "Several different instructional approaches for helping students to change misconceptions … agree that instructors must take students' prior knowledge into account and help students integrate new knowledge with their existing knowledge" (p. 88). | "One strategy for correcting misconceptions is to challenge them directly by 'creating cognitive conflict,' presenting students with new ideas that conflict with their pre-existing ideas about a phenomenon… In addition, study of multiple examples increases the chance of students identifying and overcoming persistent misconceptions" (p. 89).
a While these papers do not adhere to Smith et al.'s (1993) version of constructivism, they do adhere to the constructivist approach that advocates cognitive dissonance.

Our own stance differs from that of Maskiewicz and Lineback, reflecting a lack of consensus within constructivist theory. We agree with those who argue that, not only are confrontations compatible with constructivist learning, they are a central part of it (e.g., Gilbert and Watts, 1983; Hammer, 1996). We note that Baviskar et al. (2009) list "creating cognitive dissonance" as one of the four main tenets of constructivist teaching. Their work is consistent with research showing that focusing students on conflicting ideas improves understanding more than approaches that do not highlight conflicts (e.g., Kowalski and Taylor, 2009; Gadgil et al., 2012). Similarly, the Discipline-Based Education Research report (National Research Council, 2012, p. 70) advocates "bridging analogies," a form of confrontation, to guide students toward more accurate ways of thinking. Therefore, we do not share Maskiewicz and Lineback's concerns about the papers listed in Table 2 (cf. Price, 2012). We embrace collegial disagreement.

Maskiewicz and Lineback imply that labeling students' ideas as misconceptions essentially classifies these ideas as either right or wrong, with no intermediate stages for constructivist refinement. In fact, a primary goal of creating concept inventories, which use the term misconception profusely (e.g., Morris et al., 2012; Prince et al., 2012), is to demonstrate that learning is a complex composite of scientifically valid and invalid ideas (e.g., Andrews et al., 2012). A researcher or instructor who uses the word misconceptions can agree wholeheartedly with Maskiewicz and Lineback's point that misconceptions can be a good starting point from which to develop expertise.

As we have seen, misconception is itself fraught with misconceptions.
The term now embodies the evolution of our understanding of how people learn. We support the continued use of the term, agreeing with Maskiewicz and Lineback that authors should define it carefully. For example, in our own work, we define misconceptions as inaccurate ideas that can predate or emerge from instruction (e.g., Andrews et al., 2012). We encourage instructors to view misconceptions as opportunities for cognitive dissonance that students encounter as they progress in their learning.

2.
3.
Case-based learning and problem-based learning have demonstrated great promise in reforming science education. Yet an instructor, in newly considering this suite of interrelated pedagogical strategies, faces a number of important instructional choices. Different features and their related values and learning outcomes are profiled here, including: the level of student autonomy; instructional focus on content, skills development, or nature-of-science understanding; the role of history, or known outcomes; scope, clarity, and authenticity of problems provided to students; extent of collaboration; complexity, in terms of number of interpretive perspectives; and, perhaps most importantly, the role of applying versus generating knowledge.
A leader who gives trust earns trust.
His profile is low, his words measured.
His work done well, all proclaim,
"Look what we've accomplished!"
—Lao Tsu, Tao Te Ching
Problem-based learning (PBL) and case-based learning (CBL) are at least as old as apprenticeship among craftsmen. One can envision the student of metals at the smelting furnace, the student of herbal remedies at the plant collector's side, or the student of navigation beside the helm. In recent years, however, PBL and CBL have emerged as powerful teaching tools in reforming science education. Most notably, these approaches exhibit key features advocated by educational researchers. First, both are fundamentally student-centered, acknowledging the importance of actively engaging students in their own learning. As the responsibility for learning shifts toward students, the role of the instructor also shifts, from the conventional authority who dispenses final-form knowledge to an expert guide, who motivates and facilitates the process of learning, while promoting the individual development of learning skills. The efforts of an ideal teacher may well be hidden. As Lao Tsu suggested centuries ago, educational achievement is measured by what a learner learns more than by what the teacher teaches.

Second, in orienting more toward student perspectives and motivations, CBL and PBL tend to focus on concrete, specific occasions—cases or problems—wherein the target knowledge is relevant. Contextualizing the learning contributes both to student motivation and to the making of meaning (construed by many educators as central to functional memory and effective learning). The cases and problems are not merely supplemental illustrations or peripheral sidebars, but function centrally as the very occasion for learning.
This style of learning resonates with views of cognitive scientists that our minds reason effectively through analogy and models, as much as through the interpretation and application of general, abstract principles.

A third feature, and perhaps the most transformative, is the potential of PBL and CBL to contribute to the development of thinking skills and an understanding of the nature of science, beyond the conventional conceptual content. As students work on cases or problems, they typically exercise and hone skills in research, analysis, interpretation, and creative thinking. In addition to benefiting from practice, students may also reflect explicitly on their experience and thereby deepen their understanding of scientific practices. But such lessons do not emerge automatically. The instructor must make deliberate choices and design activities mindfully to support this aim.

In these three ways, PBL and CBL have proven valuable in many settings and hold promise more widely. An instructor first venturing into the realm of CBL and PBL, however, may easily be overwhelmed by the variety of approaches and the occasional contradictions among them. The literature is vast and includes sometimes conflicting claims about appropriate or ideal methods. This paper aims to introduce some of the key dimensions and to invite reflection about the respective values and deficits of various alternatives. It hopes to inform pedagogical choices about learning objectives and foster corresponding clarity in classroom practice. It also hopes, indirectly, to promote clarity on values and learning outcomes among current practitioners and in educational research and to provide perspective on the discord among advocates of specific approaches.1

The first two sections below introduce CBL and PBL, respectively, as instructional strategies reflecting certain values. (A teacher might well adopt both simultaneously.)
Beyond these basics, there are many dimensions or distinctions to consider, addressed in successive sections (and summarized in Table 1).2 In addition, PBL gained recognition largely from applications in professional education—medical, business, and law schools (Butler et al., 2005). These instructional contexts tend to emphasize training. Contemporary science education, by contrast, tends to highlight student-based inquiry and understanding of scientific practices (National Research Council, 2012). The original approaches, as models, may need adapting. Most notably, the difference in context, between learning how to apply knowledge and learning how knowledge is generated, can be critical, as described below. The principles surveyed here can help guide the teacher in crafting an appropriate instructional design to accommodate specific contexts and values.

Table 1.

Key dimensions shaping learning environments and outcomes in CBL and PBL
• Occasion for engaging content: Contextualized (case based) or decontextualized?
• Mode of engaging student: Problem based or authority based?
• Instructional focus: Content, skills, and/or nature of science?
• Epistemic process: Apply knowledge or generate new knowledge?
• Setting: Historical case or contemporary case?
• Epistemic process: Open-ended or close-ended?
• Authenticity: Real case or constructed case?
• Clarity of problem: Well defined, ill defined, or unspecified?
• Social epistemic dimension: Collaborative or individual?
• Complexity of social epistemics: Single perspective or multiple perspectives?
• Scope: Narrow or broad?
• Level of student autonomy: Narrow or broad?
Focusing on distinctions in pedagogical approaches encourages one to think more rigorously about educational values and aims. For example, is knowing content the ultimate aim? To what degree is understanding scientific practice and/or its cultural contexts also important? What are the aims regarding analytical or problem-solving skills—or learning how to learn beyond the classroom? Is student motivation, or engagement in learning, a goal? Does one hope to shape student attitudes about the value or authority of science—or to recruit more students into scientific careers or to promote greater gender or ethnic balance? What role is afforded to student autonomy, either in shaping one's own learning trajectory or as an independent thinker? Possible outcomes range from traditional conceptual content to skills, attitudes, and epistemic understanding. Different methods foster different outcomes. The goal here is to help one clarify one's aims and align them with the appropriate strategies or teaching tools.3

4.
5.
6.
In science education, inquiry-based approaches to teaching and learning provide a framework for students to build critical-thinking and problem-solving skills. Teacher professional development has been an ongoing focus for promoting such educational reforms. However, despite a strong consensus regarding best practices for professional development, relatively little systematic research has documented classroom changes consequent to these experiences. This paper reports on the impact, on teachers' inquiry-based practices, of sustained multiyear professional development in a program that combined neuroscience content and knowledge of the neurobiology of learning with inquiry-based pedagogy. Classroom observations demonstrated the value of multiyear professional development in solidifying adoption of inquiry-based practices and cultivating progressive yearly growth in the cognitive environment of impacted classrooms.

Current discussion about educational reform among business leaders, politicians, and educators revolves around the idea that students need "21st-century skills" to be successful today (Rotherham and Willingham, 2009). Proponents argue that to be prepared for college and to be competitive in the 21st-century workplace, students need to be able to identify issues, acquire and use new information, understand complex systems, use technologies, and apply critical and creative thinking skills (US Department of Labor, 1991; Bybee et al., 2007; Conley, 2007). Advocates of 21st-century skills favor student-centered methods—for example, problem-based learning and project-based learning.
In science education, inquiry-based approaches to teaching and learning provide one framework for students to build these critical-thinking and problem-solving skills (American Association for the Advancement of Science [AAAS], 1993; National Research Council [NRC], 2000; Capps et al., 2012).

Unfortunately, in spite of the central role of inquiry in the national and state science standards, inquiry-based instruction is rarely implemented in secondary classrooms (Weiss et al., 1994; Bybee, 1997; Hudson et al., 2002; Smith et al., 2002; Capps et al., 2012). Guiding a classroom through planning, executing, analyzing, and evaluating open-ended investigations requires teachers to have sufficient expertise, content knowledge, and self-confidence to be able to maneuver through multiple potential roadblocks. Researchers cite myriad reasons for the lack of widespread inquiry-based instruction in schools: traditional beliefs about teaching and learning (Roehrig and Luft, 2004; Saad and BouJaoude, 2012), lack of pedagogical skills (Shulman, 1986; Adams and Krockover, 1997; Crawford, 2007), lack of time (Loughran, 1994), inadequate knowledge of the practice of science (Duschl, 1987; DeBoer, 2004; Saad and BouJaoude, 2012), perceived time constraints due to high-stakes testing, and inadequate preparation in science (Krajcik et al., 2000). Yet teachers are necessarily at the center of reform, as they make instructional and pedagogical decisions within their own classrooms (Cuban, 1990).
Given that the effectiveness of teachers' classroom practices is critical to the success of current science education reforms, teacher professional development has been an ongoing focus for promoting educational reform (Corcoran, 1995; Corcoran et al., 1998).

A review of the education research literature yields an extensive knowledge base in "best practices" for professional development (Corcoran, 1995; NRC, 1996; Loucks-Horsley and Matsumoto, 1999; Loucks-Horsley et al., 2009; Haslam and Fabiano, 2001; Wei et al., 2010). However, in spite of a strong consensus on what constitutes best practices for professional development (Desimone, 2009; Wei et al., 2010), relatively little systematic research has been conducted to support this consensus (Garet et al., 2001). Similarly, when specifically considering the science education literature, several studies have been published on the impact of teacher professional development on inquiry-based practices (e.g., Supovitz and Turner, 2000; Banilower et al., 2007; Capps et al., 2012). Unfortunately, these studies usually rely on teacher self-report data; few studies have reported empirical evidence of what actually occurs in the classroom following a professional development experience.

Thus, in this study, we set out to determine through observational empirical data whether documented effective professional development does indeed change classroom practices. In this paper, we describe an extensive professional development experience for middle school biology teachers designed to develop teachers' neuroscience content knowledge and inquiry-based pedagogical practices. We investigate the impact of professional development delivered collaboratively by experts in science and pedagogy on promoting inquiry-based instruction and an investigative classroom culture. The study was guided by the following research questions:
  1. Were teachers able to increase their neuroscience content knowledge?
  2. Were teachers able to effectively implement student-centered reform or inquiry-based pedagogy?
  3. Would multiple years of professional development result in greater changes in teacher practices?
Current reforms in science education require fundamental changes in how students are taught science. For most teachers, this requires rethinking their own practices and developing new roles both for themselves as teachers and for their students (Darling-Hammond and McLaughlin, 1995). Many teachers learned to teach using a model of teaching and learning that focuses heavily on memorizing facts (Porter and Brophy, 1988; Cohen et al., 1993; Darling-Hammond and McLaughlin, 1995), and this traditional and didactic model of instruction still dominates U.S. classrooms. A recent national observation study found that only 14% of science lessons were of high quality, providing students an opportunity to learn important science concepts (Banilower et al., 2006). Shifting to an inquiry-based approach to teaching places more emphasis on conceptual understanding of subject matter, as well as an emphasis on the process of establishing and validating scientific concepts and claims (Anderson, 1989; Borko and Putnam, 1996). In effect, professional development must provide opportunities for teachers to reflect critically on their practices and to fashion new knowledge and beliefs about content, pedagogy, and learners (Darling-Hammond and McLaughlin, 1995; Wei et al., 2010). If teachers are uncomfortable with a subject or believe they cannot teach science, they may focus less time on it and impart negative feelings about the subject to their students. In this way, content knowledge influences teachers' beliefs about teaching and personal self-efficacy (Gresham, 2008). Personal self-efficacy was first defined as "the conviction that one can successfully execute the behavior required to produce the outcomes" (Bandura, 1977, p. 193).
Researchers have reported self-efficacy to be strongly correlated with teachers' ability to implement reform-based practices (Mesquita and Drake, 1994; Marshall et al., 2009).

Inquiry is "a multifaceted activity that involves making observations, posing questions, examining books and other sources of information, planning investigations, reviewing what is already known in light of evidence, using tools to gather, analyze and interpret data, proposing answers, explanations and predictions, and communicating the results" (NRC, 1996, p. 23). Unfortunately, most preservice teachers rarely experience inquiry-based instruction in their undergraduate science courses. Instead, they listen to lectures on science and participate in laboratory exercises with guidelines for finding the expected answer (Gess-Newsome and Lederman, 1993; DeHaan, 2005). As such, teachers' knowledge and beliefs about teaching and learning were developed over the many years of their own education, through "apprenticeship of observation" (Lortie, 1975), in traditional lecture-based settings that they then replicate in their own classrooms. To support the implementation of inquiry in K–12 classrooms, teachers need firsthand experiences of inquiry, questioning, and experimentation within professional development programs (Gess-Newsome, 1999; Supovitz and Turner, 2000; Capps et al., 2012).

A common criticism of professional development activities is that they are too often one-shot workshops with limited follow-up after the workshop activities (Darling-Hammond, 2005; Wei et al., 2010). The literature on teacher learning and professional development calls for professional development that is sustained over time, as the duration of professional development is related to the depth of teacher change (Shields et al., 1998; Weiss et al., 1998; Supovitz and Turner, 2000; Banilower et al., 2007).
If the professional development program is too short in duration, teachers may dismiss the suggested practices or at best assimilate teaching strategies into their current repertoire with little substantive change (Tyack and Cuban, 1995; Coburn, 2004). For example, Supovitz and Turner (2000) found that sustained professional development (more than 80 h) was needed to create an investigative classroom culture in science, as opposed to small-scale changes in practices. Teachers need professional development that is interactive with their teaching practices; in other words, professional development programs should allow time for teachers to try out new practices, to obtain feedback on their teaching, and to reflect on these new practices. Not only is the duration (total number of hours) of professional development important, but so is the time span of the professional development experience (the number of years across which those hours are situated), to allow for multiple cycles of presentation and reflection on practices (Blumenfeld et al., 1991; Garet et al., 2001). Supovitz and Turner's (2000) study suggests that it is more difficult to change classroom culture than teaching practices; the greatest changes in teaching practices occurred after 80 h of professional development, while changes in classroom investigative culture did not occur until after 160 h of professional development.

Finally, research indicates that professional development that focuses on science content and how children learn is important in changing teaching practices (e.g., Corcoran, 1995; Desimone, 2009), particularly when the goal is the implementation of inquiry-like instruction designed to improve students' conceptual understanding (Fennema et al., 1996; Cohen and Hill, 1998). The science content chosen for the professional development series described in this study was neuroscience.
This content is relevant for both middle and high school science teachers and has direct connections to standards. It is also unique in that it encompasses material on the neurological basis of learning, thus allowing discussions about student learning to occur within both a scientific and a pedagogical context. As a final note, it is rare for even a life science teacher to have taken any coursework in neuroscience. The inquiry-based lessons and experiments encountered by the teachers during the professional development provide an authentic learning experience, allowing teachers to truly inhabit the role of a learner in an inquiry-based setting.

7.
Graduate teaching assistants (GTAs) in science, technology, engineering, and mathematics (STEM) have a large impact on undergraduate instruction but are often poorly prepared to teach. Teaching self-efficacy, an instructor's belief in his or her ability to teach specific student populations a specific subject, is an important predictor of teaching skill and student achievement. A model of sources of teaching self-efficacy is developed from the GTA literature. This model indicates that teaching experience, departmental teaching climate (including peer and supervisor relationships), and GTA professional development (PD) can act as sources of teaching self-efficacy. The model is pilot tested with 128 GTAs from nine different STEM departments at a midsized research university. Structural equation modeling reveals that K–12 teaching experience, hours and perceived quality of GTA PD, and perception of the departmental facilitating environment are significant factors that explain 32% of the variance in the teaching self-efficacy of STEM GTAs. This model highlights the important contributions of the departmental environment and GTA PD in the development of teaching self-efficacy for STEM GTAs.

Science, technology, engineering, and mathematics (STEM) graduate teaching assistants (GTAs) play a significant role in the learning environment of undergraduate students. They are heavily involved in the instruction of undergraduate students at master's- and doctoral-granting universities (Nyquist et al., 1991; Johnson and McCarthy, 2000; Sundberg et al., 2005; Gardner and Jones, 2011).
GTAs are commonly in charge of laboratory or recitation sections, in which they often have more contact and interaction with the students than the professor who is teaching the course (Abraham et al., 1997; Sundberg et al., 2005; Prieto and Scheel, 2008; Gardner and Jones, 2011).

Despite the heavy reliance on GTAs for instruction and the large potential for them to influence student learning, there is evidence that many GTAs are completely unprepared or at best poorly prepared for their role as instructors (Abraham et al., 1997; Rushin et al., 1997; Shannon et al., 1998; Golde and Dore, 2001; Fagen and Wells, 2004; Luft et al., 2004; Sundberg et al., 2005; Prieto and Scheel, 2008). For example, in molecular biology, 71% of doctoral students are GTAs, but only 30% have had an opportunity to take a GTA professional development (PD) course that lasted at least one semester (Golde and Dore, 2001). GTAs often teach in a primarily directive manner and have intuitive notions about student learning, motivation, and abilities (Luft et al., 2004). For those who experience PD, university-wide PD is often too general (e.g., covering university policies and procedures, resources for students), and departmental PD does not address GTAs' specific teaching needs; instead, departmental PD repeats the university PD (Jones, 1993; Golde and Dore, 2001; Luft et al., 2004). Nor do graduate experiences prepare GTAs to become faculty and teach lecture courses (Golde and Dore, 2001).

While there is ample evidence that many GTAs are poorly prepared, as well as studies of effective GTA PD programs (biology examples include Schussler et al., 2008; Miller et al., 2014; Wyse et al., 2014), the preparation of a graduate student as an instructor does not occur in a vacuum.
GTAs are also integral members of their departments and interact with faculty and other GTAs in many different ways, including around teaching (Bomotti, 1994; Notarianni-Girard, 1999; Belnap, 2005; Calkins and Kelly, 2005). It is important to build good working relationships among the GTAs and between the GTAs and their supervisors (Gardner and Jones, 2011). However, there are few studies that examine the development of GTAs as integral members of their departments and determine how departmental teaching climate, GTA PD, and prior teaching experiences can impact GTAs.

To guide our understanding of the development of GTAs as instructors, a theoretical framework is important. Social cognitive theory is a well-developed theoretical framework for describing behavior and can be applied specifically to teaching (Bandura, 1977, 1986, 1997, 2001). A key concept in social cognitive theory is self-efficacy, which is a person's belief in his or her ability to perform a specific task in a specific context (Bandura, 1997). High self-efficacy correlates with strong performance in a task such as teaching (Bandura, 1997; Tschannen-Moran and Hoy, 2007). Teaching self-efficacy focuses on teachers' perceptions of their ability to "organize and execute courses of action required to successfully accomplish a specific teaching task in a particular context" (Tschannen-Moran et al., 1998, p. 233). High teaching self-efficacy has been shown to predict a variety of types of student achievement among K–12 teachers (Ashton and Webb, 1986; Anderson et al., 1988; Ross, 1992; Dellinger et al., 2008; Klassen et al., 2011). In GTAs, teaching self-efficacy has been shown to be related to persistence in academia (Elkins, 2005) and student achievement in mathematics (Johnson, 1998).
High teaching self-efficacy is evidenced by classroom behaviors such as efficient classroom management, organization and planning, and enthusiasm (Guskey, 1984; Allinder, 1994; Dellinger et al., 2008). Instructors with high teaching self-efficacy work continually with students to help them learn the material (Gibson and Dembo, 1984). These instructors are also willing to try a variety of teaching methods to improve their teaching (Stein and Wang, 1988; Allinder, 1994). Instructors with high teaching self-efficacy perform better as teachers, persist at difficult teaching tasks, and can positively affect their students’ achievement. These behaviors of successful instructors, which can contribute to student success, are important to foster in STEM GTAs. An understanding of what influences the development of teaching self-efficacy in STEM GTAs can be used to improve their teaching self-efficacy and ultimately their teaching. Therefore, it is important to understand what impacts teaching self-efficacy in STEM GTAs. Current research into factors that influence GTA teaching self-efficacy is generally limited to one or two factors per study (Heppner, 1994; Prieto and Altmaier, 1994; Prieto and Meyers, 1999; Prieto et al., 2007; Liaw, 2004; Meyers et al., 2007). Studying these factors in isolation does not allow us to understand how they work together to influence GTA teaching self-efficacy. Additionally, most studies of GTA teaching self-efficacy have not been conducted with STEM GTAs. STEM instructors teach in a different environment and with different responsibilities than instructors in the social sciences and liberal arts (Lindbloom-Ylanne et al., 2006). These differences could impact the development of teaching self-efficacy of STEM GTAs compared with social science and liberal arts GTAs.
To further our understanding of the development of STEM GTA teaching self-efficacy, this paper aims to 1) describe a model of factors that could influence GTA teaching self-efficacy, and 2) pilot test the model using structural equation modeling (SEM) on data gathered from STEM GTAs. The model is developed from social cognitive theory and the GTA teaching literature, with support from the K–12 teaching self-efficacy literature. This study is an essential first step in improving our understanding of the important factors impacting STEM GTA teaching self-efficacy, which can then be used to inform and support the preparation of effective STEM GTAs.

8.
With the improvement of people's living standards, gastrointestinal adverse reactions caused by various adverse factors have attracted increasing attention. A recent study indicated that coronavirus disease 2019 (COVID-19) can also invade the gastrointestinal tract, leading to gastrointestinal adverse reactions (Song et al., 2020). In recent years, immunotherapy has provided benefit for some patients with advanced malignant tumors.

9.
Although we agree with Theobald and Freeman (2014) that linear models are the most appropriate way in which to analyze assessment data, we show the importance of testing for interactions between covariates and factors. To the Editor: Recently, Theobald and Freeman (2014) reviewed approaches for measuring student learning gains in science, technology, engineering, and mathematics (STEM) education research. In their article, they highlighted the shortcomings of approaches such as raw change scores, normalized gain scores, normalized change scores, and effect sizes when students are not randomly assigned to classes based on the different pedagogies that are being compared. As an alternative, they propose using linear regression models in which characteristics of students, such as pretest scores, are included as independent variables in addition to treatments. Linear models that include both continuous and categorical independent variables are often termed analysis of covariance (ANCOVA) models. The approach of using ANCOVA to control for differences in students among treatment groups has been suggested previously by Weber (2009). We largely agree with Theobald and Freeman (2014) and Weber (2009) that ANCOVA models are an appropriate method for situations in which students cannot be randomly assigned to treatments and controls. However, in describing how to implement linear regression models to examine student learning gains, Theobald and Freeman (2014) ignore a fundamental assumption of ANCOVA. ANCOVA assumes homogeneity of slopes (McDonald, 2009; Sokal and Rohlf, 2011). In other words, the slope of the relationship between the covariate (e.g., pretest score) and the dependent variable (e.g., posttest score) is assumed to be the same for the treatment group and the control. This is a strict assumption of ANCOVA in that violations of it can result in incorrect conclusions (Engqvist, 2005).
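In explicit notation (ours, not the letter's), the homogeneity-of-slopes check amounts to comparing the additive ANCOVA model against a model that adds the covariate × treatment interaction:

```latex
% Additive ANCOVA model (assumes one common slope beta_1 for both groups);
% T_i is a 0/1 treatment indicator:
\mathrm{post}_i = \beta_0 + \beta_1\,\mathrm{pre}_i + \beta_2\,T_i + \varepsilon_i
% Model with the covariate-by-treatment interaction term:
\mathrm{post}_i = \beta_0 + \beta_1\,\mathrm{pre}_i + \beta_2\,T_i
                + \beta_3\,(\mathrm{pre}_i \times T_i) + \varepsilon_i
```

A statistically significant β₃ means the pretest–posttest slope differs between treatment and control, so no single treatment effect β₂ applies to all students.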
For example, in Figure 1, both pretest score and treatment have statistically significant main effects in a linear model with only pretest score (F(1, 97) = 25.6, p < 0.001) and treatment (F(1, 97) = 42.6, p < 0.01) as independent variables. Therefore, we would conclude that all students in the class with the pedagogical innovation had significantly greater posttest scores than the students in the control class for a given pretest score. Furthermore, we would conclude that the pedagogical innovation led to the same increase in score for all students in the treatment class, independent of their pretest scores. Clearly, neither of these conclusions would be justified. Researchers must first test the assumption of homogeneity of slopes by including an interaction term (covariate × treatment) in their linear model (McDonald, 2009; Weber, 2009; Sokal and Rohlf, 2011). For example, if we measured student achievement in two courses with different instructional approaches in a typical pretest/posttest design, then the interaction between students’ pretest scores and the type of instruction must be considered, because the instruction may have a different effect for high- versus low-achieving students. If multiple covariates are included in the linear model (see Equation 1 in Theobald and Freeman, 2014), then an interaction term needs to be included for each of the covariates in the model. If the interaction term is statistically significant, this suggests that the relationship between the covariate and the dependent variable is different for each treatment group (F(1, 96) = 25.1, p < 0.001; Figure 1). As a result, the effect of the treatment will depend on the value of the covariate, and universal statements about the effect of the treatment are not appropriate (Engqvist, 2005). If the interaction term is not statistically significant, it should be removed from the model and the analysis rerun without it.
Failure to remove an interaction term that is not statistically significant can also lead to an incorrect conclusion (Engqvist, 2005). Whether there are statistically significant interactions between the “treatment” and the covariates in the data set used by Theobald and Freeman (2014) is unclear.

Figure 1. Simulated data to demonstrate heterogeneity of slopes. Pretest values were generated from random normal distributions with mean = 59.8 (SD = 18.1) for the treatment course and mean = 59.3 (SD = 17.0) for the control course, based on values given in Theobald and Freeman (2014). For the treatment course, posttest values were calculated using the formula posttest_i = 80 + 0.1 × pretest_i + ε_i, where ε_i was selected from a random normal distribution with mean = 0 (SD = 10). For the control course, posttest values were calculated using the formula posttest_i = 42 + 0.5 × pretest_i + ε_i, where ε_i was selected from a random normal distribution with mean = 0 (SD = 10). n = 50 for both courses.

In addition to being a strict assumption of ANCOVA, testing for homogeneity of slopes in a linear model is important in STEM education research because slopes are likely to be heterogeneous for several reasons. First, for many instruments used in STEM education research, high-achieving students score high on the pretest. As a result, their ability to improve is limited due to the ceiling effect, and differences between treatment and control groups in posttest scores are likely to be minimal (Figure 1). In contrast, low-achieving students have a greater opportunity to change their scores between pretest and posttest. Second, pedagogical innovations are more likely to have a greater impact on the learning of lower-performing students than higher-performing students.
For example, Beck and Blumer (2012) found statistically greater gains in student confidence and scientific reasoning skills for students in the lowest quartile as compared with students in the highest quartile on pretest assessments in inquiry-based laboratory courses. Theobald and Freeman (2014, p. 47) note that “regression models can also include interaction terms that test whether the intervention has a differential impact on different types of students.” Yet we argue that these terms must be included and should be excluded only if they are not statistically significant.
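The heterogeneity-of-slopes scenario in Figure 1 can be reproduced with a short script. The following is a minimal pure-Python sketch (function and variable names are ours) that generates data from the two formulas given in the Figure 1 caption and estimates each group's pretest–posttest slope, the difference that an interaction term in a full ANCOVA would detect:

```python
import random
import statistics

random.seed(1)  # fixed seed so the simulation is reproducible

def simulate(n, mean_pre, sd_pre, intercept, slope, sd_err):
    """Generate (pretest, posttest) pairs: post = intercept + slope*pre + noise."""
    data = []
    for _ in range(n):
        pre = random.gauss(mean_pre, sd_pre)
        post = intercept + slope * pre + random.gauss(0, sd_err)
        data.append((pre, post))
    return data

def ols_slope(data):
    """Least-squares slope of posttest regressed on pretest."""
    xs = [p for p, _ in data]
    ys = [q for _, q in data]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in data)
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Parameters from the Figure 1 caption: treatment post = 80 + 0.1*pre,
# control post = 42 + 0.5*pre, both with N(0, 10) error, n = 50 per course.
treatment = simulate(50, 59.8, 18.1, 80, 0.1, 10)
control = simulate(50, 59.3, 17.0, 42, 0.5, 10)

print(round(ols_slope(treatment), 2))  # should be near the true slope 0.1
print(round(ols_slope(control), 2))    # should be near the true slope 0.5
```

With slopes this different (about 0.1 versus 0.5), an ANCOVA fitted without the pretest × treatment interaction would wrongly impose one common slope and report a single treatment effect for all students.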

10.
Course-based undergraduate research experiences (CUREs) may be a more inclusive entry point to scientific research than independent research experiences, and the implementation of CUREs at the introductory level may therefore be a way to improve the diversity of the scientific community. The U.S. scientific research community does not reflect America's diversity. Hispanics, African Americans, and Native Americans made up 31% of the general population in 2010, but they represented only 18 and 7% of science, technology, engineering, and mathematics (STEM) bachelor's and doctoral degrees, respectively, and 6% of STEM faculty members (National Science Foundation [NSF], 2013). Equity in the scientific research community is important for a variety of reasons; a diverse community of researchers can minimize the negative influence of bias in scientific reasoning, because people from different backgrounds approach a problem from different perspectives and can raise awareness regarding biases (Intemann, 2009). Additionally, by failing to be attentive to equity, we may exclude some of the best and brightest scientific minds and limit the pool of possible scientists (Intemann, 2009). Given this need for equity, how can our scientific research community become more inclusive? Current approaches to improving diversity in scientific research focus on graduating more STEM majors, but graduation with a STEM undergraduate degree alone is not sufficient for entry into graduate school.
Undergraduate independent research experiences are becoming more or less a prerequisite for admission into graduate school and eventually a career in academia; a quick look at the recommendations of any of the top graduate programs in biology or at science career–related websites shows an expectation for undergraduate research and a perceived handicap if recommendation letters for graduate school do not include a discussion of the applicant's research experience (Webb, 2007; Harvard University, 2013). Independent undergraduate research experiences have been shown to improve the retention of students in scientific research (National Research Council, 2003; Laursen et al., 2010; American Association for the Advancement of Science, 2011; Eagan et al., 2013). Participation in independent research experiences has been shown to increase interest in pursuing a PhD (Seymour et al., 2004; Russell et al., 2007) and seems to be particularly beneficial for students from historically underrepresented backgrounds (Villarejo et al., 2008; Jones et al., 2010; Espinosa, 2011; Hernandez et al., 2013). However, the limited number of undergraduate research opportunities available and the structure of how students are selected for these independent research lab positions exclude many students and can perpetuate inequities in the research community. In this essay, we highlight barriers faced by students interested in pursuing an undergraduate independent research experience and factors that impact how faculty members select students for these limited positions. We examine how bringing research experiences into the required course work for students could mitigate these issues and ultimately make research more inclusive.

11.
This article examines the validity of the Undergraduate Research Student Self-Assessment (URSSA), a survey used to evaluate undergraduate research (UR) programs. The underlying structure of the survey was assessed with confirmatory factor analysis; also examined were correlations between different average scores, score reliability, and matches between numerical and textual item responses. The study found that four components of the survey represent separate but related constructs for cognitive skills and affective learning gains derived from the UR experience. Average scores from item blocks formed reliable but moderately to highly correlated composite measures. Additionally, some questions about student learning gains (meant to assess individual learning) correlated with ratings of satisfaction with external aspects of the research experience. The pattern of correlation among individual items suggests that items asking students to rate external aspects of their environment were more like satisfaction ratings than items that directly ask about student skills attainment. Finally, survey items asking about student aspirations to attend graduate school in science reflected inflated estimates of the proportions of students who had actually decided on graduate education after their UR experiences. Recommendations for revisions to the survey include clarifying item wording and increasing discrimination between item blocks through reorganization. Undergraduate research (UR) experiences have long been an important component of science education at universities and colleges but have received greater attention in recent years, as they have been identified as important ways to strengthen preparation for advanced study and work in the science fields, especially among students from underrepresented minority groups (Tsui, 2007; Kuh, 2008).
UR internships provide students with the opportunity to conduct authentic research in laboratories with scientist mentors, as students help design projects, gather and analyze data, and write up and present findings (Laursen et al., 2010). The promised benefits of UR experiences include both increased skills and greater familiarity with how science is practiced (Russell et al., 2007). While students learn the basics of scientific methods and laboratory skills, they are also exposed to the culture and norms of science (Carlone and Johnson, 2007; Hunter et al., 2007; Lopatto, 2010). Students learn about the day-to-day world of practicing science and are introduced to how scientists design studies, collect and analyze data, and communicate their research. After participating in UR, students may make more informed decisions about their future, and some may be more likely to decide to pursue graduate education in science, technology, engineering, and mathematics (STEM) disciplines (Bauer and Bennett, 2003; Russell et al., 2007; Eagan et al., 2013). While UR experiences potentially have many benefits for undergraduate students, assessing these benefits is challenging (Laursen, 2015). Large-scale research-based evaluation of the effects of UR is limited by a range of methodological problems (Eagan et al., 2013). True experimental studies are almost impossible to implement, since random assignment of students into UR programs is both logistically and ethically impractical, while many simple comparisons between UR and non-UR groups of students suffer from noncomparable groups and limited generalizability (Maton and Hrabowski, 2004). Survey studies often rely on poorly developed measures and use nonrepresentative samples, and large-scale survey research usually requires complex statistical models to control for student self-selection into UR programs (Eagan et al., 2013).
For smaller-scale program evaluation, evaluators also encounter a number of measurement problems. Because of the wide range of disciplines, research topics, and methods, common standardized tests assessing laboratory skills and understandings across these disciplines are difficult to find. While faculty at individual sites may directly assess products, presentations, and behavior using authentic assessments such as portfolios, rubrics, and performance assessments, these assessments can be time-consuming and are not easily comparable with similar efforts at other laboratories (Stokking et al., 2004; Kuh et al., 2014). Additionally, the affective outcomes of UR are not readily tapped by direct academic assessment, as many of the benefits found for students in UR, such as motivation, enculturation, and self-efficacy, are not measured by tests or other assessments (Carlone and Johnson, 2007). Other instruments for assessing UR outcomes, such as Lopatto’s SURE (Lopatto, 2010), focus on these affective outcomes rather than on direct assessments of skills and cognitive gains. The size of most UR programs also makes assessment difficult. Research Experiences for Undergraduates (REUs), one mechanism by which UR programs may be organized within an institution, are funded by the National Science Foundation (NSF), but unlike many other educational programs at NSF (e.g., TUES) that require fully funded evaluations with multiple sources of evidence (Frechtling, 2010), REUs are generally so small that they cannot typically support this type of evaluation unless multiple programs pool their resources to provide adequate assessment. Informal UR experiences, offered to students by individual faculty within their own laboratories, are often more common but are typically not coordinated across departments or institutions or accountable to a central office or agency for assessment.
Partly toward this end, the Undergraduate Research Student Self-Assessment (URSSA) was developed as a common assessment instrument that can be compared across multiple UR sites within or across institutions. It is meant to be used as one source of assessment information about UR sites and their students. The current research examines the validity of the URSSA in the context of its use as a self-report survey for UR programs and laboratories. Because the survey has been taken by more than 3400 students, we can test some aspects of how the survey is structured and how it functions. Assessing the validity of the URSSA for its intended use is a process of testing hypotheses about how well the survey represents its intended content. This ongoing process (Messick, 1993; Kane, 2001) involves gathering evidence from a range of sources to learn whether validity claims are supported by evidence and whether the survey results can be used confidently in specific contexts. For the URSSA, our method of inquiry focuses on how the survey is used to assess consortia of REU sites. In this context, survey results are used for quality assurance, for comparisons of average ratings over years, and as general indicators of program success in encouraging students to pursue graduate science education and scientific careers. Our research questions focus on the meaning and reliability of “core indicators” used to track self-reported learning gains in four areas and the ability of numerical items to capture student aspirations for future plans to attend graduate school in the sciences.

12.
13.
14.
Blacks, Hispanics, and American Indians/Alaskan Natives are underrepresented in science and engineering fields. A comparison of race–ethnic differences at key transition points was undertaken to better inform education policy. National data on high school graduation, college enrollment, choice of major, college graduation, graduate school enrollment, and doctoral degrees were used to quantify the degree of underrepresentation at each level of education and the rate of transition to the next stage. Disparities are found at every level, and their impact is cumulative. For the most part, differences in graduation rates, rather than differential matriculation rates, make the largest contribution to the underrepresentation. The size, scope, and persistence of the disparities suggest that small-scale, narrowly targeted remediation will be insufficient. Most scientists and engineers take great pride in their reliance on logic and empirical evidence in decision making, and they reject the use of emotional, parochial, and irrational criteria. Prejudices of any sort are abjured. The prevalence of laboratory personnel and research collaborators from diverse national origins is often cited as an example of this meritocratic ideal. Therefore, the U.S. biomedical research community was shocked when a study revealed that Black Americans and other groups were substantially underrepresented in the receipt of grants from the National Institutes of Health (NIH), even after other correlates of success were controlled (Ginther et al., 2011). This picture clashed dramatically with the standards the community claimed. In the wake of this revelation, NIH created a high-level advisory group to examine the situation and make recommendations to address it (NIH, 2012). Concern about underrepresentation of Black Americans and other race–ethnic groups in science is not new (Melnick and Hamilton, 1977), and many attempts have been made to ameliorate or eliminate the gaps.
While there have been some gains—underrepresented racial minority (URM) students rose from 2% of biomedical graduate students to more than 11% since 1980 (National Research Council, 2011)—disparities remain in all fields of science and engineering at all education levels and career stages (National Academy of Science, 2011). Given the limited progress in correcting this situation, it is essential to have a better understanding of the origin and extent of the problem. Especially in the current fiscal climate, with insufficient funding for education programs, interventions must be accurately targeted and appropriate to reach their goals. How large are the race–ethnic differences in science enrollments at each level of education? Are there general patterns that can help guide policy? Using data from 2008 and 2009, a recent National Science Foundation (NSF) report illustrates the underrepresentation of Blacks, Hispanics, and American Indians/Alaskan Natives at various education levels (NSF, 2011a). While informative and illustrative of the extent of the problem, this single-year, cross-sectional perspective does not capture the conditions encountered by recent doctorate earners as they progressed through earlier stages in their education. Looking at graduation rates in the life sciences, Ginther et al. (2009) found that minority participation is increasing in biology, but minority students are not transitioning between milestones in the same proportions as Whites.

15.
The scale and importance of Vision and Change in Undergraduate Biology Education: A Call to Action challenges us to ask fundamental questions about widespread transformation of college biology instruction. I propose that we have clarified the “vision” but lack research-based models and evidence needed to guide the “change.” To support this claim, I focus on several key topics, including evidence about effective use of active-teaching pedagogy by typical faculty and whether certain programs improve students’ understanding of the Vision and Change core concepts. Program evaluation is especially problematic. While current education research and theory should inform evaluation, several prominent biology faculty–development programs continue to rely on self-reporting by faculty and students. Science, technology, engineering, and mathematics (STEM) faculty-development overviews can guide program design. Such studies highlight viewing faculty members as collaborators, embedding rewards faculty value, and characteristics of effective faculty-development learning communities. A recent National Research Council report on discipline-based STEM education research emphasizes the need for long-term faculty development and deep conceptual change in teaching and learning as the basis for genuine transformation of college instruction. Despite the progress evident in Vision and Change, forward momentum will likely be limited, because we lack evidence-based, reliable models for actually realizing the desired “change.”
All members of the biology academic community should be committed to creating, using, assessing, and disseminating effective practices in teaching and learning and in building a true community of scholars. (American Association for the Advancement of Science [AAAS], 2011 , p. 49)
Realizing the “vision” in Vision and Change in Undergraduate Biology Education (Vision and Change; AAAS, 2011) is an enormous undertaking for the biology education community, and the scale and critical importance of this challenge prompt us to ask fundamental questions about widespread transformation of college biology teaching and learning. For example, Vision and Change reflects the consensus that active teaching enhances the learning of biology. However, what is known about widespread application of effective active-teaching pedagogy and how it may differ across institutional and classroom settings or with the depth of pedagogical understanding a biology faculty member may have? More broadly, what is the research base concerning higher education biology faculty–development programs, especially designs that lead to real change in classroom teaching? Has the develop-and-disseminate approach favored by the National Science Foundation's (NSF) Division of Undergraduate Education (Dancy and Henderson, 2007) been generally effective? Can we directly apply outcomes from faculty-development programs in other science, technology, engineering, and mathematics (STEM) disciplines, or is teaching college biology unique in important ways? In other words, if we intend to use Vision and Change as the basis for widespread transformation of biology instruction, is there a good deal of scholarly literature about how to help faculty make the endorsed changes, or is this research base lacking? In the context of Vision and Change, in this essay I focus on a few key topics relevant to broad-scale faculty development, highlighting the extent and quality of the research base for it. My intention is to reveal numerous issues that may well inhibit forward momentum toward real transformation of college-level biology teaching and learning.
Some are quite fundamental, such as ongoing dependence on less reliable assessment approaches for professional-development programs and the mixed success of active-learning pedagogy among broad populations of biology faculty. I also offer specific suggestions to improve and build on the identified issues. At the center of my inquiry is the faculty member. Following the definition used by the Professional and Organizational Development Network in Higher Education (www.podnetwork.org), I use “faculty development” to indicate programs that emphasize the individual faculty member as teacher (e.g., his or her skill in the classroom), scholar/professional (publishing, college/university service), and person (time constraints, self-confidence). Of course, faculty members work within particular departments and institutions, and these environments are clearly critical as well (Stark et al., 2002). Consequently, in addition to focusing on the individual, faculty-development programs may also consider organizational structure (such as administrators and criteria for reappointment and tenure) and instructional development (the overall curriculum, who teaches particular courses). In fact, Diamond (2002) emphasizes that the three areas of effort (individual, organizational, instructional) should complement one another in faculty-development programs. The scope of the numerous factors impacting higher education biology instruction is a realistic reminder of the complexity and challenge of the second half of the Vision and Change endeavor. This essay is organized around specific topics meant to be representative and to illustrate the state of the art of widespread (beyond a limited number of courses and institutions) professional development for biology faculty. The first two sections focus on active teaching and biology students’ conceptual understanding, respectively.
The third section concerns important elements that have been identified as critical for effective STEM faculty-development programs.

16.
17.
Preeclampsia (PE) refers to a group of dysfunction syndromes associated with elevated blood pressure and proteinuria arising after 20 weeks of pregnancy in women with previously normal blood pressure, and it may be accompanied by symptoms including headache.

18.
This edited volume of essays presents a countermainstream view against genetic underpinnings for cancer, behavior, and psychiatric conditions. This edited volume is a project of the Council for Responsible Genetics, a private organization based in Cambridge, Massachusetts, whose mission, as stated on its website, includes as one of several goals to “expose oversimplified and distorted scientific claims regarding the role of genetics in human disease, development and behavior.” This book represents such an effort. Editors Krimsky and Gruber are chair and president/executive director, respectively, of the organization and appear to have solicited contributions to the book from affiliates and other colleagues. Fewer than half of the 16 chapters are written by active laboratory scientists, however, and as a result, the book suffers from arguments clouded by imprecise use of terminology and preconceptions about genes and their functions. One might consider this book, or parts thereof, for an advanced undergraduate genetics class in which positions counter to the mainstream scientific view are presented and evaluated, and in which students are challenged to critically assess the quality of support for all arguments. The general theme of this book is to question the role of genes (and reproducible molecular mechanisms, more broadly) in cancer, behavior, psychiatric disorders, evolution, and other phenomena. One chapter promotes the tissue organization field theory (TOFT) against the somatic mutation theory of cancer. TOFT was proposed by the chapter authors in 2011 (Soto and Sonnenschein, 2011) but has not found traction and has garnered little attention beyond an initial refutation (Vaux, 2011). The authors assert that cancer is a disease of development and tissue repair stemming primarily from environmental exposures and independent of genetic changes.
Most cancer researchers agree that environmental factors can trigger cell growth but that ensuing mutations complete the picture in the genesis of malignancies. This chapter would be a good starting point from which one could assign students to explore papers cited in the Cancer Genome Atlas database, a growing resource compiling cancer genome data and subsequent validation in other systems of the effects of mutations found. In another chapter, a nonscientist author asserts that “in only a small percentage of cases are genes notable contributors to breast cancer,” implying imprecisely that only rare inherited cancer predisposition is genetic, when in fact cancer stemming from somatic mutations is also gene based. To assert that cancer stems only from environmental effects, to the exclusion of genes, overlooks the intertwining of the two arenas—radiation induces somatic mutations, for example, and estrogen mimics trigger cell division, which sets the stage for additional new mutations during DNA replication. Other sections of the book argue a lack of evidence for genetic influence on behaviors and psychiatric conditions. One chapter centers on several refuted ideas about biology and behavior (for example, XYY and monoamine oxidase genotypes associated with aggression), with the intended implication that all other biological connections to behavior must be suspect. A chapter on autism accepts but downplays a partial role of genetics in the disorder, while emphasizing environmental exposures. Students exploring this topic could examine the growing literature on de novo mutations found in autism patients (Huguet et al., 2013), among other autism studies, to see how interlocking causes of the disorder might best be explained by the available data. In the context of disorders such as schizophrenia, the book does not acknowledge or address the literature reporting genetic associations with psychiatric predispositions.
In a troubling instance, a cited reference is misrepresented as refuting a genetic connection to schizophrenia; the reference in question (Collins et al., 2012) actually reports genome-wide association studies showing linkage of schizophrenia to particular loci (just not to the genes originally suspected). The same research group had reported copy number variations associated with schizophrenia the previous month (Kirov et al., 2012), but this finding was not cited. Psychiatric genetics is a rich area for students to explore, and the contrarian viewpoint of the book can provide a starting point to trigger students’ delving into the literature.

Genetic Explanations: Sense and Nonsense includes two chapters with assertions counter to the neo-Darwinian synthesis of evolution. One claims, fairly misleadingly, that “a growing number of evolutionary biologists … believe that macroevolution was the result of mechanisms other than natural selection.” Another states that “not genomic DNA but epigenetic environmental influences … overwhelmingly affect our health and well being.” The idea that gene regulation via environmental and epigenetic effects is somehow not reducible to genes (and that genes are therefore not central to evolution) would be an interesting subject for students to explore in the literature to see what the data actually support.

This book is recommended only for use in advanced classes centered on weighing evidence and dissecting arguments in scientific controversies. The book's countermainstream assertion of a lack of significant genetic connection to cancer, autism, schizophrenia, and other phenomena provides multiple opportunities for students to explore the scientific literature surrounding such genetic connections.

19.
Many life sciences faculty and administrators are unaware of existing funding programs and of the strategies needed for writing an educationally related proposal. We hope to remedy this problem by making the life sciences audience aware of two National Science Foundation programs underutilized by the biology community.

This column has been a welcome opportunity to keep the CBE—Life Sciences Education readership aware of national efforts to improve undergraduate education in the life sciences and of ways to become a part of that effort (Woodin et al., 2009, 2010, 2012; Wei and Woodin, 2011). Throughout the years of engagement in the Vision and Change initiative, from the summer of 2007 to the present, the three primary agencies involved, the National Science Foundation (NSF), the National Institutes of Health (NIH), and the Howard Hughes Medical Institute (HHMI), have continually maintained a dialogue with participants through formal and informal conversations, workshops, and meetings. Our shared focus has been on how the life sciences community itself can change biology undergraduate education in order to better reflect and respond to the current educational environment, including the
  • rapid advances in the discipline,
  • new educational technologies and platforms becoming available,
  • evidence developed through research on effective practices in undergraduate education, and
  • challenges of accomplishing the necessary changes with the resources available.
As the participants have talked and the funding agencies have listened, it has become clear that many life sciences faculty and administrators are unaware of existing funding programs and of the strategies needed for writing an educationally related proposal. In this column, we hope to remedy this problem (in part) by making the life sciences audience aware of two NSF programs particularly relevant to Vision and Change that appear to be underutilized by the biology community. These are:
  • Transforming Undergraduate Education in Science, Technology, Engineering, and Mathematics (TUES) program (anticipated Spring of 2013 release), and
  • Undergraduate Research Coordination Networks–Undergraduate Biology Education (RCN-UBE) program (next deadline is June 14, 2013).

20.
The focus of this paper is on sense-making and the use of real-world knowledge in mathematical modeling in schools. Arguments are put forward that classroom word problem solving is more—and also less—than the analysis of subject-matter structures. Students easily “solve” stereotyped, even unsolvable, problems without any regard to the constraints of factual reality. Mathematics learning in schools is inseparable not only from the materials employed, but from the macro- and microcultural web of practices within the social context of schooling. It represents, beyond the insightful activity of ideal problem solving, a type of socio-cognitive skill.

The two experiments reported replicate and extend a study by Verschaffel, De Corte, and Lasure (1994). In the first experiment, a list of standard problems that could be solved by straightforward use of arithmetic operations, and a parallel list of problems that were problematic with respect to realistic mathematical modeling, were administered to fourth and fifth graders. In the second experiment, a similar list of problematic problems was presented to seventh graders under three socio-contextual conditions varying in the degree to which the pupils were told or signaled that the problems were more difficult to solve than they seemed at first or that they even could be unsolvable. The result of both studies was that most pupils “solved” a significant part of the unsolvable problems without evincing “realistic reactions”. This overall finding is discussed with respect to three issues:
  • (i) the quality of word problems employed in mathematics education,
  • (ii) the culture of teaching and learning, and
  • (iii) the more general issue of social rationality in school mathematics problem solving.
