Similar Documents
20 similar documents found.
1.
This paper assesses Sarkar's ([2003]) deflationary account of genetic information. On Sarkar's account, genes carry information about proteins because protein synthesis exemplifies what Sarkar calls a ‘formal information system’. Furthermore, genes are informationally privileged over non-genetic factors of development because only genes enter into arbitrary relations to their products (in virtue of the alleged arbitrariness of the genetic code). I argue that the deflationary theory does not capture four essential features of the ordinary concept of genetic information: intentionality, exclusiveness, asymmetry, and causal relevance. It is therefore further removed from what is customarily meant by genetic information than Sarkar admits. Moreover, I argue that it is questionable whether the account succeeds in demonstrating that information is theoretically useful in molecular genetics.
  1. Introduction
  2. Sarkar's Information System
  3. The Pre-theoretic Features of Genetic Information
    3.1 Intentionality
    3.2 Exclusiveness
    3.3 Asymmetry
    3.4 Causal relevance
  4. Theoretical Usefulness
  5. Conclusion

2.
Going back at least to Duhem, there is a tradition of thinking that crucial experiments are impossible in science. I analyse Duhem's arguments and show that they are based on the excessively strong assumption that only deductive reasoning is permissible in experimental science. This opens the possibility that some principle of inductive inference could provide a sufficient reason for preferring one among a group of hypotheses on the basis of an appropriately controlled experiment. To be sure, there are analogues to Duhem's problems that pertain to inductive inference. Using a famous experiment from the history of molecular biology as an example, I show that an experimentalist version of inference to the best explanation (IBE) does a better job in handling these problems than other accounts of scientific inference. Furthermore, I introduce a concept of experimental mechanism and show that it can guide inferences from data within an IBE-based framework for induction.
  1. Introduction
  2. Duhem on the Logic of Crucial Experiments
  3. ‘The Most Beautiful Experiment in Biology’
  4. Why Not Simple Elimination?
  5. Severe Testing
  6. An Experimentalist Version of IBE
    6.1 Physiological and experimental mechanisms
    6.2 Explaining the data
    6.3 IBE and the problem of untested auxiliaries
    6.4 IBE-turtles all the way down
  7. Van Fraassen's ‘Bad Lot’ Argument
  8. IBE and Bayesianism
  9. Conclusions

3.
This essay presents results about a deviation-from-independence measure called focused correlation. This measure explicates the formal relationship between the probabilistic dependence of an evidence set and the incremental confirmation of a hypothesis, resolves a basic question underlying Peter Klein and Ted Warfield's ‘truth-conduciveness’ problem for Bayesian coherentism, and provides a qualified rebuttal to Erik Olsson's claim that there is no informative link between correlation and confirmation. The generality of the result is compared to recent programs in Bayesian epistemology that attempt to link correlation and confirmation by utilizing a conditional evidential independence condition. Several properties of focused correlation are also highlighted.
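As a formal anchor for the discussion above, the following is a minimal sketch of the quantities involved, assuming the standard Wayne–Shogenji correlation measure and the usual ratio form of focused correlation; the precise definition defended in the essay may differ in detail.
% Deviation-from-independence (Wayne–Shogenji) measure for an evidence set,
% and the "focused" variant that conditions its numerator on the hypothesis H.
\[
  \mathrm{cor}(E_1,\dots,E_n) = \frac{\Pr(E_1 \wedge \dots \wedge E_n)}{\prod_{i=1}^{n} \Pr(E_i)},
  \qquad
  \mathrm{For}_H(E_1,\dots,E_n) = \frac{\mathrm{cor}(E_1,\dots,E_n \mid H)}{\mathrm{cor}(E_1,\dots,E_n)}.
\]
% Incremental confirmation is the condition Pr(H | E_1 and ... and E_n) > Pr(H);
% the essay's results concern how focused correlation bears on whether this
% condition holds.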
  1. Introduction
  2. Correlation Measures
    2.1 Standard covariance and correlation measures
    2.2 The Wayne–Shogenji measure
    2.3 Interpreting correlation measures
    2.4 Correlation and evidential independence
  3. Focused Correlation
  4. Conclusion
Appendix

4.
What Are the New Implications of Chaos for Unpredictability?
From the beginning of chaos research until today, the unpredictability of chaos has been a central theme. It is widely believed and claimed by philosophers, mathematicians and physicists alike that chaos has a new implication for unpredictability, meaning that chaotic systems are unpredictable in a way that other deterministic systems are not. Hence, one might expect that the question ‘What are the new implications of chaos for unpredictability?’ has already been answered in a satisfactory way. However, this is not the case. I will critically evaluate the existing answers and argue that they do not fit the bill. Then I will approach this question by showing that chaos can be defined via mixing, which has never before been explicitly argued for. Based on this insight, I will propose that the sought-after new implication of chaos for unpredictability is the following: for predicting any event, all sufficiently past events are approximately probabilistically irrelevant.
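For readers who want the mixing condition spelled out, here is the textbook definition of (strong) mixing for a measure-preserving system (X, Σ, μ, T); the paper's own formulation may differ in detail, but the probabilistic-irrelevance reading carries over.
% (Strong) mixing: for all measurable A and B,
\[
  \lim_{n \to \infty} \mu\big(T^{-n}A \cap B\big) = \mu(A)\,\mu(B),
\]
% equivalently, whenever \mu(B) > 0,
\[
  \lim_{n \to \infty} \mu\big(T^{-n}A \mid B\big) = \mu(A).
\]
% Conditioning on an event B lying sufficiently far in the past thus changes the
% probability of A by an arbitrarily small amount; this is the sense in which
% sufficiently past events become approximately probabilistically irrelevant.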
  1. Introduction
  2. Dynamical Systems and Unpredictability
    2.1 Dynamical systems
    2.2 Natural invariant measures
    2.3 Unpredictability
  3. Chaos
    3.1 Defining chaos
    3.2 Defining chaos via mixing
  4. Criticism of Answers in the Literature
    4.1 Asymptotic unpredictability?
    4.2 Unpredictability due to rapid or exponential divergence?
    4.3 Macro-predictability and Micro-unpredictability?
  5. A General New Implication of Chaos for Unpredictability
    5.1 Approximate probabilistic irrelevance
    5.2 Sufficiently past events are approximately probabilistically irrelevant for predictions
  6. Conclusion

5.
Many have found attractive views according to which the veracity of specific causal judgements is underwritten by general causal laws. This paper describes several variants of that view and explores complications that appear when one looks at a certain simple type of example from physics. To capture certain causal dependencies, physics is driven to look at equations which, I argue, are not causal laws. One place where physics is forced to look at such equations (and not the only place) is in its handling of Green's functions, which reveal point-wise causal dependencies. Thus, I claim that there is no simple relationship between causal dependence and causal laws of the sort often pictured. Rather, this paper explores the complexity of the relationship in a certain well-understood case.
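To make the role of Green's functions concrete, here is a standard sketch for the driven one-dimensional wave equation (the string case); signs and normalisation conventions vary across texts, and the paper's own notation may differ.
% Driven wave equation and the defining equation of its Green's function:
\[
  \frac{\partial^2 u}{\partial t^2} - c^2 \frac{\partial^2 u}{\partial x^2} = F(x,t),
  \qquad
  \left( \frac{\partial^2}{\partial t^2} - c^2 \frac{\partial^2}{\partial x^2} \right) G(x,t;x',t') = \delta(x - x')\,\delta(t - t').
\]
% The solution is a superposition over the forcing, and (in 1D) the retarded
% Green's function is a step function on the backward light cone:
\[
  u(x,t) = \iint G(x,t;x',t')\, F(x',t')\, dx'\, dt',
  \qquad
  G(x,t;x',t') = \frac{1}{2c}\, \theta\big(c\,(t - t') - |x - x'|\big).
\]
% Reading off how u at (x,t) depends on the force at each (x',t') is what is
% meant by the point-wise causal dependencies that G reveals.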
1 Introduction
2 The Causal Covering-Law Thesis
3 The Laws of String Motion
4 Green's Functions and Causation
5 Green's Functions and Boundary Conditions
6 The Green's Function as a Violation of the Wave Equation
6.1 The Green's Function and other Senses of ‘Causal Law’: Temporal Propagation and Local Propagation
7 The Distributional Wave Equation
8 Why is not the Green's Function a ‘Causal Law’?
9 Conclusion

6.
The advent of formal definitions of the simplicity of a theory has important implications for model selection. But what is the best way to define simplicity? Forster and Sober ([1994]) advocate the use of Akaike's Information Criterion (AIC), a non-Bayesian formalisation of the notion of simplicity. This forms an important part of their wider attack on Bayesianism in the philosophy of science. We defend a Bayesian alternative: the simplicity of a theory is to be characterised in terms of Wallace's Minimum Message Length (MML). We show that AIC is inadequate for many statistical problems where MML performs well. Whereas MML is always defined, AIC can be undefined. Whereas MML is not known ever to be statistically inconsistent, AIC can be. Even when defined and consistent, AIC performs worse than MML on small sample sizes. MML is statistically invariant under 1-to-1 re-parametrisation, thus avoiding a common criticism of Bayesian approaches. We also show that MML provides answers to many of Forster's objections to Bayesianism. Hence an important part of the attack on Bayesianism fails.
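Because the abstract turns on how AIC trades goodness of fit against the number of free parameters, a small numerical illustration may help. This is a hedged sketch rather than code from the paper: the toy dataset, the degree range, the Gaussian form AIC = n ln(RSS/n) + 2k, and the convention of counting the noise variance as a parameter are all my assumptions, and no MML calculation is attempted here (a two-part MML message length needs an explicit prior and a quantisation of the parameter space).
# Minimal sketch: selecting a polynomial degree by AIC under Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
n = 30
x = np.linspace(-1.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=n)   # true curve has degree 1

def aic_for_degree(d: int) -> float:
    """AIC for a degree-d polynomial fit with ML-estimated noise variance."""
    coeffs = np.polyfit(x, y, d)                     # maximum-likelihood fit
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    k = d + 2                                        # d+1 coefficients + noise variance
    return n * np.log(rss / n) + 2 * k               # AIC up to an additive constant

for d in range(6):
    print(f"degree {d}: AIC = {aic_for_degree(d):.2f}")
# On small samples like this, AIC's penalty can still admit over-fitted models;
# the paper argues that MML handles such cases better.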
  1. Introduction
  2. The Curve Fitting Problem
    2.1 Curves and families of curves
    2.2 Noise
    2.3 The method of Maximum Likelihood
    2.4 ML and over-fitting
  3. Akaike's Information Criterion (AIC)
  4. The Predictive Accuracy Framework
  5. The Minimum Message Length (MML) Principle
    5.1 The Strict MML estimator
    5.2 An example: The binomial distribution
    5.3 Properties of the SMML estimator
      5.3.1 Bayesianism
      5.3.2 Language invariance
      5.3.3 Generality
      5.3.4 Consistency and efficiency
    5.4 Similarity to false oracles
    5.5 Approximations to SMML
  6. Criticisms of AIC
    6.1 Problems with ML
      6.1.1 Small sample bias in a Gaussian distribution
      6.1.2 The von Mises circular and von Mises–Fisher spherical distributions
      6.1.3 The Neyman–Scott problem
      6.1.4 Neyman–Scott, predictive accuracy and minimum expected KL distance
    6.2 Other problems with AIC
      6.2.1 Univariate polynomial regression
      6.2.2 Autoregressive econometric time series
      6.2.3 Multivariate second-order polynomial model selection
      6.2.4 Gap or no gap: a clustering-like problem for AIC
    6.3 Conclusions from the comparison of MML and AIC
  7. Meeting Forster's objections to Bayesianism
    7.1 The sub-family problem
    7.2 The problem of approximation, or, which framework for statistics?
  8. Conclusion
  A. Details of the derivation of the Strict MML estimator
  B. MML, AIC and the Gap vs. No Gap Problem
    B.1 Expected size of the largest gap
    B.2 Performance of AIC on the gap vs. no gap problem
    B.3 Performance of MML in the gap vs. no gap problem

7.
In a recent issue of this journal, P.E. Vermaas ([2005]) claims to have demonstrated that standard quantum mechanics is technologically inadequate in that it violates the ‘technical functions condition’. We argue that this claim is false because it rests on a ‘narrow’ interpretation of this technical functions condition that Vermaas can only accept on pain of contradiction. We also argue that if, in order to avoid this contradiction, the technical functions condition is interpreted ‘widely’ rather than ‘narrowly’, then Vermaas' argument for his claim collapses. The conclusion is that Vermaas' claim that standard quantum mechanics is technologically inadequate evaporates.
1 Introduction
2 The Narrow Interpretation
3 The Wide Interpretation
4 The Teleportation Scheme
5 Conclusions

8.
A consensus exists among contemporary philosophers of biology about the history of their field. According to the received view, mainstream philosophy of science in the 1930s, 40s, and 50s focused on physics and general epistemology, neglecting analyses of the ‘special sciences’, including biology. The subdiscipline of philosophy of biology emerged (and could only have emerged) after the decline of logical positivism in the 1960s and 70s. In this article, I present bibliometric data from four major philosophy of science journals (Erkenntnis, Philosophy of Science, Synthese, and the British Journal for the Philosophy of Science), covering 1930–59, which challenge this view.
1 Introduction
2 Methods
3 Results
4 Conclusions

9.
I argue in this article that there is a mistake in Searle's Chinese room argument that has not received sufficient attention. The mistake stems from Searle's use of the Church–Turing thesis. Searle assumes that the Church–Turing thesis licences the assumption that the Chinese room can run any program. I argue that it does not, and that this assumption is false. A number of possible objections are considered and rejected. My conclusion is that it is consistent with Searle's argument to hold onto the claim that understanding consists in the running of a program.
1 Searle's Argument
1.1 The Church–Turing thesis
2 Criticism of Searle's Argument
3 Objections and Replies
3.1 The virtual brain machine objection
3.2 The brain-based objection
3.3 The syntax/physics objection
3.4 The abstraction objection
3.5 The ‘same conclusion’ objection
3.6 The ‘unnecessary baggage’ objection
3.7 The Chinese gym objection
3.8 The syntax/semantics objection
3.9 Turing's definition of algorithm
3.9.1 Consequences
3.9.2 Criticism of the defence
4 Conclusion

10.
The evidence from randomized controlled trials (RCTs) is widely regarded as supplying the ‘gold standard’ in medicine—we may sometimes have to settle for other forms of evidence, but this is always epistemically second-best. But how well justified is the epistemic claim about the superiority of RCTs? This paper adds to my earlier (predominantly negative) analyses of the claims produced in favour of the idea that randomization plays a uniquely privileged epistemic role, by closely inspecting three related arguments from leading contributors to the burgeoning field of probabilistic causality—Papineau, Cartwright and Pearl. It concludes that none of these further arguments supplies any practical reason for thinking of randomization as having unique epistemic power.
1 Introduction
2 Why the issue is of great practical importance—the ECMO case
3 Papineau on the ‘virtues of randomization’
4 Cartwright on causality and the ‘ideal’ randomized experiment
5 Pearl on randomization, nets and causes
6 Conclusion

11.
Maddy and Mathematics: Naturalism or Not
Penelope Maddy advances a purportedly naturalistic account of mathematical methodology which might be taken to answer the question ‘What justifies axioms of set theory?’ I argue that her account fails both to adequately answer this question and to be naturalistic. Further, the way in which it fails to answer the question deprives it of an analog to one of the chief attractions of naturalism. Naturalism is attractive to naturalists and non-naturalists alike because it explains the reliability of scientific practice. Maddy's account, on the other hand, appears to be unable to similarly explain the reliability of mathematical practice without violating one of its central tenets.
1 Introduction
2 Mathematical Naturalism
3 Desiderata and the attraction of naturalism
4 Assessment: Naturalism and names
4.1 Taking ‘naturalism’ seriously
4.2 Second philosophy (or what's in a name)
5 A way out?
6 Or out of the way?

12.
Le Poidevin on the Reduction of Chemistry
In this article we critically evaluate Robin Le Poidevin's recent attempt to set out an argument for the ontological reduction of chemistry independently of intertheoretic reduction. We argue, firstly, that the argument he envisages applies only to a small part of chemistry, and that there is no obvious way to extend it. We argue, secondly, that the argument cannot establish the reduction of chemistry, properly so called.
1 Introduction
2 The scope of the reductionist claim
3 The combinatorial argument
4 The strength of the ‘reduction’
5 Concluding remarks

13.
By and large, we think Strevens's ([2005]) is a useful reply to our original critique (Fitelson and Waterman [2005]) of his article on the Quine–Duhem (QD) problem (Strevens [2001]). But we remain unsatisfied with several aspects of his reply (and his original article). Ultimately, we do not think he properly addresses our most important worries. In this brief rejoinder, we explain our remaining worries, and we issue a revised challenge for Strevens's approach to QD.
1 Strevens's ‘clarifications’
2 Strevens's new-and-improved ‘negligibility arguments’

14.
This paper is a review of work on Newman's objection to epistemic structural realism (ESR). In Section 2, a brief statement of ESR is provided. In Section 3, Newman's objection and its recent variants are outlined. In Section 4, two responses that argue that the objection can be evaded by abandoning the Ramsey-sentence approach to ESR are considered. In Section 5, three responses that have been put forward specifically to rescue the Ramsey-sentence approach to ESR from the modern versions of the objection are discussed. Finally, in Section 6, three responses are considered that are neutral with respect to one's approach to ESR and all argue (in different ways) that the objection can be evaded by introducing the notion that some relations/structures are privileged over others. It is concluded that none of these suggestions is an adequate response to Newman's objection, which therefore remains a serious problem for ESRists.
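For orientation, here is the Ramsey-sentence construction that the objection targets, in schematic form; Θ, the t_i, and the o_j are generic placeholders rather than the paper's notation.
% A theory \Theta with theoretical terms t_1,...,t_n and observational terms
% o_1,...,o_m is replaced by its Ramsey sentence, which existentially
% generalises on the theoretical terms:
\[
  \Theta(t_1,\dots,t_n;\, o_1,\dots,o_m)
  \;\leadsto\;
  \exists X_1 \cdots \exists X_n\, \Theta(X_1,\dots,X_n;\, o_1,\dots,o_m).
\]
% The Newman-style worry: if the observational consequences hold and the domain
% is large enough, relations satisfying the existential claim can always be
% found, so the Ramsey sentence appears to assert little more than a
% cardinality constraint on the unobservable domain.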
  1. Introduction
  2. Epistemic Structural Realism
    2.1 Ramsey-sentences and ESR
    2.2 WESR and SESR
  3. The Objection
    3.1 Newman's version
    3.2 Demopoulos and Friedman's and Ketland's versions
  4. Replies that Abandon the Ramsey-Sentence Approach to ESR
    4.1 Redhead's reply
    4.2 French and Ladyman's reply
  5. Replies Designed to Rescue the Ramsey-Sentence Approach
    5.1 Zahar's reply
    5.2 Cruse's reply
    5.3 Melia and Saatsi's reply
  6. Replies that Argue that Some Structures/Relations are Privileged
    6.1 A Carnapian reply
    6.2 Votsis' reply
    6.3 The Merrill/Lewis/Psillos reply
  7. Summary

15.
While there is no universal logic of induction, the probability calculus succeeds as a logic of induction in many contexts through its use of several notions concerning inductive inference. They include Addition, through which low probabilities represent disbelief as opposed to ignorance; and Bayes property, which commits the calculus to a ‘refute and rescale’ dynamics for incorporating new evidence. These notions are independent and it is urged that they be employed selectively according to the needs of the problem at hand. It is shown that neither is adapted to inductive inference concerning some indeterministic systems.
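The two notions named above have familiar textbook forms, sketched here for orientation; the paper's own framework generalises them, so the formulas below are illustrative rather than its official definitions.
% Additivity: for mutually incompatible A and B,
\[
  \Pr(A \vee B) = \Pr(A) + \Pr(B),
\]
% which forces \Pr(\neg A) = 1 - \Pr(A), so a low probability expresses
% disbelief rather than mere ignorance.
% The Bayes property: on learning E, hypotheses inconsistent with E are refuted
% (driven to zero) and the survivors are rescaled by conditionalisation,
\[
  \Pr_{\mathrm{new}}(H) = \Pr(H \mid E) = \frac{\Pr(E \mid H)\,\Pr(H)}{\Pr(E)}.
\]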
1 Introduction
2 Failure of demonstrations of universality
2.1 Working backwards
2.2 The surface logic
3 Framework
3.1 The properties
3.2 Boundaries
3.2.1 Universal comparability
3.2.2 Transitivity
3.2.3 Monotonicity
4 Addition
4.1 The property: disbelief versus ignorance
4.2 Boundaries
5 Bayes property
5.1 The property
5.2 Bayes' theorem
5.3 Boundaries
5.3.1 Dogmatism of the priors
5.3.2 Impossibility of prior ignorance
5.3.3 Accommodation of virtues
6 Real values
7 Sufficiency and independence
8 Illustrations
8.1 All properties retained
8.2 Bayes property only retained
8.3 Induction without additivity and Bayes property
9 Conclusion

16.
Starting from a brief recapitulation of the contemporary debate on scientific realism, this paper argues for the following thesis: Assume a theory T has been empirically successful in a domain of application A, but was superseded later on by a superior theory T*, which was likewise successful in A but has an arbitrarily different theoretical superstructure. Then under natural conditions T contains certain theoretical expressions, which yielded T's empirical success, such that these T-expressions correspond (in A) to certain theoretical expressions of T*, and given T* is true, they refer indirectly to the entities denoted by these expressions of T*. The thesis is first motivated by a study of the phlogiston–oxygen example. Then the thesis is proved in the form of a logical theorem, and illustrated by further examples. The final sections explain how the correspondence theorem justifies scientific realism and work out the advantages of the suggested account.
  1. Introduction: Pessimistic Meta-induction vs. Structural Correspondence
  2. The Case of the Phlogiston Theory
  3. Steps Towards a Systematic Correspondence Theorem
  4. The Correspondence Theorem and Its Ontological Interpretation
  5. Further Historical Applications
  6. Discussion of the Correspondence Theorem: Objections and Replies
  7. Consequences for Scientific Realism and Comparison with Other Positions
    7.1 Comparison with constructive empiricism
    7.2 Major difference from standard scientific realism
    7.3 From minimal realism and correspondence to scientific realism
    7.4 Comparison with particular realistic positions

17.
The rejection of an infinitesimal solution to the zero-fit problem by A. Elga ([2004]) does not seem to appreciate the opportunities provided by the use of internal finitely-additive probability measures. Indeed, internal laws of probability can be used to find a satisfactory infinitesimal answer to many zero-fit problems, not only to the one suggested by Elga, but also to the Markov chain (that is, discrete and memory-less) models of reality. Moreover, the generalization of likelihoods that Elga has in mind is not as hopeless as it appears to be in his article. In fact, for many practically important examples, through the use of likelihoods one can succeed in circumventing the zero-fit problem.
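A minimal sketch of the coin-tossing version of the zero-fit problem and of the infinitesimal response may help; the internal construction is only gestured at here, and the details (worked out in the paper's appendices) are not reproduced.
% Zero-fit: for an infinite sequence of independent fair tosses, any particular
% history h receives standard probability
\[
  \Pr(h) = \lim_{n \to \infty} 2^{-n} = 0,
\]
% so rival chance hypotheses cannot be compared by how well they fit h.
% Infinitesimal response: use an internal, finitely-additive measure on
% sequences of unlimited hyperfinite length N, under which
\[
  {}^{*}\Pr(h) = 2^{-N} > 0
\]
% is a positive infinitesimal, so ratios of fit between rival hypotheses become
% well defined again.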
1 The Zero-fit Problem on Infinite State Spaces
2 Elga's Critique of the Infinitesimal Approach to the Zero-fit Problem
3 Two Examples for Infinitesimal Solutions to the Zero-fit Problem
4 Mathematical Modelling in Nonstandard Universes?
5 Are Nonstandard Models Unnatural?
6 Likelihoods and Densities
A Internal Probability Measures and the Loeb Measure Construction
B The (Countable) Coin Tossing Sequence Revisited
C Solution to the Zero-fit Problem for a Finite-state Model without Memory
D An Additional Note on ‘Integrating over Densities’
E Well-defined Continuous Versions of Density Functions

18.
Many people believe that there is a Dutch Book argument establishing that the principle of countable additivity is a condition of coherence. De Finetti himself did not, but for reasons that are at first sight perplexing. I show that he rejected countable additivity, and hence the Dutch Book argument for it, because countable additivity conflicted with intuitive principles about the scope of authentic consistency constraints. These he often claimed were logical in nature, but he never attempted to relate this idea to deductive logic and its own concept of consistency. This I do, showing that at one level the definitions of deductive and probabilistic consistency are identical, differing only in the nature of the constraints imposed. In the probabilistic case I believe that R.T. Cox's ‘scale-free’ axioms for subjective probability are the most suitable candidates.
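The clash the abstract alludes to can be stated compactly; the infinite fair lottery below is the standard example, presented here in schematic form.
% Countable additivity: for pairwise incompatible A_1, A_2, ...,
\[
  \Pr\Big( \bigvee_{i=1}^{\infty} A_i \Big) = \sum_{i=1}^{\infty} \Pr(A_i).
\]
% The infinite fair lottery: one ticket is drawn from a countably infinite set
% and each ticket is judged equally likely.  Finite additivity allows
% \Pr(\text{ticket } i) = 0 for every i while the disjunction gets probability 1;
% countable additivity forbids this, since the right-hand side would then sum
% to 0.  This is the kind of intuitive constraint de Finetti took countable
% additivity to violate.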
1 Introduction
2 Coherence and Consistency
3 The Infinite Fair Lottery
4 The Puzzle Resolved—But Replaced by Another
5 Countable Additivity, Conglomerability and Dutch Books
6 The Probability Axioms and Cox's Theorem
7 Truth and Probability
8 Conclusion: ‘Logical Omniscience’

19.
Stochastic Einstein Locality Revisited
I discuss various formulations of stochastic Einstein locality (SEL), which is a version of the idea of relativistic causality, that is, the idea that influences propagate at most as fast as light. SEL is similar to Reichenbach's Principle of the Common Cause (PCC), and Bell's Local Causality. My main aim is to discuss formulations of SEL for a fixed background spacetime. I previously argued that SEL is violated by the outcome dependence shown by Bell correlations, both in quantum mechanics and in quantum field theory. Here I reassess those verdicts in the light of some recent literature which argues that outcome dependence does not violate the PCC. I argue that the verdicts about SEL still stand. Finally, I briefly discuss how to formulate relativistic causality if there is no fixed background spacetime.
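For reference, here is the screening-off requirement that underlies the PCC and the related locality conditions; the notation is generic rather than the paper's, and the gloss on the Bell case is only a rough indication.
% A common cause C of correlated events A and B is required to screen them off:
\[
  \Pr(A \wedge B \mid C) = \Pr(A \mid C)\,\Pr(B \mid C),
\]
% equivalently \Pr(A \mid B \wedge C) = \Pr(A \mid C) whenever defined.
% Outcome dependence in the Bell experiment is the failure of this
% factorisation when C comprises the measurement settings together with the
% pair's common past, which is why the status of the PCC matters for SEL.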
1 Introduction
2 Formulating Stochastic Einstein Locality
2.1 Events and regions
2.2 The idea of SEL
2.3 Three formulations of SEL
2.3.1 The formulations
2.3.2 Comparisons
2.4 Implications between the formulations
2.4.1 Conditions for the equivalence of SELD1 and SELD2
2.4.2 Conditions for the equivalence of SELS and SELD2
3 Relativistic Causality in the Bell Experiment
3.1 The background
3.1.1 The Bell experiment reviewed
3.1.2 My previous position
3.2 A common common cause? The Budapest school
3.2.1 Resuscitating the PCC
3.2.2 Known proofs of a Bell inequality need a strong PCC
3.2.3 Two distinctions
3.2.4 Szabó's model
3.2.5 A common common cause is plausible
3.2.6 Bell inequalities from a weak PCC: the Bern school
3.3 SEL in the Bell experiment
3.3.1 PCC and SEL are connected by PPSI
3.3.2 The need for other judgments
3.3.3 Weak vs. strong SELD
4 SEL in Algebraic Quantum Field Theory
4.1 The story so far
4.2 Questions
4.2.1 Our formulations
4.2.2 The Budapest and Bern schools
5 SEL in Dynamical Spacetimes
5.1 SEL for metric structure?
5.2 SEL for causal sets?
5.2.1 The causal set approach
5.2.2 Labelled causal sets; general covariance
5.2.3 Deducing the dynamics
5.2.4 The fate of SEL

20.
The traditional Bayesian qualitative account of evidential support (TB) takes assertions of the form ‘E evidentially supports H’ to affirm the existence of a two-place relation of evidential support between E and H. The analysans given for this relation is C(H,E) =def Pr(H|E) > Pr(H). Now it is well known that when a hypothesis H entails evidence E, not only is it the case that C(H,E), but it is also the case that C(H&X,E) for any arbitrary X. There is a widespread feeling that this is a problematic result for TB. Indeed, there are a number of cases in which many feel it is false to assert ‘E evidentially supports H&X’, despite H entailing E. This is known, by those who share that feeling, as the ‘tacking problem’ for Bayesian confirmation theory. After outlining a generalization of the problem, I argue that the Bayesian response has so far been unsatisfactory. I then argue the following: (i) There exists, either instead of, or in addition to, a two-place relation of confirmation, a three-place, ‘contrastive’ relation of confirmation, holding between an item of evidence E and two competing hypotheses H1 and H2. (ii) The correct analysans of the relation is a particular probabilistic inequality, abbreviated C(H1, H2, E). (iii) Those who take the putative counterexamples to TB discussed to indeed be counterexamples are interpreting the relevant utterances as implicitly contrastive, contrasting the relevant hypothesis H1 with a particular competitor H2. (iv) The probabilistic structure of these cases is such that C(H1, H2, E). This solves my generalization of the tacking problem. I then conclude with some thoughts about the relationship between the traditional Bayesian account of evidential support and my proposed account of the three-place relation of confirmation.
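The entailment step behind the tacking problem is short enough to display, using the abstract's own analysans C(H,E) iff Pr(H|E) > Pr(H); the provisos in the last line are the usual non-triviality assumptions.
% If H entails E, then so does H & X; hence Pr(E | H & X) = 1 and
\[
  \Pr(H \wedge X \mid E)
  = \frac{\Pr(E \mid H \wedge X)\,\Pr(H \wedge X)}{\Pr(E)}
  = \frac{\Pr(H \wedge X)}{\Pr(E)}
  > \Pr(H \wedge X),
\]
% provided 0 < \Pr(E) < 1 and \Pr(H \wedge X) > 0.  So E "confirms" H & X for an
% arbitrary tacked-on X, which is the result the tacking problem takes to be
% objectionable for the two-place account.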
1 The ‘tacking problem’ and the traditional Bayesian response
2 Contrastive support
3 Concluding comments
