Similar Documents
20 similar documents found (search time: 31 ms).
1.
What Are the New Implications of Chaos for Unpredictability?
From the beginning of chaos research until today, the unpredictability of chaos has been a central theme. It is widely believed and claimed by philosophers, mathematicians and physicists alike that chaos has a new implication for unpredictability, meaning that chaotic systems are unpredictable in a way that other deterministic systems are not. Hence, one might expect that the question ‘What are the new implications of chaos for unpredictability?’ has already been answered in a satisfactory way. However, this is not the case. I will critically evaluate the existing answers and argue that they do not fit the bill. Then I will approach this question by showing that chaos can be defined via mixing, which has never before been explicitly argued for. Based on this insight, I will propose that the sought-after new implication of chaos for unpredictability is the following: for predicting any event, all sufficiently past events are approximately probabilistically irrelevant.
  1. Introduction
  2. Dynamical Systems and Unpredictability
    2.1 Dynamical systems
    2.2 Natural invariant measures
    2.3 Unpredictability
  3. Chaos
    3.1 Defining chaos
    3.2 Defining chaos via mixing
  4. Criticism of Answers in the Literature
    4.1 Asymptotic unpredictability?
    4.2 Unpredictability due to rapid or exponential divergence?
    4.3 Macro-predictability and Micro-unpredictability?
  5. A General New Implication of Chaos for Unpredictability
    5.1 Approximate probabilistic irrelevance
    5.2 Sufficiently past events are approximately probabilistically irrelevant for predictions
  6. Conclusion
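
As a purely illustrative aside (not part of the paper), the proposed implication, that for prediction all sufficiently past events are approximately probabilistically irrelevant, can be pictured numerically with a simple mixing map. The sketch below iterates a skewed tent map over an ensemble of trajectories; the sample correlation between the present state and the state k steps earlier shrinks towards zero as the lag grows. The map, its parameter, and the lag range are invented for illustration only.

```python
# Illustrative sketch (not from the paper): for a simple chaotic map, the
# correlation between the present state and a state k steps in the past decays
# towards zero as k grows, i.e. sufficiently past states become approximately
# probabilistically irrelevant for predicting the present.
import random
import statistics

P = 0.8  # peak position of the skewed tent map (illustrative choice)

def tent(x):
    """Skewed tent map on [0, 1]; chaotic and mixing for 0 < P < 1."""
    return x / P if x < P else (1.0 - x) / (1.0 - P)

def correlation(xs, ys):
    """Pearson correlation of two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

random.seed(0)
initial = [random.random() for _ in range(50000)]   # ensemble of starting states

current = list(initial)
for lag in range(1, 11):
    current = [tent(x) for x in current]            # advance every trajectory one step
    print(f"lag {lag:2d}: corr(past state, present state) = {correlation(initial, current):+.3f}")
```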

2.
Starting from a brief recapitulation of the contemporary debate on scientific realism, this paper argues for the following thesis: Assume a theory T has been empirically successful in a domain of application A, but was superseded later on by a superior theory T*, which was likewise successful in A but has an arbitrarily different theoretical superstructure. Then under natural conditions T contains certain theoretical expressions, which yielded T's empirical success, such that these T-expressions correspond (in A) to certain theoretical expressions of T*, and given T* is true, they refer indirectly to the entities denoted by these expressions of T*. The thesis is first motivated by a study of the phlogiston–oxygen example. Then the thesis is proved in the form of a logical theorem, and illustrated by further examples. The final sections explain how the correspondence theorem justifies scientific realism and work out the advantages of the suggested account.
  1. Introduction: Pessimistic Meta-induction vs. Structural Correspondence
  2. The Case of the Phlogiston Theory
  3. Steps Towards a Systematic Correspondence Theorem
  4. The Correspondence Theorem and Its Ontological Interpretation
  5. Further Historical Applications
  6. Discussion of the Correspondence Theorem: Objections and Replies
  7. Consequences for Scientific Realism and Comparison with Other Positions
    7.1 Comparison with constructive empiricism
    7.2 Major difference from standard scientific realism
    7.3 From minimal realism and correspondence to scientific realism
    7.4 Comparison with particular realistic positions

3.
The advent of formal definitions of the simplicity of a theory has important implications for model selection. But what is the best way to define simplicity? Forster and Sober ([1994]) advocate the use of Akaike's Information Criterion (AIC), a non-Bayesian formalisation of the notion of simplicity. This forms an important part of their wider attack on Bayesianism in the philosophy of science. We defend a Bayesian alternative: the simplicity of a theory is to be characterised in terms of Wallace's Minimum Message Length (MML). We show that AIC is inadequate for many statistical problems where MML performs well. Whereas MML is always defined, AIC can be undefined. Whereas MML is not known ever to be statistically inconsistent, AIC can be. Even when defined and consistent, AIC performs worse than MML on small sample sizes. MML is statistically invariant under 1-to-1 re-parametrisation, thus avoiding a common criticism of Bayesian approaches. We also show that MML provides answers to many of Forster's objections to Bayesianism. Hence an important part of the attack on Bayesianism fails.
  1. Introduction
  2. The Curve Fitting Problem
    2.1 Curves and families of curves
    2.2 Noise
    2.3 The method of Maximum Likelihood
    2.4 ML and over-fitting
  3. Akaike's Information Criterion (AIC)
  4. The Predictive Accuracy Framework
  5. The Minimum Message Length (MML) Principle
    5.1 The Strict MML estimator
    5.2 An example: The binomial distribution
    5.3 Properties of the SMML estimator
    5.3.1 Bayesianism
    5.3.2 Language invariance
    5.3.3 Generality
    5.3.4 Consistency and efficiency
    5.4 Similarity to false oracles
    5.5 Approximations to SMML
  6. Criticisms of AIC
    6.1 Problems with ML
    6.1.1 Small sample bias in a Gaussian distribution
    6.1.2 The von Mises circular and von Mises–Fisher spherical distributions
    6.1.3 The Neyman–Scott problem
    6.1.4 Neyman–Scott, predictive accuracy and minimum expected KL distance
    6.2 Other problems with AIC
    6.2.1 Univariate polynomial regression
    6.2.2 Autoregressive econometric time series
    6.2.3 Multivariate second-order polynomial model selection
    6.2.4 Gap or no gap: a clustering-like problem for AIC
    6.3 Conclusions from the comparison of MML and AIC
  7. Meeting Forster's objections to Bayesianism
    7.1 The sub-family problem
    7.2 The problem of approximation, or, which framework for statistics?
  8. Conclusion
  A. Details of the derivation of the Strict MML estimator
  B. MML, AIC and the Gap vs. No Gap Problem
    B.1 Expected size of the largest gap
    B.2 Performance of AIC on the gap vs. no gap problem
    B.3 Performance of MML in the gap vs. no gap problem
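
The entry's claims about AIC turn on the curve-fitting problem, so a small worked example may help orient readers. The sketch below is not the paper's analysis; the data, noise level, and candidate degrees are invented. It shows how AIC is computed for least-squares polynomial fits and used to pick a degree: for Gaussian noise the criterion AIC = 2k - 2 ln(maximum likelihood) reduces, up to an additive constant, to n*ln(RSS/n) + 2k.

```python
# Minimal, illustrative sketch: selecting the degree of a polynomial curve by
# Akaike's Information Criterion (AIC) on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# True curve: a quadratic, observed with Gaussian noise on a small sample.
n = 10
x = np.linspace(-1, 1, n)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(scale=0.5, size=n)

def aic_for_degree(deg):
    """AIC (up to an additive constant) of the least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, deg)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    k = deg + 2                      # deg+1 coefficients plus the noise variance
    return n * np.log(rss / n) + 2 * k

for deg in range(6):
    print(f"degree {deg}: AIC = {aic_for_degree(deg):7.2f}")

# The degree with the lowest AIC is the one the criterion selects; on small
# samples like this the selected degree can be unstable, which is the kind of
# behaviour the paper's MML-vs-AIC comparison turns on.
```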

4.
This essay presents results about a deviation-from-independence measure called focused correlation. This measure explicates the formal relationship between probabilistic dependence of an evidence set and the incremental confirmation of a hypothesis, resolves a basic question underlying Peter Klein and Ted Warfield's ‘truth-conduciveness’ problem for Bayesian coherentism, and provides a qualified rebuttal to Erik Olsson's claim that there is no informative link between correlation and confirmation. The generality of the result is compared to recent programs in Bayesian epistemology that attempt to link correlation and confirmation by utilizing a conditional evidential independence condition. Several properties of focused correlation are also highlighted.
  1. Introduction
  2. Correlation Measures
    2.1 Standard covariance and correlation measures
    2.2 The Wayne–Shogenji measure
    2.3 Interpreting correlation measures
    2.4 Correlation and evidential independence
  3. Focused Correlation
  4. Conclusion
Appendix
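
The abstract does not spell out the definition of focused correlation, so the toy computation below is only a gloss and not the paper's own formalism. It assumes that the underlying deviation-from-independence measure is the ratio P(E1 & E2) / (P(E1) P(E2)) (the Wayne–Shogenji measure named in the contents), and it treats ‘focused correlation’, as an assumption, as that ratio conditional on the hypothesis divided by the unconditional ratio; the joint distribution is invented. Readers should consult the paper for the exact definitions.

```python
# Toy illustration of correlation-style measures over a small discrete joint
# distribution P(H, E1, E2).  The measures and numbers here are assumptions
# made for the example, not definitions taken from the paper.

# Joint probabilities P(h, e1, e2) over truth values; numbers are invented.
P = {
    (True,  True,  True ): 0.20, (True,  True,  False): 0.06,
    (True,  False, True ): 0.06, (True,  False, False): 0.08,
    (False, True,  True ): 0.05, (False, True,  False): 0.10,
    (False, False, True ): 0.10, (False, False, False): 0.35,
}

def prob(pred):
    """Probability of the event picked out by a predicate on (h, e1, e2)."""
    return sum(p for w, p in P.items() if pred(*w))

p_e1  = prob(lambda h, e1, e2: e1)
p_e2  = prob(lambda h, e1, e2: e2)
p_e12 = prob(lambda h, e1, e2: e1 and e2)
p_h   = prob(lambda h, e1, e2: h)
p_h_given_e12 = prob(lambda h, e1, e2: h and e1 and e2) / p_e12

# Unconditional deviation from independence of the evidence pair.
cor = p_e12 / (p_e1 * p_e2)

# The same measure conditional on H.
p_e1_h  = prob(lambda h, e1, e2: h and e1) / p_h
p_e2_h  = prob(lambda h, e1, e2: h and e2) / p_h
p_e12_h = prob(lambda h, e1, e2: h and e1 and e2) / p_h
cor_given_h = p_e12_h / (p_e1_h * p_e2_h)

focused = cor_given_h / cor          # hedged reading of "focused correlation"
confirmation = p_h_given_e12 - p_h   # incremental confirmation of H by E1 & E2

print(f"Cor(E1,E2)          = {cor:.3f}")
print(f"Cor(E1,E2 | H)      = {cor_given_h:.3f}")
print(f"focused correlation = {focused:.3f}")
print(f"P(H|E1&E2) - P(H)   = {confirmation:.3f}")
```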

5.
An assessment is offered of the recent debate on information in the philosophy of biology, and an analysis is provided of the notion of information as applied in scientific practice in molecular genetics. In particular, this paper deals with the dependence of basic generalizations of molecular biology, above all the ‘central dogma’, on the so-called ‘informational talk’ (Maynard Smith [2000a]). It is argued that talk of information in the ‘central dogma’ can be reduced to causal claims. In that respect, the primary aim of the paper is to consider a solution to the major difficulty of the causal interpretation of genetic information: how to distinguish the privileged causal role assigned to nucleic acids, DNA in particular, in the processes of replication and protein production. A close reading is proposed of Francis H. C. Crick's On Protein Synthesis (1958) and related works, to which we owe the first explicit definition of information within the scientific practice of molecular biology.
  1. Introduction
    1.1 The basic questions of the information debate
    1.2 The causal interpretation (CI) of biological information and Crick's ‘central dogma’
  2. Crick's definitions of genetic information
  3. The main requirement for (CI)
  4. Types of causation in molecular biology
    4.1 Structural causation in molecular biology
    4.2 Nucleic acids as correlative causal factors
  5. The ‘central dogma’ without the notion of information
  6. Concluding remarks

6.
This paper assesses Sarkar's ([2003]) deflationary account of genetic information. On Sarkar's account, genes carry information about proteins because protein synthesis exemplifies what Sarkar calls a ‘formal information system’. Furthermore, genes are informationally privileged over non-genetic factors of development because only genes enter into arbitrary relations to their products (in virtue of the alleged arbitrariness of the genetic code). I argue that the deflationary theory does not capture four essential features of the ordinary concept of genetic information: intentionality, exclusiveness, asymmetry, and causal relevance. It is therefore further removed from what is customarily meant by genetic information than Sarkar admits. Moreover, I argue that it is questionable whether the account succeeds in demonstrating that information is theoretically useful in molecular genetics.
  1. Introduction
  2. Sarkar's Information System
  3. The Pre-theoretic Features of Genetic Information
    3.1 Intentionality
    3.2 Exclusiveness
    3.3 Asymmetry
    3.4 Causal relevance
  4. Theoretical Usefulness
  5. Conclusion

7.
Going back at least to Duhem, there is a tradition of thinking that crucial experiments are impossible in science. I analyse Duhem's arguments and show that they are based on the excessively strong assumption that only deductive reasoning is permissible in experimental science. This opens the possibility that some principle of inductive inference could provide a sufficient reason for preferring one among a group of hypotheses on the basis of an appropriately controlled experiment. To be sure, there are analogues to Duhem's problems that pertain to inductive inference. Using a famous experiment from the history of molecular biology as an example, I show that an experimentalist version of inference to the best explanation (IBE) does a better job in handling these problems than other accounts of scientific inference. Furthermore, I introduce a concept of experimental mechanism and show that it can guide inferences from data within an IBE-based framework for induction.
  1. Introduction
  2. Duhem on the Logic of Crucial Experiments
  3. ‘The Most Beautiful Experiment in Biology’
  4. Why Not Simple Elimination?
  5. Severe Testing
  6. An Experimentalist Version of IBE
    6.1 Physiological and experimental mechanisms
    6.2 Explaining the data
    6.3 IBE and the problem of untested auxiliaries
    6.4 IBE-turtles all the way down
  7. Van Fraassen's ‘Bad Lot’ Argument
  8. IBE and Bayesianism
  9. Conclusions

8.
1 Introduction
2 Causality in 19th and Early 20th Century Medicine
3 A Lakatosian Approach to the History of Medicine

9.
While there is no universal logic of induction, the probability calculus succeeds as a logic of induction in many contexts through its use of several notions concerning inductive inference. They include Addition, through which low probabilities represent disbelief as opposed to ignorance; and Bayes property, which commits the calculus to a ‘refute and rescale’ dynamics for incorporating new evidence. These notions are independent and it is urged that they be employed selectively according to the needs of the problem at hand. It is shown that neither is adapted to inductive inference concerning some indeterministic systems.
1 Introduction
2 Failure of demonstrations of universality
2.1 Working backwards
2.2 The surface logic
3 Framework
3.1 The properties
3.2 Boundaries
3.2.1 Universal comparability
3.2.2 Transitivity
3.2.3 Monotonicity
4 Addition
4.1 The property: disbelief versus ignorance
4.2 Boundaries
5 Bayes property
5.1 The property
5.2 Bayes' theorem
5.3 Boundaries
5.3.1 Dogmatism of the priors
5.3.2 Impossibility of prior ignorance
5.3.3 Accommodation of virtues
6 Real values
7 Sufficiency and independence
8 Illustrations
8.1 All properties retained
8.2 Bayes property only retained
8.3 Induction without additivity and Bayes property
9 Conclusion
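
As a purely illustrative aside (not part of the paper), the ‘refute and rescale’ dynamics attributed to the Bayes property can be pictured concretely: conditionalizing on evidence sets the probability of every possibility the evidence refutes to zero and rescales the surviving probabilities so that they again sum to one. A minimal sketch over a finite possibility space, with invented numbers:

```python
# Minimal sketch of conditionalization as "refute and rescale" over a finite
# set of mutually exclusive possibilities (the numbers are invented).
def conditionalize(prior, consistent_with_evidence):
    """Zero out refuted possibilities, then rescale the rest to sum to one."""
    refuted_removed = {w: (p if consistent_with_evidence(w) else 0.0)
                       for w, p in prior.items()}
    total = sum(refuted_removed.values())
    if total == 0:
        raise ValueError("evidence has prior probability zero")
    return {w: p / total for w, p in refuted_removed.items()}

prior = {"h1": 0.5, "h2": 0.3, "h3": 0.2}

# Suppose the new evidence refutes h3 outright and is neutral between h1 and h2.
posterior = conditionalize(prior, lambda w: w != "h3")
print(posterior)   # {'h1': 0.625, 'h2': 0.375, 'h3': 0.0}
```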

10.
Stochastic Einstein Locality Revisited
I discuss various formulations of stochastic Einstein locality (SEL), which is a version of the idea of relativistic causality, that is, the idea that influences propagate at most as fast as light. SEL is similar to Reichenbach's Principle of the Common Cause (PCC), and Bell's Local Causality. My main aim is to discuss formulations of SEL for a fixed background spacetime. I previously argued that SEL is violated by the outcome dependence shown by Bell correlations, both in quantum mechanics and in quantum field theory. Here I reassess those verdicts in the light of some recent literature which argues that outcome dependence does not violate the PCC. I argue that the verdicts about SEL still stand. Finally, I briefly discuss how to formulate relativistic causality if there is no fixed background spacetime.
1 Introduction
2 Formulating Stochastic Einstein Locality
2.1 Events and regions
2.2 The idea of SEL
2.3 Three formulations of SEL
2.3.1 The formulations
2.3.2 Comparisons
2.4 Implications between the formulations
2.4.1 Conditions for the equivalence of SELD1 and SELD2
2.4.2 Conditions for the equivalence of SELS and SELD2
3 Relativistic Causality in the Bell Experiment
3.1 The background
3.1.1 The Bell experiment reviewed
3.1.2 My previous position
3.2 A common common cause? The Budapest school
3.2.1 Resuscitating the PCC
3.2.2 Known proofs of a Bell inequality need a strong PCC
3.2.3 Two distinctions
3.2.4 Szabó's model
3.2.5 A common common cause is plausible
3.2.6 Bell inequalities from a weak PCC: the Bern school
3.3 SEL in the Bell experiment
3.3.1 PCC and SEL are connected by PPSI
3.3.2 The need for other judgments
3.3.3 Weak vs. strong SELD
4 SEL in Algebraic Quantum Field Theory
4.1 The story so far
4.2 Questions
4.2.1 Our formulations
4.2.2 The Budapest and Bern schools
5 SEL in Dynamical Spacetimes
5.1 SEL for metric structure?
5.2 SEL for causal sets?
5.2.1 The causal set approach
5.2.2 Labelled causal sets; general covariance
5.2.3 Deducing the dynamics
5.2.4 The fate of SEL
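
For orientation only (this is not from the paper): the outcome dependence shown by Bell correlations can be made concrete with the standard CHSH quantity. For the spin singlet state, quantum mechanics gives the correlation E(a, b) = -cos(a - b) for spin measurements along directions a and b, and with the usual angle choices the CHSH combination reaches 2*sqrt(2), exceeding the bound of 2 obeyed by any factorizable, locally causal model. A minimal sketch:

```python
# Sketch of the CHSH combination using the quantum singlet-state prediction
# E(a, b) = -cos(a - b); the angles below are the standard optimal choices.
import math

def E(a, b):
    """Quantum-mechanical correlation for spin measurements along a and b."""
    return -math.cos(a - b)

a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

chsh = abs(E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime))
print(f"CHSH value = {chsh:.3f}  (local bound = 2, Tsirelson bound = 2*sqrt(2) ≈ 2.828)")
```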

11.
By and large, we think Strevens ([2005]) is a useful reply to our original critique (Fitelson and Waterman [2005]) of his article on the Quine–Duhem (QD) problem (Strevens [2001]). But we remain unsatisfied with several aspects of his reply (and his original article). Ultimately, we do not think he properly addresses our most important worries. In this brief rejoinder, we explain our remaining worries, and we issue a revised challenge for Strevens's approach to QD.
1 Strevens's ‘clarifications’
2 Strevens's new-and-improved ‘negligibility arguments’

12.
I argue in this article that there is a mistake in Searle's Chinese room argument that has not received sufficient attention. The mistake stems from Searle's use of the Church–Turing thesis. Searle assumes that the Church–Turing thesis licences the assumption that the Chinese room can run any program. I argue that it does not, and that this assumption is false. A number of possible objections are considered and rejected. My conclusion is that it is consistent with Searle's argument to hold onto the claim that understanding consists in the running of a program.
1 Searle's Argument
1.1 The Church–Turing thesis
2 Criticism of Searle's Argument
3 Objections and Replies
3.1 The virtual brain machine objection
3.2 The brain-based objection
3.3 The syntax/physics objection
3.4 The abstraction objection
3.5 The ‘same conclusion’ objection
3.6 The ‘unnecessary baggage’ objection
3.7 The Chinese gym objection
3.8 The syntax/semantics objection
3.9 Turing's definition of algorithm
3.9.1 Consequences
3.9.2 Criticism of the defence
4 Conclusion

13.
In this paper I argue—against van Fraassen's constructive empiricism—that the practice of saving phenomena is much broader than usually thought, and includes unobservable phenomena as well as observable ones. My argument turns on the distinction between data and phenomena: I discuss how unobservable phenomena manifest themselves in data models and how theoretical models able to save them are chosen. I present a paradigmatic case study taken from the history of particle physics to illustrate my argument. The first aim of this paper is to draw attention to the experimental practice of saving unobservable phenomena, which philosophers have overlooked for too long. The second aim is to explore some far-reaching implications this practice may have for the debate on scientific realism and constructive empiricism.
1 Introduction
2 Unobservable Phenomena
2.1 Data and phenomena
2.2 What is a data model?
2.3 Data models and unobservable phenomena
3 Saving Unobservable Phenomena: An Exemplar
4 The October Revolution of 1974: From the J/ψ to Charmonium
4.1 A new unobservable phenomenon at 3.1 GeV
4.2 How the charmonium model saved the new unobservable phenomenon
4.2.1 The J/ψ as a baryon–antibaryon bound state
4.2.2 The J/ψ as the spin-1 meson of a model with three charmed quarks
4.2.3 The J/ψ as a charmonium state
5 Concluding Remarks

14.
The rejection of an infinitesimal solution to the zero-fit problem by A. Elga ([2004]) does not seem to appreciate the opportunities provided by the use of internal finitely-additive probability measures. Indeed, internal laws of probability can be used to find a satisfactory infinitesimal answer to many zero-fit problems, not only to the one suggested by Elga, but also to the Markov chain (that is, discrete and memory-less) models of reality. Moreover, the generalization of likelihoods that Elga has in mind is not as hopeless as it appears to be in his article. In fact, for many practically important examples, through the use of likelihoods one can succeed in circumventing the zero-fit problem.
1 The Zero-fit Problem on Infinite State Spaces
2 Elga's Critique of the Infinitesimal Approach to the Zero-fit Problem
3 Two Examples for Infinitesimal Solutions to the Zero-fit Problem
4 Mathematical Modelling in Nonstandard Universes?
5 Are Nonstandard Models Unnatural?
6 Likelihoods and Densities
A Internal Probability Measures and the Loeb Measure Construction
B The (Countable) Coin Tossing Sequence Revisited
C Solution to the Zero-fit Problem for a Finite-state Model without Memory
D An Additional Note on ‘Integrating over Densities’
E Well-defined Continuous Versions of Density Functions

15.
Your evidence constrains your rational degrees of confidence both locally and globally. On the one hand, particular bits of evidence can boost or diminish your rational degree of confidence in various hypotheses, relative to your background information. On the other hand, epistemic rationality requires that, for any hypothesis h, your confidence in h is proportional to the support that h receives from your total evidence. Why is it that your evidence has these two epistemic powers? I argue that various proposed accounts of what it is for something to be an element of your evidence set cannot answer this question. I then propose an alternative account of what it is for something to be an element of your evidence set.
1 Introduction
2 The elements of one's evidence set are propositions
3 Which kinds of propositions are in one's evidence set?
3.1 Doxastic accounts of evidence
3.2 Non-doxastic accounts of evidence
4 Elaborating and defending the LIE

16.
The paper considers our ordinary mentalistic discourse in relation to what we should expect from any genuine science of the mind. A meta-scientific eliminativism is commended and distinguished from the more familiar eliminativism of Skinner and the Churchlands. Meta-scientific eliminativism views folk psychology qua folksy as unsuited to offer insight into the structure of cognition, although it might otherwise be indispensable for our social commerce and self-understanding. This position flows from a general thesis that scientific advance is marked by an eschewal of folk understanding. The latter half of the paper argues that, contrary to the received view, Chomsky's review of Skinner offers not just an argument against Skinner's eliminativism, but, more centrally, one in favour of the second eliminativism.
1 Introduction
2 Preliminaries: What Meta-scientific Eliminativism is Not
3 Meta-scientific Eliminativism
3.1 Folk psychology and cognitive science
4 Two Readings of Chomsky's Review of Skinner
5 Issues of Interpretation
5.1 A grammar as a theory
5.2 Cartesian linguistics
5.3 Common cause
6 Chomsky's Current View

17.
What belongs to quantum theory is no more than what is needed for its derivation. Keeping to this maxim, we record a paradigmatic shift in the foundations of quantum mechanics, where the focus has recently moved from interpreting to reconstructing quantum theory. Several historic and contemporary reconstructions are analyzed, including the work of Hardy, Rovelli, and Clifton, Bub and Halvorson. We conclude by discussing the importance of a novel concept of intentionally incomplete reconstruction.
1 What is Wrong with Interpreting Quantum Mechanics
2 Reconstruction of Physical Theory
2.1 Schema
2.2 Selection of the first principles
2.3 Status of the first principles
3 Examples of Reconstruction
3.1 Early examples of reconstruction
3.2 Hardy's reconstruction
3.3 Rovelli's reconstruction
3.4 The CBH reconstruction
3.5 Intentionally incomplete reconstructions
4 Conclusion

18.
Questions about the function(s) of consciousness have long been central to discussions of consciousness in philosophy and psychology. Intuitively, consciousness has an important role to play in the control of many everyday behaviors. However, this view has recently come under attack. In particular, it is becoming increasingly common for scientists and philosophers to argue that a significant body of data emerging from cognitive science shows that conscious states are not involved in the control of behavior. According to these theorists, nonconscious states control most everyday behaviors. Andy Clark ([2001]) does an admirable job of summarizing and defending the most important data thought to support this view. In this paper, I argue that the evidence available does not in fact threaten the view that conscious states play an important and intimate role in the control of much everyday behavior. I thereby defend a philosophically intuitive view about the functions of conscious states in action.
1 Introduction
2 Clarifying EBC
2.1 Control and guidance
2.2 Fine-tuned activity
3 The empirical case against EBC
4 Conclusion

19.
James Ladyman ([2000]) argues that constructive empiricism is untenable because it cannot adequately account for modal statements about observability. In this paper, I attempt to resist Ladyman's conclusion, arguing that the constructive empiricist can grant his modal discourse objective, theory-independent truth-conditions, yet without compromising his empiricism.
1 Ladyman's dilemma
2 Constructive empiricism and modal agnosticism
3 Conclusion

20.
Many standard philosophical accounts of scientific practice fail to distinguish between modeling and other types of theory construction. This failure is unfortunate because there are important contrasts among the goals, procedures, and representations employed by modelers and other kinds of theorists. We can see some of these differences intuitively when we reflect on the methods of theorists such as Vito Volterra and Linus Pauling on the one hand, and Charles Darwin and Dimitri Mendeleev on the other. Much of Volterra's and Pauling's work involved modeling; much of Darwin's and Mendeleev's did not. In order to capture this distinction, I consider two examples of theory construction in detail: Volterra's treatment of post-WWI fishery dynamics and Mendeleev's construction of the periodic system. I argue that modeling can be distinguished from other forms of theorizing by the procedures modelers use to represent and to study real-world phenomena: indirect representation and analysis. This differentiation between modelers and non-modelers is one component of the larger project of understanding the practice of modeling, its distinctive features, and the strategies of abstraction and idealization it employs.
1 Introduction
2 The essential contrast
2.1 Modeling
2.2 Abstract direct representation
3 Scientific models
4 Distinguishing modeling from ADR
4.1 The first and second stages of modeling
4.2 Third stage of modeling
4.3 ADR
5 Who is not a modeler?
6 Conclusion: who is a modeler?
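
An illustrative aside (not drawn from the paper): Volterra's fishery work is the classic source of the Lotka–Volterra predator–prey equations, a textbook instance of the indirect strategy the abstract describes, where one constructs and analyses an idealized model system and only afterwards asks how it bears on the real fishery. A minimal numerical sketch, with invented parameters and initial conditions:

```python
# Minimal sketch: Euler integration of the Lotka-Volterra predator-prey model
#   dx/dt = a*x - b*x*y   (prey),   dy/dt = d*x*y - g*y   (predators).
# Parameter values and initial conditions are invented for illustration.
a, b, d, g = 1.0, 0.1, 0.02, 0.5
x, y = 40.0, 9.0          # initial prey and predator populations
dt, steps = 0.01, 5000

for step in range(steps):
    dx = (a * x - b * x * y) * dt
    dy = (d * x * y - g * y) * dt
    x, y = x + dx, y + dy
    if step % 1000 == 0:
        print(f"t = {step * dt:5.1f}:  prey = {x:7.2f}, predators = {y:6.2f}")
```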
