Brian: I’m not the messiah.
Acolyte: Only the true messiah would deny that he was the messiah!
Brian: OK. I’m the messiah.
Mob: He’s the messiah! He’s the messiah!
—Life of Brian
Guy’s out walking in Manhattan when he sees a street vendor selling unmarked aerosol cans. He’s curious and asks what’s in them, and the vendor says, “Tiger repellent.” The guy points out that there are no tigers in New York City, and the vendor replies, “See how well it works?”
Certain ideas enter the world, like Athena, fully armed. Most of these are disreputable. Conspiracy theorists frequently insist that the absence of evidence for their theory constitutes proof of the power of the conspiracy; otherwise how could they cover it all up? Child “therapists” invoke the absence of any memory of sexual abuse as proof of the same; the horrific experience has been repressed. As Renee Fredrickson puts it, with all seriousness, in Repressed Memories: A Journey to Recovery from Sexual Abuse, “The existence of profound disbelief is an indication that the memories are real.” The major religions, of course, are the greatest tiger repellent of all. Good is proof of God’s wisdom and mercy; evil of his subtlety and inscrutability. Throw in a sacred text that it is blasphemy even to translate, and a standing order to slaughter the infidels, and you’ve really built something to last.
Tiger repellent also insinuates itself into more respectable precincts. In 1903 a well-known French physicist named René Blondlot announced the discovery of N-rays. Over the next three years more than 300 papers, published by 120 different scientists, enumerated some of the remarkable properties of these rays. They passed through platinum but not rock, dry cigarette paper but not wet. Rabbits and frogs emitted them. They could be conducted along wires. They strengthened faint luminosity, with the aid of a steel file.
N-rays, however, turned out to be highly temperamental. You could produce only so much of them, no matter how many rabbits or frogs you lined up. Noise would spoil their effect. Your instruments had to be tuned just so. Blondlot gave complex instructions for observing them, and still numerous physicists failed, for the excellent reason that N-rays do not exist. The American prankster physicist Robert Wood finally settled the matter by visiting Blondlot’s lab in 1904 and playing several cruel tricks on him. Wood surreptitiously removed the dispersing prism that was supposed to be indispensable to the observation of the rays. Blondlot claimed to see them anyway, and when he died thirty years later he was still firmly convinced of their existence.
It is easy to laugh at Blondlot from a century’s distance. But he was not dishonest, and many of the scientists who replicated his results were highly competent. N-rays were, on the face of it, no more improbable than X-rays, discovered a few years earlier. But a phenomenon so faint, so susceptible to external conditions, so difficult to reproduce, is tiger repellent.
All of this struck Karl Popper with such force that he attempted to erect an entire philosophy of science upon it. One sympathizes. Popper began to formulate his philosophy in the 1920s, when psychology, the largest tiger repellent manufacturer of the 20th century, was coming of age. Popper also, unlike most of his colleagues, does not give the impression of squinting at his subject through binoculars from a distant hill. He knows something of math and science and incorporates examples from them liberally. It is no surprise that of all philosophers of science only Popper, Kuhn possibly excepted, has a significant following among actual scientists.
For Popper a scientific theory must be falsifiable, by which he means that one could imagine an experimental result that would refute it. Scientific theories, it follows, are not verifiable either. No matter how many times a theory has been confirmed, no matter what its explanatory or predictive value, it is on probation, permanently. The very next experiment may blow it all to pieces.
Popper’s imaginary experimental result need not exist in our universe — some theories are true — but merely in some other possible universe far, far away. This possible universe may, indeed must, differ from ours in its particulars but may not violate the laws of logic. That 2 + 2 = 4 is necessary, true in all possible universes; that water boils at 100°C at sea level is contingent, true in ours. Science deals only in the contingent.
This distinction is essential to falsifiability. Some imaginary experimental results are valid, some are not, and this is how you tell the difference. In philosophy it has been formally known, since Kant, as the analytic/synthetic dichotomy. Analytic truths are tautologies; they are necessary; all of the information is contained in the premises. The locus classicus of the analytic is mathematics. Following Wittgenstein, Popper views math as “unpacking tautologies,” and therefore excludes it from science. It is, for him, a form of tiger repellent — useful to be sure, but tiger repellent just the same.
Trouble is, there’s no such distinction, at least not as Popper conceives it. Quine’s refutation in “Two Dogmas of Empiricism” is decisive. His arguments are well-known, if technical, and I will not recount them here. They amount to the contention that the analytic always bleeds into the synthetic and vice versa. Even mathematics turns out not to be strictly analytic, to Wittgenstein’s chagrin. If, as Gödel demonstrated, there are true statements in any formal system that cannot be reached from its axioms, how do we classify them? Are they analytic, or synthetic, or what? When Popper first published The Logic of Scientific Discovery, in 1934, Gödel was already internationally famous. Its index is replete with the names of contemporary scientists and mathematicians. Gödel’s does not appear.
The analytic/synthetic dichotomy has shown considerable staying power, its flaws notwithstanding, because it resembles the way people really think. Gerald Edelman’s theory of consciousness, for one, with its modes of “logic” and “selectionism,” maps quite well to analytic and synthetic. But the philosopher, in his hubris, elevates ways of thinking to categories of knowledge. Matt McIntosh, a convinced Popperian who rejects the analytic/synthetic dichotomy, has promised to salvage falsifiability notwithstanding. He assures me that this post will be coming, as we say in software, real soon now. (Note to Matt: I haven’t grown a beard while waiting, but I could have. Easily.)
The laws of thermodynamics, science by anyone’s standard, are probabilistic. It is not impossible for a stone to roll uphill, merely so unlikely that the contingency can be safely disregarded. Modern physics is statistical, and was in Popper’s day too. Popper acknowledges that respectable science employs probability statements all the time, which leaves him with two choices. He can admit what intuition would say, and what the Law of Large Numbers does say, that probability statements are falsifiable, to any desired degree of certainty, given sufficient trials. But then he would have to abandon his theory. By this same logic probability statements are also verifiable, and verifiability, in science, is what Popper is concerned to deny.
Hence he takes the opposite view: he denies that probability statements are falsifiable. More precisely, he denies it, and then admits it, and then denies it, and then admits it again. The lengthy section on probability in The Logic of Scientific Discovery twists and turns hideously, finally concluding that probability statements, though not falsifiable themselves, can be used as if they were falsifiable. No, really:
Following [the physicist] I shall disallow the unlimited application of probability hypotheses: I propose that we take the methodological decision never to explain physical effects, i.e. reproducible regularities, as accumulation of accidents. [Sec. 67, italics his.]
In other words, let’s pretend that the obvious — probability statements are falsifiable — is, in fact, true. Well, as long as we’re playing Let’s Pretend, I have a better idea: let’s pretend that Popper’s philosophy is true. We can, and should, admit that “what would prove it wrong?” remains an excellent question for any theory, and credit Popper for insisting so strenuously on it. We can, and should, deny that falsifiability demarcates science from non-science absolutely. Falsifiability is a superb heuristic, which is not to be confused with a philosophy.
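To see what the Law of Large Numbers buys in practice, here is a toy sketch; the coin, the sample size, and the Hoeffding bound are my own illustrative choices, not anything in Popper:

import math
import random

def falsified_at(flip, p_claimed=0.5, n_trials=100_000, alpha=1e-6):
    # Hoeffding: P(|observed frequency - p_claimed| >= eps) <= 2 * exp(-2 * n * eps^2),
    # so choose eps to make a chance deviation rarer than alpha.
    eps = math.sqrt(math.log(2 / alpha) / (2 * n_trials))
    heads = sum(flip() for _ in range(n_trials))
    return abs(heads / n_trials - p_claimed) > eps  # True = rejected at significance alpha

slightly_loaded = lambda: random.random() < 0.52  # a coin that is in fact a little off
print(falsified_at(slightly_loaded))              # with 100,000 flips, almost always True

Nothing here is deductively certain; an honest coin can in principle produce any run whatever. But the residual doubt can be driven below any threshold you care to name, which is all the working scientist ever asks of a falsification.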
(Update: Matt came through with his post after all, which is well worth reading, along with the rest of his “Knowledge and Information” series. Billy Beck comments. He seems to think I play on Popper’s team.)
Now this is an actual serious criticism, and far better than anything David Stove ever cooked up. I like it.
I’m not sure if we have anything to argue about, actually. To jump ahead and partially give away the ending, I actually do plan on making one big concession: falsification fails at providing a crisp, rigorous line to distinguish science from metaphysics, for Quinean reasons. But that’s okay, because we can still make do with a measure of greater or lesser falsifiability (or conversely, ad hocness) using information theory and computational complexity.
Many delicious nuggets in there. I’m going mining.
Popper’s demarcation is not meant to be sharp and clear; as he pointed out, some theories are more testable than others, and the situation changes with the development of theories and advances in experimental or observational technology.
The historical significance of the idea was to emancipate the philosophy of science from the rut where the positivists had deposited it. Good scientists have always wanted their theories to be testable and Popper just helped the philosophers to catch up with the scientists by getting the problems of positivism (including the problem of induction) out of the way.
As for testing probability statements, we are actually testing rival theories which predict different scatter distributions in the observations. That is a problem for statistical analysis but not necessarily for the demarcation criterion. It is not really about theory choice, but whether observations can have a potentially decisive influence in the decision.
It helps to remember two other things:
1. Popper insisted on the distinction between logical falsifiability (the relationship between a general statement and a true singular statement) and falsification in the real world which is always conjectural due to theory dependence, Quine factors and the fallibility of senses and equipment.
2. The demarcation criterion is only a small component in Popper’s epistemology which has been played up because it is the most obvious difference between himself and the positivists and empiricists. The more significant parts are the theory of five forms of criticism, and the conjectural or non-authoritarian theory of knowledge and the cognate criticism of "justificationism" (see Bartley for more on that).
You criticize Popper’s notion of falsifiability on the grounds that it requires the analytic/synthetic distinction, which you claim: 1) Quine proved does not exist, 2) does not even completely apply to mathematics, and 3) cannot handle scientific statements that are probabilistic. Quine proved no such thing, mathematical statements are analytic, and you yourself show falsifiability can be extended to handle probabilistic statements, although Popper didn’t do it. My attempt to substantiate these assertions takes up the rest of this super-long post.
Quine showed, quite brilliantly, that there was no completely philosophical account of analyticity that did not depend on equally problematic notions like synonymy, but he also admitted that there was no problem with the analyticity of logical statements like "No unmarried man is married," and statements that involve explicit definitions like "All squares are polygons." [my example] These cases appear to cover mathematics, and Quine allowed that analyticity was easy to define in artificial languages, although this shed no light on what analyticity really means. Quine did claim, as you did, that the analytic sometimes (you said always) bleeds into the synthetic, and gave one example, "Everything green is extended," which he claimed he could not classify as analytic or synthetic even though he understood the terms "green" and "extended" perfectly well. This sentence seems analytic to me, but I concede there may be difficult cases. All we need for falsifiability, however, is to be able to tell analytic from synthetic in particular cases, and just because there are some organisms which are hard to classify as animals or plants does not mean we cannot tell a hippopotamus from a violet.
As for Gödel, he did NOT demonstrate that there are true statements in any formal system that cannot be reached from its axioms. It is wrong to speak of the truth of a statement in a formal system, as statements are either theorems or (in a consistent formal system) they are not. Given a model of a formal system, a statement is either true in the model, or its contradiction is true. What Gödel showed is that in any formal system powerful enough to have the natural numbers as a model, there is a statement such that neither it nor its contradiction is a theorem, i.e., any formal system that has the natural numbers as a model also has models that are not isomorphic to the natural numbers. (This is because all the statements that are true in all models of a formal system are theorems of that system. [Gödel himself, 1929]) A true statement in a model of a formal system is true because of the interpretation of its symbols in the model, i.e., it is analytic.
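To put the point in symbols (my own compressed restatement, not Gödel's wording):

\[
\textbf{Completeness (1929):}\quad T \vdash \varphi \;\iff\; \varphi \text{ holds in every model of } T.
\]
\[
\textbf{Incompleteness (1931):}\quad \text{if } T \text{ is consistent, recursively axiomatized, and interprets arithmetic, then there is a } G \text{ with } T \nvdash G \text{ and } T \nvdash \lnot G.
\]

Combining the two: such a G is true in some models of T and false in others, so calling G "true" simpliciter only makes sense relative to a distinguished model, such as the natural numbers.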
Popper made a hash of considering the falsifiability of probabilistic statements, but there is a way to do it. The key is to ask what are the criteria for accepting a scientific statement in the first place. A deterministic statement is accepted if, among other criteria, it has no counterexamples, so any counterexample will do to falsify the statement. A probabilistic statement is accepted if there are no counterexamples to instances of the statement that have probabilities approaching 1, and so, as you say, probability statements are falsifiable, to any desired degree of certainty, given sufficient trials. That Popper didn’t allow this is a mark against Popper, not his theory.
Popper took Godel’s ideas on board later, as shown in this 1962 essay that is online at the great Critical Rationalism site set up by Matt Dioguardi.
http://groups.yahoo.com/group/StudyRoomMD/message/17
Sorry, the reference to Godel is hard to find. It is near the end of section 3. Actually it is more about Tarski than Godel.
"One immediate result of Tarski’s work on truth is the following theorem of logic there can be no general criterion of truth (except with respect to certain artificial language systems of a somewhat impoverished kind)."
"This result can be exactly established; and its establishment makes use of the notion of truth as correspondence with the facts."
"We have here an intrinsic and philosophically very important result (important especially in connection with the problem of an authoritarian theory of knowledge). But this result has been established with the help of a notion — in this case the notion of truth — for which we have no criterion. The unreasonable demand of the criterion-philosophies that we should not take a notion serious before a criterion has been established would therefore, if adhered to in this case, have for ever prevented us form attaining a logical result of great philosophical interest."
"Incidentally, the result that there can be no general criterion of truth is a direct consequence of the still more important result (which Tarski obtained by combining Godel’s undecidability theorem with his own theory of truth) that there can be no general criterion of truth even for the comparatively narrow field of number theory, or for any science which makes full use of arithmetic. It applies a fortiori to truth in any extra-mathematical field in which unrestricted use is made of arithmetic."
The key is to ask what are the criteria for accepting a scientific statement in the first place.
Yes. A convenient idealization but remarkably troublesome in theory and practice.
That Popper didn’t allow this is a mark against Popper, not his theory.
If we are now viewing it as a theory as opposed to some form of metatheory, it is inconsistent with itself. That is a mark against both.
It applies a fortiori to truth in any extra-mathematical field in which unrestricted use is made of arithmetic.
Though brilliant developments, this assessment of their impact is long since out of date:
“The famous impossibility results by Godel and Tarski that have dominated the field for the last sixty years turn out to be much less significant than has been thought.
…
The most pervasive misconception about the role of logic in mathematical theorizing may turn out to be the most important one. […] What is the role of logic in mathematics?
…
Philosophers sometimes think of the axiomatic method as a way of justifying truths that an axiom system captures as its theorems. If so, the axioms have to be more obvious than the theorems, and the derivation of the theorems from the axioms has to preserve truth. The latter requirement is discussed below. As to the former, the obviousness requirement plays no role in the most important scientific theories.
No one has ever claimed that Maxwell’s or Schrodinger’s equations are intuitively obvious. The interest of such fundamental equations is not even diminished essentially by the knowledge that they are only approximately true. The explanation is that such equations still offer an overview of a wide range of phenomena. They are means of the intellectual mastery of the part of reality they deal with. Generally speaking, this task of intellectual mastery is a much more important motivation of the axiomatic method than a quest of certainty.
Hilbert’s axiomatization of geometry is an extreme example of this fact. Hilbert does not even raise the question whether the axioms of Euclidean geometry are true in actual physical space. All he is interested in is the structure delineated by the axioms. This structure is spelled out by the theorems of the axiom system. Whether this structure is instantiated by what we call points, lines and planes or by entities of some other kind is immaterial to his purpose.
…
Hintikka, “The Principles of Mathematics Revisited”
"If we are now viewing it as a theory as opposed to some form of metatheory, it is inconsistent with itself."
I think Aaron’s dad was being a bit sloppy here. It’s not a theory in the scientific sense, it’s prescriptive methodology.
"He can admit what inituition would say, and what the Law of Large Numbers does say, that probability statements are falsifiable, to any desired degree of certainty, given sufficient trials. But then he would have to abandon his theory."
This does not follow. Did Popper ascribe some different standards to probability statements? If they are falsifiable EVEN IN THEORY then they are "scientific", as I understand it.
BTW, a "scientific" proposition to Popper can be either true or false.
If I may, and with all the genuine deference to mathematics-types I can muster, I believe there is a kind of overthinking going on here.
There’s a Simpsons episode where Lisa protests ‘isn’t that fascism?’, is told it isn’t, and when she asks why not the response is ‘because that’s not what we call it’.
Science is both a label and a method, and if we want to call mathematics "science" there’s no great harm. On the other end, I believe psychology is a science, however, it is clearly less exact in its proposals and solutions, and frankly uses more of the purely theoretical.
But to my point (such as it is): I think falsifiability can fairly, and only somewhat sloppily, be re-named "the capacity for revision based on better evidence."
I think this is more-or-less equivalent to a negative definition of science – that science is knowledge which does not rest on ‘faith’.
This might be the rhetorical equivalent of a mugging, but I don’t think any of this involves the larger question of Truth, only pragmatic truths.
A friend and I once argued about .9 repeating. He maintained that it is equivalent to "1" and cited what must be a very impressive proof.
At the end of it, I wrote down a .9 and circled it, and asked him to identify it. "point 9" he said, stifling a yawn. Well, I kept adding in .9s and asked him to name the circled number again. I did this a few times and asked him which additional .9 gives the power to change the circled number to a 1.0.
A few of you might be smiling a bit, and that’s completely understandable, but you see, .9 repeating isn’t 1 – that’s just what you call it. :0p
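(For the record, the proof he cited presumably ran along these lines; I just don't think a limit settles the ontological question:)

\[
x = 0.\overline{9} \;\Rightarrow\; 10x = 9.\overline{9} = 9 + x \;\Rightarrow\; 9x = 9 \;\Rightarrow\; x = 1,
\]
or, via the geometric series, \( 0.\overline{9} = \sum_{k=1}^{\infty} 9/10^{k} = 9 \cdot \tfrac{1/10}{1 - 1/10} = 1 \). No finite string of 9s ever gets there; the identity is a claim about what the infinite expansion is defined to denote.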
Cheers,
Kev
BTW Bourbaki, I am happy that Hintikka is known for something other than the performative utterance.
If I may, and with all the genuine deference to mathematics-types I can muster, I believe there is a kind of overthinking going on here.
Or circumlocution.
Your story of the n-rays is a good one.
Plate tectonics in the early 60s was seen as hocus-pocus.
whatever theory ur side subscribes to is truth and the other side are dunderheads and religious extremists, not worthy of a scientist’s time.
goes on today in the creation-evolution argument.
Bourbaki, have you ever seen .999> cat?
I’m aware that mathematical proofs prove the matter, and rely on the notion of ‘infinity’.
This is freely granted, and I’m not sure why you’d focus on that, use the word circumlocution (not something I’m generally guilty of, unlike pleonasm, dude!), and neglect the ontological implications inherent to insisting numbers occupy the same reality as, say, cats.
What I was trying to do in a sort of gentle way was try to say that ‘science’ and ‘falsifiability’ and the whether or not mathematics really is ‘unpacking tautologies’ all sort of presupposes Truth-with-a-capital-T, but, apart from tautologies and circular definitions, there’s no logical argument that can itself prove that "Truth" exists at all.
I’m not sure (rather, I don’t know) if the proof of .999> = 1 is itself an analytic or a synthetic statement (…Haspels???) but perhaps it doesn’t really matter.
I think falsifiability posits that a theory can never really be proven True, however, it is a strong theory to the extent it can’t be demonstrated to be false. Some theories are so strong that (perhaps like .999>) they can in practical terms be considered true. Yet, it’s one thing to be very confident a coin will never land on its edge, and another to actually do it an infinite amount of times.
The analytic/synthetic dichotomy seems to stand in this regard. For example, you could say "there are either an even, or an odd number of coins in your pocket" is a statement which is True, because it is necessarily true by definition. In other words, the "world" by definition is divided into only two possible states of existence, even or odd.
But synthetic statements are incapable of dividing the world into only two [or three, etc.] possible states of existence because, simply enough, the "world" which the statement is about might include other possible states of existence.
Falsifiability is recognizing that even if we repeatedly, consistently demonstrate that the world is a certain way, the world itself is not the same as our definition of it [and ‘odd’ and ‘even’ are essentially only definitions of things, not things-in-themselves].
Then again, the First Rule of Holes suggests I stop digging…
Mr. von Einstein,
Circling 9s? I couldn’t resist. It was a bad pun.
The developments earlier this century led some philosophers to reformulate mathematics as the science of structure. Structuralists are realists in truth-value but tend not to be realists in ontology (unlike the Platonists).
Structuralism rejects any sort of ontological independence among the natural numbers. The character of a natural number is its relationship to other natural numbers.
A person can learn about the number 2 while being ignorant of the number 3541. This is epistemic independence that doesn’t preclude an ontological relationship or its truth-value within a structure.
0.999 > cat and associated Caesar problems were issues in Frege’s logicism.
Consider a game of chess. There are a set of allowed moves. Asking "Is a pawn made of marble?", "What should I do with a bishop?", "Which moves will let me win?" or "Should I eat this sandwich?" independent of a game and opponent or structure is meaningless. Where are we in the game? Where are your pieces? How may your opponent react in the next move? In two moves? etc?
Back to probability. I don’t see how Popper’s "self-inconsistent prescriptive methodology" (heuristic?) can be equated to Kolmogorov complexity as Mr. McIntosh asserts.
Kolmogorov complexity is neither self-inconsistent nor prescriptive. Although reducing it might simplify the task of falsification.
Kolmogorov also happened to extend probability through measure theory. How do you choose a measure and a model? For accuracy? For expediency?
For top-down reasons like pragmatism?
Based on what criteria? Based on what structure?
Bour – can you dumb that down a bit for me? I’d like to follow it, and my spider-sense tells me I’d largely be in agreement with you, but I can’t be sure.
What I’m wondering now is about the interplay between probability as such and ‘degree’ of certainty.
A line from the Naked Gun comes to mind – in re: the fate of Nordberg which loosely paraphrased was ‘there’s a 50/50 chance he’ll make a full recovery, though there’s only a 10% chance of *that*.’
Perhaps this is a stupid question (I’m only slightly embarrassed to ask stupid questions) but does the probability of probability change?
Stupidity aside, that might not be entirely unreasonable a question. For example, by one theory, gravity might affect the probability of electron states – there might be a sort of gravitational ‘tipping point’ at which probability collapses to about 1. Of course, that’s just a theory…
You know, I’ll shut up now.
I don’t see how Popper’s "self-inconsistent prescriptive methodology" (heuristic?) can be equated to Kolmogorov complexity as Mr. McIntosh asserts.
This is backward. Falsifiability is inversely proportional to Kolmogorov complexity. This rests on four basic steps which I think are pretty hard to challenge:
1. Simpler theories have a lower prior probability than more complex ones.
2. Lower prior probability equals greater degree of falsifiability.
3. All scientific theories can be coded as algorithms.
4. Chaitin-Kolmogorov information is a rigorous measure of the complexity or simplicity of an algorithm (and hence of a theory).
Which part of this seems questionable?
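To make steps 3 and 4 a little less abstract, here is a toy proxy; compressed source length stands in for Kolmogorov complexity (which is not exactly computable), and the two 'theories' are invented purely for illustration:

import zlib

def description_length_bits(theory_source: str) -> int:
    # Compressed length is only a crude, computable upper bound on the
    # Kolmogorov complexity of the theory's encoding.
    return 8 * len(zlib.compress(theory_source.encode()))

# One law with no free parameters vs. an ad hoc lookup table fitted to the same data.
law = "def v(t): return 9.8 * t"
table = "def v(t): return {0: 0.0, 1: 9.8, 2: 19.6, 3: 29.4, 4: 39.2, 5: 49.0}[t]"

print(description_length_bits(law), description_length_bits(table))
# On the proposal above, the shorter description counts as the more falsifiable theory;
# the table can absorb any new observation by adding an entry, which is exactly the
# ad hocness the measure is meant to penalize.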
How do you choose a measure and a model? For accuracy? For expediency? For top-down reasons like pragmatism? Based on what criteria? Based on what structure?
As a rule, scientists should do whatever allows them to test the theory most accurately and severely, within whatever economic constraints are imposed on them. These are practical concerns that will vary with circumstances.
Mr. McIntosh,
Which part of this seems questionable?
Your original assertion of equality.
The advantage in reducing KC was already acknowledged above.
"Kolmogorov complexity is neither self-inconsistent nor prescriptive. Although reducing it might simplify the task of falsification."
But even the correlation seems tricky. Consider a 1-bit system. You’ve just falsified 0. Are you saying then that you’ve increased the falsifiability of 1?
I’m not sure what you mean by "a 1-bit system". Do you mean the algorithm is one bit, or that there’s only one bit in the output? If the former, that’s a practical impossibility when we’re talking about theories. If the latter, then obviously there is no simpler theory than the output itself, which is no theory at all in any meaningful sense of the word.
I’m taking it as assumed here that we’re talking only about theories, which are a particular subset of algorithms. Kolmogorov complexity can apply to any kind of algorithm, but I’m only concerned with the subset that constitute scientific theories. For that set, lower Kolmogorov complexity=lower prior probability=greater falsifiability.
Mr. McIntosh,
So now we have a philosophy of science involving practical scientists and approved Turing-machine compatible sets of ideas? This reads more like legislation.
Bits are the currency of Kolmogorov complexity. The Kolmogorov complexity of a bitstring x is the length of the shortest program that computes x and halts.
KC offers no methodology to determine if a given program is the shortest or whether it will ever halt even in theory.
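A quick way to see the "upper bound only" problem, with standard library compressors standing in for candidate programs (a toy example of mine):

import bz2, lzma, zlib

x = b"01" * 500  # a conspicuously regular string
for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
    print(name, len(compress(x)))  # three different upper bounds on K(x)
# None of these is K(x) itself; a shorter program may always exist, and no
# procedure can certify that any particular bound is tight.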
– Is there a minimum program length in theory that qualifies an idea for this set?
– Is there a maximum halting wait time?
– How would you rank Newtonian Gravity and General Relativity?
– How would you rank the Out-of-Africa and Multiple-Origins theories of Homo sapiens lineage?
What would these programs look like?
Science as practiced is generally more forgiving and less binding than this. It’s more about (potential) capabilities, explanatory power, and openness versus authority and faith. Although research directors are just as prone as anyone else to say “Because I say so.” Compared to the final results, the process looks and smells more like a sausage factory.
Mr. von Einstein,
What I’m wondering now is about the interplay between probability as such and ‘degree’ of certainty.
Our degree of certainty is related to our ability to weigh possible alternatives. The alternatives we consider are, in turn, driven by the rules we use. A person who is ignorant of the rules of chess would see no problem in simply reaching across the board and nabbing the king. A more experienced player might realize the futility of two knights and a king against a lone king and save another piece instead.
Probabilities constantly change. They depend on available information (filtration) and our choice of measure and model. In fact, even the rules themselves can change. For example, at typical biological energy levels, conservation of energy and matter holds (which might hint at an explanation of our "a priori" affinity to arithmetic). At higher energy levels, at least the ones we’ve investigated, only conservation of energy holds.
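A minimal sketch of how the numbers move as information arrives; the coin, the 0.8 bias, and the sequence of flips are arbitrary choices of mine:

def update(p_biased, heads, p_heads_if_biased=0.8, p_heads_if_fair=0.5):
    # One step of Bayes' rule: revise P(coin is biased) after observing a single flip.
    like_b = p_heads_if_biased if heads else 1 - p_heads_if_biased
    like_f = p_heads_if_fair if heads else 1 - p_heads_if_fair
    return like_b * p_biased / (like_b * p_biased + like_f * (1 - p_biased))

p = 0.5                                       # start undecided
for flip in [True, True, False, True, True]:  # information arriving over time
    p = update(p, flip)
    print(round(p, 3))                        # drifts up on heads, back down on tails

Change the model (allow a two-headed coin, say) or the measure, and the same flips yield different numbers, which is the point about measure and filtration above.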
Mr. McIntosh,
We’re talking about assessing the finished product.
With what?
The issue here is that there’s no methodology to calculate KC even in theory. I’m not saying you shouldn’t investigate its use but as it stands, the measurement itself is more theoretically troublesome than the theories it’s meant to evaluate.
Why burden a useful and simple non-scientific heuristic with a theoretically uncomputable Gordian metric?
In other words, why are you trying to turn it into a science?
Bourbaki,
I think we’re getting muddled here, which I’m willing to assume is my fault. First of all, my posts weren’t intended to outline any philosophy of science, and I did concede that there isn’t really any hard and fast criterion that demarcates science from everything else. I was just attempting to tighten up some older ideas (i.e. from Quine and Popper) by bringing more recent insights to bear on them, and hopefully generate a little light in the process. Consilience, if you will.
Secondly, I’m frankly confused by your remark about legislation and approval because I said no such thing. All theories are algorithms, but not all algorithms are theories; which ones are theories is not decided by any particular person(s), they just become theories by definition when we use them to make predictions about anything (i.e. "the digits of pi are…", "the path of the orbit of Mercury is…", "the products of this reaction will be…", "the evolutionarily stable strategy/ies for these conditions will be…", etc.). Trivially, it makes no sense to talk about falsifying an algorithm if it isn’t being used to predict anything.
"KC offers no methodology to determine if a given program is the shortest or whether it will ever halt even in theory."
Of course there’s no general solution to that: it’s the halting problem. So what? We can still compare between theories to find out which is more elegant/falsifiable, and we also have an upper bound on ad hocness.
"Is there a maximum halting wait time?"
This will be decided by the brute economics of the situation and is not my concern, because the economics change all the time. Following Chaitin, I’m interested in size rather than runtime.
"How would you rank Newtonian Gravity and General Relativity."
Thumb in the air, without running the calculations, Newton would appear to be more elegant. Newton is also false, but that’s a different issue; I’m only doing prior rather than posterior analysis.
"How would you rank the Out-of-Africa and Multiple-Origins theories of Homo sapien lineage?"
This is like asking you what alpha has to say about my hygiene habits.
Re: sausage factory, absolutely true. But pointing out that science is messy in practice is like pointing out that mathematicians are sloppy and make mistakes. We’re talking about assessing the finished product.
Well like I said, it was mainly an exercise in trying to link together what I saw as being related ideas. As you said to Jim way back when, we don’t need a GPS to find the bathroom and for most purposes the heuristic version works just fine.
Chaitin, who I presume knows what he’s talking about since he was one of the creators of AIT, is the one who explicitly made the connection between AIT and the elegance of theories. I just picked it up and ran with it. If it turns out he’s wrong about what is and isn’t possible with it, then I’m completely barking up the wrong tree.
Kolmogorov and Chaitin–first rate and I’ve got nothing against elegance–although I wish Chaitin would lay off the exclamation points!
I don’t want to discourage you from pursuing the idea. It’s just that the theoretical issues might indicate a dead end. If you can find a way around them, brilliant.
For the moment, let’s ignore them.
If you could effortlessly calculate this number,
(1) Which competing set of actual scientific theories would you evaluate first?
(2) How would/should scientists treat the scored collection differently than they do today? In other words, which existing factors would this score override?
They may not necessarily score differently at all. Simplicity is already considered a virtue in a theory (second only to consistency, both internal and with all observations); all this would do is tighten up the notion of what simplicity actually is.
It may indeed turn out to be a dead end, which wouldn’t be a big deal. Having thought about the issue a bit more, I’ve just realized that Quine’s criticism is not as damaging as I thought; or more precisely, it’s damaging to positivists, but not so much to anyone else. I may post on it in a few days, after I’ve re-read "Two Dogmas" at least once more and thought this out more thoroughly.
Just reread "2 Dogmas" again after 32 years. I’m afraid Aaron’s dad is correct. Type 1 analyticity is not foresworn.
What Popper was discussing was not per se a theory of truth. Instead, it was really a theory for using the encomium "science". Popper would clearly agree that there are plenty of true propositions which could not properly be called "science". Type 1 analytical statements would be included because they are not falsifiable, yet they would be true.
I’d like to aim a lot lower than "truth" or "science". I would like to know if anyone has a theory of plausibility.
If every link in a chain of reasoning is likely, is the result always plausible?
Here, as promised. Assuming I haven’t made any significant errors (famous last words!), this whole thing was rather less complicated than I thought. Personally I blame Aaron and Quine for messing me up — Popper doesn’t need analytic/synthetic at all, at least not as Quine defines it. Quine’s arguments are indeed decisive, but they also turn out to be irrelevant if I’ve got this right.
Flat Earth. Spherical Earth. Infinite Earth. All topological theories in the falsifiable sense at one time or another. Where do these "three worlds megaphysics" stand today?
Try being a little less opaque and I might be able to get a grip on the question. Taking you literally, two out of three are falsified, but this is so trivial I have to wonder if I’m missing the point.
I pointed at the sky once but only poked the eyes, said a salad covered fish.
Re: the flat earth: Were we to falsify this, it would yield similar results.
The world is not a perfect sphere. Is anything ever perfectly flat? The world is infinite in relation to nothing? (but not to a fish)
Hey Bourbaki, since I know it concerns you, I got rid of the Saab and got a Benz. I like the way the Saab drives better, but they could fix the a/c. I guess they didn’t understand Boltzmann-Gibbs or Maxwell’s demon.
Surely a good Popperian argument against Quine is just that our knowledge of the analytic / synthetic distinction is, itself, conjectural.
Yes, I may be wrong thinking this was an analytic question rather than a synthetic one. But I may be wrong in thinking that the number 7 bus stops at Golders Green. There’s no difference in the kinds of "wrong" I am. And there’s no reason to be embarrassed because I can’t be sure a priori where the frontier lies.
It’s only the justificationist who needs to worry about this kind of thing.
Off topic slightly but I wanted to congratulate aaron on his almost 6 month anniversary of adding sfa to what was otherwise an appealing weblog.
IF: I were compulsed to reach for my pistol every time I started seeing this sort of thing float up in practical politics…
THEN: I would have to start shooting at you bastards first.
Onward, through The Endarkenment.