Part 1: Starting from Zero
Part 2: Meet Eustace
Part 3: Bernoulli Trials
Part 4: Why Randomness is Not All Equally Random
Have you ever been told what to do with your life? A particular college, major, grad school, career? Isn’t it annoying that someone would presume to plot out your life for you, as if you had no say in the matter? Probability theory has a term for this (the plotting, not the annoyance): strong solution.
A strong solution is any specified trajectory for a random process. In our coin flipping game it would be the realized sequence of heads and tails. Of course Eustace can’t know such a path in advance. The best he can do is to construct a distribution of possible outcomes. This distribution is a weak solution, which is defined, not by its path, which is unknown, but only by the moments of a probability distribution. If Eustace knows a random process is stationary, he has confidence that the moments of the process will converge to the same values every time. The coin flipping game, for instance, is stationary: its long term average winnings, given a fair coin, will always converge to zero. Looking into an uncertain future, Eustace is always limited to a weak solution: it is specified by the expectations, or moments, of the underlying random process. The actual path remains a mystery.
So far we haven’t given poor Eustace much help. A weak “solution” is fine for mathematics; but being a mere cloud of possibilities, it is, from Eustace’s point of view, no solution at all. (A Eustace entranced by the weak solution is what we commonly call a perfectionist.) Sooner rather than later, he must risk a strong solution. He must chart a course: he must act.
Well then, on what basis? Probability theory has a term for this too. The accumulated information on which Eustace can base his course is called a filtration. A filtration represents all information available to Eustace when he chooses a course of action. Technically, it is an increasing family of sigma algebras: the set of events Eustace can distinguish grows as the process unfolds, and no information is ever discarded. The more of the available filtration Eustace uses, the better he does in the casino.
In the coin flipping game, Eustace’s filtration includes, most obviously, the record of the previous flips. Of course in this case the filtration doesn’t help him predict the next flip, but it does help him predict his overall wins and losses. If Eustace wins the first flip (t=1), he knows that after the next flip (t=2), he can’t be negative. This is more information than he had when he started (t=0). If the coin is fair, Eustace has an equal likelihood of winning or losing $1. Therefore, the expected value of his wealth at any point is simply what he has won up to that point. The past reveals what Eustace has won. The future of this stationary distribution is defined by unchanging moments. In a fair game, Eustace can expect to make no money no matter how lucky he feels.
His filtration also includes, less obviously, the constraints of the game itself. Recall that if he wins $100 he moves to a better game and if he loses $100 he’s out in the street. To succeed he must eliminate paths that violate known constraints; a path to riches, for instance, that requires the casino to offer an unlimited line of credit is more likely a path to the poorhouse.
We can summarize all of this with two simple equations:
E(wealth@t | F@t-1) = wealth@t-1 (first moment)
variance(wealth@t | F@t-1) = 1 (second moment)
The expected wealth at any time t is simply the wealth Eustace has accumulated up until time t-1. E is expected value. t is commonly interpreted as a time index. More generally, it is an index that corresponds to the size of the filtration, F. F accumulates the set of distinguishable events in the realized history of a random process. In our coin game, the outcome of each flip adds information to Eustace’s filtration.
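For readers who would rather see the first-moment equation run than read it, here is a minimal sketch in Python; it is a toy, with the winning first flip simply assumed as the conditioning history:

```python
import random

# Minimal sketch: check E(wealth@t | F@t-1) = wealth@t-1 numerically.
# We condition on a fixed history: Eustace won the first flip, so
# wealth@1 = 1, and we average his wealth after the second flip.

random.seed(0)
TRIALS = 200_000

total = 0
for _ in range(TRIALS):
    wealth = 1                                    # wealth@1, known from the filtration
    wealth += 1 if random.random() < 0.5 else -1  # the second flip
    total += wealth

print(total / TRIALS)  # ~1.0: expected wealth@2 equals wealth@1
```

The average lands on 1 not because the next flip is predictable, but because the ups and downs cancel: exactly the martingale property the equation asserts.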
We have also assumed that when Eustace’s wealth reaches zero he must stop playing. Game over. There is always a termination point, though it need not always be zero; maybe Eustace needs to save a few bucks for the bus ride home. Let’s give this point a name; call it wealthc (critical). Introducing this term into our original equation for expected wealth, we now have:
max E(wealth@t – wealthc | F@t-1)
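And here is the same game sketched with the termination point in place; the $50 starting stake is an arbitrary choice for illustration:

```python
import random

# Sketch: a fair coin game stopped at wealthc = 0 (ruin) or at the
# $100 graduation point. The $50 starting stake is assumed.

random.seed(1)

def play(start=50, wealthc=0, target=100):
    wealth = start
    while wealthc < wealth < target:
        wealth += 1 if random.random() < 0.5 else -1
    return wealth

outcomes = [play() for _ in range(2_000)]
print(sum(1 for w in outcomes if w == 0) / len(outcomes))  # ~0.5
```

Starting midway between ruin and graduation in a fair game, Eustace hits each boundary about half the time. The boundary, note, is part of his filtration before a single coin is flipped.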
His thermodynamic environment works the same way. In the casino, Eustace can’t blindly apply any particular strong solution — an a priori fixed recipe for a particular sequence of hits and stands at the blackjack table. Each card dealt in each hand will, or should, influence his subsequent actions in accordance with the content of his filtration. The best strategy is always the one with max E(wealth@t|F@t-1) at each turn. In this case, F@t-1 represents the history of dealt cards.
As Eustace graduates to higher levels of the casino, the games become more complex. Eustace needs some way of accommodating histories: inflexibility is a certain path to ruin. Card-counters differ from suckers at blackjack only by employing a more comprehensive model that adapts to the available filtration. They act on more information — the history of the cards dealt, the number of decks in the shoe, the number of cards that are played before a reshuffle. By utilizing all the information in their filtration, card counters can apply the optimal strong solution every step of the way.
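A toy sketch of the difference, far cruder than real card counting: a coin of unknown bias, one bettor who commits to a fixed call in advance, and one who conditions his calls on the realized history. The bias of 0.45 is an arbitrary illustrative value, unknown to both players in the story.

```python
import random

# Sketch: an inflexible strategy vs. one that uses the filtration.
# The coin favors tails (p(heads) = 0.45), but neither bettor knows it.

random.seed(2)
BIAS, N = 0.45, 100_000

sucker = counter = 0
heads_seen = tails_seen = 0
for _ in range(N):
    heads = random.random() < BIAS
    sucker += 1 if heads else -1           # committed to "heads" in advance
    call_heads = heads_seen >= tails_seen  # call conditioned on the history
    counter += 1 if call_heads == heads else -1
    heads_seen += heads
    tails_seen += not heads

print(sucker / N, counter / N)  # roughly -0.10 vs. +0.10 per flip
```

The sucker’s strong solution was fixed a priori; the counter lets the filtration revise his calls, and the same coin pays him instead.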
In the alpha casino, Eustace encounters myriad random processes. His ability to mediate the effects of these interactions is a direct consequence of the configuration of his alpha model. The best he can hope to do is accommodate as much of the filtration into this model as he can to generate the best possible response. Suboptimal responses will result in smaller gains or greater losses of alpha. We will take up the policy implications, as one of my readers put it, of all this in Part 6.
Disclaimer: Although I use the language of agency — to know, to act, to look into the future — nothing in this discussion is intended to impute agency, or consciousness, or even life, to Eustace. One could speak of any inanimate object with a feedback mechanism — a thermostat, a coffeemaker — in exactly the same way. Unfortunately English does not permit discussing these matters in any other terms. Which is why I sometimes want to run shrieking back to my equations. You may feel otherwise.
English does permit discussion in other terms, but it is in the combination of multiple terms, not in the form of one.
To mediate is to resolve as an intermediate agent or mechanism. So you are saying that Eustace, an object with feedback that is separate from the random processes it encounters, is able to resolve (or settle) the conflict between the effects (an effect is a thing brought about by a cause or agent of cause and is therefore a result of such action or happening, and is therefore a result) of the interactions of random processes and the optimal processes of Eustace such that Eustace benefits, because of (the configuration of) Eustaces alpha model. Eustace is also, according to you, capable of, via feedback as you put it, resolving the effect of random processes and Eustaces optimal processes such that the results of the random processes favor or benefit the optimal processes of Eustace. Eustace does this via optimal response which, again, must stem from Eustaces Alpha Model.
Eustace achieves this, ahem, fed back response by filtrating results (effects that come from consequential interactions between Eustaces optimal process model and any random process, which must therefore come from the Eustace alpha model) through a generative system wholly aligned to, and operating in accordance with, the understood optimal responsive operation that governs the process of what you would have us understand as optimal outcome, which again must stem from the alpha model.
So what you are saying is that Eustace is not simply an object with feedback but also an object of optimal continuance, a thing concerned with responses that generate optimal processes, processes that are allowed for by a system of governance you call the alpha model.
Again, Aaron, I fail to see how this is not simply a tautology, or a thing that defines itself by being itself, or that allows for itself only because of itself, or any other silly thing. The predictive ability and the relevance of such prediction that comes from an object of feedback wholly adherent to a governing system with an absolute understanding of optimal cannot be the resolution (however its mediated) that optimizes random precession, as randomness by nature is both chaotic and unknown, and therefore unable to be resolved via a strict and absolute standard and turned optimal. The whole of random procession has done nothing but take a backseat to a specific system whose course is pre-charted and on spotlight at center stage because, well, you told us to place Eustace there. Eustace is the afterthought of random procession, the small and ordered composition of feedback mechanisms designed to achieve an end already understood by Eustace to be actual and achievable, an end you have labeled optimal, which is the staking claim that Eustace has against itself, Eustace, simply being a random process. However, even an optimal response and an optimal occurrence are random processes were it not, again, for the alpha model, the alpha model of a feedback system that has now supplanted random process because you told us it does.
So what you are saying is that Eustace is able to determine optimal process in light of the conflicting interactions to random processes. You would have us believe this because, as per YOUR definition, Eustace has conquered all unknown chaos by establishing that an effect is an outcome and not a happening, and that through level digressions, all of course generated by a feedback response (in a humans case his brain presumably), which itself comes from the alpha model…through level digressions Eustace may achieve more or less of the absolute optimal outcome by responding to random processes in one way and not another.
All you are doing is giving the alpha model the power over all that is other. It is not evident that this is so. You are simply saying that it is because you think equations are pretty and that closed systems of logic signify absolutely and may convey this unto all systems of understanding and actuality. This, too, I believe, is a tautology.
Mr. UGRP
You are, (to quote you) ahem, putting the cart before the horse.
First things first, are you saying the equations are wrong or that they have been explained poorly?
If the language is ambiguous, Mr. Haspel should clean it up. If the derivation is wrong, you’ll need to point out the offending equations and explain why.
Let’s not make this another game about word choice.
Aaron,
People have no problem purging "agency" from their talk when it comes to inanimate matter. Historically, primitives generally all believe that everything is like themselves–i.e., conscious and volitional (neither rocks nor fish ever regard the world thus, only agents)–but as science developed, this was reduced to the true bases of consciousness (animals) and full volition (humans). Hence, I never have trouble saying that when a rock fell due to an earthquake, it was "caused," it "happened," but never, it "chose," or it "intended." No, language does not ever NEED to contain any of the language of agency when speaking of that which does not choose. Let me suggest that if the language of agency is really that inescapable in a given context, then maybe you are talking about real agents.
Also, as I’m sure you recognize, you have not proved that "Eustace" is either the pattern of biological evolution or that of human choosing yet. (Humans need not optimize their choices at all, but, much worse than lemmings, can make poor choices all the way into extinction, right? That’s also why we can "make this another game about word choice.") So far, it’s just an interesting approach to two kinds of randomness/uncertainty that (even I can see) implies some normative guidelines.
Moreover, unless I’ve missed something, this approach so far does not account for probabilities greater than the two kinds of randomness thus far explored. Some things are known with a high degree of probability and others with complete certainty (or you could never have gotten this theory off of the ground.)
"Randomness" (leaving the cutting-edge and theoretical assertions of quantum physics aside) is the epistemic state of relative ignorance. If all of the involved factors are known, the coin-toss outcome would be a certainty. Many things I have considered in life have gone from a mere possibility, to a degree of probability, to a contextual certainty, as my level of understanding in the area has improved.
Set me straight, boss…
You can combine the terms of agency all you like, and you still wind up with agency. Nor should we be surprised. After all, we were hanging dogs for killing sheep no more than a couple hundred years ago, and we still worship gods and search for a purpose to the cosmos. Humans are wired to find agency everywhere, and our language can only be expected to reflect that fact.
Mr. Valliant,
I believe Mr. Haspel is attempting to balance two opposing demands: making some very challenging equations intuitively appealing in prose, without corrupting their actual consequences.
"Eustace is acted upon by energy. Modifications made on Eustace by these forces may change Eustace’s configuration. These new configurations will, in turn, react differently to subsequent forces."
This is more accurate but, some would say, not quite as appealing as picturing Eustace evolving as he travels through space or plays in a casino. Recursive probabilities make my head hurt. Again, it is easy to argue for either approach but fortunately, there are mathematical equations that can be invoked and challenged to dispel any ambiguity.
"Also, as I’m sure you recognize, you have not proved that "Eustace" is either the pattern of biological evolution or that of human choosing yet."
The proof is not finished so I don’t think Mr. Haspel has proved anything at all yet. Let’s not get too far ahead of ourselves before the next post. It appears that Mr. Haspel has to deal with his own Theodoric in between posts.
This has been proposed thus far:
We do know that all living organisms process energy to sustain themselves. Maximizing entropy, S, for any living system will cause it to "die".
Alpha is a metric that measures the coherence or dissipation of energy flux. All interactions leave a thermodynamic wake for which we can calculate alpha. If a single Eustace or a whole species interact with their surroundings so that alpha is reduced, each will dissipate (go extinct).
As stated in the current post, the optimal set of choices (an anticipating strong solution) is by no means guaranteed for any Eustace. So poor choices are not only possible, but common. As far as I’m concerned, I don’t need any further evidence to prove that I have the capacity to make poor choices.
"Moreover, unless I’ve missed something, this approach so far does not account for probabilities greater than the two kinds of randomness thus far explored."
Contextual certainty (as you put it) was also invoked in the post.
Let M = moment
M1 = E[f(x)@t | F@t-1] = y
M2 = variance[f(x)@t | F@t-1] = 0
M3 = 0
…
Mn = 0
where all moments beyond the first are zero. In other words, nothing in the available filtration offers any uncertainty in the outcome of the event. Mr. Haspel invoked this himself when he asserted that the moments of a stationary process (fair coin tosses) can be known with absolute certainty to converge via the Central Limit Theorem.
Mr. UGRP
Here we go again. Mr. Haspel kindly reformatted your post to make it more manageable–sort of.
If you are going to core dump your brain, please spare us the Montezuma’s revenge. It’s a royal pain to thresh through so much unsubstantiated, rambling conjecture.
To mediate is to resolve as an intermediate agent or mechanism.
A configuration of Eustace is an intermediate between thermodynamic states: before the energy transfer and after the energy transfer.
The concept may be phrased other ways without using the word ‘mediate’ if that is what troubles you.
So you are saying that Eustace, … (the configuration of) Eustaces alpha model.
This is one sentence? You gotta do something about the runs…maybe stop drinking the water?
Eustace is an open thermodynamic system defined within a volume of space. A Eustace that does not process energy to maximize alpha has a greater likelihood of dissipation. See statistical thermodynamics.
Eustace is also, according to you, capable of, via feedback as you put it, resolving the effect of random processes and Eustaces optimal processes such that the results of the random processes favor or benefit the optimal processes of Eustace. Eustace does this via optimal response which, again, must stem from Eustaces Alpha Model.
The configuration of Eustace may contain coupled processes that affect each other. You’ve got it backwards…again.
Eustace may respond in any number of possible ways–if the consequence is a reduction of alpha, Eustace’s physical stability will be compromised (entropy will increase).
Eustace achieves this, ahem, fed back response by filtrating results (effects that come from consequential interactions between Eustaces optimal process model and any random process, which must therefore come from the Eustace alpha model) through a generative system wholly aligned to, and operating in accordance with, the understood optimal responsive operation that governs the process of what you would have us understand as optimal outcome, which again must stem from the alpha model.
How did you conclude this? There is no "understood optimal responsive operation". The optimal response does not "come from" the alpha model.
For any volume of space there is some available amount of free energy. When that energy interacts with the system, there are thermodynamic consequences that can be measured in alpha.
There is a conservation law that dictates a maximum possible alpha that can be generated for any given dG.
So what you are saying is that Eustace is not simply an object with feedback but also an object of optimal continuance, a thing concerned with responses that generate optimal processes, processes that are allowed for by a system of governance you call the alpha model.
You seem to be the only one saying this. The Eustace that has the greatest likelihood of continuing is the one that generates the greatest alpha for a given dG.
Again, Aaron, I fail to see how this is not simply a tautology, or a thing that defines itself by being itself, or that allows for itself only because of itself, or any other silly thing.
Perhaps you first need to learn the definition of ‘tautology’.
The foundation of the theory is empirical. You are free to do experiments to disprove the laws of thermodynamics. Recursive application of the process is not tautology.
The predictive ability and the relevance of such prediction that comes from an object of feedback wholly adherent to a governing system with an absolute understanding of optimal cannot be the resolution (however its mediated) that optimizes random precession, as randomness by nature is both chaotic and unknown, and therefore unable to be resolved via a strict and absolute standard and turned optimal.
Wholly adherent to what governing system?
The whole of random procession has done nothing but take a backseat to a specific system whose course is pre-charted and on spotlight at center stage because, well, you told us to place Eustace there. Eustace is the afterthought of random procession, the small and ordered composition of feedback mechanisms designed to achieve an end already understood by Eustace to be actual and achievable, an end you have labeled optimal, which is the staking claim that Eustace has against itself, Eustace, simply being a random process.
Mr. UGRP, if you can tell us more about the future of random processes, we are all ears. Us mortals are confined to working with information that we can actually observe and confirm.
However, even an optimal response and an optimal occurrence are random processes were it not, again, for the alpha model, the alpha model of a feedback system that has now supplanted random process because you told us it does.
This is completely wrong. For a given dG, there is a *fixed* maximum alpha. Again, you can verify that simply by looking into the thermodynamics.
So what you are saying is that Eustace is able to determine optimal process in light of the conflicting interactions to random processes.
What you’re saying is that you didn’t read the post. Mr. Haspel clearly stated that strong solutions are not possible.
You would have us believe this because, as per YOUR definition, Eustace has conquered all unknown chaos by establishing that an effect is an outcome and not a happening, and that through level digressions, all of course generated by a feedback response (in a humans case his brain presumably), which itself comes from the alpha model…through level digressions Eustace may achieve more or less of the absolute optimal outcome by responding to random processes in one way and not another.
There was no definition.
These consequences were derived from physical laws that you are free to contest via experiment.
All you are doing is giving the alpha model the power over all that is other.
Ok, Geronimo, if there is power of another that is greater than the power of thermodynamics, please enlighten us.
It is not evident that this is so. You are simply saying that it is because you think equations are pretty and that closed systems of logic signify absolutely and may convey this unto all systems of understanding and actuality. This, too, I believe, is a tautology.
It is not evident to you because you clearly haven’t read the posts.
Unfortunately, this stuff is pretty hard. Playing with words and definitions will not get you anywhere–you’re going to have to either ask for clarification so you understand what you’re challenging or challenge the underlying equations directly. I recommend you pick up with the definition of ‘tautology’ and check back when you’ve sorted it out.
Bourbaki,
Let me put it this way. Aaron is indeed talkin’ more than tautology, as has been claimed here, since he is discussing basic physics. This is about the world, not just "randomness," if still quite abstractly.
However, "randomness" is NOT a concept pertaining to reality apart from human consciousness — it directly refers to the contextual limitations of human knowledge. True "randomness," a state of considerable ignorance, perforce extends us to the outer limits of the range of possible outcomes. What Aaron will (at best, so far) be able to say is something like: "all other things being equal, alpha tells us that outcome X will eventually transpire." It’s that "eventually" that is the whole secret to life. It is the various "configurations," "strategies," and transitory states where all the juice is, no? The when, the how, the who, of it all. The universe will one day become a diffuse spread of micro-particles according to some physicists. So what? It’s in all of the in-between states where my interests are to be found.
The idea of "agency" is just a case in point. I know that Aaron’s thinking here is "agency-neutral" (at least as much as one can be–I’ll not start THAT argument here). However, how does our Eustace-unit become a better optimizer over time, from a single cell to a walk on the moon? By developing consciousness, then by developing self-consciousness, i.e., by becoming an agent. Once an agent, things speed-up, but only on the agent’s schedule.
I will wait for the full theory and proof, and I believe that the very clever Aaron must have a good answer to this, but it needs an answer. How much real substance can we get from all of this? Isn’t the real substance in things somewhat more certain than the merely "random"?
You did not imply it was a tautology–that was Mr. UGRP.
"all other things being equal, alpha tells us that outcome X will eventually transpire."
No. Unfortunately, here both you and Mr. UGRP have gotten it backwards. Alpha tells us no such thing about the future–we need a probability model to tell us whether something might happen.
And you’re right–this model can only be based on information that is actually available to us (the filtration). This probability model may involve a variety of distributions (binomial, Gaussian, Poisson, chi-square, etc.) that may or may not fully capture the underlying random process. I believe Mr. Haspel focused on the main ones to avoid giving a full course on probability. This probability model may involve predicting alpha.
Alpha is itself a measure of what has happened in thermodynamic terms. That’s why I used the term thermodynamic wake in the response above. The consequences of energy flux can be directed toward increasing or decreasing alpha for Eustace. We know that alpha decay leads to entropy maximization (death).
Let’s stay focused on what has been presented rather than draw premature conclusions. Mr. Haspel’s disclaimer simply stated that he doesn’t want to deal with agency at this point. So let’s hold off until it’s actually addressed.
However, how does our Eustace-unit become a better optimizer over time, from a single cell to a walk on the moon? By developing consciousness, then by developing self-consciousness, i.e., by becoming an agent.
Care to guess what happens to Eustace’s alpha as he goes through these stages of evolution, e.g. molecules coupled and arranged in certain ways to propagate signals?
I would think philosophers willing to assert the ought might take a greater interest in discovering what life is. I don’t want to jump ahead and spoil the ending, but in the meantime, please offer any metric (e.g. size, weight, color, Kolmogorov complexity, Chaitin complexity, etc.) that is a better alternative for capturing the dynamics of living systems.
I have a question: when the change in available energy equals the change in heat content of a system minus the temperature multiplied by the dispersal of energy (dG = dH - TdS, if I follow), and the result can be either positive or negative, why must the tractable result be one, and not, say, 0.8394?
Is not (a) simply an expressed relationship to G? Also, what could make the heat content of a system entirely dissipate?
How does this make (a) the measure of coherence of a system? Coherence is a consistent relationship between separate components, but what you are speaking of is a massive convergence of interaction-dependent components. Perhaps (a) is simply commentary on the interaction, but it can hardly be the standard of consistency, can it? Is not consistency something always eventual? Is not (a) in your equation actually a reflection of outside forces and their THEN eventual interactions, assuming of course that the second law holds true, which seems sage to say, but what of impediments like covalent bonds? Are they not also then an expression of coherence within a system, similar in most interactive ways, at least conceptually, to (a)?
I never even took biology, let alone chemistry or physics, so I’m stupid at this. But these questions seem natural. Someone please explain.
I don’t plan on visiting a casino ever. 🙂
I wrote what I did as an attack against the language Aaron used, not to attack the equations, but to show that the equations were using the language of agency to be expressed, and that the flaws of agency are evident in the strictly conceptual sense of thermodynamics you keep suggesting I look into.
If I were attacking the equations I woulda, but I was attacking the language, because I feel agency is not a necessary part of this idea’s expression.
Even if it makes it easier, as Bourbaki said, to consider a casino, I find it makes it more difficult, because I am imagining empty space and particle relationships and energy transfer as a heads-or-tails flip of a coin.
Am I wrong in so doing?
Mr. Whigham
The relationships are not arbitrary–they’re simply a consequence of algebra applied to the first and second laws. No matter what you do, you can’t make the algebra produce 0.8394. Covalent bonds are accounted for in dH (enthalpy).
Mr. UGRP
I agree that agency is not necessary to express these ideas. And you are right in that its inclusion can confuse things but the task of conveying the subtleties of probability and physics while minimizing the use of equations can lead to some pretty challenging language.
Perhaps the remaining posts will clear things up. If not, they will certainly need to be revised.
Mr. Whigham,
To be considerate to others who are actually interested in the topics presented in this post, i.e. strong and weak solutions, filtrations, etc., please post your questions in the appropriate thread.
It’s fine that you don’t know physics or chemistry but please don’t use that as an excuse to inject random off-topic questions.
Bourbaki,
Sorry for the inexact shorthand, so, putting it better: "We know that alpha decay leads to entropy maximization (death)." Wow, we can really take that one to the bank! Does it even yet explain the various life-expectancies of differing organisms? I’m much more interested in such details than in theoretical elegance of any kind.
It also seems to me that any normative advice this might offer (so far) requires a great deal more. I will repeat: How much real substance can we get from all of this?
And, yes, I can wait.
(P.S. You are still the same needlessly rude jerk you’ve always been, I see.)
a) The Red Sox hire Bill James as consultant.
b) The Red Sox then proceed to win the World Series.
Was hiring James a strong solution or a weak solution?
If the Red Sox are Eustace, does hiring James increase or decrease alpha?
Crawl, walk, run. Remember the freshman’s screed:
"How does learning math and physics get me any closer to a Porsche?"
And, yes, I can wait.
But what about your assignment to offer alternative measures that we can discuss while Mr. Haspel has a chance to actually finish the argument?
Dog ate it?
Mr. Kaplan,
Quick–your own assignment.
Find the first 10-digit prime number occurring in consecutive digits (i.e., the decimal expansion) of the mathematical constant ‘e’.
Can’t figure it out (without Googling for it)? This mathematics is clearly witchcraft.
Bourbaki,
7427466391
Now find the first 11-digit prime, asshole.
When I stopped taking math in 7th grade after finishing the course in tensor mathematics and continuum mechanics, I let my skills slip a bit. It took me a good half hour to figure that one out. Because as von Neumann said, you never really understand mathematics, you just get used to it. So I guess what von Neumann was saying was that math IS witchcraft.
Are you conflating the development of a theory with straight-up number crunching? It seems like the game is to swing for the fences and extrapolate conclusions without
(1) fleshing out the tools
(2) following through the whole argument
(3) considering simple cases
But hey, I’m certain no one else took tensor calculus and continuum mechanics in the 7th grade.
And you only drive a Saab? What went wrong?
Bourbaki,
Hint: The 11-digit prime solution takes little more time to solve than the ten-digit one. Just cut and paste into Word and use the "find" function. No calculation required. But before you can figure out what to do, you have to understand what is being asked.
And the Saab is a safe but quirky car that lulls people into thinking I’m a liberal. I love that the ignition is where the transmission should be.
Congratulations. Unfortunately, using Word as a sieve seems to have let the point pass right through.
The cycles needed to compute a solution in practice have nothing to do with its correctness.
So before calculating the alpha of AC/DC with and without Bon Scott, we might want to aim the cross-hairs a bit lower.
That way we can all lull ourselves into thinking we’re special.
"If the language is ambiguous, Mr. Haspel should clean it up. If the derivation is wrong, you’ll need to point out the offending equations and explain why?"
It was difficult for me to understand the equations because the language of the paper was confusing to me. I could not conceive of what he was trying to say because the words made his point, to my mind, jumbled. Here is a breakdown of what I mean.
"A strong solution is any specified trajectory for a random process. In our coin flipping game it would be the realized sequence of heads and tails. Of course Eustace can’t know such a path in advance."
A strong solution is any specified trajectory for a random process. I can specify something without knowing it will happen, and realize something and be wrong.
Eustace can’t know whether the coin will flip heads or tails, but it can know heads and know tails. A random process is likely to have as many possibilities as it does few, in that it can either be this or not this, so long as it’s understood that this has a great degree of variables. To highlight my point, I would show you how in the coin flipping game you are minimizing the extent of all reality in favor of concise persuasion, because you are saying the coin will be either heads or tails but, just maybe, it could also land on its side facing up. Are you telling me when the coin is flipped this is not possible?
"The best he can do is to construct a distribution of possible outcomes. This distribution is a weak solution, which is defined, not by its path, which is unknown, but only by the moments of a probability distribution."
If Eustace is a volume of space how can it conceptualize an outcome?
"If Eustace knows a random process is stationary,"
What does stationary mean? The dictionary says it means not moving, or unchanging, or not capable of being moved, or fixed. I guess you mean fixed, right?
"he has confidence that the moments of the process will converge to the same values every time. The coin flipping game, for instance, is stationary: its long term average winnings, given a fair coin, will always converge to zero."
Unless the coin lands standing up? I mean, no one has ever flipped a coin forever to see if this actually happens. I don’t understand the example, probably because I don’t understand physics or chemistry, which is fine, I’m told.
"The actual path remains a mystery."
So what you are saying is that Eustace cannot know the path of a Gaussian random process, meaning what comes after each instance, or moment, but he can know for certain what will happen after the path ends? How can an example that requires a play on into infinity, like the coin, assuming it never lands on its sides, for the result of the path to be known, stand as a model of relevance for something that is not infinite, like Eustace, which can always lose all of its alpha? It cannot, so of course…
"Sooner rather than later, he must risk a strong solution."
How is it a risk? If the weak solution is nothing, then the strong solution is the only option, correct? And I don’t see how you can extrapolate that someone concerned with weak solutions is a perfectionist. It seems to me that they would be an ineffectualist.
"In the coin flipping game, Eustace’s filtration includes, most obviously, the record of the previous flips. Of course in this case the filtration doesn’t help him predict the next flip, but it does help him predict his overall wins and losses. If Eustace wins the first flip (t=1), he knows that after the next flip (t=2), he can’t be negative. This is more information than he had when he started (t=0). If the coin is fair, Eustace has an equal likelihood of winning or losing $1. Therefore, the expected value of his wealth at any point is simply what he has won up to that point. The past reveals what Eustace has won. The future of this stationary distribution is defined by unchanging moments. In a fair game, Eustace can expect to make no money no matter how lucky he feels."
The coin never lands on its side. Ever. Yet, even assuming this, the idea that it MIGHT would have to be included in his filtration, just as the idea that the coin could land in a vat of acid and dissolve, thus negating the final flip (so don’t stand so close to that volcano), and also, of course, the filtered idea that this coin is, in fact, a FAIR one.
"To succeed he must eliminate paths that violate known constraints"
Fine, but what is the known constraint? Your logic for his success is wrong so far. Also, this stuff is tough. I don’t see the model you are making for Eustace anymore. You have given us too many exceptions and instances of Eustace. It is no longer random space. In fact I think you are saying Eustace is a card counter that acts on more information: the history of the cards dealt, the number of decks in the shoe, the number of cards that are played before a reshuffle.
The best possible response for Eustace is the one that generates the most alpha (dH?), which is the one that, what, creates the most negative entropic process and the least positive? I don’t see the turn of the card as the positive or negative process, but the exchange rate of the BETTING as the positive or negative? The idea is win the hundred or go out on the street and die from no alpha. Right? Bourbaki, put the child down and help me 🙂
By the by, Bourbaki, he does not say that a strong solution is not possible, he says that it is not possible to KNOW if this strong solution is right…all a strong solution is is a realized understanding of a "fixed" process…
or am i missing the language again?
Boys, boys: put down the heavy artillery, back away slowly, and let’s review. If we had shown only that "alpha decay leads to death" Jim would be quite right in disparaging the enterprise. In fact we have established a great deal more: that alpha is the measure of any system’s sustainability, that it is a dimensionless, measurable quantity, and that the history of any Eustace can be broken down into a series of discrete trials (admittedly a very long series). Which means all actions are commensurable. Which means that ethics becomes an engineering problem. Which strikes me as one bankable proposition, for openers.
Would my critics have asserted any of this before reading this series? Would they deny it now?
An example from my own field, software, may clarify the whole "volition" business. We developers often speak of software objects as "knowing" (the data to which they have direct access). We speak of them as "behaving" (the methods that they can perform). We even, occasionally, impute to them such emotions as "envy" (object A seems to have a special yen for object B’s data). Envy bodes ill for software objects, just as it does for humans.
Now obviously we don’t believe that these objects are volitional and we don’t especially care. We know what we’re talking about and the words are the best available to describe what’s going on. Same deal in alpha theory. Eustace is not literally a card-counter at the blackjack table, and it doesn’t matter if his alpha model is a product of evolution, programming, "volition," or magic pixie dust. What counts is how effectively the alpha model uses the filtration to generate alpha. That is all.
A couple final notes: the coin flips are a metaphor too, and if you’re worried about coins landing on their sides or in volcanos, we will simply call any time the coin comes up heads a win, and any other outcome a loss. Voilà, we’re back to Bernoulli trials.
When the Red Sox hired Bill James, this was a strong solution, as all actions are. I believe I was quite explicit on this point.
What were the alpha consequences? Beats me. The point is, there were some, and whether it was a wise or unwise thing to do depends entirely on what they were. You find the engineering problem difficult; it is difficult. But it is an engineering problem. And humans do better with those than just about anything else.
I’m too full to be curmudgeonly. Hopefully this is a somewhat helpful response to Mr. Whigham’s questions.
To highlight my point, I would show you how in the coin flipping game you are minimizing the extent of all reality in favor of concise persuasion, because you are saying the coin will be either heads or tails but, just maybe, it could also land on its side facing up. Are you telling me when the coin is flipped this is not possible?
There is a finite probability that the coin will land on its edge. But this possibility is exceedingly small compared to the likelihood of heads or tails so the long run average will converge to 50/50 (for a fair coin).
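A quick sketch of why (the edge probability of 1 in 10,000 is an arbitrary stand-in; any suitably tiny value behaves the same):

```python
import random

# Sketch: a three-outcome coin. The rare edge landing barely
# perturbs the long-run average, which still converges near zero.

random.seed(3)
P_EDGE, N = 1e-4, 1_000_000

total = 0
for _ in range(N):
    r = random.random()
    if r < P_EDGE:
        pass                               # edge: neither win nor loss
    elif r < P_EDGE + (1 - P_EDGE) / 2:
        total += 1                         # heads
    else:
        total -= 1                         # tails

print(total / N)  # ~0.0
```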
In terms of constraining the possible outcomes, you are correct in that most circumstances don’t have a finite number of possible outcomes. However, each of these outcomes has an associated alpha that either:
(1) increases
(2) decreases
(3) stays constant
If Eustace is a volume of space how can it conceptualize an outcome?
This is the issue of agency again. It wasn’t meant literally–rather, if Eustace is in a conformation that is adapted to the type of random disturbances in its environment, it will have a reduced probability of dissipating away.
What does stationary mean? The dictionary says it means not moving, or Unchanging, or Not capable of being moved, or fixed,. I guess you mean fixed, right?
This is a specific term from probability; it means that the long term values of the moments of a random process always converge to the same value.
Unless the coin lands standing up? I mean no one has ever flipped a coin forever to see if this actually happens. I dont understand the example, probably because I dont understand physics or chemistry, which is fine Im told.
This outcome won’t affect the long term average–it’s too rare.
So what you are saying is that Eustace cannot know the path of a Gaussian random process, meaning what comes after each instance, or moment, but he can know for certain what will happen after the path ends?
Careful. Moment has a specific meaning in this context–it is not an instance of time.
After anything happens, there will be thermodynamic consequences to measure. These can be calculated after each step so Eustace can know what has happened.
This information becomes part of the filtration and can be later used to build a probability model to estimate what might happen.
How can an example that requires a play on into infinity, like the coin, assuming it never lands on its sides, for the result of the path to be known, stand as a model of relevance for something that is not infinite, like Eustace, which can always lose all of its alpha? It cannot, so of course…
The coin is free to land on its side–but again, the probability of this event is exceedingly small. You are free to include it if you wish.
The path is infinite. At the end of each step (any energy flux) the alpha can be calculated–so there’s a picture of what’s going on during the entire path.
Players in Vegas don’t play until infinity yet the asymptotic rules of probability still apply to their fortunes.
How is it a risk? If the weak solution is nothing, then the strong solution is the only option, correct?
It is a risk because choosing one actionable path means that you haven’t chosen any of the other actionable paths.
And I don’t see how you can extrapolate that someone concerned with weak solutions is a perfectionist. It seems to me that they would be an ineffectualist.
This was mentioned very briefly so you’re right–it’s not as clear as it could be. A weak solution doesn’t define an actionable path. A weak solution defines the moments of a distribution based on information available in the filtration.
A perfectionist has trouble finishing a task–there always seems to be some piece of information or inspiration that is missing from the filtration that is vital before they will commit to a particular path over all the other possible paths.
The coin never lands on its side. Ever. Yet, even assuming this, the idea that it MIGHT would have to be included in his filtration, just as the idea that the coin could land in a vat of acid and dissolve, thus negating the final flip (so don’t stand so close to that volcano), and also, of course, the filtered idea that this coin is, in fact, a FAIR one.
You are absolutely correct. And you’re touching on the reason why a filtration is critical to gauging an optimal solution.
All of these can be included in your calculations–they were excluded to keep things simple. Of course, in a practical sense, you should only consider these issues if there’s evidence to suggest they’ll have an effect and if that effect will have appreciable consequences on the outcome.
Fine, but what is the known constraint? Your logic for his success is wrong so far.
Mr. Haspel made a reference to a martingale betting process in which, roughly, it is possible to always win provided that you have an infinite amount of money to bet and the casino has no betting limits.
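For the curious, here is a sketch of that betting process with the constraints restored; the $127 bankroll and $64 table limit are arbitrary choices, picked so that one full run of losses exhausts the bankroll exactly:

```python
import random

# Sketch: the classic martingale system (double the stake after every
# loss) under a finite bankroll and a table limit. Each cycle nets $1
# unless a constraint is hit, in which case the losses stand.

random.seed(4)

def one_cycle(bankroll=127, table_limit=64):
    lost, bet = 0, 1
    while bet <= table_limit and bet <= bankroll - lost:
        if random.random() < 0.5:
            return 1       # a win recovers all prior losses, plus $1
        lost += bet
        bet *= 2           # double after every loss
    return -lost           # busted or limited out

results = [one_cycle() for _ in range(100_000)]
print(sum(results) / len(results))  # ~0: the rare wipeout cancels the small wins
```

With bets of 1, 2, 4, … 64, seven straight losses cost exactly $127, and that happens once in 128 cycles on average: (127/128)($1) + (1/128)(-$127) = 0. Remove the constraints and the zero never shows up, which is why the casino never removes them.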
Also, this stuff is tough.
No argument here–it’s difficult and subtle. It combines recursion and non-linear processes so it’s very easy to get a headache.
I don’t see the model you are making for Eustace anymore. You have given us too many exceptions and instances of Eustace. It is no longer random space. In fact I think you are saying Eustace is a card counter that acts on more information: the history of the cards dealt, the number of decks in the shoe, the number of cards that are played before a reshuffle.
The different examples of Eustace are used to illustrate the various principles needed to understand the consequences of alpha: thermodynamics, probability, filtrations, etc.
But all of them are bound by one common thread: each is a system that responds to external events by using a model of the filtration.
The best possible response for Eustace is the one that generates the most alpha (dH?), which is the one that, what, creates the most negative entropic process and enthalpy and the least positive?
Right–although (dH) is enthalpy and is only a portion of the definition of alpha.
I don’t see the turn of the card as the positive or negative process, but the exchange rate of the BETTING as the positive or negative? The idea is win the hundred or go out on the street and die from no alpha. Right?
Don’t mix the analogies too much–in the casino, all that matters is wealth. So to be successful at a casino you need to
(+) develop winning strategies
(+) respond to changes in the games and players
(+) reduce losing strategies
This is very simple; but carrying it out optimally depends on a great deal of information and processing power and/or the opportunity for a lot of random trials.
By the by, Bourbaki, he does not say that a strong solution is not possible, he says that it is not possible to KNOW if this strong solution is right…all a strong solution is is a realized understanding of a "fixed" process…
A strong solution is simply a realized path for a random process. If this path is created with more information from the filtration, it has a higher likelihood of being successful.
This is where agency throws us off–think of evolution as an iterative process that is constantly filtering Eustaces that have winning and losing strategies. Alpha is a flux–it’s dynamic.
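A crude sketch of that filtering (the per-generation survival odds stand in for the quality of each Eustace’s strategy; both numbers are arbitrary):

```python
import random

# Sketch: evolution as iterated filtering, no agency required.
# Two kinds of Eustace differ only in their per-generation odds of
# surviving a random shock; the environment does all the "choosing."

random.seed(5)
population = [0.90] * 500 + [0.99] * 500   # fragile vs. robust configurations

for _ in range(50):                        # fifty generations of shocks
    population = [p for p in population if random.random() < p]

robust = sum(1 for p in population if p == 0.99)
print(len(population), robust / len(population))  # survivors are ~99% robust
```

No individual Eustace optimized anything; the distribution shifted because the losers kept dropping out of it.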
Systems that are out of equilibrium are often observed in autocatalytic processes, during which one of the compounds present in the reaction medium increases the rate of the reaction that gives rise to it or, conversely, decreases the rate of the step by which it is consumed.
When the concentration of certain reagents exceeds a threshold, the reaction organizes itself spontaneously into a periodic process and proceeds alternately in one direction or the other.
The earliest observations of such processes were made in the 1950s by Belousov and Zhabotinsky. The theoretical aspects of such dynamic systems have also been worked out.
In the world of living organisms, feedback loops based on autocatalysis or inhibition (the presence of substances that accelerate [catalysts] or retard [inhibitors] processes) are the primary mechanisms by which these reactions are coupled. Living things function in a domain remote from equilibrium, where energy-consuming processes are directed towards ordering and dissipative processes are directed out of the system (as waste).
Although this was not their original goal, the work of Prigogine et al. made a major contribution to the analysis of the chemistry of life, kept remote from equilibrium by a flow of energy. See the Bénard instability.
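For the numerically inclined, a minimal sketch of such oscillations. The Brusselator below is a standard textbook idealization of autocatalytic kinetics, not the actual Belousov-Zhabotinsky chemistry, and the parameter values are merely illustrative:

```python
# Sketch: the Brusselator, a textbook autocatalytic model. For
# B > 1 + A**2 the steady state is unstable and the concentrations
# oscillate spontaneously, the periodic behavior described above.

A, B = 1.0, 3.0    # illustrative parameters (B > 1 + A**2)
x, y = 1.0, 1.0    # initial concentrations
dt = 0.001         # Euler time step

for step in range(60_001):
    dx = A + x * x * y - (B + 1.0) * x   # x catalyzes its own production
    dy = B * x - x * x * y
    x, y = x + dx * dt, y + dy * dt
    if step % 10_000 == 0:
        print(f"t = {step * dt:4.0f}   x = {x:6.3f}   y = {y:6.3f}")
```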
thankful you were full…
Have any of you read Douglas Hofstadter’s Gödel, Escher, Bach: an Eternal Golden Braid?
When he talks about the way memory forms in those loops of information, is this similar to the kind of feedback Eustace possesses, or an opposite kind?
I found what I was looking for, and yes, Aaron’s post and yours, Bourbaki, were very, very helpful…
"The cognitive modeling at CRCC is based on the thesis that mental activity consists of many tiny independent events and that the seeming unity of a human mind is merely a consequence of the regularity of the statistics of such large collections of events. Thus the metaphor of the "intelligent ant colony" and the image of "active concepts" (as set forth in the book "Gdel, Escher, Bach") have inspired our models for over two decades."
Aaron,
Is this it, or do you have more to go?
One more piece. We haven’t yet got to the universal maximization function. That will be next, after which I can start to derive some real cash value from these concepts, and try to make you and Jim happy.
OT, Jim, I think Rearden Metal has been invented by, of all things, the Oak Ridge National Labs. They have developed a non-crystalline steel with way better anti-corrosion and strength characteristics than regular steel. It is also non-magnetic. I want to get my wife a bracelet of the stuff. I wonder what Ms. Rand would say about Rearden Metal coming from a government lab. (I think I know, I read her essays on the space program.)
what the hell is bill talking about? who is OT jim?
ms.rand
ayn?
everyone should eat more 3.1415
Lucid and brilliant, Aaron.
I certainly could never have guessed at the global scope of the derivations in your theory, but many of the meta-physical conclusions thus far are not at all surprising. I never needed physics to tell me that "all actions are commensurable." In a similarly general sense as you (so far), this is a premise, not a conclusion of any integrated theory of physics. (I concede that this is a discovery of inference, not an axiom, of course.) "Which means that ethics becomes an engineering problem." I already knew this, too, in a similarly general sense, without benefit of alpha. Now, I really will be astonished–floored–if you can actually reduce every ethical decision to a few formulae.
The following is where the meat is, right? "…that alpha is the measure of any system’s sustainability, that it is a dimensionless, measurable quantity, and that the history of any Eustace can be broken down into a series of discrete trials (admittedly a very long series)." This is what I still need to really scrutinize. I could say a lot about "philosophy" and definitions and underlying premises, but we’ve been there and done most of that…and, obviously, Newtonian mechanics and volition are two incommensurable things, if only at a finer level of analysis, right? It’s simply the level of generality, so far, that concerns me. Impatient, yes, but I want the answers at the back of the textbook!! I suspect that "the universal maximization function" is what I really need, or, like Mick Jagger, I can’t get no satisfaction.
Jim,
Yes, I too can’t wait for the next part. Then I will skewer and dice the theory (which BTW is a wonderful example of failing upwards. Aaron really has advanced some important and profound ideas, even if the theory itself is bunk). I can already give you the back of the textbook answer as to where it gets screwed up: set theory.
Say Bill, much as I hate to jump ahead of things here, I admit that my curiosity has gotten the better of me and I can’t resist – any chance you might spoil us and tell us how alpha is bunk and how set theory has anything to do with it?
I know, I know; it is more prudent and certainly better scholarship to wait until the proof shakes out to counter it, but I’ll be damned if I am not anxious as hell to see your counter and esp. the role of set theory in same.
CT,
Ripeness is all.
what if bill is alpha?
how can he counter himself.
ah, by eating even more pudding
The effort to put these tools on a solid mathematical footing is a relatively recent one beginning with the work of Kolmogorov in 1933. For example, the fundamental concepts of the theory of discrete information sources were first given precise mathematical definitions in 1953.
I am intrigued by Mr. Kaplan’s application of set theory. Perhaps we’ll see our old friends Gödel and the limitative theorems? Hopefully the treatment will extend beyond quips, anecdotes and wobbly references to papers. His challenge to evolutionary theory was less than satisfying and suitable alternatives for investigation never materialized.
A sharp, focused skewering would be a welcomed change.
After all, that’s how ideas evolve.
"The bringing together of theory and practice leads to the most favorable results; not only does practice benefit, but the theories themselves develop under the influence of practice, which reveals new subjects for investigation and new aspects of familiar subjects."
—P.L. Chebyshev
just an observation but most card counters are in fact suckers.
Does this mean everything from that point on gets reclassified?
or do we need to define whether ur arguments are raster or vector?
i am so confused.