Matt Ridley asks what the coronavirus has taught us about science. It is a good question, considering that a certain senile presidential candidate is telling us to trust the science.
Then again, the basics of the question have been known for quite some time now. Around two and a half centuries ago, David Hume, the leading philosopher of the British Enlightenment and a towering figure in philosophical empiricism, remarked that science can tell us what is. It cannot tell us what ought to be. It should not be confused with ethics. Science cannot tell us what we should or should not do.
And then, early in the twentieth century, one Ludwig Wittgenstein, certainly one of the greatest philosophers, explained that scientific fact does not tell the future. To say that the sun will rise in the east tomorrow is a hypothesis. It will be proved or disproved tomorrow morning. If reality validates your hypothesis, it becomes a fact. Before the event in question, it is not a fact.
So much for the idiots who have been trotting out computer models explaining what must happen to the climate a decade from now. Predictions about the future of the climate may be hypotheses, if we are feeling generous. Otherwise they are prophecies, uttered by idiot prophetesses like Alexandria Ocasio-Cortez.
Among those who have more recently opined about the philosophy of science, I recommend the late British biologist and Nobel laureate Peter Medawar. His books on the subject include The Art of the Soluble and Pluto's Republic.
Allow Ridley his say, beginning with the currently fashionable notion that mathematical models can produce scientific facts about the future:
Some scientists fall so in love with their guesses that they fail to test them against evidence. They just compute the consequences and stop there. Mathematical models are elaborate, formal guesses, and there has been a disturbing tendency in recent years to describe their output with words like data, result or outcome. They are nothing of the sort.
Consider the notion that we should fight the virus with lockdowns. The conclusion was based on a model developed in London. As it happened, the Swedes tested the model, and discovered that it was flawed:
An epidemiological model developed last March at Imperial College London was treated by politicians as hard evidence that without lockdowns, the pandemic could kill 2.2 million Americans, 510,000 Britons and 96,000 Swedes. The Swedes tested the model against the real world and found it wanting: They decided to forgo a lockdown, and fewer than 6,000 have died there.
Who among us was promoting lockdowns? Why, none other than Dr. Fauci. As you know, the last thing he will ever admit is that he was wrong. After all, as we now know, Dr. Fauci was a great fan of Hillary Clinton-- ought that not to cause us to question his judgment?
On 6-12-2012, Fauci wrote this to Hillary's chief of staff, Cheryl Mills, regarding a speech:
Wow. Very rarely does a speech bring me to tears but this one did. Talk about telling it like it is. This one was a bases loaded home run. Please tell the Secretary that I love her more than ever.
I consider that to be unworthy of a man of science. And I also consider it a sign of seriously biased judgment.
Compare Fauci with his Swedish equivalent:
Anthony Fauci, the chief scientific adviser in the U.S., was adamant in the spring that a lockdown was necessary and continues to defend the policy. His equivalent in Sweden, Anders Tegnell, by contrast, had insisted that his country would not impose a formal lockdown and would keep borders, schools, restaurants and fitness centers open while encouraging voluntary social distancing. At first, Dr. Tegnell’s experiment looked foolish as Sweden’s case load increased. Now, with cases low and the Swedish economy in much better health than other countries, he looks wise. Both are good scientists looking at similar evidence, but they came to different conclusions.
Of course, science is conducted by human beings. And human beings can be biased. They can trot out what they consider to be science in order to promote a political agenda-- or to pretend to be serious thinkers.
In science, we also find evidence of confirmation bias. Psychologists have demonstrated that we tend to embrace the facts that prove us right while we ignore the facts that would disprove our hypotheses.
For the record, this simple observation severely damages the Freudian project-- since Freud, when he was pretending to do empirical science, happily embraced any thought or fantasy his patient produced as proof of his interpretations.
If the patient has been induced, as a function of confirmation bias, to search his memory bank for something that confirms Freud’s interpretation, it should not count as a fact, or as a proof.
This implies that there are facts, and then there are facts. Considering how much our political nitwits are mewling about facts, we ought to recognize that one fact may contradict another. We might believe that Col. Mustard killed Mr. Boddy because his fingerprints are on the murder weapon-- the candlestick, of course. The presence of said prints is surely a fact. And yet, if you can establish that at the time of the murder Col. Mustard was incarcerated in the local jail, that fact definitively discredits the first hypothesis.
Ridley explains confirmation bias thusly:
A fourth mistake is to gather data that are compatible with your guess but to ignore data that contest it. This is known as confirmation bias. You should test the proposition that all swans are white by looking for black ones, not by finding more white ones. Yet scientists “believe” in their guesses, so they often accumulate evidence compatible with them but discount as aberrations evidence that would falsify them—saying, for example, that black swans in Australia don’t count.
Of course, Ridley concludes, science today has been politicized:
The health of science depends on tolerating, even encouraging, at least some disagreement. In practice, science is prevented from turning into religion not by asking scientists to challenge their own theories but by getting them to challenge each other, sometimes with gusto. Where science becomes political, as in climate change and Covid-19, this diversity of opinion is sometimes extinguished in the pursuit of a consensus to present to a politician or a press conference, and to deny the oxygen of publicity to cranks. This year has driven home as never before the message that there is no such thing as “the science”; there are different scientific views on how to suppress the virus.
Hume Tower in Edinburgh has been renamed George Tower, after a drug-addicted criminal, because David Hume was a racist-- thereby proving that racism is the greatest sin. I'm working on the formula.
https://www.google.com/amp/s/www.bbc.com/news/amp/uk-scotland-edinburgh-east-fife-54138247
Computer simulation, aka "computer modeling", is a powerful tool. I've used it myself to simulate telecommunications networks. To my knowledge, the first simulations were done by the true genius, John Von Neumann, to support the scientists involved in the Manhattan Project as they attempted to numerically solve intractable differential equations. Von Neumann went on to the Institute for Advanced Study in Princeton, where he applied his simulation techniques to weather prediction with some limited success.

Another brilliant figure, MIT professor Edward Lorenz, was interested in modeling atmospheric circulation (note that today's climate models are called General Circulation Models, or GCMs). Lorenz got frustrated with the time required to run complex simulations, and decided to start a new simulation in the "middle" by using the printed output from a previous run. The printed output was rounded off to three decimal places, but the machine's internal floating point processor calculated to six decimal places. Lorenz was astounded by the difference in the simulator output based only on the tiny, seemingly insignificant, rounding errors. Lorenz went on to develop theory about the sensitivity of complex dynamical systems to initial conditions (i.e., tiny differences in the values of the initial parameters). His theory, which should have been called "sensitivity theory", is known today as "chaos theory" (despite the fact that there's nothing "chaotic" in a plot of Lorenz attractors).
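That effect is easy to reproduce on a laptop. Here is a minimal Python sketch (my own toy illustration, not Lorenz's code, with arbitrary step size and starting point): two runs of the Lorenz equations whose starting points differ only in the sixth decimal place, roughly the rounding error he describes, and a printout of how far apart they drift.

```python
# Toy illustration of sensitivity to initial conditions in the Lorenz system.
# Two runs start a millionth apart and are integrated with simple Euler steps.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations one step with forward Euler integration."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return np.array([x + dx * dt, y + dy * dt, z + dz * dt])

# Identical runs except for a rounding-sized difference in the initial x value.
a = np.array([1.000001, 1.0, 1.0])
b = np.array([1.000000, 1.0, 1.0])

for step in range(1, 5001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"step {step:5d}  separation = {np.linalg.norm(a - b):.6f}")
```

The separation starts at about a millionth and grows until it is the size of the attractor itself, which is Lorenz's point: past a certain horizon, the rounding error is the forecast.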
Simulations of weather and disease are interesting enough, easy enough to do, and sufficiently attention-getting that even economists have gotten in the game. Lest we forget, the demand response of US citizens to the Affordable Care Act was modeled (and peddled at taxpayer expense) by another MIT faculty member, Jonathan Gruber. Gruber's simulation was, like the Imperial College epidemiological simulation, wrong.
Simulations are interesting, fun to create, and worth doing, but... they are not science.
The worst are climate models predicting weather 100 years in the future. They are no more falsifiable than Freudian theory. While we're much better at short-term weather prediction than we used to be, if you look carefully at a ten-day forecast, its latter days typically converge on the average weather conditions for your location at that time of year. That's about as far as we can see, given the state of the science of turbulent systems and the sensitivity of complex dynamical systems ("chaotic systems") to the measurement precision of initial conditions.
Epidemiological and economic demand simulations, given their shorter time windows, are more easily falsifiable. And they are routinely falsified.
It's really quite simple:
"It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong."
--- Richard Feynman
I note for the record that the "settled science" of lockdowns has officially desettled (for now):
"We in the World Health Organization do not advocate lockdowns as the primary means of control of this virus."
--- Dr David Nabarro, WHO Special Envoy on Covid-19, quoted by AuBC, today
ReplyDelete"Of course, Ridley concludes, science today has been politicized:" And politics is not science.
trigger warning..."To my knowledge, the first simulations were done by the true genius, John Von Neumann, to support the scientists involved in the Manhattan Project as they attempted to numerically solve intractable differential equations. Von Neumann went on to the Princeton Institute for Advanced Study, where he applied his simulation techniques to weather prediction with some limited success."
I've been doing some historical research on the first electronic computer with general-purpose capabilities, the ENIAC, which will turn into a post someday. ENIAC's applications are instructive here. It was built under an Army contract mainly to simulate artillery and bomb trajectories. The only way to calculate these is to divide the projectile's time of flight into very small segments and repeat the same calculations for each segment. The process was very labor-intensive, and during WWII about 100 women with math degrees were hired to work at the University of Pennsylvania to supplement the Army's own staff. ENIAC was intended to automate this process.
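For anyone who hasn't seen it done, the segment-by-segment method is just time-stepped numerical integration. A hypothetical Python sketch of the kind of update those human computers repeated by hand (the muzzle velocity and drag constant here are invented for illustration, not taken from any real firing table):

```python
# Much-simplified trajectory calculation: the flight is divided into small
# time segments, and the same drag-and-gravity update is repeated per segment.
import math

def trajectory(v0, angle_deg, drag_k=0.00005, g=9.81, dt=0.01):
    """Step a drag-affected projectile forward until it returns to the ground."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = t = 0.0
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        ax = -drag_k * speed * vx          # quadratic air drag, horizontal
        ay = -g - drag_k * speed * vy      # gravity plus drag, vertical
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y, t = x + vx * dt, y + vy * dt, t + dt
    return x, t

dist, tof = trajectory(v0=450.0, angle_deg=30.0)
print(f"range ~{dist:.0f} m, time of flight ~{tof:.1f} s")
```

Each pass through the loop is one of those "very small segments"; a human computer did the equivalent arithmetic by hand, row after row, for every entry in a firing table.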
WWII ended before ENIAC was complete, and a non-trajectory problem was given priority: a simulation of the ignition of the projected hydrogen bomb. Two of the Los Alamos scientists (Ulam and Frankel) did not believe that ENIAC's capabilities were up to doing a meaningful simulation of the ignition process, and viewed the exercise mainly as a way to gain experience with the computer, but another scientist (Edward Teller) maintained that the simulation results justified his approach to bomb ignition. It turned out, though, that Teller's approach would *not* actually work, and a different ignition method was ultimately used.
The machine was also used for an analysis of nuclear fission, in bombs and I believe also in nuclear reactors: this was done using the 'Monte Carlo' method, in which the paths of thousands of neutrons are initially chosen at random but statistically the results (for the right kind of problem) are replicable and meaningful. My impression is that this simulation work was successful.
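To give a feel for the method, here is a toy Python sketch (the slab geometry and probabilities are invented; real neutron-transport codes are far more elaborate): it follows thousands of randomly chosen neutron paths through a one-dimensional slab, and although every individual path is random, the estimated transmitted fraction comes out essentially the same from run to run.

```python
# Toy Monte Carlo: follow random neutron paths through a 1-D slab and estimate
# what fraction get through. Individual histories are random; the aggregate
# estimate is statistically stable and reproducible.
import random

def fraction_transmitted(n_neutrons, slab_thickness=2.0, mean_free_path=1.0,
                         absorb_prob=0.3, seed=0):
    rng = random.Random(seed)          # deterministic pseudorandom stream
    transmitted = 0
    for _ in range(n_neutrons):
        x, direction = 0.0, 1.0        # start at the left face, moving right
        while True:
            x += direction * rng.expovariate(1.0 / mean_free_path)
            if x >= slab_thickness:    # escaped out the far side
                transmitted += 1
                break
            if x <= 0.0:               # bounced back out the near face
                break
            if rng.random() < absorb_prob:
                break                  # absorbed inside the slab
            direction = rng.choice((-1.0, 1.0))  # crude isotropic scatter
    return transmitted / n_neutrons

for seed in (1, 2, 3):
    print(f"seed {seed}: transmitted fraction ~{fraction_transmitted(100_000, seed=seed):.3f}")
```

Change the seed and every individual history changes completely, yet the estimated fraction barely moves; that statistical stability is what makes the method useful for the right kind of problem.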
There were also some experiments done in using ENIAC for weather simulation, but the machine was orders of magnitude too slow and too small (in terms of memory) to make more than illustrative progress in this area.
DF: It's a nit, but IMO, the calculations done to predict trajectories (ranging from artillery to Apollo 11) are better categorized as numerical analysis. And, in fact, numerical analysis has a rich history, a prime historical example being the Almagest. But that's a nit that I should pick elsewhere, I suppose. :-(
On a different but related subject, one of interest to me that gets far too little attention: I see you (obviously correctly) refer to "Monte Carlo" methods, which use deterministic algorithms to generate pseudorandom numbers meeting various distributional criteria, algorithms that (presumably) bear no resemblance whatsoever to anything happening at a roulette table in Monte Carlo, Monaco. "Monte Carlo" is a term I classify with "chaos theory," the "butterfly effect," "tipping point," and other technical nomenclature couched in popular language. Again IMO, the problem with such nomenclature is the vast associational networks that arise in non-experts when the terms are used. It's much like the legal term (according to my spouse, an attorney) "infant," referring to anyone younger than the current age of majority, 18. Lawyers know what it means, but most high school seniors would be insulted by the otherwise innocuous term. The most egregious (and ridiculous) example I know of is the "God particle," a name for the Higgs boson, which generated many impassioned and Bible-thumping sermons from the pulpits of conservative religious ministers about the horrific risks and visions of Hell associated with the construction and use of the Large Hadron Collider at CERN.
I'm no expert on language, but I hope Schneiderman, who I believe is, will someday write a post on the degradation of language. And when I say degradation, I'm not referring to the evolution of language. I note from the media that "court packing" has now been redefined to serve political interests. I'm a bystander in this fight, barely capable of expressing my own thoughts in intelligible English, but, then again, one doesn't need to be a pathologist to detect the stink of corruption. I'm a strong adherent of that brilliant description of what I call "Dumptyism," as penned by the much-maligned but utterly brilliant logician Charles Dodgson (aka Lewis Carroll):
"'When I use a word," Humpty Dumpty said, in rather a scornful tone, 'it means just what I choose it to mean—neither more nor less.' 'The question is,' said Alice, 'whether you can make words mean so many different things.' 'The question is,' said Humpty Dumpty, 'which is to be master—that's all.'"
I'll shut up now.
trigger...there have actually been cases where a random natural process, such as radioactive decay, has been used to generate the random-number inputs to a Monte Carlo simulation. Generally, though, pseudo-random numbers are used.
Interestingly, the first applications of the Monte Carlo method at Los Alamos were pioneered (by Ulam, I believe) using actual playing cards or dice and moving the simulated neutrons across a gameboard, with the movements being determined by a rulebook in combination with the card or dice results. There was even some sort of special-purpose device to facilitate this work.