How science is being eroded as an objective agent for our species
I recently underwent a grueling exercise with an exceptionally intelligent individual. I describe it as grueling because he clung for the longest time to a perception of “science” that is, regrettably, common. That perception, however, does not describe real science.
From a textbook, I gave him three descriptions of science and asked him to choose the option that best matched his understanding of “science”. May I humbly suggest that only one option is worthy of that label.
Proposition A: a person of science will develop a theory, then gather whatever evidence can be found to support that theory.
Proposition B: observations will be seen to have apparent relationships. A hypothesis is developed that encompasses those observations with a possible explanation of why or how they relate to each other. Evidence is gathered using observations and/or controlled experiments; assessments are made as to whether the evidence supports or does not support the hypothesis. If there is found to be sufficient support, a theory may be developed. The theory is tested continually to determine if it is still supported by new evidence.
Proposition C: an authoritative person pronounces on a theory which may be based on common sense, long practice, or even logical deduction or reasoning.
Prop C actually contains several distinct propositions. I will refer to them as a group.
Plato and many famous philosophers since have used logical deduction to explain the wonders of the world. Within the toolbox of science, this can be a useful method for arriving at possibilities. The main problem is that, useful as it may be, it often lacks feedback from objective evidence. To offer a simple example, it is observed that a penguin is black and white. By logical deduction we know that snow is white and coal is black, so that must make a penguin equal to snowy coal. While an artificial intelligence (AI) program may produce that kind of logic, people understand it to be silly.
Another Prop C option: Aristotle was an admired and authoritative figure. Despite the prior writings of Pythagoras and others who came up with close approximations of the great size of planet Earth, Aristotle suggested with respect to the disappearance of a ship over the horizon, “…All of which goes to show not only that the Earth is circular in shape, but also that it is a sphere of no great size: for otherwise the effect of so slight a change of place would not be so quickly apparent.” (from Aristotle’s On the Heavens). No.
Prop A looks promising. This was chosen by my exceptionally intelligent friend (he is still a friend, by the way). It was also chosen by many other intelligent folks: Sigmund Freud (personality development, in which he argued that personality is formed through conflicts among three fundamental structures – yet testing for the actual existence of those structures has been fraught with partisan argument rather than objective evidence); John Locke (babies are born with a blank slate – which we now consider inaccurate); Aristotle (spontaneous generation of life, wherein he “observed” life starting from apparent nothingness). In essence, Prop A says that a smart person can come up with a theory and cherry-pick observations that may approximate what the theory suggests.
A theory, however, is never “proven” – merely supported by evidence, or not. A theory must be able to make predictions that can be tested. If we presume that penguins are snowy coal, observations and comparisons would quickly invalidate that “theory”.
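The falsification step described above can be caricatured in a few lines of code. This is an illustrative sketch only; the “snowy coal” predictions and penguin observations are invented for the joke, but the logic is the real point: a theory survives only as long as every prediction it makes agrees with observation.

```python
# Illustrative sketch: a theory earns support only by surviving tests of
# its predictions. The "snowy coal" theory of penguins predicts properties
# that observation can check. (All data invented for this example.)

snowy_coal_predictions = {"combustible": True, "melts_in_sun": True}
penguin_observations = {"combustible": False, "melts_in_sun": False}

def theory_survives(predictions, observations):
    """A theory is invalidated by any prediction that observation contradicts."""
    return all(observations.get(key) == value
               for key, value in predictions.items())

print(theory_survives(snowy_coal_predictions, penguin_observations))  # False
```

Note that a surviving theory is merely “not yet invalidated” – the loop of gathering new observations and re-checking never ends.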
When we look around at some of the marvels of the modern age, most of them have something to do with, or are enhanced by, digital technology. When da Vinci sketched out his plans for a helicopter, the reality of building one was stymied by rudimentary materials technology and lack of an understanding of aerodynamics (each field having recently received considerable impetus via digital technology: “computers”).
A computer, however, is merely a tool. Relying on digital technology as a magic bullet, as if it were the final answer, usually leads one far down a garden path. When proponents of instant language translators say they are on the cusp of a perfect solution, one would be wise to read what a professional in the field of translation has to say:
Douglas Hofstadter is a professor of cognitive science and comparative literature at Indiana University at Bloomington. He is the author of Gödel, Escher, Bach.
“I’ve recently seen bar graphs made by technophiles that claim to represent the “quality” of translations done by humans and by computers, and these graphs depict the latest translation engines as being within striking distance of human-level translation. To me, however, such quantification of the unquantifiable reeks of pseudoscience, or, if you prefer, of nerds trying to mathematize things whose intangible, subtle, artistic nature eludes them. To my mind, Google Translate’s output today ranges all the way from excellent to grotesque, but I can’t quantify my feelings about it. Think of my first example involving “his” and “her” items. The idealess program got nearly all the words right, but despite that slight success, it totally missed the point. How, in such a case, should one “quantify” the quality of the job? The use of scientific-looking bar graphs to represent translation quality is simply an abuse of the external trappings of science.”
We are inundated in the media with assertive pronouncements regarding the efficacy of certain products. Imprecise statements, cherry-picked observations, and outright fabrications are used without regard to the harm they cause. The harms extend beyond mere loss of money on worthless stuff. Purchasers may be conned into spending their meager resources and time on that worthless stuff, to the detriment of an approach that could be of actual value to them. This is particularly egregious in the medical and pharmaceutical fields. People addicted to drugs such as opiates are dying by the thousands after being prescribed the drug without proper follow-up, or where the prescription was for a symptom that should never have been treated with drugs in the first place. (See Anxiety: Debug It Don’t Drug It, Dr. Michael Catchpole 2019, Rutherford Press.)
One must ask: what harms are yet to be caused by AI in charge of ground and air vehicles? Analysis of the recent Boeing 737 Max 8 plane crashes will take further work, but we already understand a lot (see https://avherald.com/h?article=4c534c4a). Those tragic results cannot be placed solely at the feet of the artificial intelligence residing in the software, but a significant component may turn out to be attributable to a culture of hurried development and over-dependence on the “magic bullet” of AI, as alleged by pilots and engineers at recent Congressional hearings. Perhaps that culture has been fostered by a subliminal dependence on, and shifting of responsibility to, the lines of code on a silicon chip. Getting that shift wrong in a new laptop design is an entirely different order of mistake than getting it wrong in a new airplane carrying over 200 lives. (see https://www.nytimes.com/2019/03/27/business/boeing-hearings.html)
Trust in Science
Is trust in science misplaced, or is it conveniently used as a replacement for deeper understanding?
Considering the difference between denialism and skepticism, a study found evidence, yet again, that presenting a denier with objective facts was not an effective strategy:
Because this denialism springs from motivated reasoning, science advocates are scrambling to understand how to debunk misinformation in a way that motivates their target audience to accept it. [emphasis added]
Being “motivated” means that a denier is self-censoring anything that does not conform to the way the topic is stored in their mind.
A recent study of 140,000 people worldwide proved instructive. Here are the main highlights:
Trust in science and health professionals
Globally, 18% of people have a ‘high’ level of trust in scientists, while 54% have a ‘medium’ level of trust, 14% have ‘low’ trust and 13% said ‘don’t know’. This ranges from a third of people having ‘high’ trust in Australia and New Zealand, Northern Europe and Central Asia to around one in ten in Central and South America.
from: Gallup (2019) Wellcome Global Monitor – First Wave Findings
The study is both fascinating and frustrating. The breadth of the study needs to be read to be fully appreciated. Any study that includes 140,000 subjects who answered such a range of questions is to be commended as a considerable feat.
May I humbly say, however, that frustration arises in those numerous instances where the numbers being thrown at the reader elicit questions of greater depth. Take this statement in Chapter 2’s Summary:
Worldwide, more than half the people aged 15–29 (53%) say they know ‘some’ or ‘a lot’ about science, compared to 40% of those aged 30–49 and 34% of those aged 50 and older.
Is age a causal variable, correlational, or coincidental? For instance, might it be that older folks have matured into the realization that the more they know, the less they understand? And that certainty is best left to the young blurs that pass by on their respective missions? Is there a whiff of something like the subject of Douglas Hofstadter’s article on translation: all the right words – absent depth?
The reason for my skepticism is outlined below.
Human Rights or Social Permission
Do humans have rights? Are they “inalienable”; or are they subject to the will – or lack of will – displayed by a political community? This was explored by Kenan Malik:
So, what should we do? Our starting point must be the recognition of rights neither as inalienably rooted in human nature, nor as gifts bestowed on citizens by the nation state, but as aspects of human social existence continually created through struggle and contestation. Rights are, as the political theorist Lida Maxwell has put it, ‘collective achievements rather than individual possessions’, and achievements that are ‘fragile’ and ‘imperfectly realised’.
How does the topic of human rights fit into this discussion? One way is that it shows the value of skepticism in approaching a subject for which so many people hold hard views.
The Science of Skepticism
For those who consider it “good science” to first develop a theory and then try to prove it, the field is open for cherry-picking whatever evidence can be shoehorned into the most compelling package. After all, the right words are being employed by proponents of their pet theory: science, reasoning, evidence, clinical, proven…
No. Science depends on skepticism: questioning the evidence which supports or doesn’t support a hypothesis; constant review of evidence; the belief that a belief is a blindfold…
Malik’s analysis of human rights, above, lists ideas whose proponents wish to embed a conceptual construct in human genes. They insist that the only way to combat discrimination is to say that people are “born with rights”. A corollary of this approach, however, allows some to say that only certain humans have the “rights gene”, and therefore that discrimination against the defective elements of the population is permitted.
The more difficult approach to fighting the many forms of discrimination is to freely admit that rights originate in words; they are born in the fire of social discourse. And there, rights may be either eroded away or strengthened for those who must depend on them the most. That fire may wane or flare, so it is incumbent on the people of a political community to keep feeding oxygen and, yes, fuel, into the fire.
Skepticism is one such fuel. A skeptic’s voice must be heard by all who wish to contribute to the discussion.
Denialism is not, however, the same as skepticism. Denialism is a soggy blob of retardant on the fire of social discourse.
The trick, then, is to find a method that distinguishes motivated reasoning from healthy skepticism.
Yes, this is hard.