Sunday, November 6, 2022

An article that calls for an apology

At the Gem State Patriot News on October 16, 2022, there is an article by Dr. John Livingston titled An apology is due. One indeed is due, from him.

 

In his fourth and fifth paragraphs Livingston pontificates about how articles for medical and scientific magazines allegedly are reviewed carelessly before publication:

“This is the tip of the iceberg. It is a prospective ‘Mia culpa’. The scientific community has long recognized the sloppiness of its own process. It has been ‘the experts’ that have allowed themselves to be driven by a political narrative that puts money in their pockets and puts actual scientific progress that is always slow to begin with, many steps further behind. The lack of outrage is disappointing. Everyday citizens conduct their everyday lives with far more virtue than ‘expert scientists’.

One would expect from a curious and inquisitive press, that before repeating many of these stories, they would at least ask the most basic of all questions – ‘Really’. Do we all just accept everything that is written as being fact? If the press is lazy and not curious, should we not be on guard ourselves to ask the same question ‘Really’. And how about the next question – ‘prove it’. If the ‘expert scientists’ aren’t inquisitive, if the peer review organizations aren’t inquisitive, if the press isn’t curious, We the People should remain skeptical at least and we should always ask for more information before coming to conclusions about ‘settled science’.”

I do not believe the process of scientific publishing is sloppy. And, as I will discuss later, that is based on personal experience in the careful editing of a materials science journal.

 

Livingston also uses the phrase “Mia culpa” – which instead of referring to a woman named Mia should be Mea Culpa, defined by the Merriam-Webster Dictionary as:

 

“a formal acknowledgement of personal fault or error”

 

And in the very first paragraph of the article he whines that:

 

“…It is not just medicine, but in the hard sciences that we have seen intellectual integrity in publishing being criticized. Articles in prestigious ‘peered reviewed’ journals like Scientific American, The Lancet and The Journal of The American Medical Association have been retracted and issues have again been raised about ‘scientific integrity’ —is that really any different than individual integrity, and the supervision of the investigative process by the senior authors on the article.”

 

The Merriam-Webster Dictionary defines the noun peer as:

 

“One that is of equal standing with another”

 

It defines the intransitive verb peer quite differently as:

 

“To look narrowly or curiously”

 

It also explicitly defines peer review as:

 

“A process by which something proposed (as for research or publication) is evaluated by a group of experts in the appropriate field.”

 

“Peered review” is uncommonly silly terminology. Presumably it means articles have just been looked at by an unspecified someone. When I searched at PubMed Central I found the phrase “peer reviewed” appeared 254,923 times, but “peered reviewed” appeared only 11 times. An article there by Jacalyn Kelly, Tara Sadeghieh and Khosrow Adeli in The Journal of the International Federation of Clinical Chemistry and Laboratory Medicine in 2014 is titled Peer review in scientific publications: benefits, critiques & a survival guide.

 

Both The Lancet and The Journal of the American Medical Association (JAMA) have processes for peer review, which you could learn about just by checking their Wikipedia pages. But Scientific American does not have peer review. That’s because it is a general-interest magazine, written for intelligent and curious readers rather than scientists or physicians. Articles there just have been peered at.

 

Livingston’s third paragraph again refers to peered review:

 

“Today it was announced by London based Hindawi, one of the world’s largest open-accessed journal publishers, that it is retracting more than 500 journal articles based on the discovery of ‘unethical irregularities’ in the peer review process. As a result, these discoveries 511 papers will be retracted in articles published since August of 2020. The articles appeared in sixteen journals. In the words of a Hindawi: ‘Irregularities in the peer review process in some journals that involved coordinated peer review rings and the infrastructure that supports scholarly research’ were identified during the editing process! Are peered review rings the same thing as self-serving ‘servo loops of doom’?”

 

511 articles sounds like a lot, until you get inquisitive. Then you will find out that Hindawi publishes over 220 journals (a very large iceberg), so the retractions amount to only about 2.3 articles per journal. Also, since the retracted articles appeared in just 16 of those journals, fewer than 7.3 percent of Hindawi’s titles were involved.
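Those back-of-the-envelope figures are easy to verify. Here is a minimal sketch using only the counts quoted in this post:

```python
# Counts quoted in the post: 511 retracted articles, 16 affected journals,
# and Hindawi's catalogue of over 220 journals.
retracted = 511
affected_journals = 16
total_journals = 220

per_journal = retracted / total_journals              # spread across the whole catalogue
share_affected = affected_journals / total_journals   # fraction of journals involved

print(f"{per_journal:.1f} retracted articles per journal")    # prints "2.3 ..."
print(f"{share_affected:.1%} of Hindawi's journals affected")  # prints "7.3% ..."
```

Spread across the whole catalogue, the "tip of the iceberg" shrinks to a couple of articles per journal.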

 

Livingston’s sixth paragraph begins by stating:

 

“When false information is repeated by well-intentioned people who should know better, good people can be hurt.”

 

I have discussed an article of his from October 2, 2022 where he was guilty of that. It is titled Climate change and government credibility, and I blogged about it in a post on October 4 titled Fairy tales from the Gem State Patriot News about Rachel Carson and her book Silent Spring.

 

Back in the early 1980s I was part of the peer review process for a materials science journal then called Metallurgical Transactions. I was one of their Key Readers:

 

“All manuscripts will be judged by qualified reviewers according to established criteria for technical merit. The review procedure begins as a Key Reader is assigned by an Editor. The Key Reader chooses reviewers for the manuscript and submits his or her recommendation, based on his or her own and the reviewer(s)’ judgments. The highest-level handling Editor then makes a final decision on the paper.”

 

Usually there were three reviewers. They and I looked at both the manuscript’s content and whether the four of us thought it fit with what else generally was understood about that topic. We often called for revisions before accepting a manuscript. A few times we just rejected one.

 

On another occasion (for another publication) I rejected a manuscript about stress corrosion cracking because it had made a faulty assumption about analyzing pass-fail or binary data. I blogged about that data type in a post on March 18, 2013 titled What is your hearing threshold? – the joy of statistics. Their test program sometimes had not applied a high enough stress to get specimens to fail, so they assumed that if the stress had been just one step higher they would have failed. That is baseless statistical nonsense.
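The flaw can be illustrated with a toy simulation. This is a hedged sketch with invented numbers (the stress levels, step size, and strength distribution are my assumptions, not anything from that manuscript): a specimen that survives the highest applied stress is a right-censored observation, so all we know is that its true failure stress lies somewhere above that level. Assuming it would fail exactly one step higher systematically understates the strength of the strongest specimens.

```python
# A toy illustration of the statistical flaw (made-up numbers, not the
# rejected manuscript's data).  Each specimen has a true failure stress
# drawn from a normal distribution; the test rig only applies stresses
# up to MAX_STRESS, in fixed steps of size STEP.
import random
import statistics

random.seed(42)
STEP = 5.0          # stress increment between test levels (assumed)
MAX_STRESS = 120.0  # highest stress the test program applied (assumed)

true_strengths = [random.gauss(115.0, 25.0) for _ in range(10_000)]

# Specimens surviving MAX_STRESS are right-censored: all we actually know
# is that their failure stress exceeds MAX_STRESS.
censored = [s for s in true_strengths if s > MAX_STRESS]

# The manuscript's assumption: every survivor would have failed exactly
# one step higher, at MAX_STRESS + STEP.
assumed_failure = MAX_STRESS + STEP

print(f"survivors: {len(censored)} of {len(true_strengths)}")
print(f"their true mean failure stress: {statistics.mean(censored):.1f}")
print(f"value the assumption assigns to each: {assumed_failure:.1f}")
```

In this setup the survivors’ true mean failure stress sits well above MAX_STRESS + STEP, which is exactly the information the faulty assumption throws away.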

 

A cartoon was adapted from this one at Wikimedia Commons.

 
