Many Scientists Admit to Misconduct


5 replies to this topic

#1 Soupie


    @minifiguresXD

  • Legends
  • 7881 posts
  • Gender:Not Telling
  • Location:Not Telling
  • Interests:Not Telling

Posted 09 June 2005 - 10:27 AM

By Rick Weiss, Washington Post Staff Writer

Thu Jun 9, 1:00 AM ET

Few scientists fabricate results from scratch or flatly plagiarize the work of others, but a surprising number engage in troubling degrees of fact-bending or deceit, according to the first large-scale survey of scientific misbehavior.

More than 5 percent of scientists answering a confidential questionnaire admitted to having tossed out data because the information contradicted their previous research or said they had circumvented some human research protections.

Ten percent admitted they had inappropriately included their names or those of others as authors on published research reports.

And more than 15 percent admitted they had changed a study's design or results to satisfy a sponsor, or ignored observations because they had a "gut feeling" they were inaccurate.

None of those failings qualifies as outright scientific misconduct under the strict definition used by federal regulators. But they could take at least as large a toll on science as the rare, high-profile cases of clear-cut falsification, said Brian Martinson, an investigator with the HealthPartners Research Foundation in Minneapolis, who led the study appearing in today's issue of the journal Nature.

"The fraud cases are explosive and can be very damaging to public trust," Martinson said. "But these other kinds of things can be more corrosive to science, especially since they're so common."

The new survey also hints that much scientific misconduct is the result of frustrations and injustices built into the modern system of scientific rewards. The findings could have profound implications for efforts to reduce misconduct -- demanding more focus on fixing systemic problems and less on identifying and weeding out individual "bad apple" scientists.

"Science has changed a lot in terms of its competitiveness, the level of funding and the commercial pressures on scientists," Martinson said. "We've turned science into a big business but failed to note that some of the rules of science don't fit well with that model."

Scientific dishonesty has long been a simmering concern. Many suspect, for example, that Gregor Mendel, the Austrian monk whose plant-breeding experiments revealed with suspicious precision the basic laws of genetics, cooked his numbers along with his peas.

In recent decades a handful of cases have risen to the level of popular attention -- the most famous, perhaps, involving David Baltimore, the Nobel laureate who in the mid-1980s heatedly defended his laboratory's honor in a series of scathing congressional hearings led by Rep. John D. Dingell (D-Mich.).

The prevalence of research misconduct has been uncertain, however, in part because the definitions of acceptable behavior have shifted. For scientists working with federal grant money, that issue got settled five years ago when the Office of Research Integrity -- part of the Department of Health and Human Services -- drafted a formal definition: "fabrication, falsification or plagiarism in proposing, performing or reviewing research, or in reporting research results."

About a dozen federally funded scientists a year are found to have breached that "FFP" standard -- a tiny fraction of the scientific workforce -- and punishment generally involves a ban on further federal grants. But no one had conducted a major survey asking scientists whether they are guilty of major misconduct or lesser, but arguably still serious, ethics lapses.

Martinson and two colleagues -- Melissa Anderson and Raymond de Vries, both of the University of Minnesota -- sent a survey to thousands of scientists funded by the National Institutes of Health and tallied the replies from the 3,247 who responded anonymously.

Just 0.3 percent admitted to faking research data, and 1.4 percent admitted to plagiarism. But lesser violations were far more common, including 4.7 percent who admitted to publishing the same data in two or more publications to beef up their résumés and 13.5 percent who used research designs they knew would not give accurate results.

Susan Ehringhaus, associate general counsel of the Association of American Medical Colleges, which has developed programs to enhance research ethics, said she welcomed the results. Her organization does not favor redefining federal research misconduct to include the many variants included in the survey, she said. However, she said, "we fully support the development of standards that go beyond the federal definition" for internal enforcement by academic or other institutions.

A preliminary analysis of other questions in the survey, not yet published, suggests a link between misconduct and the extent to which scientists feel the system of peer review for grants and advancement is unfair. That suggests those aging systems need to be revised, the researcher said.

"Scientists say, 'This is nuts,' so they break the rules, and then respect for the rules diminishes," de Vries said. "If scientists feel that the process isn't fair and the rich get richer and the rest get nothing, then perhaps we have to think how we can reallocate resources for science."

It's hard to be surprised by this. Only someone totally naive would believe that any human could maintain complete objectivity -- even if it was in their job description -- when their dinner is on the line.

It is, however, disappointing. Though I'd prefer to have more information about the "survey" and the participants, this seems to underline the importance of the peer review process and multiple studies over time.

This in no way invalidates the integrity and validity of the scientific process or science itself. In fact, in the long run, this could help strengthen both. The line about science becoming big business is unfortunately true. Perhaps this will lead the way to some reforms or policies enabling more scientific research to take place without big business or big brother sticking their units in the pie.

In any case, this reinforces my belief that at the end of the day, the only things I can really "trust" are my own experiences and observations. (Though they too are ultimately biased and faulty.)

:lol:

#2 Soupie

Posted 09 June 2005 - 10:29 AM

Here's a link to the article: Many Scientists Admit to Misconduct.

#3 Soupie


Posted 17 March 2009 - 05:42 AM

Probe into faked studies rocks medical community

Massachusetts-based doctor Scott Reuben is accused of producing at least 21 crooked research papers, some of which talked up drugs made by a pharmaceutical firm that gave him research grants.

"The reports contained fabricated data that was created solely by Dr Reuben," said Jane Albert, a spokeswoman for the Baystate Medical Center, where Reuben once practiced.

If proven, the discovery would be one of the biggest cases of medical research fraud on record -- spanning at least a decade and implicating potentially dozens of supposedly peer-reviewed articles.

According to the journal Anesthesia & Analgesia, at least 21 articles are in question, dating back to 1996.

The journal's editor Steven Shafer said the discovery could prove a body blow to the field. "Doctors have been using (his) findings very widely. His findings had a huge impact on the field."

Although Baystate said there were no allegations involving patient care, Shafer cautioned against ruling out practical repercussions.

"We have to be open to the possibility there was patient injury. Nothing is without risk."

Reuben, 50, had been a high-profile proponent of anti-inflammatory drugs called COX2 inhibitors, which he claimed reduced post-surgical pain and dependence on steroids and addictive drugs like morphine.

Reuben plugged the use of one COX2 inhibitor -- Celebrex -- along with another drug called Lyrica, both manufactured by US-based pharmaceutical giant Pfizer Inc., as well as another anti-inflammatory drug made by Merck.

Anesthesia & Analgesia, where some of Reuben's work had been published, said he had received research grants from Pfizer and is a member of its speaker's bureau. ...

Trust no one. :D

#4 jkaris


    AKIA Site Owner Y/S*N*T

  • Little Rubber Guys
  • 22258 posts
  • Gender:Male
  • Location:West Sacramento, CA

Posted 17 March 2009 - 07:00 AM

Follow the money.

#5 Donkeykid77


    Serious Collector

  • Members
  • 766 posts
  • Gender:Male
  • Location:Belgium
  • Interests:Battle Beasts!

Posted 17 March 2009 - 12:03 PM

Trust no one. -_-


How did you know what the password was? :D ;)

Dee eKsfilesfan 77
Integre navigo, sic itur ad astra. ("I sail with integrity; thus one goes to the stars.")

#6 Soupie


Posted 31 March 2009 - 06:56 AM

Hundreds of Natural-Selection Studies Could be Wrong, Study Demonstrates

Scientists at Penn State and the National Institute of Genetics in Japan have demonstrated that several statistical methods commonly used by biologists to detect natural selection at the molecular level tend to produce incorrect results. "Our finding means that hundreds of published studies on natural selection may have drawn incorrect conclusions," said Masatoshi Nei, Penn State Evan Pugh Professor of Biology and the team's leader. ...

Nei said that many scientists who examine human evolution have used faulty statistical methods in their studies and, as a result, their conclusions could be wrong. For example, in one published study the scientists used a statistical method to demonstrate pervasive natural selection during human evolution. "This group documented adaptive evolution in many genes expressed in the brain, thyroid, and placenta, which are assumed to be important for human evolution," said Masafumi Nozawa, a postdoctoral fellow at Penn State and one of the paper's authors. "But if the statistical method that they used is not reliable, then their results also might not be reliable," added Nei. "Of course, we would never say that natural selection is not happening, but we are saying that these statistical methods can lead scientists to make erroneous inferences," he said.

The team examined the branch-site method and several types of site-prediction methods commonly used for statistical analyses of natural selection at the molecular level. The branch-site method enables scientists to determine whether or not natural selection has occurred within a particular gene, and the site-prediction method allows scientists to predict the exact location on a gene in which natural selection has occurred.

"Both of these methods are very popular among biologists because they appear to give valuable results about which genes have undergone natural selection," said Nei. "But neither of the methods seems to give an accurate picture of what's really going on." ...

"These statistical methods have led many scientists to believe that natural selection acted on many more genes in humans than it did in chimpanzees, and they conclude that this is the reason why humans have developed large brains and other morphological differences," said Nei. "But I believe that these scientists are wrong. The number of genes that have undergone selection should be nearly the same in humans and chimps. The differences that make us human are more likely due to mutations that were favorable to us in the particular environment into which we moved, and these mutations then accumulated through time."

Nei said that to obtain a more realistic picture of natural selection, biologists should pair experimental data with their statistical data whenever possible. Scientists usually do not use experimental data because such experiments can be difficult to conduct and because they are very time-consuming.


Copyright © 2025 LittleRubberGuys.com