Post by rmarks1 on May 10, 2018 10:30:45 GMT -5
That's all well and good for issues that are fairly obvious, Fred. But in many more complex cases you need to teach people how to distinguish established knowledge from plausible speculation from kooky nonsense. You need to ground skepticism in authoritative knowledge, in other words. Yes. Is that a problem? Bob
Post by Deleted on May 10, 2018 15:34:14 GMT -5
None of us are going to set up double-blind medical trials, build advanced telescopes, or conduct complex large-scale studies.
So we have to trust the people who do, which means learning to evaluate what kind of research is trustworthy to begin with.
None of this is foolproof, of course.
Post by faskew on May 11, 2018 7:16:11 GMT -5
Indeed. Another part of the scientific method is that results have to be repeatable. It doesn't count if only one person can do it. Several people have to follow the same steps and get the same result. It's the preponderance of evidence that indicates something is true.
The core problem with all of this - science, politics, business, whatever - is money. Sometimes people interfere with a process because they can make big money off of false information. So they bribe scientists, politicians, business owners, whoever. Which is where we get junk science. People fake results to get grant money or to keep corporate sponsors happy. 8-<
Post by Deleted on May 12, 2018 2:57:03 GMT -5
I think you seriously overestimate the number of results that are deliberately falsified, as opposed to researchers simply being sloppy or disorganized (which itself is often just a result of their being overworked and underpaid, I reckon) and peer review not being nearly as good at catching errors as is often claimed.
The latter part I get especially from talking to people who do the kind of lower-level research that most often shows up in academic journals.
And as for money, that depends much more on networking, a good pitch, and whether you work in a popular field than on any actual results.
Post by faskew on May 12, 2018 9:09:05 GMT -5
I think that money may be involved in the entire process. Since so much research depends on grants, each project is sort of like piecework - researchers get paid by the project, so the more projects they can finish, the more money they can get. True, grant money often follows fads in science, but someone who has made an exciting discovery is more likely to get funded than someone who has tried several different projects and had no success with any of them. And at the bottom of the money barrel are people who just want to verify someone else's discovery.
Peer review can indeed be flawed. Much depends on who is doing the reviewing. Once there were only a handful of publications and it was easier to review new claims, but now there are many online-only publications and thousands of new claims per year to review. In some cases it's not clear who should be doing the reviewing. 8-<
Post by rmarks1 on May 12, 2018 20:17:02 GMT -5
Good points, McAnswer. I knew a PhD botanist who quit her field because she said she couldn't stand other biologists repeating an experiment dozens of times until they got a result that agreed with their theories. That would be the result they reported; the other results would be ignored. Bob
Post by Deleted on May 15, 2018 10:25:09 GMT -5
My brother did his PhD in pharmaceutical research and told similar stories. He got into frequent arguments with his advisor because he couldn't make certain methods work, and he couldn't just publish a paper saying that those methods were ineffective; he had to publish "results".