Science. Bad.

In my last blog I made my opinion on blue sky research pretty clear; it's great! I also suggested that sometimes you just have to let scientists work on their ideas, without pressuring them to come up with immediate economic outputs. There is definitely a case to be made, though, that scientists, now and then, need to be watched very carefully. Some science is just bad.

Trepanning. A disembodied hand goes to work on the skull of the ill. Evil spirits be gone!

First, a little clarification on what I mean by "bad science". There are numerous examples throughout history where physicians and so-called intellectuals undertook some pretty dodgy practices, with little or no evidence that their methods would work. Bloodletting springs to mind, where blood was drained from the body to balance the "four humours" (not a laughing matter). Or trepanning, where holes were drilled or scraped into the skull to release evil spirits, like Satan, or whiskey. These archaic practices, though, are not what this blog is about. Everyone knows the far past was just horrible - we might rabbit on about how much better things were in the '70s, but not the 1370s. No, this blog is about questionable research that has been carried out or published since the scientific revolution. Since we ought to have known better.

Isaac Newton was an interesting character. Here's a man who, despite being raised a farm boy, went on to revolutionise our understanding of optics (spelt 'opticks' back then), and, bafflingly, was the first person to notice that gravity is a thing. He was also easily slighted by his peers, probably due to being abandoned by his mother as a child, and lived rather reclusively. He suffered two bouts of depression, and is believed to have suffered from mercury poisoning, which would also explain his mental delusions in his final years. He was one of the first "natural philosophers" to take averages of data (back then the norm was just to pick and choose which data points you believed), and he refined the scientific method into something very similar to that which scientists still follow today.

The scientific method can loosely be described by the following steps: make an observation about the world; hypothesise an explanation for your observation; come up with an experiment to test your idea; collect data, ignore your preconceptions, and assess the validity of your explanation. It seems an obvious way to keep science progressing and honest (to me at least), but it was a really new way of thinking at the time. You mean I have to leave my ego at the door?! Crazy talk.

This method works pretty well at keeping science from regressing to archaic practices like trepanning (though it hasn't stopped homeopaths), but if you live strictly by these steps alone, you could end up carrying out some ethically questionable experiments, particularly when they are applied to the social sciences, such as psychology and sociology. If you want a good idea of what a completely unrestrained and unethical scientific experimenter looks like, just ask GLaDOS. Let's go through a few examples of unethical science.

First there's the story of Gua the chimp. Gua was raised for a time as a human, to test the idea that language in primates is entirely learned; the only reason chimps don't learn to speak is because they're raised by a society of chimps who also can't speak. The experiment: raise a baby chimp alongside a baby human, and see if they develop at the same rate. While the chimp did seem to excel over the human at certain tasks (like climbing - duh!), its development hit a wall when it came to vocalising language. Chimp brains and vocal cords just aren't wired that way. The ethics here become troubling when you learn why the experiment was stopped: the baby started mimicking the chimp. Rather than the chimp learning to speak, the baby was becoming chimp-like in mannerisms and vocalisations. Luckily the experiment was stopped while the baby's brain was still plastic enough to unlearn all those dangerous chimp-thoughts.

Next, poor little Albert. Sometimes in psychology there are few better ways to learn how negative emotions manifest than by inducing them through questionable methods. One extremely questionable example was an experiment carried out on an infant known as Little Albert, in which psychologists wanted to learn all about the development of fear. The experiment: present a child with lots of white fluffy things, and loudly bang steel bars together behind the baby's head whenever he approached the fluffy thing. The point they were trying to make was that fears are rarely innate: they're learned situationally. If you're afraid of spiders it might be because one crawled on your face when you were very young. The saddest part of this experiment is that once it was concluded, no effort was made to retrain the baby to find white fluffy things safe again. They taught him to fear cushions, and then released him into the wild.

Then there's "science" that I'm embarrassed to even acknowledge happened, such as homosexual aversion therapy. In a dark time during the 20th century, homosexuality was seen as a mental disorder, and some scientists believed it could be "cured". One technique to do so would have fit in well in the surreal, psychedelic, horrific movie A Clockwork Orange: images associated with homosexuality were flashed in front of the eyes of the "patient", simultaneously with painful electric shocks. The only thing achieved by this was severe psychological trauma ... Obviously.

Now that we've covered the archaic scientific belief that homosexuality is a "mental disorder", there's also the idea that gender is learned. Obviously sex is just a thing that you're born with, but gender, such as the tendency for boys to play with trucks rather than dolls, may be entirely cultural. Psychologists wanted to test this idea, and pounced when they learned of a baby boy who had lost his penis during a circumcision gone wrong (not that there's any right way to mutilate a baby's genitals - this practice deserves a blog all of its own). Because of the loss of his penis, he was given a sex change operation, and psychologists provided hormonal treatments and surrounded the baby with girl-esque stimuli, to try and raise the baby as a girl. Despite behaving in obviously "boy-like" ways and showing signs of psychological distress, it wasn't until his teens that he was informed of what was going on. His teens! Once old enough, he opted to have the operation reversed, and became David Reimer. The damage may have been done, however, as David later committed suicide.

Even the name of this one gives me chills: Harlow's pit of despair. Here, a kind-hearted man named Harlow, who wanted to find the recipe for love, had the brilliant idea of putting monkeys in a dark box and leaving them there, with no stimuli whatsoever. Some monkeys went insane; some refused food and starved themselves to death. Nothing useful was learned except that Harlow was a git.

These are good examples of how the scientific method recipe isn't sufficient without ethics, but there's another factor to consider: what if scientists lie? Ideally, lies get weeded out, as it should be common practice to repeat another's experiment and check that their results hold up. This is common in particle physics, for example - no result is an island. However, in less well-funded areas of research, the process isn't so rigorous. In this day and age of journal impact factors, the results of a repeat experiment aren't seen as being as "exciting" as an original finding, and so new, obscure results don't always get the prompt re-testing they should. Also, some experiments just take years to set up, so you could spend a large portion of your career trying to refute already published data, all just to get a pat on the back and a publication in a very minor journal. This is one of the reasons why fraud is still a problem.

"Nano chopsticks". Shoddy photoshopping aside, they don't even look like chopsticks.

"Nano chopsticks". Shoddy photoshopping aside, they don't even look like chopsticks.

Here's an absolute corker of an example: nano chopsticks. The photo on the right is from a paper that discusses the apparent arrangement of nanorods into vaguely "chopstick"-like shapes. The photo was a complete fabrication, but luckily a few eagle-eyed (or not that eagle-eyed, as it turns out) bloggers picked up on it and the paper was eventually retracted. The scary thing, though, as Professor Philip Moriarty of the University of Nottingham pointed out, is that had the authors been even slightly competent photoshoppers, it would have taken months, maybe years, of repeat experiments to show that it was complete trash. I am surprised this got past the journal editors, and astounded that it got past the referees. The fact that it slipped through the cracks shows a complete failure on the part of the peer review system, but had the fraudsters been better at frauding, how could we have expected the referees to know that they were reading a pack of lies?

Some sympathy may be in order for the authors of that utter shambles, however, as they may have been under immense pressure to publish. I've discussed before that journals are less likely to publish negative results; journals want something juicy. It could very well be that the authors spent considerable time and money chasing this idea, only to come up empty handed, and in desperation submitted something false. It's inexcusable, but understandable. This, I think, is probably what is going on with stripey nanoparticles.

Stripey nanoparticles. It just screams of being an artefact!

In 2004 a group published results based on a few scanning tunnelling microscope (STM) images, images that gave the impression that their nanoparticles had become "stripey". The photo alone would be fairly convincing evidence, were it not for the fact that these lines are a well-known artefact - an erroneous result that appears because of the way you measured it - that crops up in STM imaging. The "stripey" nanoparticles have been roundly debunked by numerous scientists, but the original author stubbornly held onto the idea, and has since published even more papers to explain stripes that don't exist! It could be that he still believes he's right and all the naysayers are wrong, but I think it more likely that he knows his position at MIT, and his career in general, would be at risk if all those papers were retracted. It's a pretty sorry state of affairs, to be honest.

So, to sum up, support your local scientist in all his/her endeavours, but keep a close watch, citizen, for being a scientist doesn't automatically make you a good or honest person. It's good, honest people that make science work.


Thumbnail attribution: By Cryteria (Own work) [CC BY 3.0 (http://creativecommons.org/licenses/by/3.0)], via Wikimedia Commons. https://upload.wikimedia.org/wikipedia/commons/f/f6/HAL9000.svg.