Students enrolled in my course BIAN 2133/6133 (Human Reproductive Strategies) in Semester 2 of 2019 at The Australian National University wrote short topical articles about the evolution of human reproductive strategies and published them on a class Wiki site.
I’m very proud of them for doing this project. I hope you find the fruits of their labour useful and interesting!
Here’s a list of articles about the evolution of human mating:
If you’re looking for my profile on ResearchGate or Academia.edu, you won’t find it. That’s because I’ve deleted my accounts.
There are two reasons for my decision.
First, these sites are for-profit. They capitalise on the labour of academics. I find this especially abhorrent given that one of them hides behind a .edu web address. With what university are they affiliated? I believe that all scientific work should be freely available. I make my work available on my own website, where possible. I’d like to publish solely in open-access journals, but that can be expensive, especially when one has no grant to cover publishing charges. Whatever the case, I don’t want to continue supporting websites that make money off my work without giving something back.
Man holding a baby in a Karo village (picture taken by and copyright to Geoff Kushnick)
In one of the classes I teach at The Australian National University — “Evolution and Human Behaviour” (BIAN 3124) — I teach the students about the diverse evolutionary approaches that have been used to study human behaviour.
In my section on evolutionary psychology, I use a little experiment with the students to drive home some of the key concepts. I present them with a handful of questions about infidelity and we invariably find that the males in the class report much more concern about sexual infidelity than the females. The opposite is found for emotional infidelity. We thus replicate the finding of a sex difference touted by evolutionary psychology as one of its great successes (Buss 2018; Sagarin et al. 2012).
You see, the fitness payoff to males for investing in offspring is always tempered by the probability that they are being “tricked” into investing in an offspring that is not their own. Women don’t have to worry about this. One way for men to win this game is to adopt behaviours that serve to maximize the probability they have sired the offspring for which they are caring.
Jealousy is viewed as a strategy that can provide this result. Since this has been a recurring theme throughout human evolutionary history, evolutionary psychologists argue that you should see more intense jealousy evoked by sexual infidelity in men. In women, emotional jealousy is predicted to be more common. These patterns should hold up regardless of cultural background. It should be a universal.
On the face of it, the hypothesis has fared well when tested in a variety of settings, including cross-cultural ones.
As a human behavioural ecologist, I’m relatively uncomfortable with the concept of universality — and refer you to my paper “Why do the Karo Batak Prefer Women with Big Feet?” for more discussion (Kushnick 2013). People adapt to their environments via culture and biology, and since humans live in such a diverse range of socioecological settings I would expect human behaviour to be more complex than that. For most things, I expect humans to adopt flexible, conditional strategies.
As pioneers of the field, Winterhalder and Smith (2000) put it:
[Human behavioural ecology] usually frames the study of adaptive design in terms of decision rules, or conditional strategies: “In the context of X, do α; in context Y, switch to β.”
Why shouldn’t this logic apply to sexual jealousy as well? The benefits of acting jealously to stave off the chance of becoming a cuckold should vary with a number of socioenvironmental factors, not the least of which is the degree to which a man will provide investment (time, resources, etc) to his putative offspring. This is exactly the logic which led my colleague Brooke Scelza (an anthropologist at UCLA, who — like me — did her PhD under the supervision of the aforementioned Eric Alden Smith) to design a project whereby a number of us would conduct the experiment in different small-scale societies. Brooke takes you behind the scenes here.
I conducted the experiment in rural North Sumatra, Indonesia, amongst a group of people referred to as the Karo, with whom I have been working for over 10 years. The Karo are one of the so-called “Batak” groups, who share some similarities, like a stated preference for marriage with their matrilateral cross cousins, an aspect of their society that I have studied (Kushnick et al. 2016).
The results of this research, conducted with over 1000 subjects in 11 societies, have recently been published in Nature Human Behaviour (Scelza et al. 2019). The article sits, unfortunately, behind a paywall, but the publisher has made a readable, though not downloadable, version of the PDF available for free.
Map of field sites in Scelza et al (2019)
We found that jealous response, indeed, varied with the degree to which males were involved in parental care in those societies, as well as with the strictness of sociosexual norms, which are correlated with the male’s familial role in those societies.
We consider this a big win for human behavioural ecology, but a small step in the understanding of human nature and behaviour.
And Brooke was chuffed to see this comment from David Buss on Twitter:
This is such a cool paper. A great demonstration of how evolutionary hypotheses can predict cultural variation as well as universal sex differences…in this case in the powerful emotion of jealousy. https://t.co/fS3eMlhpqC
Buss DM (2018) Sexual and emotional infidelity: Evolved gender differences in jealousy prove robust and replicable. Perspectives on Psychological Science, 13, 155-160.
Kushnick G (2013) Why do the Karo Batak prefer women with big feet? Flexible mate preferences and the notion that one size fits all. Human Nature, 24, 268-279.
Kushnick G, Fessler DMT, Zuska F (2016) Disgust, gender, and social change: Testing alternative explanations for the decline of cousin marriage in Karo society. Human Nature, 27, 533-555.
Sagarin BJ, et al. (2012) Sex differences in jealousy: A meta-analytic examination. Evolution and Human Behavior, 33, 595-614.
Scelza BA, Prall S, Blumenfeld T, Crittenden A, Gurven M, Kline M, Koster J, Kushnick G, Mattison SM, Pillsworth E, Shenk M, Starkweather K, Stieglitz J, Sum C-Y, Yamaguchi K, McElreath R (2019) Patterns of paternal investment explain cross-cultural variance in jealous response. Nature Human Behaviour.
Eugenics — the science of improving the race — was a powerful influence on the development of Western civilisation in the first half of the twentieth century. And Melbourne’s elite were among its chief proponents.
In this period all the institutions and practices of modern societies came into being and eugenics played an important role in moulding them.
As the home of the Australian federal government in the early decades of the twentieth century, Melbourne was the ideal place for activists wishing to pursue a national eugenic agenda.
The role of the University of Melbourne
An important leader of this loose alignment of like-thinking middle class academics and doctors was the Professor of Anatomy at Melbourne University from 1903 to 1929, Richard Berry. His influence extended beyond the university, which still has a building bearing his name, to some of the most important members of the city’s society.
Although there was a short-lived Eugenics Education Society, until the founding of the Eugenics Society of Victoria in 1936 eugenicists operated primarily as a pressure group within the university, the education department and various government agencies and committees.
The bill aimed to institutionalise and potentially sterilise a significant proportion of the population – those seen as inefficient. Included in the group were slum dwellers, homosexuals, prostitutes, alcoholics, as well as those with small heads and with low IQs. The Aboriginal population was also seen to fall within this group.
The first two attempts to enact the bills failed not due to any significant opposition but rather because of the unstable political climate and the fall of governments.
The third in 1939 was passed unanimously, but not enacted in the first instance because of the outbreak of war and, later, due to the embarrassment of the Holocaust.
Other state parliaments were inspired to also institute such legislation by Berry’s many town hall lectures across the nation.
Important national Royal Commissions in the 1920s also recommended a range of eugenic reforms including measures relating to child endowment, marriage laws and pensions.
It was carried out by Berry’s colleague, the Chief Inspector for the Insane in Victoria, William Ernest Jones. In it, he claimed that the statistics collected showed the incidence of mental deficiency was rising, mainly due to genetics, and was more often found in the working class. He concluded that it required urgent government action along the lines previously championed by Berry. It was tabled before parliament and created a sensation in the press.
Little happened, however, as the government fell and the Great Depression hit the nation. The Director of the Department of Health, John Cumpston, claimed that the dire financial situation destroyed any chance of such a reform.
Eugenics in education
Another important influence of eugenic thinking was found in the development of post-primary education in Victoria.
The most important educationalists involved in the radical development of secondary and technical schools in Victoria were either active in eugenic circles or closely associated with Berry.
Perhaps the most influential, the first Director of Education, Frank Tate, served alongside Berry on most important government bodies, strongly supported his research on head size and, on occasion, introduced his public lectures.
Others, such as the first Director of the Carnegie-funded Australian Council for Educational Research, Kenneth Cunningham, as well as one of the most significant early psychologists, Chris McRae, published research claiming to show that working-class children were unfit for academic secondary education and the university study to which it led.
McRae replicated in Melbourne suburbs research carried out in a variety of different socio-economic suburbs of London. He subsequently reported in the Victorian Education Gazette (sent out to every state school primary teacher) that those in schools in poorer suburbs “will never go to university and should not follow the same curriculum … people live in slums because they are mentally deficient and not vice-versa”.
As a consequence, in this period the Victorian Education Department set up technical schools in the poorer suburbs of Melbourne with just a few academic high schools.
In comparison, in New South Wales the Director of Education, Peter Board, vigorously opposed such thinking and championed higher education opportunity for all. Many more state school children in New South Wales were given an academic secondary education and went on to university.
The society’s membership read like a who’s who of Melbourne’s elite, including the Chief Executive Officer of the Council for Scientific and Industrial Research (the precursor to the CSIRO), the Vice-Chancellor of the University of Melbourne, the President of the Royal College of Physicians and the Chief Justice of the Supreme Court of Victoria.
Although the aims of the society included supporting the sterilisation of mental defectives, it was increasingly involved in environmental reforms (such as slum clearance) and the birth control movement.
In Britain Richard Berry continued to preach his uncompromising theory of “rotten heredity”. In 1934 he would argue that to eliminate mental deficiency would require the sterilisation of twenty-five per cent of the population. At the same time he also advocated the “kindly euthanasia” of the unfit.
Parenting and technology interact in many ways. As a sampling of these interactions, take the illustrations in Figure 1 of my soon-to-be-published chapter titled “The Cradle of Humankind: Evolutionary Approaches to Technology and Parenting” in the upcoming book The Oxford Handbook of Evolutionary Psychology and Parenting. The thesis of the chapter is that evolutionary theory has helped — or, in some cases, can help — us to understand the relationship between parenting and technology.
Figure 1. Technology has always been an important driver of offspring-directed parental behaviour and beliefs, and vice versa. Pictured here are some examples: (A) terracotta infant-feeding vessel from southern Italy (4th Century BC); (B) advertisement for a baby cage used to get fresh air for children in crowded cities (early 20th century); (C) woman getting a fetal ultrasound in a rural clinic in Brazil; (D) ingenious contraption used by a Karo mother in rural Indonesia to keep a baby safe and occupied with a chair, umbrella, and mobile phone while she tends to a tomato garden; (E) woman with her baby slung on her back using a water pump in Ghana; and, (F) Sami woman carrying a child in a Komse, or child carrier, in Lapland, Sweden (ca. 1880). Copyright information for images: (A) photograph taken by author from ANU Classics Museum, Canberra; (B) public domain image taken from Fischer (1905); (C) Agencia de Noticias do Acre, Creative Commons CC BY 2.0; (D) photograph taken by author; (E) USAID, public domain; and, (F) public domain image from 1880.
Now, I had to start the chapter by sorting out that parenting can mean both childbearing (producing offspring) and childrearing (raising offspring). Further, I wanted my account to include the full range of the most frequently adopted evolutionary approaches to understanding human behaviour: evolutionary psychology, human behavioural ecology, and dual inheritance theory. With this framing, I chose six examples as illustrated with Table 1 from my chapter:
Table 1. Evolutionary approaches to the study of human behaviour: the ‘three styles’ framework (adapted from Smith 2000: 34).
If that sounds interesting to you, please check out the preprint of my chapter, which I am allowed to provide under Oxford University Press’s Author Re-Use and Self-Archiving policy. The book is slated for publication in the 4th quarter of 2018. I will update this post with a link once that occurs.
Kushnick G (in press) The cradle of humankind: Evolutionary approaches to technology and parenting. In: Weekes-Shackelford VA, Shackelford TK (eds.), The Oxford Handbook of Evolutionary Psychology and Parenting. Oxford University Press. (To appear in 2019).
Teaching is an important skill to develop when you are still a PhD student, as many academic job opportunities require some degree of teaching experience.
But how can one get teaching experience when opportunities are scarce? Unfortunately, I don’t have a definitive answer. I was lucky to get lots of teaching experience as a PhD student. What I can offer is some advice for PhD students regarding teaching and being a teaching assistant (or tutor, in Australian academia).
The following advice is drawn from a handout I developed to pass out to PhD students attending my lecture on building a teaching portfolio that I gave at the University of Washington during my time as a Lecturer there between 2007 and 2014. It was part of the necessary training that TAs had to undergo before starting:
TAing (Tutoring) and Teaching are Different but the Same: With TAing (or tutoring), you are following a curriculum set by the instructor; with teaching, you are following your own. This doesn’t mean, as a TA, you’re required to suppress your individuality. It just means that you need to teach the material the instructor deems important. This is actually a good thing: I’ve found that the best TAs are creative (within an established set of boundaries). Hey, and if you can perfect that skill, you will be a very attractive hire both within academia and beyond it! With both, you are one of the key elements of the learning experience for the students in your class. When teaching or TAing, I believe it’s equally important to take pride in your performance (e.g., by being familiar with the material, preparing ahead of time, acting professionally, etc.) and to develop a sincere interest in whether the students learn (e.g., by listening sometimes instead of talking all the time, making yourself available, etc.). Of course, this is a balancing act since teaching is only one part of your academic career, and your academic career is only one part of your life.
TAing (Tutoring) is Good, but Teaching is Better: At the bare minimum, you’ll have to TA at least once to get through the PhD program (in most programs, at least). Chances are you’ll do it more than once. If you get a chance to teach a class—meaning act as the actual instructor of a class—take it! Some of the benefits include: (a) a taste of greater responsibility (let’s face it, as a TA, we have a lot of responsibility, but the ultimate responsibility—if something goes wrong, for instance—is in the hands of the instructor); (b) a chance to infuse the lesson plan with a larger dose of your personality and interests; (c) having a complete class ready to go when you’re called upon to do it professionally—which potentially frees a lot of time to focus on other important things (e.g., it could mean you’ll have more time to write and less time preparing for class when you get an academic job); and, (d) it looks great on your CV. I didn’t teach as a grad student (only TA’d), thinking it better to push toward a completed dissertation without distraction. That was short-sighted at best, and painfully unrealistic and wrong at worst.
Teaching = Learning: I once ran into a student in the library, and was dismayed by what I heard: “Dr Kushnick, what are you doing in the library? I didn’t know teachers use the library. What a bummer that you still have to read.” Teaching (or TAing) provides a great opportunity to learn, and I suggest you learn as much as you can. Use your preparation time as a chance to learn something new. Read about how to be an effective teacher. Be careful, though. Some of what is out there is specific to teaching at a particular school, and since every school has a different teaching (and TAing) culture and social structure, some of it won’t be all that valuable. The useful stuff, in my opinion, is the stuff that’s both general and specific—general enough to apply to teaching anywhere, but specific in that it provides advice about particular things you might do to improve your effectiveness (the list from Webb’s article in the sidebar is a good example). The stuff that applies to teaching at a particular institution is useful too (if you’re teaching at that institution).
Manage Your Portfolio: I’ve applied for more academic jobs than I’m willing to admit, and I count myself as lucky to have had this Lectureship for the past few years. I believe I’m qualified to make generalizations about what you’ll find in the job ads; less qualified to make statements about how to get the jobs being advertised. More than half of the available jobs I’ve seen ask for a detailed statement of your teaching interests and experience, and documentation of your “commitment to teaching excellence.” Usually, you’re asked to include this information as part of the application letter. Sometimes, you’re asked to send a separate teaching statement or mini-portfolio. Other times, especially when applying to large research institutions, you’re asked for less information about your teaching. Whether the job you’ll be applying for asks for a lot or a little information, you’re better off if you’ve been keeping a portfolio of teaching materials that you can pull from to build your application. This might include: syllabi from courses you’ve taught or TA’d; evaluations from students or instructors; video or audio recordings of you in the classroom; notes regarding your teaching philosophy, if not a draft of an actual statement; and so on.
Twelve Easy Steps to Becoming an Effective Teaching Assistant
By Derek Webb (2005), Political Science and Politics, 38, 757-761.
Derek Webb was a PhD candidate at Notre Dame University when he published this, and the winner of its 2003 Outstanding Graduate Student Teaching Assistant award. Here are some of his steps to being a great teaching assistant, for those who find the opportunity:
Learn your students’ names ASAP.
The three goals of a discussion section: nuts and bolts, challenge and excitement, fun and games.
Provide a handout or agenda.
Provide a mini‐lecture.
Provide an opportunity for students to ask questions.
As one of the Associate Editors, I invite all Biological Anthropology students in the School of Archaeology and Anthropology at The Australian National University to submit an essay for consideration. Accepted essays will be published in the 2nd Volume of:
“The Human Voyage: Undergraduate Research in Biological Anthropology”
published by ANU e-Press. See the call for papers below.
A visual abstract of my 2010 publication in Human Nature, where I found that landholding status shaped reproductive strategies, both in terms of reproductive rates and in terms of the quantity and quality of care offspring received after they were born:
Kushnick G (2010) Resource competition and reproduction in Karo Batak villages. Human Nature, 21, 62-81.
A false positive is a claim that an effect exists when in actuality it doesn’t. No one knows what proportion of published papers contain such incorrect or overstated results, but there are signs that the proportion is not small.
The epidemiologist John Ioannidis gave the best explanation for this phenomenon in a famous paper in 2005, provocatively titled “Why most published research findings are false”. One of the reasons Ioannidis gave for so many false results has come to be called “p hacking”, which arises from the pressure researchers feel to achieve statistical significance.
What is statistical significance?
To draw conclusions from data, researchers usually rely on significance testing. In simple terms, this means calculating the “p value”, which is the probability of obtaining results at least as extreme as ours if there really is no effect. If the p value is sufficiently small, the result is declared to be statistically significant.
Traditionally, a p value of less than .05 is the criterion for significance. If you report a p<.05, readers are likely to believe you have found a real effect. Perhaps, however, there is actually no effect and you have reported a false positive.
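To make this concrete, here is a minimal sketch of a p value computed by simulation (a permutation test). The anxiety-score data are invented purely for illustration:

```python
import random
import statistics

random.seed(1)

# Hypothetical anxiety scores for a treated and a control group
# (numbers invented for illustration).
treated = [12, 9, 11, 8, 10, 9, 7, 11]
control = [13, 12, 14, 10, 12, 13, 11, 12]

observed = statistics.mean(control) - statistics.mean(treated)

# Permutation test: if there really is no effect, the group labels
# are arbitrary, so shuffle them repeatedly and count how often a
# difference at least as extreme as the observed one arises by chance.
pooled = treated + control
n_extreme = 0
n_trials = 10_000
for _ in range(n_trials):
    random.shuffle(pooled)
    fake_diff = statistics.mean(pooled[8:]) - statistics.mean(pooled[:8])
    if abs(fake_diff) >= abs(observed):
        n_extreme += 1

p_value = n_extreme / n_trials
print(f"observed difference: {observed}, p = {p_value:.4f}")
```

The p value is just the fraction of label-shuffled “null worlds” that produce a difference as big as the one observed.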
Many journals will only publish studies that can report one or more statistically significant effects. Graduate students quickly learn that achieving the mythical p<.05 is the key to progress, obtaining a PhD and the ultimate goal of achieving publication in a good journal.
This pressure to achieve p<.05 leads to researchers cutting corners, knowingly or unknowingly, for example by p hacking.
The lure of p hacking
To illustrate p hacking, here is a hypothetical example.
Bruce has recently completed a PhD and has landed a prestigious grant to join one of the top research teams in his field. His first experiment doesn’t work out well, but Bruce quickly refines the procedures and runs a second study. This looks more promising, but still doesn’t give a p value of less than .05.
Convinced that he is onto something, Bruce gathers more data. He decides to drop a few of the results, which looked clearly way off.
He then notices that one of his measures gives a clearer picture, so he focuses on that. A few more tweaks and Bruce finally identifies a slightly surprising but really interesting effect that achieves p<.05. He carefully writes up his study and submits it to a good journal, which accepts his report for publication.
Bruce tried so hard to find the effect that he knew was lurking somewhere. He was also feeling the pressure to hit p<.05 so he could declare statistical significance, publish his finding and taste sweet success.
There is only one catch: there was actually no effect. Despite the statistically significant result, Bruce has published a false positive.
Bruce felt he was using his scientific insight to reveal the lurking effect as he took various steps after starting his study:
He collected further data.
He dropped some data that seemed aberrant.
He dropped some of his measures and focused on the most promising.
He analysed the data a little differently and made a few further tweaks.
The trouble is that all these choices were made after seeing the data. Bruce may, unconsciously, have been cherrypicking – selecting and tweaking until he obtained the elusive p<.05. Even when there is no effect, such selecting and tweaking might easily find something in the data for which p<.05.
Statisticians have a saying: if you torture the data enough, they will confess. Choices and tweaks made after seeing the data are questionable research practices. Using these, deliberately or not, to achieve the right statistical result is p hacking, which is one important reason that published, statistically significant results may be false positives.
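The inflation that Bruce’s choices cause can be demonstrated directly. This sketch simulates many studies in which there is no true effect at all, but where the researcher records five (here, independent) outcome measures and reports whichever one “worked”. It uses a crude normal approximation in place of a proper t-test, which is fine for illustration:

```python
import math
import random

random.seed(2)

def two_sample_p(xs, ys):
    """Two-sided p value from a normal approximation to the
    two-sample test (crude, but adequate for this illustration)."""
    n, m = len(xs), len(ys)
    mean_x, mean_y = sum(xs) / n, sum(ys) / m
    var_x = sum((v - mean_x) ** 2 for v in xs) / (n - 1)
    var_y = sum((v - mean_y) ** 2 for v in ys) / (m - 1)
    z = (mean_x - mean_y) / math.sqrt(var_x / n + var_y / m)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail area

n_studies = 1000
measures_per_study = 5  # the researcher records five outcome measures
false_positives = 0
for _ in range(n_studies):
    # No true effect: both groups are drawn from the same distribution.
    ps = []
    for _ in range(measures_per_study):
        a = [random.gauss(0, 1) for _ in range(30)]
        b = [random.gauss(0, 1) for _ in range(30)]
        ps.append(two_sample_p(a, b))
    if min(ps) < 0.05:  # report whichever measure "worked"
        false_positives += 1

rate = false_positives / n_studies
print(f"false-positive rate with cherrypicking: {rate:.0%}")
```

Even though every simulated study is pure noise, far more than 5% of them yield p&lt;.05 on at least one measure (with five independent measures, roughly 1 − 0.95⁵ ≈ 23%).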
What proportion of published results are wrong?
This is a good question, and a fiendishly tricky one. No one knows the answer, which is likely to be different in different research fields.
A large and impressive effort to answer the question for social and cognitive psychology was published in 2015. Led by Brian Nosek and his colleagues at the Center for Open Science, the Reproducibility Project: Psychology (RP:P) had 100 research groups around the world each carry out a careful replication of one of 100 published results. Overall, roughly 40 replicated fairly well, whereas in around 60 cases the replication studies obtained smaller or much smaller effects.
The 100 RP:P replication studies reported effects that were, on average, just half the size of the effects reported by the original studies. The carefully conducted replications are probably giving more accurate estimates than the possibly p hacked original studies, so we could conclude that the original studies overestimated true effects by, on average, a factor of two. That’s alarming!
How to avoid p hacking
The best way to avoid p hacking is to avoid making any selection or tweaks after seeing the data. In other words, avoid questionable research practices. In most cases, the best way to do this is to use preregistration.
Preregistration requires that you prepare in advance a detailed research plan, including the statistical analysis to be applied to the data. Then you preregister the plan, with date stamp, at the Open Science Framework or some other online registry.
Then carry out the study, analyse the data in accordance with the plan, and report the results, whatever they are. Readers can check the preregistered plan and thus be confident that the analysis was specified in advance, and not p hacked. Preregistration is a challenging new idea for many researchers, but likely to be the way of the future.
Estimation rather than p values
The temptation to p hack is one of the big disadvantages of relying on p values. Another is that the p<.05 criterion encourages black-and-white thinking: an effect is either statistically significant or it isn’t, which sounds rather like saying an effect exists or it doesn’t.
But the world is not black and white. To recognise the numerous shades of grey it’s much better to use estimation rather than p values. The aim with estimation is to estimate the size of an effect – which may be small or large, zero, or even negative. In terms of estimation, a false positive result is an estimate that’s larger or much larger than the true value of an effect.
Let’s take a hypothetical study on the impact of therapy. The study might, for example, estimate that therapy gives, on average, a 7-point decrease in anxiety. Suppose we calculate from our data a confidence interval – a range of uncertainty either side of our best estimate – of [4, 10]. This tells us that our estimate of 7 is, most likely, within about 3 points on the anxiety scale of the true effect – the true average amount of benefit of the therapy.
In other words, the confidence interval indicates how precise our estimate is. Knowing such an estimate and its confidence interval is much more informative than any p value.
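As a sketch of how such an interval is computed, here is the calculation with invented data for 30 hypothetical therapy clients, using the normal critical value 1.96 as an approximation:

```python
import math
import statistics

# Invented anxiety-decrease scores (points) for 30 hypothetical
# therapy clients.
decreases = [7.2, 5.1, 9.4, 6.8, 4.3, 8.1, 7.7, 10.2, 3.9, 6.5,
             8.8, 5.6, 7.1, 9.0, 6.2, 4.8, 7.9, 8.4, 5.3, 6.9,
             7.5, 9.8, 4.1, 6.0, 8.2, 7.3, 5.9, 6.6, 9.1, 7.0]

mean = statistics.mean(decreases)
se = statistics.stdev(decreases) / math.sqrt(len(decreases))

# 95% CI using the normal critical value 1.96; for n = 30 the exact
# t critical value (about 2.045) would give a slightly wider interval.
low, high = mean - 1.96 * se, mean + 1.96 * se
print(f"estimate: {mean:.1f} points, 95% CI [{low:.1f}, {high:.1f}]")
```

The interval is the estimate plus or minus about two standard errors, so larger samples or less variable data give narrower, more precise intervals.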
I refer to estimation as one of the “new statistics”. The techniques themselves are not new, but using them as the main way to draw conclusions from data would for many researchers be new, and a big step forward. It would also help avoid the distortions caused by p hacking.
Statistics is a useful tool for understanding the patterns in the world around us. But our intuition often lets us down when it comes to interpreting those patterns. In this series we look at some of the common mistakes we make and how to avoid them when thinking about statistics, probability and risk.
1. Assuming small differences are meaningful
Many of the daily fluctuations in the stock market represent chance rather than anything meaningful. Differences in polls when one party is ahead by a point or two are often just statistical noise.
You can avoid drawing faulty conclusions about the causes of such fluctuations by demanding to see the “margin of error” relating to the numbers.
If the difference is smaller than the margin of error, there is likely no meaningful difference, and the variation is probably just down to random fluctuations.
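For a poll proportion, the margin of error can be approximated as follows (the poll numbers here are invented):

```python
import math

# A hypothetical poll: 520 of 1,000 respondents favour party A.
n = 1000
p_hat = 520 / n

# Approximate 95% margin of error for a proportion:
# 1.96 * sqrt(p * (1 - p) / n).
moe = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"support: {p_hat:.0%} ± {moe:.1%}")
```

At n = 1,000 this gives roughly ±3 percentage points, so a lead of a point or two is well within the noise.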
2. Equating statistical significance with real-world significance
We often hear generalisations about how two groups differ in some way, such as that women are more nurturing while men are physically stronger.
These differences often draw on stereotypes and folk wisdom but often ignore the similarities in people between the two groups, and the variation in people within the groups.
If you pick two men at random, there is likely to be quite a lot of difference in their physical strength. And if you pick one man and one woman, they may end up being very similar in terms of nurturing, or the man may be more nurturing than the woman.
You can avoid this error by asking for the “effect size” of the differences between groups. This is a measure of how much the average of one group differs from the average of another.
If the effect size is small, then the two groups are very similar. Even if the effect size is large, the two groups will still likely have a great deal of variation within them, so not all members of one group will be different from all members of another group.
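A sketch of the point, with two simulated groups whose means differ by a conventionally “large” Cohen’s d of 0.8 (all numbers invented):

```python
import math
import random
import statistics

random.seed(3)

# Two hypothetical groups whose means differ by a "large" effect
# size (Cohen's d = 0.8), yet whose members overlap considerably.
group_a = [random.gauss(100, 15) for _ in range(500)]
group_b = [random.gauss(112, 15) for _ in range(500)]  # shifted 0.8 SD

pooled_sd = math.sqrt((statistics.variance(group_a)
                       + statistics.variance(group_b)) / 2)
d = (statistics.mean(group_b) - statistics.mean(group_a)) / pooled_sd
print(f"Cohen's d = {d:.2f}")

# Even with this large effect, many individual comparisons go the
# "wrong" way: pair one member of each group at random.
wrong_way = sum(a > b for a, b in zip(group_a, group_b)) / 500
print(f"A-member exceeds B-member in {wrong_way:.0%} of random pairs")
```

Even with a large effect size, a randomly chosen member of the lower-mean group beats a randomly chosen member of the higher-mean group a substantial fraction of the time.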
3. Neglecting to look at extremes
The flipside of effect size is relevant when the thing that you’re focusing on follows a “normal distribution” (sometimes called a “bell curve”). This is where most people are near the average score and only a tiny group is well above or well below average.
When that happens, a small change in performance for the group produces a difference that means nothing for the average person (see point 2) but that changes the character of the extremes more radically.
Avoid this error by reflecting on whether you’re dealing with extremes or not. When you’re dealing with average people, small group differences often don’t matter. When you care a lot about the extremes, small group differences can matter heaps.
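A quick illustration with a normal distribution: a shift in the group mean too small to matter for a typical individual can substantially change representation in the extreme tail. The 0.2 SD shift here is an arbitrary choice for illustration:

```python
import math

def tail_above(threshold, mean, sd=1.0):
    """Proportion of a normal distribution lying above a threshold."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# A 0.2 SD shift in the group average: small enough to be almost
# invisible for a typical individual (see point 2), ...
shift = 0.2

# ... yet it nearly doubles representation beyond 3 SD.
baseline = tail_above(3.0, 0.0)
shifted = tail_above(3.0, shift)
print(f"above 3 SD: {baseline:.4%} vs {shifted:.4%} "
      f"(ratio {shifted / baseline:.1f}x)")
```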
4. Trusting coincidence
Did you know there’s a correlation between the number of people who drowned each year in the United States by falling into a swimming pool and the number of films Nicolas Cage appeared in?
Is there a causal link? (tylervigen.com)
If you look hard enough you can find interesting patterns and correlations that are merely due to coincidence.
Just because two things happen to change at the same time, or in similar patterns, does not mean they are related.
Avoid this error by asking how reliable the observed association is. Is it a one-off, or has it happened multiple times? Can future associations be predicted? If you have seen it only once, then it is likely to be due to random chance.
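This is easy to demonstrate by brute force: generate enough meaningless series and some will correlate strongly with your target by chance alone. Every number below is pure random noise:

```python
import math
import random
import statistics

random.seed(4)

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
    return num / den

# Ten "annual" values of a target series: pure random noise.
target = [random.gauss(0, 1) for _ in range(10)]

# Search 1,000 equally meaningless random series for the best match.
best = max(
    (corr(target, [random.gauss(0, 1) for _ in range(10)])
     for _ in range(1000)),
    key=abs,
)
print(f"best |correlation| found purely by searching: {abs(best):.2f}")
```

With short series and enough candidates, the “best” match is almost always strikingly strong, despite meaning nothing at all.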
5. Getting causation backwards
When two things are correlated – say, unemployment and mental health issues – it might be tempting to see an “obvious” causal path – say that mental health problems lead to unemployment.
But sometimes the causal path goes in the other direction, such as unemployment causing mental health issues.
You can avoid this error by remembering to think about reverse causality when you see an association. Could the influence go in the other direction? Or could it go both ways, creating a feedback loop?
6. Forgetting to consider outside causes
People often fail to evaluate possible “third factors”, or outside causes, that may create an association between two things because both are actually outcomes of the third factor.
For example, there might be an association between eating at restaurants and better cardiovascular health. That might lead you to believe there is a causal connection between the two.
However, it might turn out that those who can afford to eat at restaurants regularly are in a high socioeconomic bracket, and can also afford better health care, and it’s the health care that affords better cardiovascular health.
You can avoid this error by remembering to think about third factors when you see a correlation. If you’re following up on one thing as a possible cause, ask yourself what, in turn, causes that thing? Could that third factor cause both observed outcomes?
7. Deceptive graphs
A lot of mischief occurs in the scaling and labelling of the vertical axis on graphs. The labels should show the full meaningful range of whatever you’re looking at.
But sometimes the graph maker chooses a narrower range to make a small difference or association look more impactful. On a scale from 0 to 100, two columns might look the same height. But if you graph the same data only showing from 52.5 to 56.5, they might look drastically different.
You can avoid this error by taking care to note the graph’s labels along the axes. Be especially sceptical of unlabelled graphs.
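The effect of axis choice is easy to see even with crude text “bars” (the poll numbers are invented):

```python
# Two hypothetical poll results that differ by two points.
values = {"Party A": 53.5, "Party B": 55.5}

def bars(values, lo, hi, width=40):
    """Print crude text bars with a chosen vertical-axis range [lo, hi]."""
    for label, v in values.items():
        filled = round((v - lo) / (hi - lo) * width)
        print(f"{label}: {'#' * filled} {v}")

print("Axis from 0 to 100 (honest):")
bars(values, 0, 100)
print()
print("Axis from 52.5 to 56.5 (same data, more 'dramatic'):")
bars(values, 52.5, 56.5)
```

With the full 0 to 100 range the bars are nearly identical (21 vs 22 characters); with the truncated range one bar is three times the length of the other, from exactly the same data.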
Evolutionary anthropologist with expertise in human behavioural ecology. Research interests include human reproductive strategies, the evolution of social norms and institutions, statistical and mathematical modelling and analysis, and the peoples of SE Asia and the Pacific.