
Eugenics in Australia: The secret of Melbourne’s elite

Written by:
Ross L Jones
Post-Doctoral Research Fellow in History
University of Sydney


Francis Galton pioneered the concept of eugenics in this lab in London in the late 19th century. (Flickr/Science Museum London)

Eugenics — the science of improving the race — was a powerful influence on the development of Western civilisation in the first half of the twentieth century. And Melbourne’s elite were among its chief proponents.

In this period all the institutions and practices of modern societies came into being and eugenics played an important role in moulding them.

As the home of the Australian federal government in the early decades of the twentieth century, Melbourne was the ideal place for activists wishing to pursue a national eugenic agenda.

The role of the University of Melbourne

An important leader of this loose alignment of like-thinking middle class academics and doctors was the Professor of Anatomy at Melbourne University from 1903 to 1929, Richard Berry. His influence extended beyond the university, which still has a building bearing his name, to some of the most important members of the city’s society.

Although there was a short-lived Eugenics Education Society, until the founding of the Eugenics Society of Victoria in 1936, eugenicists operated primarily as a pressure group within the university, the education department and various government agencies and committees.

Legalising eugenics

Important legislation, in the form of three Mental Deficiency Bills, was presented to the parliament in 1926, 1929 and 1939 by the Premier Stanley Argyle, a friend and colleague of Berry.

The bills aimed to institutionalise and potentially sterilise a significant proportion of the population – those seen as inefficient. The group included slum dwellers, homosexuals, prostitutes and alcoholics, as well as those with small heads or low IQs. The Aboriginal population was also seen to fall within this group.

The first two attempts to enact the bills failed not due to any significant opposition but rather because of the unstable political climate and the fall of governments.

The third, in 1939, was passed unanimously, but it was not enacted: at first because of the outbreak of war and, later, because of the embarrassment of the Holocaust.

Berry’s many town hall lectures across the nation inspired other state parliaments to institute similar legislation.

Important national Royal Commissions in the 1920s also recommended a range of eugenic reforms including measures relating to child endowment, marriage laws and pensions.

National survey

Perhaps the culmination of all this activity was the commissioning of a national survey of mental deficiency by the Federal Minister for Health, Sir Neville Howse, in 1928.

It was carried out by Berry’s colleague William Ernest Jones, the Chief Inspector for the Insane in Victoria. In it, Jones claimed that the statistics collected showed the incidence of mental deficiency was rising, that it was mainly due to genetics, and that it was more often found in the working class. He concluded that it required urgent government action along the lines previously championed by Berry. The survey was tabled before parliament and created a sensation in the press.

Little happened, however, as the government fell and the Great Depression hit the nation. The Director of the Department of Health, John Cumpston, claimed that the dire financial situation destroyed any chance of such a reform.

Eugenics in education

Another important influence of eugenic thinking was found in the development of post-primary education in Victoria.

The most important educationalists involved in the radical development of secondary and technical schools in Victoria were either active in eugenic circles or closely associated with Berry.

Perhaps the most influential, the first Director of Education, Frank Tate, served alongside Berry on most of the important government bodies, strongly supported his research on head size and, on occasion, introduced his public lectures.

Others, such as the first Director of the Carnegie-funded Australian Council for Educational Research, Kenneth Cunningham, and one of the most significant early psychologists, Chris McRae, published research claiming to show that working-class children were unfit for academic secondary education and the university study it led to.

McRae replicated, in Melbourne suburbs, research originally carried out across London suburbs of varying socio-economic status. He subsequently reported in the Victorian Education Gazette (sent out to every state school primary teacher) that those in schools in poorer suburbs “will never go to university and should not follow the same curriculum … people live in slums because they are mentally deficient and not vice-versa”.

As a consequence, in this period the Victorian Education Department set up technical schools in the poorer suburbs of Melbourne with just a few academic high schools.

In comparison, in New South Wales the Director of Education, Peter Board, vigorously opposed such thinking and championed higher education opportunity for all. Many more state school children in New South Wales were given an academic secondary education and went on to university.

The spread of the movement

Richard Berry returned to England in 1929, but others took up the mantle, founding the Eugenics Society of Victoria.

Its membership read like a who’s who of Melbourne’s elite, including the Chief Executive Officer of the Council for Scientific and Industrial Research (the precursor to the CSIRO), the Vice-Chancellor of the University of Melbourne, the President of the Royal College of Physicians and the Chief Justice of the Supreme Court of Victoria.

Although the aims of the society included supporting the sterilisation of mental defectives, it became increasingly involved in environmental reforms (such as slum clearance) and the birth control movement.

Berry’s legacy

In Britain Richard Berry continued to preach his uncompromising theory of “rotten heredity”. In 1934 he would argue that to eliminate mental deficiency would require the sterilisation of twenty-five per cent of the population. At the same time he also advocated the “kindly euthanasia” of the unfit.

But his legacy in Australia continued, with the Eugenics Society of Victoria operating until 1961.

Although Melbourne may wish to forget its dark past, the powerful leaders of the eugenics movement once controlled the city, and their beliefs influenced a generation.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Can Evolutionary Theory Help Us Understand the Relationship Between Parenting and Technology?

Parenting and technology interact in many ways. As a sampling of these interactions, take the illustrations in Figure 1 of my soon-to-be-published chapter titled “The Cradle of Humankind: Evolutionary Approaches to Technology and Parenting” in the upcoming book The Oxford Handbook of Evolutionary Psychology and Parenting. The thesis of the chapter is that evolutionary theory has helped — or, in some cases, can help — us to understand the relationship between parenting and technology.


Figure 1. Technology has always been an important driver of offspring-directed parental behaviour and beliefs, and vice versa. Pictured here are some examples: (A) terracotta infant-feeding vessel from southern Italy (4th century BC); (B) advertisement for a baby cage used to get fresh air for children in crowded cities (early 20th century); (C) woman getting a fetal ultrasound in a rural clinic in Brazil; (D) ingenious contraption used by a Karo mother in rural Indonesia to keep a baby safe and occupied with a chair, umbrella and mobile phone while she tends to a tomato garden; (E) woman with her baby slung on her back using a water pump in Ghana; and (F) Sami woman carrying a child in a Komse, or child carrier, in Lapland, Sweden (ca. 1880). Copyright information for images: (A) photograph taken by the author at the ANU Classics Museum, Canberra; (B) public domain image taken from Fischer (1905); (C) Agencia de Noticias do Acre, Creative Commons CC BY 2.0; (D) photograph taken by the author; (E) USAID, public domain; and (F) public domain image from 1880.

Now, I had to start the chapter by sorting out that parenting can mean both childbearing (producing offspring) and childrearing (raising offspring). Further, I wanted my account to include the full range of the most frequently adopted evolutionary approaches to understanding human behaviour: evolutionary psychology, human behavioural ecology, and dual inheritance theory. With this framing, I chose six examples as illustrated with Table 1 from my chapter:


Table 1.  Evolutionary approaches to the study of human behaviour: the ‘three styles’ framework (adapted from Smith 2000: 34).

If that sounds interesting to you, please check out the preprint of my chapter, which I am allowed to provide under Oxford University Press’s Author Re-Use and Self-Archiving policy. The book is slated for publication in the 4th quarter of 2018. I will update this post with a link once that occurs.

Kushnick G (in press) The cradle of humankind: Evolutionary approaches to technology and parenting. In: Weekes-Shackelford VA, Shackelford TK (eds.), The Oxford Handbook of Evolutionary Psychology and Parenting. Oxford University Press. (To appear in 2018). Click here to view preprint

 


Some Advice on Teaching for TAs (or Tutors, as We Call them in Australia)

Teaching is an important skill to develop while you are still a PhD student, as many academic job opportunities require some degree of teaching experience.

But how can one get teaching experience when opportunities are scarce? Unfortunately, I don’t have a definitive answer. I was lucky to get lots of teaching experience as a PhD student. What I can offer is some advice for PhD students regarding teaching and being a teaching assistant (or Tutor in Australian academia).

The following advice is drawn from a handout I developed for PhD students attending a lecture on building a teaching portfolio, which I gave at the University of Washington during my time as a Lecturer there between 2007 and 2014. The lecture was part of the training that TAs had to complete before starting:

  • TAing (Tutoring) and Teaching are Different but the Same: With TAing (or tutoring), you are following a curriculum set by the instructor; with teaching, you are following your own. This doesn’t mean, as a TA, you’re required to suppress your individuality. It just means that you need to teach the material the instructor deems important. This is actually a good thing: I’ve found that the best TAs are creative (within an established set of boundaries). Hey, and if you can perfect that skill, you will be a very attractive hire both within academia and beyond it! With both, you are one of the key elements of the learning experience for the students in your class. When teaching or TAing, I believe it’s equally important to take pride in your performance (e.g., by being familiar with the material, preparing ahead of time, acting professionally, etc.) and to develop a sincere interest in whether the students learn (e.g., by listening sometimes instead of talking all the time, making yourself available, etc.). Of course, this is a balancing act, since teaching is only one part of your academic career, and your academic career is only one part of your life.
  • TAing (Tutoring) is Good, but Teaching is Better: At the bare minimum, you’ll have to TA at least once to get through the PhD program (in most programs, at least). Chances are you’ll do it more than once. If you get a chance to teach a class—meaning act as the actual instructor of a class—take it! Some of the benefits include: (a) a taste of greater responsibility (let’s face it, as a TA, we have a lot of responsibility, but the ultimate responsibility—if something goes wrong, for instance—is in the hands of the instructor); (b) a chance to infuse the lesson plan with a larger dose of your personality and interests; (c) having a complete class ready to go when you’re called upon to do it professionally—which potentially frees up a lot of time to focus on other important things (e.g., it could mean you’ll have more time to write and less time preparing for class when you get an academic job); and (d) it looks great on your CV. I didn’t teach as a grad student (only TA’d). I thought it better to push toward a completed dissertation without distraction. Painfully unrealistic and wrong at worst. Short-sighted at best.
  • Teaching = Learning: I once ran into a student in the library, and was dismayed by what I heard: “Dr Kushnick, what are you doing in the library? I didn’t know teachers use the library. What a bummer that you still have to read.” Teaching (or TAing) provides a great opportunity to learn, and I suggest you learn as much as you can. Use your preparation time as a chance to learn something new. Read about how to be an effective teacher. Be careful, though. Some of what is out there is specific to teaching at a particular school, and since every school has a different teaching (and TAing) culture and social structure, some of it won’t be all that valuable. The useful stuff, in my opinion, is the stuff that’s simultaneously general and specific—general enough to apply to teaching anywhere, but specific in that it provides advice about particular things you might do to improve your effectiveness (the list from Webb’s article below is a good example). The stuff that applies to teaching at a particular institution is useful too (if you’re teaching at that institution).
  • Manage Your Portfolio: I’ve applied for more academic jobs than I’m willing to admit, and I count myself as lucky to have had this Lectureship for the past few years. I believe I’m qualified to make generalizations about what you’ll find in the job ads; less qualified to make statements about how to get the jobs being advertised. More than half of the available jobs I’ve seen ask for a detailed statement of your teaching interests and experience, and documentation of your “commitment to teaching excellence.” Usually, you’re asked to include this information as part of the application letter. Sometimes, you’re asked to send a separate teaching statement or mini-portfolio. Other times, especially when applying to large research institutions, you’re asked for less information about your teaching. Whether the job you’ll be applying for asks for a lot or a little information, you’re better off if you’ve been keeping a portfolio of teaching materials that you can pull from to build your application. This might include: syllabi from courses you’ve taught or TA’d; evaluations from students or instructors; video or audio recordings of you in the classroom; notes regarding your teaching philosophy, if not a draft of an actual statement; and so on.

Twelve Easy Steps to Becoming an Effective Teaching Assistant

By Derek Webb, from Political Science and Politics (2005), vol. 38, pp. 757-761.

Derek Webb was a PhD Candidate when he published this, and the winner of the 2003 Outstanding Graduate Student Teaching Assistant award at Notre Dame University. Here are his steps to being a great Teaching Assistant, for those who find the opportunity:

  1. Be yourself.
  2. Be available.
  3. Be organized.
  4. Learn your students’ names ASAP.
  5. The three goals of discussion section: Nuts and bolts, challenge and excitement, fun and games.
  6. Provide a handout or agenda.
  7. Provide a mini‐lecture.
  8. Provide an opportunity for students to ask questions.
  9. Work to stimulate discussion.


Attention ANU BIAN Students! Interested in Publishing an Essay?

As one of the Associate Editors, I invite all Biological Anthropology students in the School of Archaeology and Anthropology at The Australian National University to submit an essay for consideration. Accepted essays will be published in the 2nd Volume of:

“The Human Voyage: Undergraduate Research in Biological Anthropology”

published by ANU e-Press. See the call for papers below.

Email your submission to ug.BIANjournal.cass@anu.edu.au

(Call for papers: The Human Voyage, Volume 2)


Visual Abstract: Resource Competition and Reproduction among the Karo (Human Nature, 2010)

A visual abstract for my 2010 publication in Human Nature, in which I found that landholding status shaped reproductive strategies, in terms of both reproductive rates and the quantity and quality of care children received after they were born:

Kushnick, G. (2010) Resource competition and reproduction in Karo Batak villages. Human Nature 21: 62-81. PDF  Link



One Reason so Many Scientific Studies May be Wrong


Enter Statistics: if you torture the data enough, they will confess. (clemsonunivlibrary/Flickr, CC BY-NC)

Geoff Cumming, La Trobe University

There is a replicability crisis in science – unidentified “false positives” are pervading even our top research journals.

A false positive is a claim that an effect exists when in actuality it doesn’t. No one knows what proportion of published papers contain such incorrect or overstated results, but there are signs that the proportion is not small.

The epidemiologist John Ioannidis gave the best explanation for this phenomenon in a famous paper in 2005, provocatively titled “Why most published research findings are false”. One of the reasons Ioannidis gave for so many false results has come to be called “p hacking”, which arises from the pressure researchers feel to achieve statistical significance.

What is statistical significance?

To draw conclusions from data, researchers usually rely on significance testing. In simple terms, this means calculating the “p value”, which is the probability of obtaining results at least as extreme as ours if there really is no effect. If the p value is sufficiently small, the result is declared to be statistically significant.

Traditionally, a p value of less than .05 is the criterion for significance. If you report a p<.05, readers are likely to believe you have found a real effect. Perhaps, however, there is actually no effect and you have reported a false positive.
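
To see what that 5 per cent threshold means in practice, here is a minimal simulation sketch (Python with numpy and scipy; the sample sizes and number of studies are purely illustrative). Both groups are drawn from the same population, so every “significant” result is, by construction, a false positive.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n_per_group = 10_000, 30

false_positives = 0
for _ in range(n_studies):
    # Both groups come from the same population: the true effect is zero.
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"False-positive rate: {false_positives / n_studies:.3f}")  # close to 0.05
```

Roughly one in twenty of these no-effect experiments still crosses the p < .05 line.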

Many journals will only publish studies that can report one or more statistically significant effects. Graduate students quickly learn that achieving the mythical p<.05 is the key to progress, obtaining a PhD and the ultimate goal of achieving publication in a good journal.

This pressure to achieve p<.05 leads to researchers cutting corners, knowingly or unknowingly, for example by p hacking.

The lure of p hacking

To illustrate p hacking, here is a hypothetical example.

Bruce has recently completed a PhD and has landed a prestigious grant to join one of the top research teams in his field. His first experiment doesn’t work out well, but Bruce quickly refines the procedures and runs a second study. This looks more promising, but still doesn’t give a p value of less than .05.

Convinced that he is onto something, Bruce gathers more data. He decides to drop a few of the results, which looked clearly way off.

He then notices that one of his measures gives a clearer picture, so he focuses on that. A few more tweaks and Bruce finally identifies a slightly surprising but really interesting effect that achieves p<.05. He carefully writes up his study and submits it to a good journal, which accepts his report for publication.

Bruce tried so hard to find the effect that he knew was lurking somewhere. He was also feeling the pressure to hit p<.05 so he could declare statistical significance, publish his finding and taste sweet success.

There is only one catch: there was actually no effect. Despite the statistically significant result, Bruce has published a false positive.

Bruce felt he was using his scientific insight to reveal the lurking effect as he took various steps after starting his study:

  • He collected further data.
  • He dropped some data that seemed aberrant.
  • He dropped some of his measures and focused on the most promising.
  • He analysed the data a little differently and made a few further tweaks.

The trouble is that all these choices were made after seeing the data. Bruce may, unconsciously, have been cherrypicking – selecting and tweaking until he obtained the elusive p<.05. Even when there is no effect, such selecting and tweaking might easily find something in the data for which p<.05.

Statisticians have a saying: if you torture the data enough, they will confess. Choices and tweaks made after seeing the data are questionable research practices. Using these, deliberately or not, to achieve the right statistical result is p hacking, which is one important reason that published, statistically significant results may be false positives.
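
As a rough illustration of how much such choices can matter, here is a small simulation sketch (Python with numpy and scipy; the design of four outcome measures, with two rounds of adding participants after peeking, is hypothetical rather than taken from any real study). A “finding” is declared whenever any measure reaches p < .05 at any look, even though no true effect exists.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_studies, n_measures = 5_000, 4

def hacked_study():
    # Start with 20 participants per group and 4 outcome measures; no true effect.
    a = rng.normal(0, 1, (20, n_measures))
    b = rng.normal(0, 1, (20, n_measures))
    for _ in range(3):  # analyse, and if nothing is significant, add 10 more per group
        p_values = [stats.ttest_ind(a[:, j], b[:, j]).pvalue for j in range(n_measures)]
        if min(p_values) < 0.05:
            return True  # "success": some measure reached p < .05
        a = np.vstack([a, rng.normal(0, 1, (10, n_measures))])
        b = np.vstack([b, rng.normal(0, 1, (10, n_measures))])
    return False

rate = sum(hacked_study() for _ in range(n_studies)) / n_studies
print(f"Studies reporting a 'significant' effect despite none existing: {rate:.2f}")
```

With these particular choices the false-positive rate ends up several times higher than the nominal 5 per cent.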

What proportion of published results are wrong?

This is a good question, and a fiendishly tricky one. No one knows the answer, which is likely to be different in different research fields.

A large and impressive effort to answer the question for social and cognitive psychology was published in 2015. Led by Brian Nosek and his colleagues at the Center for Open Science, the Reproducibility Project: Psychology (RP:P) had 100 research groups around the world each carry out a careful replication of one of 100 published results. Overall, roughly 40 replicated fairly well, whereas in around 60 cases the replication studies obtained smaller or much smaller effects.

The 100 RP:P replication studies reported effects that were, on average, just half the size of the effects reported by the original studies. The carefully conducted replications are probably giving more accurate estimates than the possibly p hacked original studies, so we could conclude that the original studies overestimated true effects by, on average, a factor of two. That’s alarming!
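
One way to see why the original, possibly p hacked studies would overestimate effects is to simulate a publication filter: a small true effect studied many times with modest samples, with only the statistically significant results being “published”. The sketch below (Python with numpy and scipy; the effect size and sample size are invented for illustration) shows the published estimates running well above the truth.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
true_effect, n = 0.2, 40          # small true effect (in SD units), modest samples
all_estimates, published = [], []

for _ in range(20_000):
    treatment = rng.normal(true_effect, 1, n)
    control = rng.normal(0, 1, n)
    estimate = treatment.mean() - control.mean()
    _, p = stats.ttest_ind(treatment, control)
    all_estimates.append(estimate)
    if p < 0.05:                  # only "significant" studies get published
        published.append(estimate)

print(f"True effect:                      {true_effect:.2f}")
print(f"Average estimate, all studies:    {np.mean(all_estimates):.2f}")
print(f"Average estimate, published only: {np.mean(published):.2f}")
```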

How to avoid p hacking

The best way to avoid p hacking is to avoid making any selection or tweaks after seeing the data. In other words, avoid questionable research practices. In most cases, the best way to do this is to use preregistration.

Preregistration requires that you prepare in advance a detailed research plan, including the statistical analysis to be applied to the data. Then you preregister the plan, with date stamp, at the Open Science Framework or some other online registry.

Then carry out the study, analyse the data in accordance with the plan, and report the results, whatever they are. Readers can check the preregistered plan and thus be confident that the analysis was specified in advance, and not p hacked. Preregistration is a challenging new idea for many researchers, but likely to be the way of the future.

Estimation rather than p values

The temptation to p hack is one of the big disadvantages of relying on p values. Another is that the p<.05 criterion encourages black-and-white thinking: an effect is either statistically significant or it isn’t, which sounds rather like saying an effect exists or it doesn’t.

But the world is not black and white. To recognise the numerous shades of grey it’s much better to use estimation rather than p values. The aim with estimation is to estimate the size of an effect – which may be small or large, zero, or even negative. In terms of estimation, a false positive result is an estimate that’s larger or much larger than the true value of an effect.

Let’s take a hypothetical study on the impact of therapy. The study might, for example, estimate that therapy gives, on average, a 7-point decrease in anxiety. Suppose we calculate from our data a confidence interval – a range of uncertainty either side of our best estimate – of [4, 10]. This tells us that our estimate of 7 is, most likely, within about 3 points on the anxiety scale of the true effect – the true average amount of benefit of the therapy.

In other words, the confidence interval indicates how precise our estimate is. Knowing such an estimate and its confidence interval is much more informative than any p value.
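
As a concrete sketch of the estimation approach, here is how the therapy example might be analysed (Python with scipy; the anxiety-change scores are simulated and the numbers are only illustrative).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Simulated anxiety decreases for 30 clients (made-up numbers).
change = rng.normal(7, 8, 30)

mean_change = change.mean()
sem = stats.sem(change)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, len(change) - 1, loc=mean_change, scale=sem)

print(f"Estimated benefit: {mean_change:.1f} points on the anxiety scale")
print(f"95% confidence interval: [{ci_low:.1f}, {ci_high:.1f}]")
```

The result is a best estimate with a stated range of uncertainty, rather than a bare significant/non-significant verdict.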

I refer to estimation as one of the “new statistics”. The techniques themselves are not new, but using them as the main way to draw conclusions from data would for many researchers be new, and a big step forward. It would also help avoid the distortions caused by p hacking.

Geoff Cumming, Emeritus Professor, La Trobe University

This article was originally published on The Conversation. Read the original article.


The seven deadly sins of statistical misinterpretation, and how to avoid them


Where are the error bars? (Shutterstock)

Winnifred Louis, The University of Queensland and Cassandra Chapman, The University of Queensland

Statistics is a useful tool for understanding the patterns in the world around us. But our intuition often lets us down when it comes to interpreting those patterns. In this series we look at some of the common mistakes we make and how to avoid them when thinking about statistics, probability and risk.


1. Assuming small differences are meaningful

Many of the daily fluctuations in the stock market represent chance rather than anything meaningful. Differences in polls when one party is ahead by a point or two are often just statistical noise.

You can avoid drawing faulty conclusions about the causes of such fluctuations by demanding to see the “margin of error” relating to the numbers.

If the difference is smaller than the margin of error, there is likely no meaningful difference, and the variation is probably just down to random fluctuations.

Error bars illustrate the degree of uncertainty in a score. When such margins overlap, the difference is likely to be due to statistical noise.
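
As a rough sketch of that check, the function below computes an approximate 95 per cent margin of error for a poll proportion (plain Python; the 51-49 split and the sample of 1,000 respondents are invented for illustration).

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

party_a, party_b, n = 0.51, 0.49, 1000
lead = party_a - party_b
moe = margin_of_error(party_a, n)

print(f"Lead: {100 * lead:.1f} points, margin of error: +/-{100 * moe:.1f} points")
print("Lead larger than the margin of error?", lead > moe)  # False: likely just noise
```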


2. Equating statistical significance with real-world significance

We often hear generalisations about how two groups differ in some way, such as that women are more nurturing while men are physically stronger.

These differences often draw on stereotypes and folk wisdom but often ignore the similarities in people between the two groups, and the variation in people within the groups.

If you pick two men at random, there is likely to be quite a lot of difference in their physical strength. And if you pick one man and one woman, they may end up being very similar in terms of nurturing, or the man may be more nurturing than the woman.

You can avoid this error by asking for the “effect size” of the differences between groups. This is a measure of how much the average of one group differs from the average of another.

If the effect size is small, then the two groups are very similar. Even if the effect size is large, the two groups will still likely have a great deal of variation within them, so not all members of one group will be different from all members of another group.
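
To make effect size concrete, here is a small sketch (Python with numpy) computing Cohen’s d, one common effect-size measure, for two simulated groups whose averages differ by a third of a standard deviation; the scores are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
group_a = rng.normal(100, 15, 5_000)   # simulated scores, group A
group_b = rng.normal(105, 15, 5_000)   # group B averages 5 points higher

pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd
share_above = np.mean(group_a > np.median(group_b))  # group A members above group B's median

print(f"Cohen's d: {cohens_d:.2f}")                   # about 0.33, a smallish effect
print(f"Share of group A above group B's median: {share_above:.0%}")  # roughly 37%
```

Even with a real average difference, a large share of one group still scores above the other group’s typical member.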


3. Neglecting to look at extremes

The flipside of effect size is relevant when the thing that you’re focusing on follows a “normal distribution” (sometimes called a “bell curve”). This is where most people are near the average score and only a tiny group is well above or well below average.

When that happens, a small change in performance for the group produces a difference that means nothing for the average person (see point 2) but that changes the character of the extremes more radically.

Avoid this error by reflecting on whether you’re dealing with extremes or not. When you’re dealing with average people, small group differences often don’t matter. When you care a lot about the extremes, small group differences can matter heaps.
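
A quick sketch shows why the extremes behave differently (Python with scipy; the 0.3 standard deviation shift between the groups is an invented example): the same small shift that barely matters near the average multiplies how many people sit far out in the tail.

```python
from scipy import stats

group_a = stats.norm(0, 1)     # baseline group
group_b = stats.norm(0.3, 1)   # average shifted up by 0.3 standard deviations

for cutoff in (0, 2, 3):       # near the average, extreme, very extreme (in SD units)
    ratio = group_b.sf(cutoff) / group_a.sf(cutoff)   # sf = proportion above the cutoff
    print(f"Above {cutoff} SD: group B is {ratio:.1f} times as likely as group A")
```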


4. Trusting coincidence

Did you know there’s a correlation between the number of people who drowned each year in the United States by falling into a swimming pool and the number of films Nicolas Cage appeared in?

Is there a causal link? (tylervigen.com)

If you look hard enough you can find interesting patterns and correlations that are merely due to coincidence.

Just because two things happen to change at the same time, or in similar patterns, does not mean they are related.

Avoid this error by asking how reliable the observed association is. Is it a one-off, or has it happened multiple times? Can future associations be predicted? If you have seen it only once, then it is likely to be due to random chance.
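
Here is a small sketch of how easily such patterns can be “discovered” (Python with numpy; the data are pure random noise): among 50 unrelated series of 11 yearly values, about the length of the Cage and drownings series, some pair will almost always correlate strongly.

```python
import itertools
import numpy as np

rng = np.random.default_rng(6)
series = rng.normal(size=(50, 11))   # 50 unrelated series of 11 yearly values each

best_r, best_pair = max(
    (abs(np.corrcoef(series[i], series[j])[0, 1]), (i, j))
    for i, j in itertools.combinations(range(50), 2)
)
print(f"Strongest correlation found among unrelated series: r = {best_r:.2f} "
      f"(series {best_pair[0]} and {best_pair[1]})")
```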


5. Getting causation backwards

When two things are correlated – say, unemployment and mental health issues – it might be tempting to see an “obvious” causal path – say that mental health problems lead to unemployment.

But sometimes the causal path goes in the other direction, such as unemployment causing mental health issues.

You can avoid this error by remembering to think about reverse causality when you see an association. Could the influence go in the other direction? Or could it go both ways, creating a feedback loop?


6. Forgetting to consider outside causes

People often fail to evaluate possible “third factors”, or outside causes, that may create an association between two things because both are actually outcomes of the third factor.

For example, there might be an association between eating at restaurants and better cardiovascular health. That might lead you to believe there is a causal connection between the two.

However, it might turn out that those who can afford to eat at restaurants regularly are in a high socioeconomic bracket, and can also afford better health care, and it’s the health care that affords better cardiovascular health.

You can avoid this error by remembering to think about third factors when you see a correlation. If you’re following up on one thing as a possible cause, ask yourself what, in turn, causes that thing? Could that third factor cause both observed outcomes?
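
A simple simulation makes the point (Python with numpy; the variable names and coefficients are invented): if income drives both restaurant dining and cardiovascular health, the two end up correlated even though neither causes the other, and the association vanishes once income’s contribution is removed.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
income = rng.normal(0, 1, n)                   # the hidden third factor
dining = 0.7 * income + rng.normal(0, 1, n)    # eating out depends on income
health = 0.7 * income + rng.normal(0, 1, n)    # heart health depends on income

print(f"Correlation(dining, health): {np.corrcoef(dining, health)[0, 1]:.2f}")

# Crude adjustment: remove the part of each variable explained by income.
dining_resid = dining - 0.7 * income
health_resid = health - 0.7 * income
print(f"After removing income's contribution: "
      f"{np.corrcoef(dining_resid, health_resid)[0, 1]:.2f}")   # close to zero
```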


7. Deceptive graphs

A lot of mischief occurs in the scaling and labelling of the vertical axis on graphs. The labels should show the full meaningful range of whatever you’re looking at.

But sometimes the graph maker chooses a narrower range to make a small difference or association look more impactful. On a scale from 0 to 100, two columns might look the same height. But if you graph the same data only showing from 52.5 to 56.5, they might look drastically different.

You can avoid this error by taking care to note the graph’s labels along the axes. Be especially sceptical of unlabelled graphs.
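
As a sketch of the trick, the matplotlib snippet below plots the same two values twice, once on a full 0-100 axis and once on the cropped 52.5-56.5 axis mentioned above (the values 53 and 56 are invented for illustration).

```python
import matplotlib.pyplot as plt

values = [53, 56]
labels = ["Group A", "Group B"]

fig, (honest, cropped) = plt.subplots(1, 2, figsize=(8, 3))

honest.bar(labels, values)
honest.set_ylim(0, 100)          # full, meaningful range
honest.set_title("Full scale (0-100)")

cropped.bar(labels, values)
cropped.set_ylim(52.5, 56.5)     # cropped range exaggerates the difference
cropped.set_title("Cropped scale (52.5-56.5)")

plt.tight_layout()
plt.show()
```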


Winnifred Louis, Associate Professor, Social Psychology, The University of Queensland and Cassandra Chapman, PhD Candidate in Social Psychology, The University of Queensland

This article was originally published on The Conversation. Read the original article.
