Friday, June 21, 2013

Does (department) size matter?

Not really, according to this new research by Clement Bosquet and Pierre-Philippe Combes, who looked at the effects of various departmental characteristics on academic economists' research productivity in French universities. Concentrating on French data has the advantage that, although initial affiliation is related to individual publication performance (as in most countries), this can be captured with an individual fixed effect, while subsequent moves within France are not driven by individual performance. Instead, the authors note, they are driven by personal or family motivations, in part because academic salaries are essentially flat across universities, while the most frequent way of becoming a full professor is via a national contest that allocates winners to departments in a largely random way. Cool.
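
For readers who like to see the mechanics, here is a minimal sketch of the kind of two-way fixed-effects regression this identification strategy suggests. This is my own illustration in Python, not the authors' code, and the dataset and variable names are invented:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical researcher-year panel; the file and column names are invented.
panel = pd.read_csv("economists_panel.csv")

# Individual fixed effects (C(researcher_id)) absorb time-invariant researcher
# quality, so the department-level coefficients are identified from moves
# between departments -- the feature of the French system the authors exploit.
model = smf.ols(
    "log_output ~ dept_size + field_diversity + quality_heterogeneity"
    " + C(researcher_id) + C(year)",
    data=panel,
).fit()

print(model.params[["dept_size", "field_diversity", "quality_heterogeneity"]])
```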

So what did they find?

Well, firstly, size doesn't matter much. Instead, the biggest determinants of research productivity are the diversity of fields (within economics) that your colleagues work on - higher diversity being good for productivity - and the degree of heterogeneity of publication quality within the department - more heterogeneity being bad for productivity. So, a mixed bag of apples and oranges (in terms of research interests) is good, but a few rotten apples do appear to spoil the bunch!

Apart from size, it is also interesting to note that the authors found little or no effect of proximity to other economics departments, a finding in contrast with the conventional wisdom from economic geography, which tends to find large agglomeration effects in economic productivity (e.g. comparing urban and rural areas).

Other interesting findings the authors report:
  • Contrary to common intuition, more students per academic do not reduce publication performance.
  • Women, older academics, departmental stars and co-authors based in foreign institutions all generate positive externalities that boost each academic's individual output.
So, when choosing your next career move, look for a department with a wide array of interests, and not just one with a lengthy list of faculty members.

"Oh I do like to be beside the seaside ... "

New research from my colleague Susana Mourato and George MacKerron shows that people are happiest in marine and coastal environments, and more generally when experiencing the great outdoors. No great surprise there, perhaps, but this is a pretty novel attempt at quantifying the effects. The study is the first to use a tailor-made smartphone app to record individuals' wellbeing in different environments, and is based on over a million observations from 22,000 individuals. Results are interesting in and of themselves, but the method also has great potential as a new means of estimating the intrinsic value of the natural environment. It could be useful, for example, in evaluating climate adaptation measures such as flood defences. The paper is published in Global Environmental Change. More details here.
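
As a rough illustration of what this kind of geo-tagged data allows, here is a hypothetical sketch (not the study's actual code or data; file and column names are invented) that joins each app response to a land-cover class and compares average reported happiness across environments:

```python
import pandas as pd

# Hypothetical inputs: each app response has a happiness score and a GPS fix
# already matched to a land-cover class. Names are invented for illustration.
responses = pd.read_csv("app_responses.csv")     # response_id, user_id, happiness
landcover = pd.read_csv("gps_landcover.csv")     # response_id, landcover_class

merged = responses.merge(landcover, on="response_id")

# Average reported happiness by environment type (a fuller analysis would add
# individual fixed effects plus weather, companionship, activity and time controls).
print(merged.groupby("landcover_class")["happiness"].mean().sort_values(ascending=False))
```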

Sunday, February 3, 2013

Gender discrimination is holding back development, but not just in poor countries

Gender inequality has been getting a lot of attention lately in the development literature as a factor that reinforces poverty dynamics. Esther Duflo provides a good summary of the complex interaction between gender inequality and under-development in this paper [PDF]. I was glad to see the concluding communique from the Post-Millennium Development Goals high level panel meeting in Monrovia [PDF] putting gender equality issues up front in its recommendations for action on global development.

But several articles over the past week have reinforced for me the idea that gender inequality has also been a big factor in holding back economic prosperity in the rich world. An article in the Irish Times (headline: "I told the interviewer I wasn't planning on having more children. I got the job") highlighted the level of job-market discrimination being faced by young women - particularly those in the so-called Goldilocks years for having kids - in Ireland today. This is apparently a result of employers trying to avoid taking on the "liability" of maternity leave. This story is echoed in Italy, which has the third lowest female employment rate in the OECD, according to this article from Friday's Guardian. The current economic circumstances in these countries don't help, with employers being able to pick and choose between numerous candidates for any advertised job. But the problem originates in the unequal treatment of men and women in parental leave legislation and the inadequate provision of childcare. Of course, there is some biological justification for the differing treatment of men and women in relation to parental leave. But not enough to justify a system - backed by unequal legal entitlements - that creates the expectation that a woman who has a child will be off work for six months or more, while a man might miss a few days.

Friends of mine who live in Oslo and recently had a baby explained to me that the system there allows parents to choose how they divide parental leave between them. But the system also incentivises fathers to take time off by extending the total amount of parental leave for couples who both avail of it. This means the expectation at work is that both men and women will take leave if they have children, reducing the motivation for employers to discriminate against women.

Greater gender equality in the form of higher female participation in the workforce has dual economic benefits. Higher participation rates provide the mechanistic boost to prosperity of simply shifting the ratio of workers to overall population in favour of the former. More importantly, the exclusion of women from the workforce deprives the economy of the contributions of many of its most talented members. While unpaid work in the home is inherently valuable, society as a whole benefits when families can choose who engages in that work and how much time they devote to it. I'm putting the focus here on the economic benefits, taking as given that greater gender equality is a worthy and important goal in itself. There are also, presumably, social benefits to greater gender equality, not just for women, but also for fathers and their children, who would benefit from the opportunity to spend more time together.

A special report in The Economist this week praised the Nordic countries (Sweden, Denmark, Finland and Norway) for consistently topping world rankings in both competitiveness and well-being. While the report notes the exceptionally high rates of female labour force participation in these countries, it gives relatively little attention to the systems that underpin that level of inclusion, choosing instead to focus on reductions in the size of government and Sweden's introduction of private sector competition and vouchers in its education sector (elements of the Nordics' success that The Economist is predictably keen on).

The Economist is right to point out that some of the cultural and institutional factors on which the Nordic success is built will be difficult for others to replicate. One element of that success that should be relatively straightforward for others to imitate is to legislate for greater equality between the sexes when it comes to parental leave entitlements, and to provide both women and men the basic freedom to decide for themselves how best to share their work and home life commitments.

Friday, February 1, 2013

Teaching Economics

This week I reviewed Meme Wars: The Creative Destruction of Neoclassical Economics (edited by Kalle Lasn and Adbusters) for the LSE Review of Books. (You can read the full review here.) Meme Wars styles itself as an alternative 'Econ 101' textbook, and the core of the book is a collection of short essays that question the ideas and teaching of mainstream, neoclassical economics. Its radical approach (in both design and content) might be off-putting to some, but I'd recommend it - both for experienced economists, as a challenge to think more broadly about underlying assumptions and fundamental questions about the purpose of the discipline - and especially for those who are new to economics, as an introduction to some alternative ways of thinking about the economy.

While many of the criticisms raised here are already gaining traction within the discipline, for example through the emerging paradigms of ecological, behavioural and complexity economics, Meme Wars is right to complain that these developments are generally not reflected in the economics taught to undergraduate students. A recent paper (available here) that tracks changes in the content of the bestselling introductory economics textbooks since the onset of the global financial crisis would appear to confirm this view, i.e. little has changed.

However, that is not to say that 'mainstream' voices in economics are not willing to challenge the status quo. Meme Wars contains contributions from the likes of Joe Stiglitz and George Akerlof, among others. I also recently came across a collection of essays titled What's the Use of Economics? Teaching the Dismal Science After the Crisis edited by Diane Coyle (see here for details), which includes contributions from numerous established names in economics.

The currents of change and debate within economics make this an exciting time to be a part of the discipline. It would be a shame not to share this excitement with those being introduced to the subject for the first time.

Wednesday, November 7, 2012

Triumph of the 'quants'

This morning we salute an historic election victory. And no, I'm not referring to Barack Obama earning four more years in the presidential hot-seat (a victory now immortalised in the most-popular-tweet-ever). The historic victory that has the twitterati salivating with admiration* is the triumph of the nerds--and one 'nerd' in particular: Nate Silver of the New York Times blog fivethirtyeight. Silver correctly predicted the outcome in all 50 states--as illustrated in this graphic comparison of his state-by-state prediction map with a map of actual results--beating his 2008 feat of correctly predicting 49 out of 50 (he missed out that time on Indiana, which Obama took by 0.1%).

The day before the election Silver gave Obama a 92% chance of victory, defying the 'gut-instinct' pundits who saw the election as 'too close to call', or those who only weeks ago were talking about the Romney campaign's 'momentum', while Silver still had the probability of a Romney victory at just 25%. The xkcd comic summed things up neatly as follows: "To surprise of pundits, numbers continue to be best system for determining which of two things is larger" (cartoon here).

The financial crisis did much to discredit the value of 'quants' and their stats-based analyses. But the lesson from the crisis was not so much that 'quants are bad' as that quantitative (predictive) modelling needs to be applied carefully, and only where the underlying model is statistically valid for the question at hand (I've blogged about this previously here).

So does Silver's triumph herald a new dawn in the public's attitude towards statistics and quantitative analysis? There seems little doubt that the success of the fivethirtyeight formula--both in terms of its predictive power and its ability to attract a big audience--will change the nature (or at least the methods) of political punditry. But what about its potential to improve the image of stats and quantitative, data-based analysis more widely?

I saw a tweet today from a former economics lecturer of mine, who said he had his masters class running election prediction simulations in Stata (a statistical/econometric software programme) this week. Seems to me like a great way to inspire students' interest in these analytical techniques. Economics lecturers regularly lament the difficulty of trying to get undergraduates to engage with stats--many, particularly those who are more politically minded, seem to simply switch off at the sight of an equation. This appears to be almost a form of learned behaviour--evidence of a dysfunctional relationship with numbers--a deep distrust of these seemingly arcane methods of analysis and scepticism about their relevance to the 'real-world'.
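
For anyone curious what such an election-simulation exercise might look like, here is a toy version sketched in Python rather than Stata. The state list, win probabilities and "safe" electoral votes are invented for illustration, not Silver's actual estimates:

```python
import numpy as np

# Swing states: (electoral votes, assumed probability that candidate A wins).
swing_states = {
    "Ohio": (18, 0.75),
    "Florida": (29, 0.50),
    "Virginia": (13, 0.79),
    "Colorado": (9, 0.80),
}
SAFE_VOTES_A = 237      # electoral votes assumed already locked in for candidate A
WIN_THRESHOLD = 270
N_SIMS = 100_000

rng = np.random.default_rng(538)
wins = 0
for _ in range(N_SIMS):
    votes = SAFE_VOTES_A + sum(ev for ev, p in swing_states.values() if rng.random() < p)
    wins += votes >= WIN_THRESHOLD

print(f"Simulated probability of an A victory: {wins / N_SIMS:.1%}")
```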

The success of movements such as CoderDojo and the Raspberry Pi project in promoting computer coding as a hobby for kids (big and small) proves the potential appetite for 'nerdy' pursuits, when presented as engaging, creative--as opposed to mechanistic--activities. Here's hoping the (electoral) triumph of the nerds can do something similar for getting the kids interested in numbers.

____________________________________________

*For example, @monsieurcorway tweeted: 'There's a great quote from a Romney supporter last night at Mitt's 'victory' party... He was asked if he turned up thinking Mitt would win. His response? "I read Nate Silver, that son of a bitch." Says it all. Well done, Nate - You legend.' Or this, from @alanbeattie: "Nate Silver can kill people by shooting lasers from his eyes ".

Tuesday, October 30, 2012

Expert accountability

The ruling last week by an Italian court that seven scientists were guilty of manslaughter for failing to warn citizens about the threat of a major earthquake in the city of L'Aquila raises some difficult questions about accountability and ethics in relation to the provision of 'expert' opinion and scientific advice to governments and the wider public. The scientific community and global media have reacted with outrage, comparing the verdict and threat of lengthy jail sentences for the scientists involved with the persecution of Galileo and the burning of witches.

I agree in principle and in spirit with this sense of outrage. The ruling appears to betray a lack of understanding of the inherent uncertainty in any scientific endeavour, which is particularly pronounced when it comes to forecasting future events. Furthermore, putting science on trial in this way threatens to curtail the progress of scientific inquiry generally, and more specifically--in the nearer term--to make the position of those charged with civil protection and disaster prevention, in Italy at least, almost untenable. However, there remains an important point relating to accountability and ethics that appears to have been overlooked amidst all the righteous indignation.

In an excellent article in Nature, Stephen S. Hall outlines the sequence of events that led up to the tragedy. In an extraordinary meeting of the National Commission for the Forecast and Prevention of Major Risks, the seven scientists--who were all members of the Commission--reached the conclusion that an earthquake was "unlikely" (though not impossible). This in turn was interpreted by a government official, speaking at a press conference, as meaning the situation in L'Aquila was "certainly normal" and posed "no danger". The same official further added that the sequence of minor quakes and tremors that had been occurring in the region was in fact "favourable ... because of the continuous discharge of energy".

This interpretation is apparently contrary to the scientific evidence. The same article, by Hall, quotes Thomas Jordan, director of the Southern California Earthquake Center at the University of Southern California in Los Angeles, and chair of the International Commission on Earthquake Forecasting (ICEF), as suggesting that in the aftermath of a medium-sized shock in a seismic swarm (a sequence of tremors), the risk of a major quake can increase anywhere from 100-fold to nearly 1,000-fold in the short term, although the overall probability of a major quake remains relatively low--at around 2%, according to a study of other earthquake-prone zones in Italy (G. Grandori et al. Bull. Seismol. Soc. Am. 78, 1538–1549; 1988), also quoted by Hall.
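
To see how a 100- or 1,000-fold relative increase can still leave a small absolute probability, here is a back-of-the-envelope illustration. The baseline figure is my own assumption, chosen only so the arithmetic lines up with the roughly 2% mentioned above:

```python
# Back-of-the-envelope only: the baseline probability below is assumed, not
# taken from the study, and is chosen so a 1,000-fold increase gives ~2%.
baseline = 0.00002   # assumed background probability of a major quake in the window

print(f"100-fold increase:   {baseline * 100:.2%}")    # 0.20%
print(f"1,000-fold increase: {baseline * 1000:.2%}")   # 2.00%
```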

A 1,000-fold (or even 100-fold) increase in the probability of a major quake would have been a very different message for public consumption than the "anaesthetizing" reassurances given to the media at the press conference. Clearly, the most egregious error committed here was in the over-interpretation, or indeed misinterpretation, of the scientific evidence by the government official who spoke to the press. The scientists were--apparently--correct in their assessment that a major quake remained "unlikely", although not impossible. But was this the best way of characterizing the risks for the general public?

One of the people affected by the L'Aquila earthquake, quoted in Hall's article, admits that he feels "betrayed by science"--"Either they didn't know certain things, which is a problem, or they didn't know how to communicate what they did know, which is also a problem."

It may be that the error here was not in mis-stating the risk, but in not being specific enough about it. There is an understandable reluctance to use statistics, probabilities and scientific terminology in the public communication of scientific evidence. But at times we take this too far. The public is not stupid, and--problems with common misunderstandings in relation to probability notwithstanding--would be better served by the scientific community and public officials avoiding condescending reassurance in favour of the clear presentation of facts.

Is woolly language such as "unlikely" really any more useful or informative than simply saying "we don't know"? Might the committee have done better to present the available statistical evidence--including details about the changes in probabilities--while acknowledging that the precise timing and location of a major quake is essentially unpredictable?

The advice could have stopped short of ordering a full-scale evacuation--which, on the basis of the best available evidence, would have been unnecessary 98% of the time--and instead, simply presented that evidence, enabling people to make their own informed decisions about what level of risk they were willing to accept.

Returning to the issue of accountability and ethics, to what extent should scientific or other experts be held accountable for the advice they give to governments or to the wider public? In the case of the L'Aquila tragedy, a more relevant question might be: should researchers be held accountable for the way in which their findings are interpreted by policy-makers and, in turn, by the media? Should researchers generally consider how their findings are likely to be interpreted before making them public? That would almost certainly place an unreasonable burden on researchers.

And yet--while researchers can't be expected to control how others interpret their findings, a greater effort needs to be made to communicate the science--its achievements and its limitations--directly to the public. With the recent proliferation of sources for news, opinion and analysis, the authority of traditional media outlets and the role of journalists and editors as the gatekeepers of public information are increasingly being challenged. This presents both a challenge and an opportunity for scientific engagement with a wider audience. It is increasingly difficult for members of the public to distinguish the signal of scientific or expert analysis from (a) noise and (b) intentionally biased or deceitful opinion emanating from thinly disguised lobbyists portraying themselves as independent 'experts'. On the other hand, the internet enables researchers to disseminate their findings, methods and data without intermediation by journalists or politicians.

The 'science on trial' headlines may sound melodramatic, but scientists from both the social and hard sciences are right to feel they are being challenged to justify their art as at no other time in living memory. Public confidence in "science"--in its broadest sense--has been undermined by episodes such as the 'climategate' controversy. The discipline of economics, similarly, has been widely criticised for not predicting the financial crisis--and more fundamentally, for persisting with models that appear unable to explain 'real world' phenomena. This critique certainly has some merit, and economics as a discipline is evolving to take account of the lessons from related disciplines, notably psychology, biology and epidemiology. However, it has to be recognised--both by researchers and by those who would criticise their efforts--that models, by their very nature, are imperfect simplifications of the world they are trying to explain. One clear responsibility of any researcher is to think carefully about the domain of validity of the models that they use [PDF], to define their limitations, and to communicate this in an unambiguous and honest way, along with any findings from the research.

Reflecting on the trial of the seismologists, the former president of Italy's National Institute of Geophysics and Volcanology, concludes that "scientists have to shut up". On the contrary, the lesson for scientists from this tragedy and the subsequent trial, is to be more proactive in our engagement with the public.

A good starting point might be the establishment of a voluntary code of ethics for researchers. This would include, for example, a commitment to publish annually a list of all sources of funding for one's research. Furthermore, the code might also contain a commitment to make public not only one's research findings but also the data and methodology used (including relevant context, limitations and assumptions). Signing up to this code could be a prerequisite for any government advisers, and could similarly become a useful tool for the media in screening 'expert' commentators.


More generally, this code would be based on three fundamental guiding principles: honesty, transparency and humility. Going back to Hall's Nature article, he quotes a man who lost his wife and daughter in the earthquake, lamenting the fact that "the science, on this occasion, was dramatically superficial, and it betrayed the culture of prudence and good sense that our parents taught us on the basis of experience and of the wisdom of the previous generations." Perhaps the greatest lesson from this tragedy is the need for a greater degree of humility when it comes to the predictive powers of even the most sophisticated scientific models.

Saturday, August 25, 2012

Dan on crime - a lesson in the abuse of statistics

Dan O'Brien, writing in the Irish Times on Friday, claims that poverty and inequality are "not key reasons for law breaking" and that "recessions have had no discernible effect [on crime rates]". I call bullshit, and here's why (written as a direct reply to Dan's article, this is an edited version of the comment I left on the Irish Times site).

************

"Aw, people can come up with statistics to prove anything Kent. Forty percent of all people know that."
            - Homer J. Simpson

************

Wow, this is a breathtakingly ill-informed piece of lazy journalism, and an outrageous abuse of statistics!

The commenters on the site have already pointed out some of the flaws in your argument, but there are other ways in which this is simply wrong.

For anyone interested in a serious discussion of incarceration, I would highly recommend David Cole's article from the New York Review of Books from a few years back (and which I previously blogged about here).

According to Cole, “most of those imprisoned are poor and uneducated, disproportionately drawn from the margins of society” (referring to the US prison population).

The US has by far the highest incarceration rate in the world. However, somewhat inconveniently for your argument, Cole also points out that up to 1975 the US incarceration rate had been steady at about 100 per 100,000. Since then, the rate has ballooned to 700 per 100,000. If putting the crooks behind bars is really what prevents crime, it seems strange that such a massive increase in the incarceration rate apparently had no preventative effect on the 'crime waves' of the 1980s, to which you also make reference.

Incidentally, it is also slightly inconvenient for your argument that Russia, a country that you refer to as having "a very high murder rate", also has the second highest incarceration rate in the world. Huh.

But of course all of these superficial correlations are meaningless anyway (as you point out yourself!). What you are doing is taking two trends that happen to be moving in the same direction (or in some cases opposite directions), and assigning causation, in blatant disregard of your own caveat about correlation not necessarily implying causation!

Your country comparisons are also spurious. You simply can't compare crime rates and income levels across countries without at least attempting to control for some other relevant factors. Any applied economist worth their salt would know this. Two such relevant factors, which you mention in your article, are the rate of drug use (or perhaps more importantly narcotics production) and demographics. Controlling for these might lead to a very different picture of the relationship between income and crime (or it may not, the point is we simply don't know, based on the evidence you present). In any case, it seems likely that relative poverty and relative deprivation (i.e. within countries) would be more important drivers of crime than aggregate national income levels.
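
To make the control-variables point concrete, here is a hypothetical sketch of the kind of regression such a comparison would require, with income alongside controls for drug-market exposure, demographics and inequality. The dataset and variable names are invented; this is an illustration of the method, not an actual analysis:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cross-country dataset; the file and column names are invented.
df = pd.read_csv("country_crime_data.csv")

# A naive bivariate regression of crime on income...
naive = smf.ols("crime_rate ~ log_gdp_per_capita", data=df).fit()

# ...versus one that at least tries to control for obvious confounders.
controlled = smf.ols(
    "crime_rate ~ log_gdp_per_capita + drug_production_index"
    " + share_pop_aged_15_29 + gini",
    data=df,
).fit()

print(naive.params["log_gdp_per_capita"], controlled.params["log_gdp_per_capita"])
```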

It is astonishing that you would make such sweeping assertions about what does or does not cause crime on the basis of so little evidence, and that the Irish Times would publish this piece seemingly without having done even the most basic fact-checking. (On that point, it is worth referring to Paul Krugman's recent article in which he outlines the fact-checking process that each of his op-ed pieces goes through before being published in the New York Times.)

You have done a disservice to economics and statistics with this article – as well as showing an almost total disregard for the other social sciences, which have produced voluminous literatures on the socio-economic causes of crime. I sincerely hope that the Irish Times will give someone with expertise in this area the opportunity to write a response to this article.