Wednesday, 29 June 2016

People are retiring later in life, and why that might be a good thing

Simon Collins quoted me in the New Zealand Herald on Sunday:
For thousands of years, people of all ages worked together in hunting, farming and domestic work as long as they were physically capable.
That legacy still lingers in our farming sector. Dr Michael Cameron of Waikato University says it is the only industry whose share of employment in every age group above 55 is above the national average, rising from 7 per cent across the whole workforce to 35 per cent of those still working at 85 or over.
I was a bit surprised, since I knew I hadn't talked to Simon this past week, but then I realised he was drawing on this 2014 working paper I wrote. I thought it might be worthwhile to add a bit more explanation. The working paper was a purely descriptive analysis of working among older people (aged 55 years and over), based on Census data from 1991 to 2013. There aren't too many surprises in there. However, a couple of points are worth drawing out.

First, as Simon notes, Agriculture, Forestry and Fishing (AFF) stands out among other industry groupings as having the oldest age profile. Here's the corresponding figure from the working paper (where you can see AFF has the smallest proportion of workers aged under 55 years, and the largest proportion aged over 65 years):



Second, the older labour force is growing, within every age group, as this figure shows:



You might suspect that this is because the cohorts moving into the older age groups are larger (the baby boomers, for instance), but that is only part of the story. Labour force participation rates have been increasing for every successive cohort:



Now we come to the big question: why is this a good thing? If we take GDP per capita as a measure of living standards or wellbeing (yes, it's a very imperfect measure of wellbeing, but it's a place to start), we can decompose GDP per capita as follows [*]:

[Y/P] = [Y/L] * [L/WA] * [WA/P]

where Y is output, P is population, L is the labour force, and WA is the working age population. This identity simply says that GDP per capita (or output per person, Y/P) is made up of labour productivity (or output per unit of labour, Y/L), labour force participation (L/WA), and the share of the working age population in the total population (WA/P).
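To see the identity in action, here's a short Python sketch. All of the numbers below are invented for illustration (they are not actual New Zealand data):

```python
import math

# Decomposing GDP per capita into its three components.
# All numbers below are invented for illustration, not actual NZ data.
Y = 250e9    # output (GDP, in dollars)
P = 4.7e6    # total population
L = 2.5e6    # labour force
WA = 3.0e6   # working-age population

productivity = Y / L         # labour productivity: output per worker
participation = L / WA       # labour force participation rate
age_structure = WA / P       # working-age share of the population

gdp_per_capita = productivity * participation * age_structure
assert math.isclose(gdp_per_capita, Y / P)  # the identity holds by construction

# Population ageing lowers WA/P; holding the other two terms fixed,
# GDP per capita falls proportionately.
aged_gdp_per_capita = productivity * participation * (age_structure * 0.95)
assert aged_gdp_per_capita < gdp_per_capita
```

The last two lines make the point of the paragraph that follows: a fall in the working-age share feeds one-for-one into GDP per capita unless productivity or participation rises to offset it.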

Now consider what is happening given our ageing population. The share of the working age population in total population is falling, as the share of population older than 65 (as one example) has been increasing. Now, labour productivity is only growing slowly (New Zealand's labour productivity performance is rubbish, but that's a topic for another day), and labour force participation is fairly constant (we've had increasing labour force participation over time, but that is mainly due to increases in women's labour force participation, and that effect is probably almost tapped out). So, the last term in the equation above is decreasing (and at a faster rate), the middle term is likely to stay pretty flat, and the first term is increasing (but slowly). That means that we can expect economic growth to slow appreciably because of population ageing.

That is, unless we can increase output from outside the working age population, i.e. increase output from older workers, by increasing their labour force participation (through people working until they are older, whether that be full-time, part-time, or bridging from full-time work through part-time work to full retirement). Alternatively, you might be thinking that increasing the labour force through migration is a good solution (this is what many areas of regional New Zealand are thinking). Unfortunately, even though migrants are younger on average than locals, they age as well - Natalie Jackson and I have a forthcoming NIDEA Working Paper which demonstrates that population decline cannot be arrested by increasing migration.

Simon Collins concludes that young people might never be able to retire. That might be true, in the sense of a 'traditional' retirement, at least if we want to maintain living standards.

*****

[*] I'm grateful to Jocelyn Finlay, a colleague at the Pop Center at Harvard School of Public Health, when I was on Study Leave there last year, for an interesting discussion on this topic at a workshop in early 2015.


Monday, 27 June 2016

Can exclusivity overcome the adverse selection in dating apps?

One of my favourite tutorial examples in ECON110 is about adverse selection in dating markets. As I've written before:
Adverse selection arises when there is information asymmetry - specifically, there is private information about some characteristics or attributes that are relevant to an agreement, and that information is known to one party to an agreement but not to others. In the case of online dating, the 'agreement' is a relationship (or even a single date) and the private information is about the quality of the person as a potential date - each person with an online dating profile (the informed party) knows whether they are a high-quality date or not, but the others who might match with them (the uninformed parties) do not.
An adverse selection problem arises because the uninformed parties cannot tell high quality dates from low quality dates. To minimise the risk to themselves of going on a horrible date, it makes sense for the uninformed party to assume that everyone is a low-quality date. This leads to a pooling equilibrium - high-quality and low-quality dates are grouped together because they can't easily differentiate themselves. Which means that people looking for high-quality dates should probably steer clear of online dating.
Which brings me to The League; Sowmya Krishnamurthy describes her experiences with the app here. She writes:
The League is the most exclusive dating app. Founded by Stanford grad Amanda Bradford, The League sets out to match ambitious, interesting professionals in San Francisco and New York City with other ambitious, interesting professionals.
So, can exclusivity overcome the adverse selection problem? Perhaps. There are two ways of dealing with adverse selection problems: (1) when the uninformed party tries to reveal whether the date is high-quality or not, we call this screening; and (2) when the informed party tries to reveal that they are high quality, we call this signalling. There are two important conditions for a signal to be effective: (1) it needs to be costly; and (2) it needs to be more costly to those with lower quality attributes. These conditions are important, because if they are not fulfilled, then those with low quality attributes could still signal themselves as having high quality attributes.

In the case of The League, clearly there is a screening aspect to it. Krishnamurthy explains:
You log in with your Facebook profile, but unlike any apps, The League also asks for your LinkedIn information. The almighty app lords put you on a waitlist and review your “application.” Based on your social media resume, it decides whether you’re in or you’re out (word to Heidi Klum).
So The League may be able to screen out the least desirable dates (at least in theory). Who gets screened out, though? The League's "advanced screening algorithm" (their words) probably eliminates, at the least, those with less education (or lower-quality education, in terms of which college they attended), and anyone with a lower-class (or no) job.

Does that eliminate the adverse selection problem? I guess that depends on your definition of a 'high-quality' date. There are plenty of well-educated douchebags with good jobs, who could get a place in this exclusive app, which suggests not. As Krishnamurthy notes:
Guys on The League may have more education, ambition and Donald Trump’s phone number, but that doesn’t mean they’re any better at dating. Like other apps, I have several matches where the guys have yet to say anything to me. I guess they’re too busy being masters of the universe to type a message.
And there's still no way for a truly high-quality date to signal that they are high quality. Anything a high-quality date tries to do to set themselves apart can easily be copied by the low-quality dates. Maybe they replace their profile pic with a photo of their credit report?

[HT: Marginal Revolution, back in April]

Friday, 24 June 2016

Fooled by randomness

I just finished reading the Nassim Nicholas Taleb book, Fooled by Randomness. It's more than ten years old (I read the second edition), but the insights are mostly timeless. I really enjoyed the first half of the book, but the second half was a bit less entertaining (for me at least). Perhaps I was just smarting by then from the constant quips against economists, such as "the general credibility of conventional economists has always been so low that almost nobody in science or in the real world ever pays attention to them", and "...economists, who usually find completely abstruse ways to escape reality...". Ok, perhaps we resemble that latter remark, so I should let him off. He rips journalists too, as "the greatest plague we face today", and MBAs, as "devoid of the smallest bit of practical intelligence".

At the time he wrote the book, Taleb had many years of Wall Street experience. The book essentially focuses on the role of randomness, with the central thesis that much of the 'success' that we observe (not just on Wall Street but in other diverse areas as well) simply results from randomness. This is best illustrated by an example (which I paraphrase from the book):

Take 10,000 fictional investment managers. Assume that they each have a perfectly fair game, where each one has a 50% probability of making $10,000 at the end of the year, and a 50% probability of losing $10,000. Further assume that once a manager has a single bad year, they are sacked. Now, after one year, we expect 5,000 managers to be up $10,000 each, and 5,000 to be down $10,000 (and out of a job). After two years, the number of managers remaining is 2,500, and 1,250 after three years. By the end of the fifth year, 313 managers remain, all of whom have had five straight successful years. Not because of any skill on their part, but simply because of randomness. So, observing an investment manager who has been successful for several consecutive years cannot by itself tell you that they are successful - they may just have ridden luck to their results. Moreover, Taleb asserts that these "lucky fools" will be oblivious to the role of luck in their success - a form of what in behavioural economics we refer to as positivity bias (we are overly optimistic about anything to do with ourselves).
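The arithmetic of this thought experiment is easy to check with a short simulation (my own sketch of the example, not code from the book):

```python
import random

# Taleb's thought experiment: 10,000 managers, each with a 50% chance of a
# good year, and anyone with a single bad year is sacked.
random.seed(42)  # fixed seed so the simulation is reproducible

managers = 10_000
survivors = managers
for year in range(5):
    # each remaining manager survives the year with probability 0.5
    survivors = sum(1 for _ in range(survivors) if random.random() < 0.5)

expected = managers * 0.5 ** 5  # 312.5 expected survivors after five years
print(survivors, expected)
```

The simulated count bounces around the expected 312.5 from run to run, which is rather the point: a five-year winning streak tells you almost nothing when the game is pure chance.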

There's lots of good stuff on behavioural finance, bounded rationality, etc. in the book, and I especially liked the discussions of inductive reasoning and the work of David Hume; I must admit to not being as familiar with Hume as I probably should be. Probably there's some further reading there for me.

So, even though Taleb argues that he will never read unsolicited comments on his book, I've provided some anyway. And I would recommend this as a good read if you are interested in finance, or if you simply want some good punchlines to use against economists.

Thursday, 23 June 2016

This couldn't backfire, could it?... Stockpiled ivory sales edition

I've written a number of posts about the problems associated with trying to save endangered species (see here, here, here, here, here, and here). One solution that has been suggested is to sell government stockpiles of legally obtained ivory (such as ivory from dead elephants that were not killed as a result of poaching), or stockpiles of confiscated illegal ivory. In both cases, this should increase the supply of ivory in the market, and lower the market price. Alternatively, since legitimate ivory and black market ivory are close substitutes, making more legitimate ivory available should lower the demand for black market ivory, which lowers its price. Either way, the lower price for ivory should reduce the incentives for poachers to kill elephants, decreasing the number of elephants that are killed by poachers.

Chris Blattman points to this new NBER Working Paper (sorry I don't see an ungated version anywhere) by Solomon Hsiang (UC Berkeley) and Nitin Sekar (Princeton). They clearly show that a one-time legal sale of stockpiled ivory that happened in 2008 led to a significant increase in elephant poaching. Here is the key figure from the paper:


The break in the time series is very clear - poaching increased substantially after the sale (and shows an upwards trend that wasn't apparent before the sale). So, what's going on here? Maybe the sale of the stockpiled ivory led to a legitimisation of the trade in ivory, such that potential consumers became desensitised to the negative impacts of the ivory trade? This would lead to a long run increase in the demand for ivory. So, once the one-off sale of the stockpiled ivory had passed and supply returned to 'normal', the increased demand led to an increase in the price of ivory. The higher price creates incentives for more poaching. Indeed, Hsiang and Sekar conclude:
Our results are most consistent with the theory that the legal sale of ivory triggered an increase in black market ivory production by increasing consumer demand and/or reducing the cost of supplying black market ivory, and these effects dominated any competitive displacement that occurred. In markets where supply and demand interact directly with legalization, price data are relatively uninformative as to the effects of legalization. The data indicate illegal ivory suppliers anticipate opportunities to sell and/or smuggle illegal ivory, consistent with qualitative reports...
Our findings demonstrate that partial legalization of a banned good can increase illegal production of the good because the existence of white markets may influence the nature of black markets. 
It's probably fair to say that the ivory sale was a colossal failure. There are better ways of dealing with poaching. See my other posts (linked below) for some examples.

[HT: David McKenzie at Development Impact]

Update: Development Impact covers the subsequent debate about the results of the Hsiang and Sekar paper.

Read more:



Tuesday, 21 June 2016

Why we need tradeable fresh water usage rights

I've been asked by a couple of students this semester about what I thought about water allocation or a market for fresh water. Interestingly, Basil Sharp (Professor of Energy and Resource Economics at the University of Auckland) wrote a good piece in the New Zealand Herald on that topic today. Sharp wrote:
Creating a market for water use rights would be good for the environment and the economy. It's true to say New Zealand has no overall water shortage. True, but meaningless in practice.
Yes, we are blessed with six times the water per person compared to Australia and 16 times that compared to the US. But rainfall varies geographically and seasonally. Droughts are expected to get longer and more frequent. Ask farmers in Canterbury and Otago, and increasingly in Waikato, if water is scarce.
If water is scarce then it has value. Yet, here's the paradox: water is one of our most valuable natural assets but we don't know its economic value. Successive governments have failed to provide a workable framework that reveals value and enables the exchange necessary for efficient use...
The current system works like this: farmers hold a permit to irrigate their land and can exercise this use right for the duration of the permit. They also have to follow usage rules. Councils can and should monitor use rights and recover compliance costs.
If farmers and other users could transfer their use rights - trade them for money - then the price of water would be revealed. It's already common for water permits to be transferred when land is traded. But what if the opportunity to transfer was freed up? Farmers within a catchment could trade their use rights and trades could occur across industries. They could occur for the duration of the permit, or be leased. This would all take place within sustainability limits.
Essentially, we have a fresh water allocation problem. The problem is that water has traditionally been allocated on a first-come-first-served basis. So, whoever applied for consent to use water first gets to use the water. They cannot transfer that right to others, who may be able to make better (i.e. more value-creating) use of the water. So not only do we have valuable water being used for low-value activities (hands up if you use drinking-quality water to wash your car), but we have no idea how to price water to extract its value (leading to situations like this).

An efficient water rights system (e.g. permits that allow the permit-holder to extract and use a stated quantity of water per year) would need to have a few key properties. The water permits should be: (1) universal; (2) exclusive; (3) transferable; and (4) enforceable.

Universality means that all fresh water use would need to be included in the system (so municipal water supply, irrigation schemes, industrial use, etc. would all have to have permits to extract and use water). There can be few exceptions to this - although hydro power (where the water is not used up or degraded - that is, its use is not rival, as it doesn't deprive others of also using the same water) may be one.

Exclusivity means that all of the benefits and costs associated with extracting and using the water must accrue to the permit-holder. This essentially means that there can be no free riders - no one benefiting from water who does not have a permit to extract and use that water.

Transferability means that the permits can be freely and voluntarily traded. So, if you have a permit to extract and use water from a given river, and you find someone else who is willing to pay more for that permit than whatever you value it at (presumably, whatever value it provides to you), then you should be able to sell (or lease out) your permit. This ensures that water will be used in the highest value activities, and means that water has a price (represented by the price of the permits). Failing to sell (or lease out) a permit entails an opportunity cost (foregone income for the permit holder), so selling (or leasing out) a permit to someone else might actually be the best use of the permit.

Potentially, a system like this provides not only for improvements in water allocation, but also improvements in water quality. If your actions are degrading the water quality for permit-holders, you are reducing the value they can extract from the water source, and presumably that loss in value for the permit-holders would be actionable through the courts. So, as well as government-mandated water quality standards, the market could start to provide its own (potentially higher) standards, with failing to meet those standards leading to compensation for other users of the water (which raises the costs of polluting waterways).

However, setting up a system of water use rights or permits does not come without significant challenges. One major issue is deciding how many permits to allocate. Do you allocate permits based on average river flows or aquifer replenishment (allowing, of course, for the fact that it's unrealistic to take 100% of the water)? What happens if flows are below average? Which permit holders miss out? Do you create a prioritised system of permits, where higher-priority permits are fulfilled first? If you instead take a more conservative approach to permit allocation (based on some lower level of flows), then potentially lots of valuable water simply flows out to sea. Do you then allocate additional time-limited permits in years with substantially higher-than-permitted flows?

A second major issue to overcome is how to allocate the initial set of rights. Municipal water supply and other existing users should probably be allocated rights first. But what about other users? What about tangata whenua? Do the remaining rights get auctioned? Who receives the proceeds from the auction (local government, central government, iwi, some other group, or some combination)?

It is clearly time to have a serious conversation about water allocation in New Zealand, because we can do much better. As Sharp concludes:
We have lost 30 years of opportunity. The cost is obvious: water is over-allocated in numerous catchments, patterns of use can't readily adapt to changing economic conditions, and water quality has deteriorated. We can do better for the generation that follows.

Sunday, 19 June 2016

The diminishing marginal returns to classroom experiments

I'm a big fan of using classroom experiments to illustrate concepts to my students. When I taught managerial economics, I specifically organised the workshops and assessment to incorporate many experiments. I use a number of experiments in ECON110, which are worth extra credit for students.

So I was interested to read this recent paper by Tisha Emerson and Linda English (both Baylor University), published in the Papers and Proceedings issue of the American Economic Review (sorry I don't see an ungated version anywhere). In the paper, Emerson and English look at 28 principles of microeconomics classes from Baylor between 2002 and 2013, some of which used experiments (between 6 and 11 experiments) and some of which had no experiments (and act as a control group). Looking at overall performance in the course, they find:
...a statistically significant positive relationship between the number of experiments in which a student participates and their course score. The impact is diminishing, however, as the number of experiments increases.
The impact of experiments is different for different groups:
While older students outperform younger ones, the youthful disadvantage is partially offset by participation in experiments. Similarly, our findings suggest that an achievement gap exists between whites and ethnic minorities, but that experiments serve to help bridge this gap as well. However, experiments do not appear to differentially impact student performance by gender, aptitude, or attendance.
So, the positive effect of classroom experiments is largest for the first experiment, and then diminishes. We can use the coefficients from Table 2 in the paper (shown below) to calculate the optimal number of experiments (see [*] for one example of the calculus if you're feeling adventurous) - that is, the number of classroom experiments that would maximise grades for a student with a given set of characteristics. The optimum differs by group (because of the interactions between the number of experiments and gender, age, nonwhite, SAT total, and absences from class).


For a 19 year old white male (with the mean SAT of 1171 and the mean absences of 2.59), the optimal number of classroom experiments is 5.3, while for a 19 year old nonwhite male (with mean SAT and absences) the optimum is 7.5, and for a 19 year old nonwhite female (with mean SAT and absences) the optimum is 8.0. Interestingly, the optima are much lower for older students. For example, for a 23 year old white male (with mean SAT and absences), the optimum is -0.4. So, most of the gains from classroom experiments in this sample accrue to younger and nonwhite students.

Anyway, overall this tells us that classroom experiments are likely a good thing, and are likely an even better thing for traditionally disadvantaged groups. Indeed, the authors conclude:
...experiments may be providing market experiences that these groups (younger and minority students) may not otherwise have had and thus greater understanding of economic concepts that can’t be gleaned simply from class discussion. As such, the use of classroom experiments may help reach these otherwise disadvantaged groups.
So I guess I should persist with classroom experiments, just not too many!

*****

[*] Using the coefficients for a 19 year old white male (with the mean SAT of 1171 and mean absences of 2.59):

G =  (9.948 - 0.171 - 0.444 * 19 - 0.000186 * 1171 + 0.201 * 2.59) x - 0.154 x^2 + z
    = 1.644 x - 0.154 x^2 + z

where G is the student grade (percentage), x is the number of experiments, and z is all of the other parts of the regression equation that don't include the number of experiments.

Differentiate the function G with respect to x:

dG/dx = 1.644 - 0.308 x

Set this equal to zero, and solve for x:

1.644 - 0.308 x = 0
x = 5.3

So, the percentage grade is maximised for students with these characteristics when they participate in 5.3 experiments. Similar calculations can be made for other combinations of characteristics.
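For what it's worth, the same calculation can be wrapped in a few lines of Python. This only reproduces the white male case, since those are the only coefficients quoted in the footnote above (the optima for other groups need the gender and ethnicity interaction coefficients from Table 2):

```python
def optimal_experiments(age, sat=1171.0, absences=2.59):
    """Grade-maximising number of experiments for a white male student,
    using only the Table 2 coefficients quoted in the footnote above."""
    # linear coefficient on x, evaluated at this student's characteristics
    linear = 9.948 - 0.171 - 0.444 * age - 0.000186 * sat + 0.201 * absences
    quadratic = -0.154
    return -linear / (2 * quadratic)  # vertex of the parabola in x

print(round(optimal_experiments(19), 1))  # 5.3
print(round(optimal_experiments(23), 1))  # -0.4
```

The two printed values match the optima for 19 and 23 year old white male students reported in the post.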

Saturday, 18 June 2016

The (small) effect of immigration on wages

A few weeks ago, Treasury warned of the impact of historically high immigration on jobs and wages. Here's what the New Zealand Herald reported:
The Treasury warned that record levels of immigration could push New Zealanders out of low-skilled jobs, depress wages and increase housing pressures...
Migrants were increasingly working in low-wage industries where there is no strong evidence of a skills shortage, Treasury noted in a briefing, released under the Official Information Act...
Last week's Budget forecasts show net long term migration peaking at a net 70,700 inflow in the year to June 2016, dropping back to the long term average of 12,000 by 2019.
So, net migration is at an all-time high. It's worth taking a look at how (theoretically) that should affect jobs and wages. Let's start with the basic labour market model (shown in the diagram below). Before the large increase in immigration, the equilibrium wage is W0, and the equilibrium number of jobs is Q0. Immigration increases the supply of labour from S0 to S1 (since there are more people looking for jobs - the same people who would have been looking anyway, plus the new immigrants). This decreases the equilibrium wage to W1 (which means locals receive lower wages, as well as the new immigrants), but increases the number of jobs to Q1. Since wages are lower, at least some employers will be willing to hire more workers, raising overall employment. However, since some of those additional jobs go to the new immigrants, the number of jobs for natives (and previous immigrants) might increase a little, or might decrease.


But wait. If, as per the quote above, new immigrants are increasingly working in low-wage industries, then we need to consider how the minimum wage affects our analysis, since wages can't fall below the minimum wage. Consider instead the diagram below, where the minimum wage (WMIN) is set above the equilibrium wage. This leads to some job rationing (unemployment) because the quantity of labour supplied (essentially the number of people willing to work), QS, is greater than the quantity of labour demanded (essentially the number of available jobs), QD. With the large increase in immigration, the supply curve again increases from S0 to S1. This doesn't affect the wage, but it does increase the quantity of labour supplied (to QS1), and increases unemployment. Employers aren't willing to hire any more people, but there are more people looking for work (new immigrants, plus natives and previous immigrants), leading to higher unemployment.


There's one more thing to consider though. Having more people in the country increases the demand for goods and services. Firms will need to produce more in order to satisfy the increased demand - so at least some employers will need to hire more workers, increasing the demand for labour. So, let's make that adjustment (in the diagram below). In addition to the increase in supply of labour from S0 to S1 (from high immigration), the demand for labour increases from D0 to D1. So, at the minimum wage the quantity of labour supplied increases (from QS to QS1), but so does the quantity of labour demanded (from QD to QD1). The wage is not affected (since the minimum wage is still binding), but the increase in unemployment is much less.


If instead you think about labour markets that are not constrained by the minimum wage (like the first diagram in this post), the decrease in wages from high immigration is likely to be much less than that diagram shows (because of increased demand for labour).
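To make the diagrams above concrete, here is a minimal linear labour market model. All of the parameter values are invented for illustration:

```python
# Minimal linear labour market: demand Qd = a - b*W, supply Qs = c + d*W.
# All parameter values below are invented for illustration.

def equilibrium(a, b, c, d):
    """Market-clearing wage and employment (no wage floor)."""
    w = (a - c) / (b + d)
    return w, a - b * w

a, b = 100.0, 2.0   # labour demand: Qd = 100 - 2W
c, d = 10.0, 1.0    # labour supply before immigration: Qs = 10 + W

w0, q0 = equilibrium(a, b, c, d)       # before: W = 30, Q = 40
w1, q1 = equilibrium(a, b, c + 6, d)   # immigration shifts supply right
# Without a wage floor: the wage falls, total employment rises.
assert w1 < w0 and q1 > q0

# With a binding minimum wage, the wage can't fall, so the supply shift
# shows up as extra unemployment instead:
w_min = 32.0
qd = a - b * w_min                 # jobs available: 36
qs0 = c + d * w_min                # willing workers before: 42
qs1 = (c + 6) + d * w_min          # willing workers after: 48
unemployment_before = qs0 - qd     # 6
unemployment_after = qs1 - qd      # 12
assert unemployment_after > unemployment_before
```

Shifting labour demand rightward as well (the third diagram) would narrow the gap between quantity supplied and quantity demanded, which is why the increase in unemployment is much smaller once we account for immigrants' own demand for goods and services.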

So, that is the theory. What does the evidence say? My colleague Jacques Poot (along with Simonetta Longhi and Peter Nijkamp from Vrije Universiteit Amsterdam) conducted a meta-analysis combining 348 estimates (from 18 different studies) of the percentage change in the wage of a native worker with respect to a 1 percentage point increase in the ratio of immigrants over native workers, which was published in the Journal of Economic Surveys in 2005 (ungated earlier version here). They found that:
Overall, the effect is very small. A 1 percentage point increase in the proportion of immigrants in the labour force lowers wages across the investigated studies by only 0.119%.
The literature remains far from settled on the effect of immigration on labour markets (whether you are looking at jobs or wages). This 2015 NBER Working Paper (ungated version here), by Ethan Lewis and Giovanni Peri, reviews the literature to date (including many more recent studies than the meta-analysis cited above) and asserts that there are positive effects of immigration on native-born workers, acting through increasing productivity. However, given the uncertainty in the literature (between a positive or negative net effect), we can be fairly certain the effect of immigration on the labour market is pretty small (whether positive or negative).

Coming back to the original story, Treasury argues that "at the margin, we believe that there are benefits to making changes to immigration policy". No doubt that is true, but the gains are likely to be very small.

Thursday, 16 June 2016

Teaching with music videos might improve learning

As covered in Phys.org, a new paper published in the International Journal of Science Education (sorry I don't see an ungated version anywhere) by Gregory Crowther (University of Washington), Tom McFadden and Jean Fleming (both University of Otago), and Katie Davis (University of Washington) looks at whether science learning can be facilitated through using music videos. It probably isn't much of a stretch to suggest that if it works for science, it would probably work for economics as well.

Crowther et al. conducted three experiments. In the first experiment, they had 568 volunteers at STEM outreach events in the U.S. complete a short test, then watch a short science-related music video, then complete a post-test. Here's what they found:
Overall, 13 of the 15 science music videos led to statistically significant gains in student test performance. These gains were found across all age groups and for both male and female students. Moreover, students improved their scores on the more complex ‘comprehension’ questions as well as the straight ‘knowledge’ questions. Scores on the unrelated ‘bonus’ questions did not show any change, suggesting that the gains were attributable to watching the science music video rather than simply the repetition of the question.
So, the music videos led to robust immediate gains in science knowledge, and suggested some deeper learning (since 'comprehension' questions require a bit more thinking than simple 'knowledge' questions).

In the second experiment, they completed a similar exercise comparing music videos and non-music videos in a separate sample of 403 volunteers, again at U.S. STEM outreach events. Unfortunately, this second experiment only had a post-test, and no pre-test, so gains in science knowledge cannot be measured. However, assuming randomisation between the music-video treatment and the non-music-video treatment was successful, that probably isn't too serious an issue. In this experiment they found:
Test scores were similar after musical and non-musical videos, with or without intervening ‘distractor videos’... Thus the modality of the video (musical or non-musical) did not strongly impact short-term test performance.
Right. So maybe it's simply watching videos, rather than watching music videos, that provides gains in science knowledge.

In the third experiment, they used a single music video (this one) to test whether the gains were persistent. In this experiment they compared a music video [MUSIC] with a non-music video [FACTS] (with both a pre-test and post-test so they can measure gains in science knowledge) for students in two Dunedin schools, but also included a further post-test 28 days after the students watched the video. Here's what they found:
Students in both the MUSIC and FACTS groups showed statistically significant pre-test to post-test gains. Though the FACTS group showed possibly greater immediate improvements than the MUSIC group, their gains appeared to be short-lived, whereas the MUSIC group tended to maintain their post-test improvements for 28 days.
The gains only persisted in the group that was shown the music video. Furthermore, the students shown the music video enjoyed their video more than the other group.

What should we take away from this? In teaching economics, perhaps there is something to be said for making more use of alternatives to chalk-and-talk lectures, as we do at Waikato, especially in ECON100 and ECON110. Maybe we should make more use of music videos. Can I suggest someone starts by making a video for this song?

[HT: eSocSci]

Sunday, 12 June 2016

What adult colouring books tell us about rationality

Economics has been widely criticised for relying on assumptions of rational behaviour. Rationality requires decision-makers to consider the universe of possible alternatives and compute the expected outcomes before coming to a decision. Part of this implies that decision-makers can anticipate and act on the best available profit opportunities - we often use the phrase that there are no $100 bills lying on the footpath (or sidewalk if you prefer).

However, since humans obviously cannot process all of the available information (some of which is not only unknown, but unknowable), pure rationality is not possible, and as a theoretical foundation rational behaviour is pretty shaky. A recent Joshua Gans post about adult colouring books highlights this. He writes:
We don’t speak of it very often but economists face a fundamental challenge with respect to innovation: if innovation is something no one has anticipated, then the (Savage) axoims [sic] upon which we base our rational choice decision-making cannot apply.
Let me explain. Decision-making is all about actions and their consequences. Leonard Savage created the framework by which economics deals with this by assuming that all agents ‘look before they leap.’ That is, an agent would choose amongst actions available taking into account all possible states of the world and the consequences in each state. This requires agents to have complete knowledge of the state-action space...
And so we arrive at the adult colouring book. In 2015, three of the best selling books of the year were adult colouring books. In Canada it was 5 of the top 10 including the top 2. These sell for around $10.00 and have complex designs that would be challenging for a kid to colour between the lines. They are highly rated and sometimes seen as challenging and sometimes as relaxing.
So, here was a large profit opportunity that sat untapped for a long time (a multi-million-dollar note on the footpath). Colouring books have been around for decades, so the profit opportunity from adult colouring books is unlikely to have suddenly arisen in the last couple of years. The fact that nobody noticed this opportunity until recently ably demonstrates that people cannot be purely rational.

Which should come as no surprise to us. Herbert Simon introduced us to the concept of bounded rationality in his 1955 article "A Behavioral Model of Rational Choice" - the rationality of decision-makers is limited by the information that they have available to them, or that they can obtain at low cost (and note that Friedrich Hayek made a similar point (ungated here) a decade earlier, arguing that people make decisions based on local information). This suggests that at least some profit opportunities might remain unexploited until some entrepreneur thinks of them, and that economics hasn't totally ignored the obvious.

Friday, 10 June 2016

Alcohol, momentary happiness, and life satisfaction

Last month, Honor Whiteman reported in Medical News Today on this study (sorry I don't see an ungated version anywhere, but it looks like the article is open access anyway) by Ben Baumberg Geiger (University of Kent) and George MacKerron (University of Sussex), published in Social Science & Medicine. Whiteman writes:
Study leader Dr. Ben Bamburg [sic] Geiger, from the University of Kent in the United Kingdom, found that while drinking alcohol makes us momentarily happy, it fails to offer long-term life satisfaction and well-being.
This sounded like an interesting study, so I sought it out. The authors analysed two U.K. data sources to investigate the relationship between wellbeing and alcohol consumption. I'll start with their second analysis, which used data on momentary happiness ('Mappiness' - see here). The authors describe the data as follows:
Rather than the traditional method of recruiting a sample and providing them with a diary or computer, Mappiness uses a sample of existing iPhone users who chose to download the Mappiness app. Users are then beeped at regular but random moments (the default is twice/day between 08:00-22:00), giving them a brief questionnaire about how happy they are, who they are with, and what they were doing just prior to their response...
After excluding non-UK responses, the total sample here contains 2 million individual responses, collected from 31,302 individual users across 2010-2013... Given the self-selection into the sample and the restriction to iPhone users, it is unsurprising that the sample is unrepresentative of the UK population: participants are more likely to be young (two-thirds are under 35) and wealthy (median income is £48,000, almost twice the UK median)...
The last point is important, but we won't worry about that for now. Importantly, the dataset is large and contains multiple measurements for each person, both when they are drinking and when they are not. The longitudinal nature of the data allows them to use fixed effects models, which essentially strip out the effects of anything specific to the individuals that doesn't change over time.
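The intuition behind the fixed effects (within) estimator can be sketched in a few lines. The following is a toy simulation, not the authors' actual model: the 3.6-point effect is borrowed from the paper's headline estimate, but all other numbers are invented. Demeaning each variable by person wipes out anything about an individual that doesn't change over time, leaving only within-person variation to identify the effect of drinking:

```python
import numpy as np

rng = np.random.default_rng(42)
n_people, n_obs = 500, 40
person = np.repeat(np.arange(n_people), n_obs)

alpha = rng.normal(50, 10, n_people)               # time-invariant individual happiness levels
drinking = rng.binomial(1, 0.3, n_people * n_obs)  # 1 if drinking at the moment of the beep
happiness = alpha[person] + 3.6 * drinking + rng.normal(0, 5, n_people * n_obs)

def demean(x, groups):
    """Subtract each person's own mean (the 'within' transformation)."""
    means = np.bincount(groups, weights=x) / np.bincount(groups)
    return x - means[groups]

# After demeaning, the individual effects alpha drop out entirely
y = demean(happiness, person)
d = demean(drinking.astype(float), person)
beta = (d @ y) / (d @ d)  # OLS slope on the demeaned data
print(round(beta, 1))
```

The estimated slope recovers the true 3.6-point effect despite the large differences in baseline happiness across people, which is exactly why fixed effects models are attractive for data like Mappiness.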

They find that momentary happiness is greater when individuals are drinking, by about 3.6 points (on a 0-100 scale). It's hard to evaluate whether that is a large effect, because they don't report the descriptive statistics (mean, standard deviation, etc.) for the happiness scale, either in the text or in the online appendix, but we know it is statistically significant (and probably small). They are largely able to rule out reverse causation (people drink when they feel happier), because even when controlling for happiness earlier in the same day, drinking is associated with 3.3 points higher happiness. So, within this sample (noting its unrepresentativeness), drinking is associated with higher happiness in the moment.

The first analysis took a more long-term view. They used longitudinal data from the British Cohort Study 1970 (BCS70), which included 17,000 babies born in the same week in 1970 in the U.K. The authors used data from the 1999/2000, 2004, and 2012 waves of the study, when the study participants were aged 30, 34, and 42 respectively. They essentially looked at the effect of alcohol consumption on life satisfaction (measured on a 0-10 scale). They found (emphasis is theirs):
In the unadjusted FE model, increases in drinking beyond 1 unit/wk are associated with reduced life satisfaction... In the final models that control for both unobserved time-invariant (via fixed effects) and observed time-varying factors (via controls), we see a similar pattern, but this is weaker and not statistically significant...
In other words, alcohol consumption is not associated with life satisfaction, although they also found that problem drinking (measured by CAGE) was associated with lower life satisfaction. So, this study is telling us that while alcohol consumption might be associated with increased happiness in the moment, it doesn't increase longer-term life satisfaction (and reduces life satisfaction for problem drinkers).

It's hard to say what to take away from this study. The two samples, while both from the U.K., are not directly comparable. The Mappiness sample is both younger and richer than the BCS70 sample. So the Mappiness results don't tell us whether poorer people experience the same gains in momentary happiness (and I don't have a prior as to the likelihood of the effects being the same or different). The difference in age between the samples may be important too. The authors found in supplementary analyses that the effects on momentary happiness were largest among the youngest respondents. Maybe the positive momentary happiness effects do spill over into life satisfaction, but we don't see any positive impact in the BCS70 sample because its members are too old to experience those gains. It's hard to say - there's definitely room for more research like this.

And before we rush out proclaiming that all is well with alcohol because it increases momentary happiness, we need to remember that promoting people's momentary happiness may not be consistent with promoting people's long-term wellbeing. I'm sure we can all think of things we have done that made us happy in the moment, but which we later regretted. It's also worth noting that there is not necessarily any expectation that momentary happiness and longer-term life satisfaction be closely correlated (see for example Daniel Kahneman on the difference between experienced happiness and life satisfaction).

Overall, understanding the impacts of alcohol use on wellbeing is important, and under-studied. The authors note in their conclusion:
Policymakers currently have a choice between overestimating the wellbeing gains of alcohol policies (by valuing alcohol's negative wellbeing impacts and ignoring positive impacts), underestimating them (by using implausibly naive versions of the consumer surplus approach), or ignoring them altogether. Yet policymakers and the public are often concerned about the wellbeing impacts of alcohol policies - and in the absence of any considered debate from academic researchers, they will be left clutching at the naive approaches used by those outside of academia.
Which is why rubbish research like this gains a disproportionate share of attention. We can do better.

[HT: Marginal Revolution]

Thursday, 9 June 2016

Don't expect new antibiotics anytime soon

The most read post on my blog is this one from 2014 on the economics of drug development and pricing. Maybe it's time for a follow-up, but this time with a more specific focus, on antibiotics.

The Economist had an excellent (albeit somewhat scary) article last month about antibiotic resistance:
A thorn scratch today seems a minor irritant, not a potential killer. But that may be too sanguine. A study by America’s Centres for Disease Control (CDC) found that the number of cases of sepsis rose from 621,000 to 1,141,000 between 2000 and 2008, with deaths rising from 154,000 to 207,000. One reason for that is the emergence of MRSA (pictured being attacked by a white blood cell)—a variety of Staphylococcus aureus that cannot be killed with methicillin, one of penicillin’s most effective descendants. This could just be a taste of things to come. Three years ago the CDC produced a list of 18 antibiotic-resistant microbes that threaten the health of Americans (see table). Five of them (including MRSA) cause sepsis.
Microbes are increasingly exhibiting antibiotic resistance, not only to first-line antibiotics but to their alternatives as well. Without effective antibiotics, infections that were previously treatable become potentially life-threatening. That makes the development of new antibiotics important and increasingly urgent. However, as I noted in my 2014 post, the cost of developing new pharmaceutical drugs is very large. While pharmaceutical companies hope to recoup that cost by holding a legal monopoly (via patent) over the drug and charging a relatively high price for it, in the case of antibiotics that is not assured, because there are already antibiotics that are effective in most cases, meaning new antibiotics would (hopefully) need to be used only rarely. The Economist notes:
There are reasons for drug firms not to invest in antibiotics. Such companies increasingly prefer treatments for chronic diseases, not acute ones; the customers stick around longer. And despite the growing problem of resistance, most antibiotics still work for most things most of the time. Given that the incumbents are also cheap, because they are off-patent, new drugs cannot earn back their development costs. Even if they could, it would be poor public policy to let them; much better for new drugs to be used only sparingly, to forestall the development of further resistance. That further puts the kibosh on sales.
This figure (source here) also effectively demonstrates the case against developing new antibiotics:


It takes 23 years for a new antibiotic to break even. By then, the patent has almost expired and the drug will soon need to compete with generic competitors, which severely limits the profits available to the pharmaceutical firm. Most pharmaceutical firms are looking to invest in research that leads to quicker payoffs than that, and they have plenty of other research opportunities that they can invest in.
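To see why a 23-year payback is so unattractive, consider a simple net present value calculation. The cash flows below are purely invented for illustration; the point is that a project which nominally breaks even after 23 years is deeply unprofitable once future revenues are discounted:

```python
def npv(rate, cashflows):
    """Net present value of a cash flow sequence, starting at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

rd_cost = -1000.0      # upfront R&D cost in $m (hypothetical)
annual_revenue = 43.5  # net revenue per year in $m (hypothetical)
flows = [rd_cost] + [annual_revenue] * 23  # nominally breaks even after 23 years

undiscounted = npv(0.0, flows)   # roughly zero: the drug 'breaks even'
discounted = npv(0.10, flows)    # deeply negative at a 10% discount rate
print(round(undiscounted, 1), round(discounted, 1))
```

At any realistic discount rate, the project destroys value, so a pharmaceutical firm with alternative research opportunities offering quicker payoffs will rationally pass on it.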

There are alternatives to the 'traditional' development-patent-monopoly approach. This is what I wrote in 2014, and it still stands:
A better option was laid out several years ago by Nobel Prize winner Joseph Stiglitz. Stiglitz argues convincingly that an alternative to the current intellectual property (patent) based regime is to offer prizes for firms that develop medicines, cures or vaccines for diseases that affect the poorest countries. Stiglitz says:
"A solution to both high prices and misdirected research is to replace the current model with a government-supported prize fund. With a prize system, innovators are rewarded for new knowledge, but they do not retain a monopoly on its use. That way, the power of competitive markets can ensure that, once a drug is developed, it is made available at the lowest possible price - not at an inflated monopoly price."
The trade-off for firms collecting the prize money (which would be contributed to mainly by developed country governments) is that their drug would have to be made available in the public domain (i.e. not patented). Then, generic drug manufacturers would be able to produce the drugs at low cost to provide to poor countries and rich countries alike. One promising example of this approach is the Health Impact Fund. Advance market commitments are a similar idea.
Unless we apply some of these alternatives, the low incentive for pharmaceutical companies to develop new antibiotics will remain. So, don't expect new antibiotics anytime soon. It might be best to try and avoid infections.

Wednesday, 8 June 2016

A/B testing in the wild

In ECON100 we briefly discuss A/B testing - where you provide different versions (of a website, an advertisement, a letter, etc.) to different people, then evaluate how those different versions affect people's interactions with you. A common example is online shopping, where different pictures of products on the website might be more effective at inducing sales. Or different formats for a direct mail letter might induce more donations for an NGO. And so on. It's hard to say how prevalent A/B testing is, but I would imagine that most online businesses should be engaging in it at least at some level. It's a pretty low-cost and evidence-based way of increasing sales, revenues, profits, etc.
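For a flavour of how the results of an A/B test might be evaluated, here is a minimal two-proportion z-test. The conversion numbers are invented, and this is just one common way to analyse such an experiment, not a description of any particular provider's method:

```python
from math import sqrt, erf

def ab_test(conversions_a, n_a, conversions_b, n_b):
    """Two-proportion z-test comparing conversion rates of versions A and B."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided, normal approximation
    return z, p_value

# Invented data: version B converts 5% of visitors vs 4% for version A
z, p_value = ab_test(200, 5000, 250, 5000)
print(round(z, 2), round(p_value, 3))
```

In this invented example the difference is statistically significant at the 5% level, so the firm would switch everyone to version B, which is the essential A/B testing loop.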

This week, I was pointed to this blog piece by Dillon Reisman about A/B testing in the wild. He describes using a web crawler to unveil how websites are making use of A/B testing (or to be more specific, how websites using the A/B testing provider Optimizely are using A/B testing). Here's one example:
A widespread use of Optimizely among news publishers is “headline testing.” To use an actual recent example from the nytimes.com, a link to an article headlined:
“Turkey’s Prime Minister Quits in Rift With President”
…to a different user might appear as…
“Premier to Quit Amid Turkey’s Authoritarian Turn.”
The second headline suggests a much less neutral take on the news than the first. That sort of difference can paint a user’s perception of the article before they’ve read a single word. We found other examples of similarly politically-sensitive headlines changing, like the following from pjmedia.com:
“Judge Rules Sandy Hook Families Can Proceed with Lawsuit Against Remington”
…could appear to some users as…
“Second Amendment Under Assault by Sandy Hook Judge.”
 Here's another:
Many websites target users based on IP and geolocation. But when IP/geolocation are combined with notions of money the result is surprising. The website of a popular fitness tracker targets users that originate from a list of six hard-coded IP addresses labelled “IP addresses Spending more than $1000.” Two of the IP addresses appear to be larger enterprise customers — a medical research institute and a prominent news magazine. Three belong to unidentified Comcast customers. These big-spending IP addresses were targeted in the past with an experiment that presented the user with a button that either suggested the user “learn more” about a particular product or “buy now.”
Connectify, a large vendor of networking software, uses geolocation on a coarser level — they label visitors from the US, Australia, UK, Canada, Netherlands, Switzerland, Denmark, and New Zealand as coming from “Countries that are Likely to Pay.”
Non-profit websites also experiment with money. charity: water (charitywater.org) and the Human Rights Campaign (hrc.org) both have experiments defined to change the default donation amount a user might see in a pre-filled text box.
I guess the take-away is that we should probably assume that all websites are experimenting on us, pretty much all the time. And it's not just limited to Facebook's secret psychological experiments.

[HT: David McKenzie at Development Impact]

Monday, 6 June 2016

Why Aucklanders should pay more for their electricity

Brian Fallow wrote an interesting piece in the New Zealand Herald a couple of weeks back, about the draft changes in the way the costs of the national electricity grid are allocated. Fallow wrote:
Unwinding a cross-subsidy is never popular among those who have been on the bludger's end of one.
But Aucklanders grumpy about the prospect of having to pay about $1 a week more per household for their power need to get over it.
The draft changes to the way the costs of the national grid are allocated, which the Electricity Authority released this week, are intended to make the system fairer and more efficient, by getting a better alignment between those who benefit from upgrades to the grid and those who pay for them.
Transpower has spent billions in recent years on the grid, mainly to serve Auckland's growing population.
Under the draft changes, Aucklanders face the prospect of paying an additional $58 per year per household for their electricity. Households in other parts of the country (especially in the South Island) will face lower electricity charges.

How can this be? Electricity is not transmitted from where it is generated to where it is used without cost. The further the electricity needs to be transmitted, the higher the costs of transmission. So, it is only natural that the customers who impose a higher cost on the supplier face higher prices. To be clear, the total cost of electricity transmission will not have changed - only the way those costs are distributed among consumers of electricity will change. The previous (current) system, which did not take proper account of the differences in transmission costs, was essentially subsidising Auckland electricity users (who did not face the full cost of their electricity use) at the expense of South Island electricity users (who paid too much for their electricity, relative to its cost). I can't see any argument for why such a cross-subsidy would be worthwhile or necessary.
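A stylised numerical example may help. All of the numbers below are invented (they are not the Electricity Authority's figures): under a uniform charge, every household pays the same share of grid costs, while under cost-reflective pricing each region pays in proportion to the transmission costs it imposes:

```python
total_cost = 900.0  # total grid cost in $m per year (invented)

# Households (millions) and share of transmission cost imposed (both invented)
households = {"Auckland": 0.55, "Rest of NZ": 1.15}
cost_share = {"Auckland": 0.45, "Rest of NZ": 0.55}

uniform = total_cost / sum(households.values())  # $/household, smeared evenly
reflective = {r: total_cost * cost_share[r] / households[r] for r in households}

for region in households:
    # Positive values mean the region pays more than under the uniform charge
    print(region, round(reflective[region] - uniform))
```

Because Auckland imposes a disproportionate share of transmission costs relative to its share of households, cost-reflective pricing raises its per-household charge and lowers everyone else's, without changing the total cost recovered.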

As an aside, usually when different consumers face different prices for the same product, we consider this to be price discrimination. However, this is not an example of price discrimination, because the price differences are related to underlying cost differences. It is simply a case of the lines companies (and electricity generators/retailers) passing cost differences on to consumers: higher transmission costs in the case of Auckland, and lower costs in the case of the South Island.

As a result, this change in the distribution of costs should (at the margin) encourage more modest electricity use in the north, and more electricity use in the south (perhaps as a winter heating source, reducing the smog problems in Christchurch and Timaru).

Sunday, 5 June 2016

Try this: The Tragedy of the Bunnies

Want to better understand the tragedy of the commons? Try The Tragedy of the Bunnies - the game takes about two minutes to play, and provides a useful example of the tragedy of the commons in action. Here's how the game is described on the site:
You are a bunny merchant, and the way you make a living is to sell adorable bunnies to little children each year (you sell bunnies by clicking on them).
The "tragedy of the bunnies" game is played in two rounds (two consecutive seasons). After the first round, the bunnies triple in number (they are bunnies, after all).
In the game, you compete against two other bunny merchants. Your goal is to bring the most bunnies to market that you can.
There are two versions of the game -- public bunnies, and private bunnies. After playing both versions of the game, you will understand the "tragedy of the bunnies".
Enjoy!
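The incentive structure of the game can be sketched in a few lines. This is a stylised version with invented numbers, not the game's actual payoffs: with a common pool, each merchant's best response is to grab bunnies before the rivals do, so nothing is left to reproduce; with private ownership, waiting for your own bunnies to triple is safe:

```python
def public_harvest(initial=30, merchants=3):
    # Common pool: grabbing bunnies immediately is the dominant strategy,
    # so the pool is exhausted in round 1 and never triples.
    return initial // merchants  # per-merchant harvest over both rounds

def private_harvest(initial=30, merchants=3):
    # Private pens: each merchant's share safely triples before round 2.
    return (initial // merchants) * 3

print(public_harvest(), private_harvest())  # → 10 30
```

The threefold difference in harvests is the tragedy of the commons in miniature: well-defined property rights change the dominant strategy from 'grab now' to 'conserve and wait'.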

[HT: This 2014 paper (ungated version here) by Brandon Sheridan (North Central College), Gail Hoyt (University of Kentucky), and Jennifer Imazeki (San Diego State University)]

Saturday, 4 June 2016

Student performance and legal access to alcohol

I recently read an interesting 2013 paper (ungated earlier version here) by Jason Lindo, Isaac Swensen, and Glen Waddell (all from the University of Oregon), published in the Journal of Health Economics. In the paper, the authors investigated the effect of attaining the minimum legal drinking age (which is 21 in the U.S.) on college students' academic performance. This is a fairly important question, since we'd like to know if drinking makes young people worse off (and if so, by how much), so knowing whether it interferes with their studies is one way of getting an answer to the broader question.

Lindo et al. used student-level academic transcript data from the University of Oregon, and compared "a student's grades after turning 21 to what would be expected based on his average prior performance and accumulated experience". Here's what they found:
The results from our preferred approach indicate that students' grades fall below their expected levels by approximately 0.03 standard deviations upon being able to drink legally, a modest amount compared to the 0.06 to 0.13 standard-deviation effect estimated in earlier research. The effect is statistically significant, manifests in the term a student turns 21, and persists into later academic terms. In addition, we find that the effects on academic performance are especially large for females, low-ability males, and males who are most likely from financially disadvantaged backgrounds.
It is worth repeating their main result with some emphasis added: Being legally able to drink is associated with lower academic performance by 0.03 standard deviations. In other words, this is one of those cases where the effect is statistically significant, but the size of the effect means that it is economically meaningless.

Having said that, we don't know what the mean or standard deviation of GPA are, as they aren't reported in the paper, so we can't really evaluate how 'large' the effect is. However, the authors note that this is "the equivalent of causing a student to perform as if his or her SAT score were 20 points lower". Given the SAT has a standard deviation of about 100, this is equivalent to lowering their SAT scores by 0.2 standard deviations - roughly the difference between being at the 50th percentile and the 42nd percentile of the SAT distribution. That is, a pretty small effect.
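Converting a standard-deviation shift into percentiles is straightforward under a normal approximation; for instance, a 0.2 standard deviation drop moves a median student to roughly the 42nd percentile:

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

before = round(100 * norm_cdf(0.0))   # a median student: 50th percentile
after = round(100 * norm_cdf(-0.2))   # after a 0.2 SD drop
print(before, after)  # → 50 42
```

An eight-percentile slide is noticeable but hardly catastrophic, which supports reading the paper's estimate as a small effect.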

It would be interesting to replicate this sort of analysis for New Zealand, in the context of the drinking age changing from 20 to 18 in the 1990s (if the academic data are available). It may be that attaining the minimum legal drinking age has larger effects at younger ages, but I wouldn't bet too much on it.

Overall, file this paper under 'nothing much to see here'.