Does the #Brexit referendum franchise matter?

May 25, 2015

The government has announced that EU citizens will not be eligible to vote in the referendum on Britain’s membership of the EU. However, because the electoral roll will be the same as that used in the general election, citizens of certain Commonwealth countries will be able to vote.

The question of who ought to be able to vote in a referendum like this is a normative one. From what I can tell, normative theorists, often building on the "all affected interests" principle, tend to support broader franchises, and would typically support extending the franchise to EU citizens. Certainly, I’m not aware of any principled argument for why Maltese citizens should be able to vote but Greeks shouldn’t.

There is, however, a separate empirical question: would extension of the franchise make any difference to the outcome?

According to wave 4 of the British Election Study (fieldwork: March 2015), 50.2 percent of British citizens supported Britain staying in the EU.

In that same survey, 81.5 percent of EU citizens resident in the UK supported Britain staying in the EU.

According to the latest ONS Population by Country of Birth and Nationality report, there are 57,678 thousand resident Britons, and 2,507 thousand resident EU citizens, or roughly 23 British citizens for every EU citizen.

If we assume that the two groups vote at similar rates,1 the difference between a Britons-only referendum and a Britons-plus-EU-citizens referendum is the difference between the two figures below (the arithmetic is checked in the sketch that follows):

  • 50.2%
  • (57,678/(57,678 + 2,507)) * 50.2 + (2,507/(57,678 + 2,507)) * 81.5, or 51.5%
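
As a check on that arithmetic, here is a minimal sketch in Python; the turnout-adjusted figure at the end combines the population numbers with the indyref "very likely to vote" shares from the footnote, a combination that is my own illustration, not something either survey reports:

```python
# A check on the arithmetic above, using the ONS population figures
# (in thousands) and the BES wave 4 figures on support for staying in.
britons, eu_citizens = 57678, 2507   # ONS, thousands
remain_brit, remain_eu = 50.2, 81.5  # BES wave 4, per cent

total = britons + eu_citizens
blended = (britons * remain_brit + eu_citizens * remain_eu) / total
print(f"Britons only: {remain_brit:.1f}%")           # 50.2%
print(f"Britons plus EU citizens: {blended:.1f}%")   # 51.5%

# Folding in the footnote's turnout caveat, using the indyref
# "very likely to vote" shares as rough relative turnout weights
# (an illustration, not something either survey reports):
w_brit, w_eu = britons * 0.905, eu_citizens * 0.63
adjusted = (w_brit * remain_brit + w_eu * remain_eu) / (w_brit + w_eu)
print(f"Turnout-adjusted: {adjusted:.1f}%")          # 51.1%
```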

Therefore the votes of EU citizens, although they will make Brexit less likely, will only matter in a very close referendum. The choice of electoral roll is thus in a very real sense a matter of principle rather than political practice.


  1. This assumption is likely false. On the one hand, a slightly greater proportion of resident EU citizens will be eligible to vote in virtue of being over 18. On the other hand, turnout amongst the eligible is likely to be lower amongst EU citizens. EU citizens were eligible to vote in the Scottish independence referendum — but whilst 90.5% of British respondents said that it was "very likely" that they would vote, only 63% of EU respondents said the same.

Did Labour lose because it was too left-wing?

May 19, 2015

A number of commentators have suggested that Labour lost the election because it was too left-wing — or, what amounts to the same thing, enough commentators have suggested this to make its denial necessary for some.

Usually, this is accompanied by an implicit invocation of the median voter theorem (“elections are won from the centre”).

There’s just one problem with these arguments: in the early part of the year, Labour was perceived to be closer to the median voter than was the Conservative party.

Wave 3 of the British Election Study asks respondents to position themselves on a left-right scale which runs from 1 to 11.

On this scale, the (weighted) median voter is at 6, equidistant from either end of the spectrum. For what it’s worth, the (weighted) mean is very slightly to the right of this position, at 6.21.

When asked to position the parties, the (weighted) mean position for Labour is 4.18, or 1.82 points to the left of the median voter.

The (weighted) mean position for the Conservative party is 8.67, or 2.67 points to the right of the median voter.

This does not mean that Labour’s positioning was not a contributory factor in the party’s defeat. Perhaps if Labour had been closer still to the median voter, it would have won more votes. But spatial politics and the median voter theorem alone can’t explain the party’s defeat. Other factors, like having a leader who is rated as competent, matter, and probably matter more. After all, if closeness to the median voter were the exclusive determinant of parties’ vote shares, we’d be basking in the bright new dawn of a Liberal Democrat government (weighted mean left-right position: 5.77).
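
For completeness, a minimal sketch of the distance calculations, using the figures quoted above:

```python
# Each party's distance from the (weighted) median voter, using the
# BES wave 3 placements quoted above.
median_voter = 6.0
placements = {"Labour": 4.18, "Conservative": 8.67, "Liberal Democrat": 5.77}

for party, position in placements.items():
    print(f"{party}: {abs(position - median_voter):.2f} points from the median")
# Labour: 1.82, Conservative: 2.67, Liberal Democrat: 0.23
```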

P.S. This conclusion (which is similar to the conclusion that Ed Fieldhouse arrives at, and which would have pre-empted my writing this blogpost had I seen it earlier) seems robust to different survey weights. It’s possible that there are different scaling issues to do with respondents in different party systems, but these would have to be very severe to affect the conclusion. It’s also possible that this finding will change when the campaign wave of the BES is available.

A constituency poll by any other name?

May 4, 2015

ICM has just released a constituency poll of Sheffield Hallam. Before a spiral-of-silence adjustment, Nick Clegg is on 40%, and his Labour challenger Oliver Coppard is on 36%. These figures are based on responses from 336 individuals, and so have a margin of error of around ±5.3 percentage points.

This compares with a recent Lord Ashcroft poll, which showed Clegg on 36%, and Coppard on 38%, but without naming the candidates. These figures were based on responses from 733 individuals.

Given these numbers and their associated margins of error, it’s not clear to me that we can conclude much about the relative merits of naming candidates versus not naming them. Suppose Clegg wins by 38% to Coppard’s 37% — we would be none the wiser.
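
To make that concrete, here is a rough sketch of the margin-of-error arithmetic, assuming simple random sampling (real constituency polls have design effects that would widen these intervals further):

```python
import math

# Approximate 95% margins of error for the two constituency polls,
# using the conservative p = 0.5 formula for a simple random sample.
def margin_of_error(n, p=0.5):
    return 1.96 * math.sqrt(p * (1 - p) / n) * 100

print(f"ICM (n=336): +/- {margin_of_error(336):.1f} points")       # +/- 5.3
print(f"Ashcroft (n=733): +/- {margin_of_error(733):.1f} points")  # +/- 3.6

# Clegg's 4-point lead in the ICM poll and his 2-point deficit in the
# Ashcroft poll both sit well inside these intervals.
```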

Of course, there are other reasons for thinking that constituency polls which name candidates should be more accurate than constituency polls which do not name candidates.

The argument for naming is simple: voters in ballot booths are given a list of names; polls estimate what happens in the ballot booth; therefore polls should use the same prompts found in the ballot booth.

The argument against naming is more complicated, and may trade (entirely or in part) on the survey mode.

Suppose you receive a phone call from a pollster. Being a frequent consumer of polls, you oblige. You are given a list of names, but the list is quite long. You can’t quite remember the name of the Labour candidate, but you’ve heard of that Nick Clegg. Unwilling to look foolish in the eyes of the interviewer, you say, “Clegg, he’s the one”.

In other words, naming candidates in phone surveys biases responses towards the best-known candidate.

This name recognition effect probably doesn’t operate for people with strong party identification. But there are fewer and fewer such people around, and there’s a long history of public opinion research which suggests that public opinion doesn’t really exist outside of a particular measurement context.

Ideally, all parties which conducted named-candidate polls (I’m looking at you, LibDems!) would release these polls after the election so that they can be compared with the nearest (in time) Ashcroft poll. Only then will we be able to get some idea of the relative accuracy of named versus unnamed candidate polls across a range of constituencies.

Comments welcome below!

The Rentoul Questions

May 4, 2015

John Rentoul has put together a very helpful flow-chart intended to help us navigate the thickets of post-electoral coalition formation.

In it, he asks a number of questions about different configurations of parties. I thought I’d try and indicate what the probabilities of these different configurations are, according to the forecasts from electionforecast.co.uk.

Q1: Have the Conservatives plus DUP and UKIP won 323 seats or more?

Very unlikely. The probability of these parties winning 323 seats or more is just 3%.

Q2: Have the Conservatives plus LibDems, DUP and UKIP won 323 seats or more?

Moderately unlikely. The probability of these parties winning 323 seats or more is almost one-third (32%).

Q3: Are the four parties on 321 or 322?

Very unlikely on its own (5%), though this probability can of course be added to the answer to Q2.

Q4: Have Labour plus the LibDems and SDLP won 323 seats or more?

Unlikely. The probability of these parties winning 323 seats or more is a little over 12%.

If you follow the most likely outcome at each branching point, these probabilities imply a minority Labour government, with a second election described as “possible”.

Incumbency matters in government formation

April 24, 2015

Suppose that the forecasts are right, and that the Conservatives will be the largest party in the next parliament, but that parties opposed to the Conservatives will have a plurality.

If you want specific numbers, suppose you have the Conservatives on 283, and Labour + SNP + Plaid + Greens + SDLP on 270 + 47 + 4 + 1 + 3 = 325. Note that the SNP has to join with Labour to ensure that the anti-Conservative bloc has a plurality.

In this situation, what does David Cameron do?

  • He might realize that he does not command the support of a majority in the Commons and resign, allowing the Queen to call on Ed Miliband to form a government.
  • Or, he might go to the House and force a no-confidence vote, which (ex hypothesi) he will lose, allowing the Queen to call on Ed Miliband to form a government.

Under the Fixed Term Parliament Act, it matters which route Cameron chooses.

If he resigns straight away, Ed Miliband becomes Prime Minister, and there is no constitutional requirement that a majority of the Commons expresses its confidence in the government. In other words (and here I disagree with Tom Louwerse), the United Kingdom is an example of negative parliamentarism.

If Cameron forces the Commons to vote his government down, then under the FTPA, the Commons must formally vote its confidence in a Miliband-led government (section 2(3)).

This means that the SNP would be forced to vote in favour of a Labour government, and could not merely abstain.

If Miliband then uses the debate before the confidence motion to lay out a fairly explicit policy programme, he could then use this as a stick with which to beat the SNP, arguing “you voted for it” whenever disagreement arose.

For the Conservatives, who seem hell-bent on linking Labour and the SNP, it might be useful to ensure that the SNP explicitly votes for a Labour government.

So if the forecasts are right, Cameron might have reason to go to the Commons only to be voted down.

Sunday polls tell us nothing about the impact of the “debates”

March 28, 2015

Polls for the Sunday newspapers will start coming out shortly.

Some will be tempted to interpret any changes in these polls as a consequence of Thursday’s “debates”.

That’s stupid. These polls can tell us almost nothing about the impact of these debates. Here’s why.

First, we know that not many people watched the debate.

Second, we know that relatively few of those who watched the debate were undecided. The ICM poll suggests that 8% of watchers fell into that category. I have no reason to believe that’s an under-estimate. Of those 8%, that same ICM poll said that 56% thought Ed Miliband won the debate, and 30% thought David Cameron did.

Now, those numbers might be wrong — but that doesn’t materially affect the calculations that follow. Suppose, unrealistically, that all of the undecided voters decided on the basis of Thursday’s “debates”. That means Miliband has won 8% * 56% = 4.48% of the audience, and Cameron 8% * 30% = 2.4%.

With a debate audience of roughly 2.7 million, that translates to around 121,000 new voters for Miliband, and 64,800 new voters for Cameron.

How big are these numbers as a fraction of the voting population (approximately 30 million people)? 120k Miliband switchers equals 4/10ths of a percentage point; 65k Cameron switchers equals 1/5th of a percentage point.

It’s impossible to detect changes this small accurately unless you have huge, huge samples. A sample of 200,000 people might be enough.
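
A rough power calculation supports this. The sketch below assumes two independent samples, a 5% significance level, 80% power, and a baseline party share of roughly 35%, all illustrative assumptions on my part:

```python
# Back-of-the-envelope power calculation: how big a sample is needed to
# detect the Miliband shift estimated above? Assumes two independent
# samples, 5% significance, 80% power, and a baseline party share of
# roughly 35% -- all illustrative assumptions.
audience = 2_700_000     # implied by the Cameron figures above
electorate = 30_000_000

shift = audience * 0.08 * 0.56 / electorate   # ~0.004, i.e. 0.4 points
z_alpha, z_beta, p = 1.96, 0.84, 0.35

n_per_poll = (z_alpha + z_beta) ** 2 * 2 * p * (1 - p) / shift ** 2
print(f"Shift to detect: {shift * 100:.2f} points")
print(f"Sample needed per poll: {n_per_poll:,.0f}")  # ~220,000
```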

Anyone got a Sunday poll with a sample of 200,000 people?

No?

Thought not.

(PS: I’m open to counter-arguments in the style of Lazarsfeld).

Did the debates in 2010 increase political engagement?

March 17, 2015

British politics has reached an impressive level of recursion. Broadcasters and politicians are now having a debate about debates – and, on some shows, debates about the debate about the debates.

These meta-debates feature a lot of cant, and a lot of bullshit. I mean that in the Frankfurtian sense: lots of people are making claims without particularly caring whether they are true or not. One (potentially) bullshit claim is the claim that the 2010 debates mattered for political engagement (see, for example, Adam Boulton’s tweet to this effect).

Whether or not the debates improved engagement is, of course, an empirical question. So I thought I’d dig out the 2010 British Election Study panel data (http://bes2009-10.org/), to see whether the debates did in fact improve turnout.

The wrong way of proceeding is to look at

  1. stated turnout intention amongst people who watched the debates, and
  2. stated turnout intention amongst people who didn’t watch the debates,

and compare the two. People who watched the debates are unusual: they care about politics. So they’re much more likely to turn out and vote.

A better way of proceeding is to look at

  1. stated turnout intention amongst people after the debates, minus
  2. stated turnout intention amongst people before the debates

and compare the differences between

  1. people who watched the debate and
  2. people who didn’t watch the debates

We can do this thanks to the design of the BES. There’s one variable which measures turnout intention in the pre-campaign survey (a 0-10 scale, where higher values indicate the respondent is almost certain to vote; mean value across respondents with values for both waves: 9.82, SD 2.51), and one variable which measures turnout intention in the campaign survey (mean value: 5.17, SD 3.42).

You’ll notice stated turnout intention is absurdly high in the pre-campaign period, because PEOPLE LIE. (Sorry, they respond inaccurately given a prevalent social desirability bias). But then, people lie about turnout all the freaking time, and that doesn’t stop people writing about it.

Again, we can’t just compare the change over time across these two groups. We’ve got to adjust for people’s pre-existing levels of political interest, and other features. The way I do that is through exact matching: creating matched sets of people (a sketch of this step follows the list) who are alike in terms of

  • political interest in general
  • interest in this election
  • whether they’d been contacted by parties
  • whether they lived in a safe, ultra-safe, marginal, or ultra-marginal seat
  • whether they read a newspaper every day, sometimes, or not at all
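
In code, the matching step looks something like the sketch below (pandas, with hypothetical column and file names; the replication code linked at the end of this post is the authoritative version):

```python
import pandas as pd

# A sketch of the matching logic, assuming a data frame with hypothetical
# column names -- the real BES variable names differ.
match_on = ["pol_interest", "election_interest", "party_contact",
            "seat_marginality", "newspaper_readership"]

bes = pd.read_csv("bes_2010_panel.csv")  # hypothetical file name
bes["change"] = bes["turnout_campaign"] - bes["turnout_precampaign"]

# Within each matched set (respondents identical on all five covariates),
# compare the change in turnout intention for watchers versus non-watchers.
# Sets containing only watchers or only non-watchers drop out as NaN.
def set_effect(g):
    return (g.loc[g["watched_debate"] == 1, "change"].mean()
            - g.loc[g["watched_debate"] == 0, "change"].mean())

effects = bes.groupby(match_on).apply(set_effect)
print(effects.mean())  # an (unweighted) matched difference-in-differences estimate
```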

When I match respondents in this way, I find that (after removing people who responded to the campaign period questionnaire before any of the debates took place) watching any of the debates is associated with an increase of 0.18 points on that 0-10 scale (p value: 0.04).

Is 0.18 points a little or a lot? One way of judging this effect is to compare it to the effects of other media consumption. For example: we can ask what effect “sometimes” reading a daily newspaper has, compared to never reading one. That effect, at 0.19, is slightly bigger than the effect of watching the debates. But the effect of debate-watching stands up well. So although some of the comment surrounding the debate-about-the-debates might have been bullshit, it might also (accidentally) be true.

Replication code is available at GitHub. Please do get in touch if you can improve the analysis — or suggest why turnout is so high in the pre-election wave.

Subjective economic judgements != what actually happened

March 16, 2015

Like most elections, this election will be fought on the basis of the economy. The Conservatives and the Liberal Democrats will argue that the economy is growing. Labour will argue that living standards are stagnant or declining.

As a result, many people will be asked during the course of this election campaign whether the economic position of their household has improved or worsened – or whether they expect it to improve or worsen over the coming twelve months. It matters how people answer these questions. Economic optimism is known to be associated with positive polling for incumbents (but doesn’t have an independent effect on election outcomes).

Unfortunately, the answers that people give to these questions do not always march in lock-step with more objective measures of households’ economic position. To show that, I’m going to compare two things:

  • first, respondents’ answers to the question “How does the financial situation of your household now compare with what it was 12 months ago?” (possible answers: a lot worse / a little worse / stayed the same / a little better / a lot better), from Wave 3 of the BES (October 2014)
  • second, changes in respondents’ levels of household income, recorded by YouGov in (a) February 2014 and (b) February 2015 (possible answers: fifteen separate income bands, starting from under £5,000, and going up in £5,000 or £10,000 increments). I’ve taken the midpoint of each bracket, and calculated the difference in household income (a sketch of this step follows the list). The median difference is zero, but 25% of respondents had a change larger than £5,000, or one income band.
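
Here is a sketch of that band-midpoint step, with hypothetical band labels, column names, and file name (YouGov’s actual brackets differ in detail):

```python
import pandas as pd

# A sketch of the band-midpoint calculation. Band labels and file names
# here are hypothetical; YouGov's actual brackets run up through fifteen bands.
midpoints = {"Under £5,000": 2500, "£5,000 to £9,999": 7500,
             "£10,000 to £14,999": 12500, "£15,000 to £19,999": 17500}
# ... and so on, up through the remaining bands.

df = pd.read_csv("yougov_household_income.csv")  # hypothetical file
df["income_change"] = (df["band_feb_2015"].map(midpoints)
                       - df["band_feb_2014"].map(midpoints))

print(df["income_change"].median())                # 0 in the data described above
print((df["income_change"].abs() >= 5000).mean())  # ~0.25: moved at least one band
```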

These two pieces of information are collected at separate points in time: when respondents are asked about the financial situation of their household, they are not simply recalling the answer they gave to the household income question moments before.

There are good reasons to think that people from households which have seen their income rise will say that their household’s financial position has improved. That’s true, but only in the smallest, most grudging way, as Figure 1 shows.

[Figure 1: subjective assessments of household financial position, by actual change in household income]

The percentage saying that their household’s financial position got worse (the red area) decreases as we move from households whose income did in fact decline, to households whose income did in fact increase. But the effect is very small, and the absolute figures still indicate that the link between objective and subjective evaluations isn’t that tight. 35% of people whose household income increased said that their household’s financial position got worse.

Now, you might object that

  • I take nominal income instead of real income, and that it’s possible for nominal increases in income to be wiped out by cost increases in certain bundles of goods; or that
  • household financial position includes wealth as well as income, and it’s possible for income to change even as households draw down wealth; or that
  • the timing of the income measures (February 2014 to February 2015) doesn’t match up with the timing of the subjective responses (fieldwork: October 2014)

but still, this suggests strongly that subjective evaluations of economic conditions should be used as indicators of mood rather than as a proxy for what actually happened to respondents.

Google search trends and the #indyref

February 11, 2015

Ronald MacDonald and Xuxin Mao, of the University of Glasgow, have published a working paper looking at Google search activity and the Scottish independence referendum.

The paper has got media attention, in particular because it claims

  1. that search trends can be used to predict election outcomes, and
  2. that the “Vow” had no effect on the vote.

It’s rather unfortunate that this paper has received so much media attention, because it’s a very, very bad paper. It

  • is poorly written (Ipsos Mori features as “Dipso Mori”: clearly a pollster who has had a bit too much to drink)
  • misrepresents the state of the literature on election forecasting using search activity
  • bandies around concepts like “clear information”, “rationality”, and “emotion” with scant regard for the meaning of those words.
  • does not attempt to examine other sources of information like the British Election Study

Let’s take the first main claim made by the paper: that search activity can be used to predict elections. How?

The first thing to note is that the authors are using information on search activity to try and predict a variable which they label “Potential Yes votes”. Those who read the paper will realize that “potential Yes votes” is actually a rolling average of polls. So the authors are using search activity to try and predict polling numbers. Without some polling data, you cannot use search trends to predict elections.

There are situations where using search activity to predict polling numbers is useful. Some countries (Italy, Greece) ban the publication of polls in the run-up to elections. I can imagine “predicting” polling being useful in these contexts.

But any exercise in forecasting will ultimately rely on polling data. If the polling data suffer from a systematic bias, forecasting based on search activity will suffer from the same bias.

The second thing to note is that the authors are attempting to use searches for particular terms to predict polling numbers. In the paper, they try two terms: “Alex Salmond” and “SNP”. Their assumption is that searching for these terms will be correlated with voting yes — or equivalently, weeks in which there are more searches for Alex Salmond will be weeks in which the Yes campaign is doing better in the polls.
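
Stripped down, the exercise looks something like the sketch below (file and column names are hypothetical; Google Trends series can be exported as CSV from trends.google.com):

```python
import pandas as pd

# The paper's basic move, sketched: correlate weekly search volume for a
# term with a rolling average of the Yes share.
trends = pd.read_csv("salmond_search_volume.csv", parse_dates=["week"])
polls = pd.read_csv("indyref_poll_average.csv", parse_dates=["week"])

merged = trends.merge(polls, on="week").sort_values("week")
merged["yes_rolling"] = merged["yes_share"].rolling(4).mean()

# Split at 15 March 2014, the point after which the authors report the
# relationship becoming insignificant.
early = merged[merged["week"] < "2014-03-15"]
late = merged[merged["week"] >= "2014-03-15"]
print(early["search_volume"].corr(early["yes_rolling"]))
print(late["search_volume"].corr(late["yes_rolling"]))
```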

Unfortunately, the authors themselves show that in the latter part of the period under study, there is in fact no correlation between the volume of searches for Alex Salmond and the Yes vote. The authors write

“the effect of Google Trends on Potential Yes Votes became insignificant after 15th March 2014. Based on the testing criteria on clear information, voters in the Scottish referendum encountered difficulties in finding enough clear information to justify a decision to vote Yes”.

In other words, because the authors assume that there is a relationship between particular types of searches and Yes voting, the fact that that relationship breaks down becomes evidence not that this was a poor assumption to begin with, but rather that voters faced difficulty in finding information supporting a Yes vote.

I struggle to accept this reasoning. The only justification I can see for assuming that searching for these terms will be correlated with voting yes is the significant correlation during the first period under study. But it seems more likely that this correlation is entirely epiphenomenal. During the early part of the campaign, the Yes campaign’s polling numbers improved. During the early part of the campaign, search activity increased. But the two are not linked. Search activity is hardly likely to fall during this period.

So, Google search trends can be used to forecast elections if you have polling data, and can identify a search term which correlates with the polling data over the entire period — but these authors couldn’t.

Let’s turn to the second main claim of the paper — that the Vow had no effect on the referendum outcome. This claim is supported by a vector autoregressive model of polling averages, with different dummy variables for different days of the campaign. This is a complicated way of saying that the authors tried to see whether the Yes figure in the polls was higher on particular days.

Unless I have misunderstood the authors’ model very badly, in order to make a difference, the Vow had to produce effects on the day it was published. It does not seem to me to make any sense to assume that the effects produced by an event like this must take place on the day of the event itself.

For what it’s worth, I don’t think we have enough information to judge whether the Vow made a difference. I’m not aware of any polling questions which ask specifically about the Vow, though I’m happy to be corrected on this point. But I’m afraid that this paper doesn’t help us forecast elections, or answer substantive questions about the determinants of voting in the independence referendum.

What does a book chapter count for in #REF2014?

January 28, 2015

UPDATE: The sub-panel report provides a more useful breakdown of the percentage of work by output type that was assessed as 4*. Thanks to Jane Tinkler for tweeting the link.

HEFCE’s data on REF submissions identifies a number of different submission types.

For politics, four submission types dominate:

  • Authored books
  • Edited books
  • Chapters in books
  • Journal articles

If we just knew the composition of a department’s REF2014 submission, how would we estimate its eventual GPA? Received wisdom suggests that journal articles are the gold standard, and that everything else — particularly chapters in books or edited volumes — is just making up the weight.

We can regress departmental GPAs on the percentage of outputs falling into each of these categories.

Here’s the output of that regression model for Politics and International Relations, with journal articles as the baseline category, and ignoring complications due to the double-counting of books.

Dependent variable: GPA

                Coefficient   (Std. error)
PropBooks          0.091        (0.643)
PropEdited        -2.985*       (1.581)
PropChapter       -1.733***     (0.591)
PropOther         -3.904*       (2.165)
Constant           2.863***     (0.146)

Observations: 55; R²: 0.306; Adjusted R²: 0.250
Residual std. error: 0.298 (df = 50); F statistic: 5.510*** (df = 4; 50)
Note: *p<0.1; **p<0.05; ***p<0.01

The results suggest that books and journal articles achieve parity, but that a submission composed entirely of chapters or edited volumes would achieve a lowly GPA indeed.
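
For anyone who wants to run this kind of model themselves, here is a minimal sketch, assuming a CSV with one row per department and hypothetical column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# A sketch of the model behind the table above: journal articles are the
# omitted baseline category, so each coefficient is relative to a
# submission composed entirely of journal articles.
ref = pd.read_csv("ref2014_politics_outputs.csv")  # hypothetical file
model = smf.ols("GPA ~ PropBooks + PropEdited + PropChapter + PropOther",
                data=ref).fit()
print(model.summary())
```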

 