What does a book chapter count for in #REF2014?

January 28, 2015

UPDATE: The sub-panel report provides a more useful breakdown of the percentage of work by output type that was assessed as 4*. Thanks to Jane Tinkler for tweeting the link.

HEFCE’s data on REF submissions identifies a number of different output types.

For politics, four output types dominate:

  • Authored books
  • Edited books
  • Chapters in books
  • Journal articles

If we just knew the composition of a department’s REF2014 submission, how would we estimate its eventual GPA? Received wisdom suggests that journal articles are the gold standard, and that everything else — particularly chapters in books or edited volumes — is just making up the weight.

We can regress departmental GPAs on the percentage of outputs falling into each of these categories.
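In sketch form, and assuming a hypothetical CSV with one row per submitting department (its GPA plus the proportion of its outputs in each category), that regression might look like this. The file and column names are illustrative only:

```python
# Sketch only: file name and column names are hypothetical. Journal articles
# are the omitted baseline, so the proportion-of-articles column is left out
# to avoid perfect collinearity with the other proportions.
import pandas as pd
import statsmodels.formula.api as smf

ref = pd.read_csv("ref2014_politics_outputs_by_type.csv")
model = smf.ols("GPA ~ PropBooks + PropEdited + PropChapter + PropOther",
                data=ref).fit()
print(model.summary())
```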

Here’s the output of that regression model for Politics and International Relations, with journal articles as the baseline category, and ignoring complications due to the double counting of books.

Dependent variable: GPA

Variable | Coefficient | (Std. error)
PropBooks | 0.091 | (0.643)
PropEdited | -2.985* | (1.581)
PropChapter | -1.733*** | (0.591)
PropOther | -3.904* | (2.165)
Constant | 2.863*** | (0.146)

Observations: 55
R²: 0.306
Adjusted R²: 0.250
Residual Std. Error: 0.298 (df = 50)
F Statistic: 5.510*** (df = 4; 50)
Note: *p<0.1; **p<0.05; ***p<0.01

The results suggest that books and journal articles achieve parity, but that a submission composed entirely of chapters or edited volumes would achieve a lowly GPA indeed.

Politics journals and #REF2014 GPAs

January 26, 2015

HEFCE has recently released full details on the submissions for the 2014 REF.

This allows us to start drawing conclusions — sound or ill-founded — about the relationship between each unit of assessment’s grade point average, and the nature of their submissions.

Following the example of Matthias Siems, who has carried out this exercise for law, I list below pseudo-GPAs for each journal. These pseudo-GPAs are calculated by matching each journal article with the GPA of the submitting institution, and averaging over all articles from that journal. (A short code sketch of the calculation follows the list.)

I’ve excluded journals which featured fewer than ten times, and journals which featured in only one university’s submission.

  1. Journal Of Conflict Resolution – 3.22 [11 appearances]
  2. American Political Science Review – 3.16 [21 appearances]
  3. American Journal Of Political Science – 3.08 [22 appearances]
  4. Journal Of Political Philosophy – 3.05 [17 appearances]
  5. World Politics – 3.04 [13 appearances]
  6. British Journal Of Political Science – 3.03 [64 appearances]
  7. Journal Of Politics – 2.99 [33 appearances]
  8. Comparative Political Studies – 2.99 [30 appearances]
  9. International Studies Review – 2.93 [10 appearances]
  10. European Union Politics – 2.93 [10 appearances]
  11. Governance – 2.93 [14 appearances]
  12. Ethics And International Affairs – 2.93 [12 appearances]
  13. Journal Of Elections, Public Opinion And Parties – 2.91 [12 appearances]
  14. Electoral Studies – 2.9 [38 appearances]
  15. Journal Of Peace Research – 2.89 [16 appearances]
  16. European Journal Of Political Research – 2.88 [36 appearances]
  17. Political Research Quarterly – 2.88 [13 appearances]
  18. Journal Of International Relations And Development – 2.87 [14 appearances]
  19. European Journal Of International Relations – 2.86 [41 appearances]
  20. International Theory – 2.86 [11 appearances]
  21. International Studies Quarterly – 2.85 [41 appearances]
  22. Review Of International Political Economy – 2.85 [27 appearances]
  23. Political Studies – 2.84 [109 appearances]
  24. Journal Of Social Philosophy – 2.83 [11 appearances]
  25. West European Politics – 2.82 [37 appearances]
  26. Economy And Society – 2.82 [12 appearances]
  27. Journal Of European Public Policy – 2.82 [51 appearances]
  28. Millennium – 2.82 [25 appearances]
  29. Public Administration – 2.81 [25 appearances]
  30. New Political Economy – 2.8 [41 appearances]
  31. Environmental Politics – 2.78 [17 appearances]
  32. Journal Of Strategic Studies – 2.78 [24 appearances]
  33. Party Politics – 2.78 [31 appearances]
  34. Critical Review Of International Social And Political Philosophy – 2.78 [23 appearances]
  35. European Journal Of Political Theory – 2.77 [24 appearances]
  36. Political Geography – 2.77 [12 appearances]
  37. International Political Sociology – 2.77 [24 appearances]
  38. Millenium – 2.77 [13 appearances]
  39. East European Politics – 2.76 [10 appearances]
  40. Cambridge Review Of International Affairs – 2.76 [16 appearances]
  41. Review Of International Studies – 2.76 [97 appearances]
  42. International History Review – 2.74 [13 appearances]
  43. Journal Of Common Market Studies – 2.73 [33 appearances]
  44. Contemporary Political Theory – 2.73 [13 appearances]
  45. International Affairs – 2.73 [65 appearances]
  46. History Of Political Thought – 2.73 [11 appearances]
  47. Intelligence And National Security – 2.73 [13 appearances]
  48. International Relations – 2.72 [12 appearances]
  49. Political Quarterly – 2.72 [10 appearances]
  50. Government And Opposition – 2.71 [20 appearances]
  51. Policy And Politics – 2.71 [17 appearances]
  52. Globalizations – 2.7 [13 appearances]
  53. Development And Change – 2.7 [10 appearances]
  54. Democratization – 2.7 [20 appearances]
  55. British Politics – 2.69 [25 appearances]
  56. Journal Of Legislative Studies – 2.68 [11 appearances]
  57. International Peacekeeping – 2.67 [12 appearances]
  58. Security Dialogue – 2.67 [27 appearances]
  59. International Politics – 2.67 [24 appearances]
  60. Europe-asia Studies – 2.66 [22 appearances]
  61. Res Publica – 2.66 [11 appearances]
  62. Third World Quarterly – 2.65 [38 appearances]
  63. Studies In Conflict And Terrorism – 2.65 [11 appearances]
  64. Journal Of European Integration – 2.65 [13 appearances]
  65. Cooperation And Conflict – 2.64 [13 appearances]
  66. British Journal Of Politics And International Relations – 2.64 [71 appearances]
  67. Geopolitics – 2.63 [10 appearances]
  68. Journal Of International Political Theory – 2.58 [11 appearances]
  69. Journal Of Military History – 2.56 [10 appearances]
  70. Parliamentary Affairs – 2.55 [38 appearances]
  71. Contemporary Security Policy – 2.55 [14 appearances]
  72. Perspectives On European Politics And Society – 2.54 [10 appearances]
  73. Critical Studies On Terrorism – 2.51 [13 appearances]
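For anyone who wants to reproduce this, here is a minimal sketch of the calculation. The file and column names are hypothetical, standing in for the HEFCE outputs data joined to each unit’s GPA:

```python
# Sketch only: assumes one row per journal article, with the journal title,
# the submitting institution, and the GPA awarded to that institution's unit.
import pandas as pd

outputs = pd.read_csv("ref2014_politics_outputs.csv")  # hypothetical file

summary = (
    outputs.groupby("journal")
    .agg(pseudo_gpa=("institution_gpa", "mean"),
         appearances=("institution_gpa", "size"),
         n_institutions=("institution", "nunique"))
)
# Apply the exclusions used above: at least ten appearances, and more than
# one submitting institution.
summary = summary[(summary.appearances >= 10) & (summary.n_institutions > 1)]
print(summary.sort_values("pseudo_gpa", ascending=False).round(2))
```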

Interested in law, courts, and methodology?

January 22, 2015

As part of the ECPR General Conference in Montreal (26 – 29 August 2015), the ECPR Standing Group on Law and Courts is organizing a panel on Data and Methods in Court research.

I’d like to invite papers to be submitted as part of this panel. I’ve pasted the description of the panel below, but let me add that this is an excellent opportunity for all those who are doing research on judicial texts-as-data, particularly in languages other than English, or for researchers dealing with large volumes of court decisions (several thousand per year).

If you are interested in presenting, please email me, Chris Hanretty, at c.hanretty@uea.ac.uk.

The deadline for panel submission is February 16th. I’d therefore be grateful if you could let me know by February 14th whether you would like to submit a paper.

Panel description:

“The methodology of comparison is a key factor for research into law and courts. We need to carefully explore the various ways of analysing and comparing judicial politics. Beyond the traditional qualitative and quantitative divide we wish to underline the challenge of analysing judicial decisions written in different languages. Data collection and standardisation is an essential condition for successful comparative research. Papers dealing with these issues are invited.”

Election forecasting: some due credit

January 5, 2015

One characteristic which academia shares with rap music (and occasionally with house music) is the care it places on giving proper credit. The forecasting site that I’ve built with Ben Lauderdale and Nick Vivyan, and which is featured in tonight’s edition of Newsnight, wouldn’t have been possible without lots of previous research. I’ve put some links below for those that want to follow up some of the academic research on polling and election forecasting.

(1) “Past elections tell us that as the election nears, parties which are polling well above the last general election… tend to drop back slightly”.

In the language of statistics, we find a form of regression toward the mean. We’re far from the first people to find this pattern. In the British context, the best statement of this tendency is by Steve Fisher, who has his own forecasting site. Steve’s working paper is useful for more technically minded readers.
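In sketch form, this kind of adjustment is just a weighted average of a party’s current polling and its share at the previous general election. The weight below is purely illustrative, not the value we actually use:

```python
# Toy illustration of regression toward the mean in poll-based forecasts:
# shrink each party's current poll share part of the way back towards its
# share at the previous general election. The weight of 0.7 is illustrative.
def shrink_towards_last_result(poll_share, last_election_share, weight=0.7):
    return weight * poll_share + (1 - weight) * last_election_share

# e.g. a party polling 16% that won 3% at the last general election
print(round(shrink_towards_last_result(16.0, 3.0), 1))  # 12.1
```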

(2) “…use all the polling data that’s out there…”

As Twitter constantly reminds us, one poll does not make a trend — we need to aggregate polls.

Most political scientists who aggregate polls are following in the footsteps of Simon Jackman, who published some very helpful code for combining polls fielded at different times with different sample sizes. We’ve had to make a fair few adjustments for the multiparty system, but there’s enough of a link to make it worth a shout out.
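A heavily simplified stand-in for that idea, just to fix intuitions, is a sample-size-weighted average of recent polls. Jackman’s model (and ours) does considerably more, tracking change over time and house effects; this sketch only captures the “combine polls of different sizes” part:

```python
# Precision-weighted pooling sketch: each poll's weight is proportional to
# its sample size. This is a simplification, not the model used on the site.
def pooled_share(shares, sample_sizes):
    weighted = sum(s * n for s, n in zip(shares, sample_sizes))
    return weighted / sum(sample_sizes)

# Three hypothetical polls putting a party on 31%, 34% and 33%
print(round(pooled_share([31, 34, 33], [1000, 2000, 1500]), 2))  # 33.0
```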

(3) “By matching… [subsamples of national polls] with what we know about each local area we can start to identify patterns”

Again, to give this insight its proper statistical name, this is a form of small area estimation. In political science, a lot of small area estimation is done using something called multilevel regression and post-stratification, which can be quite slow and fiddly (these are non-technical terms). Although we’ve used MRP in the past (for example, to generate estimates of how Eurosceptic each constituency is), we’ve found that you get similar results using simpler regression models. See our technical report for the gory details.
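In sketch form, the simpler approach amounts to something like the following, with entirely hypothetical file and column names; the real model, described in the technical report, is much richer:

```python
# Sketch only: pool the constituency sub-samples of national polls, regress a
# party's local support on known constituency characteristics, then predict
# support in every constituency. All names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

polls = pd.read_csv("poll_subsamples_by_constituency.csv")   # hypothetical
census = pd.read_csv("constituency_characteristics.csv")     # hypothetical

fit = smf.ols("party_share ~ pct_over_65 + pct_degree + last_party_share",
              data=polls.merge(census, on="constituency")).fit()
census["predicted_share"] = fit.predict(census)
print(census[["constituency", "predicted_share"]].head())
```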

On research intensity in #REF2014

December 19, 2014

Times Higher Education has published an alternative ranking of REF results, which is based on “research intensity”.

This measure is calculated by taking the proportion of full-time equivalent eligible staff submitted to the REF, and multiplying this by the university’s grade point average.

It seems to me that “research intensity” is a metric in search of a definition.

One way in which research intensity is being interpreted (at least on Twitter!) is as an alternative way of reading the REF, “correcting for” strategic decisions of departments concerning who to submit.

I think the proper way of responding to this is not to say that decisions regarding submission were not strategic — most of them most likely were strategic.

Rather, I think we have to think about what interpreting “research intensity” in this way means.

Suppose a university has the option of submitting 100% of its 20 eligible staff, but is concerned about the likely GPA of the 20th staff member, ranked in descending order of likely GPA. Suppose that this staff member has a likely GPA (over their REF-able items) of 2. So they decide not to submit this individual. They subsequently receive a GPA of 2.50. Their research intensity score is 0.95 * 2.50 = 2.375. Is this an accurate measure of their GPA, “correcting for” strategic submission? No. By construction, their true GPA is 0.95 * 2.5 + 0.05 * 2 = 2.475. In practice, “research intensity” assumes that the GPA of the non-submitted staff member is equal to zero.

A metric which “corrects” for strategic submission would recognize that, under certain assumptions (those who make decisions regarding submission are unbiased judges of likely GPAs of items submitted; no submitting unit is at the margin where an additional staff member means an additional impact case study) the level of quality of non-submitted staff members is below the GPA actually obtained.

In this instance, we knew by construction that the GPA of the 20th member of staff, set at two, was less than 2.5. Generally, however, it is not clear at what level non-submitted staff lie. Given an upper bound (the GPA actually obtained) and a lower bound (zero), we can say that the level of non-submitted staff is equidistant from each bound, which means taking the arithmetic mean. Or, as Rein Taagepera has argued, we can take a geometric rather than an arithmetic mean, which is generally better for rates; since the geometric mean of zero and anything is zero, I use the square root of the upper bound (the geometric mean of one and the upper bound) instead.
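In code, this correction might look like the following sketch; the same rule is applied in the worked example below:

```python
# Sketch of the "corrected" GPA: weight the obtained GPA by the share of
# eligible FTE submitted, and use sqrt(GPA) as the guess for the rest.
import math

def research_intensity(gpa, fte_submitted, fte_eligible):
    return (fte_submitted / fte_eligible) * gpa

def corrected_gpa(gpa, fte_submitted, fte_eligible):
    share_submitted = fte_submitted / fte_eligible
    non_submitted_guess = math.sqrt(gpa)
    return share_submitted * gpa + (1 - share_submitted) * non_submitted_guess

# Anglia Ruskin, Allied Health Professions (see the worked example below)
print(round(research_intensity(2.92, 11.3, 122), 2))  # 0.27
print(round(corrected_gpa(2.92, 11.3, 122), 2))       # 1.82
```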

Here’s a worked example. Let’s take Anglia Ruskin’s submission for Allied Health Professions (oh, the tyranny of alphabetical order!).

  • For this UoA, Anglia Ruskin got a GPA of 2.92. They submitted 11.3 FTE staff members, out of 122 eligible staff members. Their research intensity score is therefore 0.27.
  • Our best guess about the quality of non-submitted staff is equal to the geometric mean of zero and 2.92. In this instance, that’s just the square root of 2.92, or 1.71.
  • Our best guess about the corrected GPA is therefore 11.3 / 122 * 2.92, plus (122-11.3)/122 * 1.71, or 1.82.

I have a spreadsheet containing all these “corrected” GPAs, but I’m not sufficiently confident of the HEFCE data at the unit of assessment level to release it. There are several instances of units of assessment having submitted more than 100% of their eligible staff, even after accounting for multiple submissions.

For politics, however, the data on submissions all seem to make sense.

This table shows how the rankings change. The GPAs, of course, are very similar — the Pearson correlation between the raw and “corrected” GPA scores is 0.95 or so. But the rank correlation is smaller, because, y’know, ranks are rubbish.

HE provider | Rank on GPA | “Corrected” rank
The University of Essex | 1 | 1
University College London | 5 | 2
The University of Oxford | 4 | 3
London School of Economics and Political Science | 2 | 4
The University of Warwick | 6 | 5
The University of Sheffield | 3 | 6
The University of Strathclyde | 11 | 7
The University of Edinburgh | 10 | 8
The University of Reading | 19 | 9
The University of St Andrews | 17 | 10
Aberystwyth University | 7 | 11
The University of Cambridge | 13 | 12
The University of Southampton | 14 | 13
The University of Sussex | 12 | 14
The University of York | 8 | 15
Royal Holloway and Bedford New College | 27 | 16
The University of Glasgow | 26 | 17
Brunel University London | 28 | 18
University of Nottingham | 28 | 19
Queen Mary University of London | 24 | 20
The University of Bristol | 34 | 21
Birkbeck College | 16 | 22
The University of East Anglia | 24 | 23
The University of Kent | 31 | 24
The University of Manchester | 18 | 25
The University of Exeter | 9 | 26
The University of Birmingham | 30 | 27
The School of Oriental and African Studies | 22 | 28
University of Durham | 20 | 29
The City University | 36 | 30
King’s College London | 14 | 31
University of Ulster | 38 | 32
Goldsmiths College | 40 | 33
The University of Leeds | 22 | 34
University of Newcastle-upon-Tyne | 37 | 35
The University of Leicester | 39 | 36
The University of Keele | 31 | 37
The University of Aberdeen | 41 | 38
Swansea University | 20 | 39
Oxford Brookes University | 43 | 40
The University of Westminster | 35 | 41
The University of Hull | 44 | 42
The University of Dundee | 46 | 43
The University of Bradford | 33 | 44
University of the West of England, Bristol | 48 | 45
The University of Surrey | 47 | 46
The University of Liverpool | 42 | 47
The University of Lincoln | 45 | 48
Coventry University | 49 | 49
Liverpool Hope University | 51 | 50
Canterbury Christ Church University | 52 | 51
London Metropolitan University | 50 | 52
St Mary’s University College | 53 | 53

Big winners are Bristol, Royal Holloway, Brunel. Big losers are Swansea, Exeter and King’s College — sorry, King’s, London. The degree to which institutions win and lose is, however, an entirely misleading impression created by the use of rank information.

I don’t want to imply that these rankings should be taken seriously: these are all parlour games, and I’ve now written far too much on how to construct alternative rankings of universities. Time to enjoy the rest of this train ride north.

#REF2014 spin generating spreadsheet!

December 18, 2014

Update: The original HEFCE spreadsheet hid rows 5600–7645. When I copied across, I missed these rows. Revised rankings below.

tl;dr I made a spreadsheet which shows twelve different ways to rank your department. You can download it here.

One of the many invidious features of the REF is the way that REF results are often presented as ranks. As Ruth Dixon and Christopher Hood have pointed out, rank information both conceals information (maybe Rank 1 uni was miles ahead of Rank 2 uni), and creates the perception of large differences when the underlying information is quite similar (maybe Ranks 7 through 17 were separated only by two decimal places).

The combination of rank information with multiple assessment and weighting criteria makes this even more invidious. The most commonly seen metrics this morning have been grade point averages, or the average star rating received by each submission. However, I have also seen research power scores (grade point average times number of full-time equivalent staff submitted) and “percentage world-leading” research (that is, percentage of submissions judged 4-star).

Some of these metrics have been calculated on the basis of the overall performance, but some have been calculated on the performance in outputs. It’s also possible to imagine calculating these on the basis of impact, or on environment.

This means that universities can pick and choose between 12 different rankings (some of which don’t really make sense):

  • Rank in impact, measured by GPA
  • Rank in environment, measured by GPA
  • Rank in outputs, measured by GPA
  • Rank overall, measured by GPA
  • Rank in impact, measured by “research power”
  • Rank in environment, measured by “research power”
  • Rank in outputs, measured by “research power”
  • Rank overall, measured by “research power”
  • Rank in impact, measured by percentage world-leading
  • Rank in environment, measured by percentage world-leading
  • Rank in outputs, measured by percentage world-leading
  • Rank overall, measured by percentage world-leading

As I say, not all of these make sense: “7th most world-leading environment” is a difficult stat to make sense of, and so many might be tempted just to abbreviate to “7th best place for research in politics”, or some other bending of the truth.

In order to aid the dark arts of university public relations, I’ve produced a handy spreadsheet which shows, for each university and each unit of assessment, the different ranks. You can download it here.
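For the curious, here is roughly how those ranks could be generated. The file layout and column names are hypothetical, but the logic is the one described above (three metrics crossed with four profiles):

```python
# Sketch only: assumes a long-format results table with one row per
# institution, unit of assessment, profile (Outputs, Impact, Environment,
# Overall) and star rating, plus the FTE submitted.
import pandas as pd

ref = pd.read_csv("ref2014_results.csv")  # hypothetical file and columns

ref["gpa_part"] = ref["stars"] * ref["pct_of_profile"] / 100
profile = (ref.groupby(["institution", "uoa", "profile"])
             .agg(gpa=("gpa_part", "sum"),
                  fte=("fte", "first"),
                  pct_4star=("pct_of_profile",
                             lambda s: s[ref.loc[s.index, "stars"] == 4].sum()))
             .reset_index())
profile["power"] = profile["gpa"] * profile["fte"]

# Rank within each unit of assessment and profile, on each of the three metrics
for metric in ["gpa", "power", "pct_4star"]:
    profile[f"rank_{metric}"] = (profile.groupby(["uoa", "profile"])[metric]
                                        .rank(ascending=False, method="min"))
```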

For my own university (University of East Anglia), and my own unit of assessment (Politics and IR), this works as follows.

  • The best rank is our rank on outputs: as judged by the percentage of research that is world-leading, we rank eighth (twelfth in the original version of this post, before the correction noted in the update above).
  • Our second worst rank is our rank on the same criterion(!), measured by a different metric: as judged by the research power of our outputs, we rank 30th (originally 31st). This really is a reflection of the size of our department and the number of staff we submitted for REF.

This goes to show that quantifying research excellence can give you very equivocal conclusions about which are the most excellent universities. It does not show — but I suspect this will be readily granted by the reader — that such equivocality lends itself very easily to misleading or partially truthful claims about universities’ performance.

How good were my #REF2014 predictions?

December 18, 2014

Two and a half years ago, I made some predictions regarding the outcome of the 2014 REF for Politics and International Relations.

This was a foolhardy exercise. It was based on grants funded by the ESRC, panel membership, and institutional effects. It was not based, for example, on any bibliometric analysis, or any analysis of the number of FTE staff submitted. It did not attempt to account for the new “impact” component of the REF compared to the RAE.

Notwithstanding this: how accurate were those predictions?

One way of assessing accuracy is to calculate the (rank) correlation of my predicted ranks and the actual ranks. There are some caveats here. A number of universities for which I had made predictions did not submit. Likewise, a number of universities for which I had not made predictions did submit. Nonetheless, the correlation between my predictions and actual ranks is high, at r=0.74.

However, it would be wrong to be too impressed by this. As an example of a “no-information” model, take the 2008 rank of each institution. The correlation between the 2008 rank and the actual rank is slightly higher, at r=0.76.
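The comparison itself is a one-liner; something like the following, with placeholder vectors standing in for the matched rankings:

```python
# Sketch only: the vectors are placeholders. In practice each entry is the
# predicted (or 2008) rank and the actual 2014 rank for one institution that
# appears in both lists.
from scipy.stats import spearmanr

predicted_ranks = [1, 2, 3, 4, 5]    # hypothetical, matched by institution
actual_ranks_2014 = [2, 1, 4, 3, 5]  # hypothetical

rho, p_value = spearmanr(predicted_ranks, actual_ranks_2014)
print(round(rho, 2))  # 0.8 for these placeholder values
```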

As with many things REF-related, it seems that departmental performance is a noisy draw from a relatively consistent long-term average.

Public trust in the @UKSupremeCourt

November 27, 2014

I’ve written a paper which uses a little-noticed question in the Continuous Monitoring Surveys of the 2009/10 British Election Study. It’s on trust in the Supreme Court.

I’ve pasted the abstract below. You can find the article here [PDF].

Abstract: I investigate the levels of public trust in the Supreme Court of the United Kingdom (UKSC). Despite some skepticism regarding the very existence of public opinion regarding the court, public trust turns out to be influenced by many of the same factors that influence trust in the supreme courts of other jurisdictions. Those who trust others more, and trust political institutions more, also place more trust in the UKSC. Those who have received a university-level education, and who are familiar with the work of the court, also trust the UKSC more. Politically, those who identify with parties that supported the court’s creation (Labour, Liberal Democrats) trust the court more than those without any party identification, whilst those who identify with the SNP – which opposed the court’s creation and which has publicly quarrelled with the court – are less likely to trust the court.

Replication code is available at GitHub.

I’d be grateful for any comments you may have.

Permission to appeal decisions in the UKSC

November 13, 2014

Over the past day and a half, I’ve been collecting information on the Supreme Court’s permission to appeal (PTA) decisions. I’ve not been collecting them for their own sake – my interest rather lies in whether sitting on the PTA panel makes judges more likely to write a dissent or separate opinion in the resulting judgment.

Nevertheless, the PTA decision is interesting for its own sake. If this were the US, and we were talking about decisions to grant certiorari instead of leave to appeal, we would immediately interpret the decision to grant certiorari in terms of judges’ desire to overturn the appealed decision. I think this line of thinking is unlikely to apply in the UK. Rather, I think the PTA decision may reflect judges’ differing propensities to charitably interpret the appellants’ case, and make it the strongest, most arguable case possible.

I do not think that these differing propensities are a major part of the decision. The PTA documents are full of fairly withering put-downs (“The Court of Appeal was plainly right”). But at the margin, judges’ differing propensities can make the difference.

As a first approximation, let’s look at the success rate by judge: that is, the percentage of the PTA applications each judge heard which were granted permission to appeal, either in whole or in part:

Judge Success rate (pct.) Decisions
Lord Toulson 26 78
Lord Carnwath 30 143
Lord Collins 30 119
Lord Kerr 31 289
Lord Sumption 31 127
Lord Phillips 32 177
Lord Brown 32 130
Lord Hodge 33 54
Lord Kirk 33 3
Lord Saville 33 24
Lord Hope 34 235
Lord Neuberger 34 140
Lord Wilson 34 172
Lord Reed 34 157
Lord Walker 34 180
Lord Clarke 35 281
Lord Mance 36 244
Lord Dyson 36 130
Lord Rodger 37 89
Lady Hale 41 291
Lord Hughes 43 65

The table shows that most judges are tightly clustered around the average success rate (a little over one-third), but that the gap between the top and bottom judges (seventeen percentage points) seems considerable.

Since, however, these percentages are derived from different numbers of cases, we might wonder how much credence to give to these differences. Perhaps if we were to see more PTA decisions from Lord Toulson, we would find that the next run of decisions would bring greater success for applicants.

To find out whether these differences are statistically significant, we can try and model the decision to grant permission to appeal as a function of the judges who sat on the panel. To do so, we use a logistic regression model, which features dummy variables for each judge. Were we to include a dummy variable for every judge, we would not be able to identify their separate effects, and so we have to specify a reference level. Here, I set Clarke of Stone-cum-Ebony as the reference level (though technically he is lumped together with another judge referenced in the March 2011 PTA decisions, one Lord Kirk, who seems to be neither a Senator of the College of Justice nor an LJA).

Alongside dummy variables for the judges, the model includes dummy variables for Scottish and Northern Irish cases, both of which are less likely to be granted appeal.
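A sketch of that model, with hypothetical file and column names, might look like this:

```python
# Sketch only: assumes one row per PTA decision, with an indicator for
# whether permission was granted, a dummy column per judge on the panel,
# and indicators for Scottish and Northern Irish cases.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

pta = pd.read_csv("pta_decisions.csv")  # hypothetical file and columns

judges = [c for c in pta.columns if c.startswith("judge_")]
judges.remove("judge_clarke")  # Lord Clarke is the reference level
formula = "granted ~ " + " + ".join(judges) + " + scottish + northern_irish"

fit = smf.logit(formula, data=pta).fit()
# Exponentiated coefficients are odds ratios relative to a Clarke panel
print(np.exp(fit.params).round(2))
```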

[Figure: coefficient plot from the logistic regression model of PTA decisions.]

The results of the model are shown graphically above. The red dots indicate our best guess concerning the effect on the decision of participation by each named judge. Thus, the estimate of 1.2 for Lord Phillips means that a PTA panel with Lord Phillips has roughly 1.2 times the odds of granting permission to appeal, compared to a panel which had Lord Clarke in place of Lord Phillips.

The lines extending to either side of the dot show confidence intervals. Confidence intervals which do not cross the grey line indicate that the relevant judge has an effect which is statistically significantly different from Lord Clarke. Notice that the confidence intervals surrounding Lord Saville are very large, as a result of the relatively few PTA decisions he heard.

From this, we can see that only two judges have effects which are statistically significantly different from Lord Clarke: Lady Hale and Lord Toulson. Generally, comparing across pairs of judges, it is difficult to identify judges for whom the estimates are significantly different.

Three points about these estimates are worth emphasising:

  • These estimates can act as a guide to action. If you see that your PTA is being heard by a panel composed of Toulson, Carnwath, and Kerr, advise your client accordingly.
  • These estimates may not reveal characteristics of the judges, but only characteristics of the panels with these judges. It’s possible that Lord Toulson is actually very willing to grant PTA, but provokes the opposite reaction in the judges with whom he sits. That is, we are technically committing a fallacy of division here.
  • These estimates may not reveal characteristics of the judges, but merely the characteristics of the PTA decisions they hear. Note that the participation of Presidents and Deputy Presidents of the court tends to be associated with a higher success rate. Maybe these judges get assigned the “slam dunk” cases.

I’d be grateful for any other thoughts or caveats which you think worth mentioning.

Would 4 Tory MPs do better by defecting?

October 12, 2014

tl;dr version. If you think Tory defectors would take more than 45%–55% of the Conservative vote with them (alternatively: that the Conservative retention ratio would fall below roughly 45%–55%, depending on the seat), then four Tory MPs would do better by defecting. But this is unlikely.

In today’s Financial Times, research by Matt Goodwin is quoted as providing support for the claim that

four Conservative MPs [those in Amber Valley, Cleethorpes, Bury North, and Dudley South] would be more likely to retain their seats at the next election if they defected to the UK Independence party

I had a strong negative reaction to this claim. I find it implausible. This is despite my prior belief that anything Matt says about UKIP is generally worth credence. Matt probably knows more about UKIP than anyone outside the party, and possibly more than some within the party as well.

Part of the reason for my negative reaction is that it’s very hard to provide good evidence-based estimates of the likelihood of something happening, if that thing has never before happened. No MP has defected to UKIP and subsequently contested a general election under that party’s banner. (Bob Spink did defect, but contested Castle Point in 2010 as an “Independent Save Our Green Belt” candidate). Given this, I’m not sure how you can calculate either

  1. the MP’s current probability of winning under the Conservative banner, or
  2. the counterfactual probability of winning under the UKIP banner.

As I’ve learned doing electoralforecast.co.uk, it’s very difficult to calculate the first of these even given a reasonable amount of information.

But this is a philosophical objection. I think there are more important practical objections. Let’s assume a number of things to get some numbers on the table.

First, instead of talking about the probability of retaining one’s seat, we can talk about the expected gap between the incumbent and the nearest challenger (in all cases, Labour). This is really just to avoid talking about probability, which is… tricky.

Second, assume that we can gauge the expected gap between the incumbent and the nearest challenger, in the situation where the incumbent fights under the Conservative banner, by applying a simple uniform swing based on what is currently reported by UK Polling Report. This suggests that the Conservatives will lose 4.1%, that Labour will gain 5%, that the Liberal Democrats will lose 15%, and that UKIP will gain 12.9%.
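A sketch of that projection, using the swing figures just quoted; the example 2010 shares are chosen so that the projection reproduces the Amber Valley row in the table below:

```python
# Uniform swing sketch: add the same national change to each party's 2010
# share in every constituency. Swing figures are those quoted above.
SWING = {"con": -4.1, "lab": +5.0, "ld": -15.0, "ukip": +12.9}

def project(shares_2010):
    return {party: round(shares_2010.get(party, 0) + change, 1)
            for party, change in SWING.items()}

# These 2010 shares reproduce the Amber Valley row in the table below
print(project({"con": 38.6, "lab": 37.4, "ld": 14.4, "ukip": 2.0}))
# {'con': 34.5, 'lab': 42.4, 'ld': -0.6, 'ukip': 14.9}
```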

If you do this, you get expected outcomes like this:

Constituency Cons. Lab. LDem UKIP
Amber Valley 34.5 42.4 -0.6 14.9
Cleethorpes 38 37.6 3.2 20
Bury North 36.1 40.2 2 15.8
Dudley South 39 38 0.7 21.1

If you make this assumption, then these four MPs are indeed in trouble. I am making this assumption for the sake of argument: I do not think that it is very plausible. I do not think that the polls reflect the likely outcome in May. In particular, I disagree with Matt when he says

I can see no reason why Ukip’s average poll rating will fall significantly this side of the election.

Now, there’s wiggle room in that quote. But there are reasons why UKIP’s average poll ratings might fall significantly — and that’s simply that, looking at elections since 1979, parties have generally performed closer to their previous general election share than the polls would suggest. So in assuming away any significant decline in UKIP vote share, I’m being charitable to Matt’s argument.

Of course, Matt could equally well argue that the swing towards UKIP in these four constituencies will be greater than the swing nation-wide, because UKIP will concentrate effort in these four constituencies were there a defection. So, depending on your beliefs about concentration of the vote and UKIP fade-off, this assumption may be either favourable or unfavourable to the argument.

Third, we can speculate about what might happen given a defection by keeping the Labour and LibDem votes constant, and transferring increasingly larger shares of the Conservative vote to UKIP. It will be helpful to think of this as the Conservative retention ratio (CRR). If the CRR is zero, then defection is perfectly efficient, and there are no deadweight losses of any kind. If the CRR is one, then defection is a simple substitution. It should be clear that the CRR is unlikely to be zero: that would mean, in effect, that the Conservatives didn’t stand a candidate, something they are hardly likely to do.

Again, this assumption is unrealistic. Even absent a defection, the Labour vote share in any given constituency is not simply what you would expect given a uniform national swing. The Lib Dem vote in Amber Valley can’t be what you would expect given a uniform national swing, since uniform swing implies a negative share! But this is a simplifying assumption.

Given these assumptions, I’m going to plot the expected benefit of defection on the vertical axis, as a function of the CRR. That will be shown as a solid line. To the right of the figure, where the CRR is high, defection becomes a simple substitution. To the left of the figure, where the CRR is low, defection approaches perfect efficiency. As the CRR increases, the expected gap to Labour under defection shrinks, eventually falling below the expected gap to Labour without defection, which is shown with a dotted line.

[Figure: expected gap to Labour as a function of the Conservative retention ratio, for each of the four constituencies. Note: the no-defection baseline is shown using a dotted line.]

These trade-offs are plotted in the figure above. For all four constituencies, there’s a break-even point. It’s 44% in Amber Valley and Bury North; 53% and 54% in Cleethorpes and Dudley South respectively. If the Conservatives retain a greater share of their vote than this, then defection hurts defectors.
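In sketch form, the break-even calculation under these assumptions reduces to the ratio of the projected UKIP share to the projected Conservative share. This reconstruction reproduces the Dudley South figure; small rounding differences are possible for the other seats:

```python
# Under the assumptions above: the Conservatives keep CRR times their
# projected share, the defector (running for UKIP) gets the projected UKIP
# share plus the rest, and Labour is unchanged. The break-even CRR is where
# the defector's lead over Labour equals the no-defection Conservative lead,
# which simplifies to (UKIP share) / (Conservative share).
def gaps(con, lab, ukip, crr):
    no_defection_gap = con - lab
    defection_gap = ukip + (1 - crr) * con - lab
    return round(no_defection_gap, 2), round(defection_gap, 2)

def break_even_crr(con, ukip):
    return ukip / con

# Dudley South projections from the table above
print(round(break_even_crr(39.0, 21.1), 2))  # 0.54, matching the figure
print(gaps(39.0, 38.0, 21.1, 0.6))           # at CRR = 0.6 the defector trails
```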

So the question then becomes: is it reasonable to suppose that the Conservatives will retain less than 45% to 55% of their vote?

Imperfect evidence comes from three sources:

  • Bob Spink’s re-election campaign in 2010
  • the Clacton by-election
  • SDP defections in 1981

Let’s take these one by one.

The Spink spat

In 2005, Bob Spink won Castle Point with 48.3% of the vote. In 2010, the Conservatives polled up 3.7% on their 2005 performance. Thus, we might have expected that had Spink not defected, the Conservatives would have polled 52%. In fact, with Rebecca Harris they polled 44%, which equals a retention rate of 84.6%.

The Carswell coup

In 2010, Douglas Carswell won Clacton with 53% of the vote. At the time of writing, the Conservatives are down 4.1% on their 2010 vote share. Thus, we might have expected that had a by-election been forced for reasons other than defection, the Conservatives would have polled 48.9%. In fact, with Giles Watling they polled 24.6%, which equals a retention rate of just over 50%.

SDP defections

Thanks to Wikipedia, I identified twenty-eight Labour defectors to the SDP. For eighteen of these, I was able to calculate the Labour share of the vote in 1979 and 1983. Note that in 1983, Labour’s share of the vote dropped by 9.3 percentage points. The average Labour retention rate was 87.2%. The lowest Labour retention rate was achieved in Caithness and Sutherland, where Robert MacLennan (SDP) did rather well. The next-lowest retention rate was 55%, in David Owen’s seat of Plymouth Devonport.

Now, for the reasons noted above, none of these is an exact comparator. Generally, retention rates amongst parties that suffer a defection are north of 55%. It therefore seems that the above claim — that four Conservative MPs would do better electorally by defecting — is only true if Carswell is the rule rather than the exception. I rather think that it would be foolish to take either (a) a generally respected constituency MP, or (b) the most Eurosceptic constituency in the country, as the rule for Conservative defections.

 