#REF2014 spin generating spreadsheet!

December 18, 2014

Update: The original HEFCE spreadsheet hid rows 5600–7645. When I copied across, I missed these rows. Revised rankings below.

tl;dr I made a spreadsheet which shows twelve different ways to rank your department. You can download it here.

One of the many invidious features of the REF is the way that REF results are often presented as ranks. As Ruth Dixon and Christopher Hood have pointed out, ranking both conceals information (maybe the top-ranked university was miles ahead of the second-ranked one) and creates the perception of large differences when the underlying scores are quite similar (maybe ranks 7 through 17 were separated only by two decimal places).

The combination of rank information with multiple assessment and weighting criteria makes this even more invidious. The most commonly seen metrics this morning have been grade point averages, or the average star rating received by each submission. However, I have also seen research power scores (grade point average times number of full-time equivalent staff submitted) and “percentage world-leading” research (that is, percentage of submissions judged 4-star).

Some of these metrics have been calculated on the basis of the overall performance, but some have been calculated on the performance in outputs. It’s also possible to imagine calculating these on the basis of impact, or on environment.

This means that universities can pick and choose between 12 different rankings (some of which don’t really make sense):

  • Rank in impact, measured by GPA
  • Rank in environment, measured by GPA
  • Rank in outputs, measured by GPA
  • Rank overall, measured by GPA
  • Rank in impact, measured by “research power”
  • Rank in environment, measured by “research power”
  • Rank in outputs, measured by “research power”
  • Rank overall, measured by “research power”
  • Rank in impact, measured by percentage world-leading
  • Rank in environment, measured by percentage world-leading
  • Rank in outputs, measured by percentage world-leading
  • Rank overall, measured by percentage world-leading
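For the mechanically minded, here is a sketch in Python of how the twelve rankings can be generated. The universities and all figures below (GPAs, FTEs, 4* percentages) are invented for illustration, not REF data:

```python
# Sketch of the twelve rankings. Each record gives, for one (invented)
# university, the GPA and percentage 4* ("world-leading") for each of the
# four profiles, plus FTE staff submitted.

profiles = ["outputs", "impact", "environment", "overall"]
metrics = ["gpa", "power", "pct_4star"]

submissions = {
    "Uni A": {"fte": 20, "outputs": (3.1, 30), "impact": (3.4, 45),
              "environment": (3.0, 25), "overall": (3.15, 32)},
    "Uni B": {"fte": 45, "outputs": (2.9, 22), "impact": (3.0, 30),
              "environment": (3.2, 35), "overall": (2.95, 25)},
    "Uni C": {"fte": 12, "outputs": (3.3, 40), "impact": (2.8, 20),
              "environment": (2.9, 18), "overall": (3.2, 35)},
}

def score(uni, profile, metric):
    gpa, pct = submissions[uni][profile]
    if metric == "gpa":
        return gpa
    if metric == "power":                  # "research power" = GPA x FTE
        return gpa * submissions[uni]["fte"]
    return pct                             # percentage world-leading

def rankings():
    """Return {(profile, metric): [universities, best first]}."""
    out = {}
    for p in profiles:
        for m in metrics:
            out[(p, m)] = sorted(submissions,
                                 key=lambda u: score(u, p, m), reverse=True)
    return out

ranks = rankings()
# With these invented figures, every university tops at least one ranking:
for uni in submissions:
    print(uni, [k for k, v in ranks.items() if v[0] == uni])
```

Note how the large department (Uni B) tops every "research power" ranking purely on size, which is exactly the pick-and-choose problem described above.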

As I say, not all of these make sense: “7th most world-leading environment” is a difficult stat to make sense of, and so many might be tempted just to abbreviate to “7th best place for research in politics”, or some other bending of the truth.

In order to aid the dark arts of university public relations, I’ve produced a handy spreadsheet which shows, for each university and each unit of assessment, the different ranks. You can download it here.

For my own university (University of East Anglia), and my own unit of assessment (Politics and IR), this works as follows.

  • The best rank is our rank on outputs: as judged by the percentage of research that is world-leading, we rank eighth (twelfth before the correction above).
  • Our second worst rank is our rank on the same criterion(!), measured by a different metric: as judged by the research power of our outputs, we rank 30th (31st before the correction). This really is a reflection of the size of our department and the number of staff we submitted for the REF.

This goes to show that quantifying research excellence can give you very equivocal conclusions about which are the most excellent universities. It does not show — but I suspect this will be readily granted by the reader — that such equivocality lends itself very easily to misleading or partially truthful claims about universities’ performance.

How good were my #REF2014 predictions?

December 18, 2014

Two and a half years ago, I made some predictions regarding the outcome of the 2014 REF for Politics and International Relations.

This was a foolhardy exercise. It was based on grants funded by the ESRC, panel membership, and institutional effects. It was not based, for example, on any bibliometric analysis, or any analysis of the number of FTE staff submitted. It did not attempt to account for the new “impact” component of the REF compared to the RAE.

Notwithstanding this: how accurate were those predictions?

One way of assessing accuracy is to calculate the (rank) correlation of my predicted ranks and the actual ranks. There are some caveats here. A number of universities for which I had made predictions did not submit. Likewise, a number of universities for which I had not made predictions did submit. Nonetheless, the correlation between my predictions and actual ranks is high, at r=0.74.

However, it would be wrong to be too impressed by this. As an example of a “no-information” model, take the 2008 rank of each institution. The correlation between the 2008 rank and the actual rank is slightly higher, at r=0.76.
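For reference, the rank correlation quoted here can be computed with Spearman's formula. A minimal sketch, using invented ranks for six departments rather than the actual predictions:

```python
# Spearman rank correlation between predicted and actual ranks,
# illustrated with made-up ranks for six departments (no ties).

def spearman(pred, actual):
    """1 - 6*sum(d^2) / (n*(n^2 - 1)); valid when there are no tied ranks."""
    assert len(pred) == len(actual)
    n = len(pred)
    d2 = sum((p - a) ** 2 for p, a in zip(pred, actual))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

predicted = [1, 2, 3, 4, 5, 6]   # hypothetical predicted ranks
actual    = [2, 1, 4, 3, 5, 6]   # hypothetical actual ranks

rho = spearman(predicted, actual)
print(round(rho, 3))             # 0.886
```

The caveat in the text matters here: the formula only applies to institutions that appear in both rankings, so non-submitting universities have to be dropped first.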

As with many things REF-related, it seems that departmental performance is a noisy draw from a relatively consistent long-term average.

Public trust in the @UKSupremeCourt

November 27, 2014

I’ve written a paper which uses a little-noticed question in the Continuous Monitoring Surveys of the 2009/10 British Election Study. It’s on trust in the Supreme Court.

I’ve pasted the abstract below. You can find the article here [PDF].

Abstract: I investigate the levels of public trust in the Supreme Court of the United Kingdom (UKSC). Despite some skepticism regarding the very existence of public opinion regarding the court, public trust turns out to be influenced by many of the same factors that influence trust in the supreme courts of other jurisdictions. Those who trust others more, and trust political institutions more, also place more trust in the UKSC. Those who have received a university-level education, and who are familiar with the work of the court, also trust the UKSC more. Politically, those who identify with parties that supported the court’s creation (Labour, Liberal Democrats) trust the court more than those without any party identification, whilst those who identify with the SNP – which opposed the court’s creation and which has publicly quarrelled with the court – are less likely to trust the court.

Replication code is available at GitHub.

I’d be grateful for any comments you may have.

Permission to appeal decisions in the UKSC

November 13, 2014

Over the past day and a half, I’ve been collecting information on the Supreme Court’s permission to appeal (PTA) decisions. I’ve not been collecting them for their own sake – my interest rather lies in whether sitting on the PTA panel makes judges more likely to write a dissent or separate opinion in the resulting judgment.

Nevertheless, the PTA decision is interesting for its own sake. If this were the US, and we were talking about decisions to grant certiorari instead of leave to appeal, we would immediately interpret the decision to grant certiorari in terms of judges’ desire to overturn the appealed decision. I think this line of thinking is unlikely to apply in the UK. Rather, I think the PTA decision may reflect judges’ differing propensities to charitably interpret the appellants’ case, and make it the strongest, most arguable case possible.

I do not think that these differing propensities are a major part of the decision. The PTA documents are full of fairly withering put-downs (“The Court of Appeal was plainly right”). But at the margin, judges’ differing propensities can make the difference.

As a first approximation, let’s look at the success rate by judge: that is, the percentage of PTA decisions heard by that judge in which permission to appeal was granted either in whole or in part:

Judge Success rate (pct.) Decisions
Lord Toulson 26 78
Lord Carnwath 30 143
Lord Collins 30 119
Lord Kerr 31 289
Lord Sumption 31 127
Lord Phillips 32 177
Lord Brown 32 130
Lord Hodge 33 54
Lord Kirk 33 3
Lord Saville 33 24
Lord Hope 34 235
Lord Neuberger 34 140
Lord Wilson 34 172
Lord Reed 34 157
Lord Walker 34 180
Lord Clarke 35 281
Lord Mance 36 244
Lord Dyson 36 130
Lord Rodger 37 89
Lady Hale 41 291
Lord Hughes 43 65

The table shows that most judges are tightly clustered around the average success rate (a little over one-third), but that the gap between the top and bottom judges (seventeen percentage points) seems considerable.
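The success rates in the table are computed from the raw decision records along the following lines. The panels and outcomes below are invented stand-ins, not the real PTA data:

```python
# Per-judge success rates from (hypothetical) PTA records. Each record
# lists the panel and whether permission was granted in whole or in part.

from collections import defaultdict

decisions = [
    {"panel": ["Hale", "Kerr", "Clarke"], "granted": True},
    {"panel": ["Hale", "Toulson", "Mance"], "granted": False},
    {"panel": ["Kerr", "Clarke", "Mance"], "granted": True},
    {"panel": ["Hale", "Clarke", "Toulson"], "granted": False},
]

heard = defaultdict(int)    # decisions heard by each judge
granted = defaultdict(int)  # of those, how many were granted

for d in decisions:
    for judge in d["panel"]:
        heard[judge] += 1
        granted[judge] += d["granted"]

rates = {j: round(100 * granted[j] / heard[j]) for j in heard}
for judge in sorted(rates, key=rates.get):
    print(judge, rates[judge], heard[judge])
```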

Since, however, these percentages are derived from different numbers of cases, we might wonder how much credence to give to these differences. Perhaps if we were to see more PTA decisions from Lord Toulson, we would find that the next run of decisions would bring greater success for applicants.

To find out whether these differences are statistically significant, we can model the decision to grant permission to appeal as a function of the judges who sat on the panel. To do so, we use a logistic regression model with dummy variables for the judges. Were we to include a dummy variable for every judge, we would not be able to identify their separate effects, and so we have to specify a reference level. Here, I set Lord Clarke of Stone-cum-Ebony as the reference level (though technically he is lumped together with another judge referenced in the March 2011 PTA decisions, one Lord Kirk, who seems to be neither a Senator of the College of Justice nor a Lord Justice of Appeal).

Alongside dummy variables for the judges, the model includes dummy variables for Scottish and Northern Irish cases, both of which are less likely to be granted appeal.
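The design matrix for this model can be sketched as follows. One row per PTA decision; one 0/1 column per judge indicating presence on the panel, with the reference judge omitted; plus the Scottish and Northern Irish dummies. The records below are invented:

```python
# Design matrix for the panel logit, with Lord Clarke as reference.
# Because every panel has the same size, including a dummy for *every*
# judge alongside an intercept makes the columns collinear -- hence the
# omitted reference level. Data are hypothetical.

decisions = [
    {"panel": ["Clarke", "Hale", "Kerr"], "scottish": 0, "ni": 0, "granted": 1},
    {"panel": ["Hale", "Toulson", "Mance"], "scottish": 1, "ni": 0, "granted": 0},
    {"panel": ["Clarke", "Toulson", "Mance"], "scottish": 0, "ni": 1, "granted": 0},
]

REFERENCE = "Clarke"
judges = sorted({j for d in decisions for j in d["panel"]} - {REFERENCE})

X, y = [], []
for d in decisions:
    row = [1.0]                                                   # intercept
    row += [1.0 if j in d["panel"] else 0.0 for j in judges]      # judge dummies
    row += [float(d["scottish"]), float(d["ni"])]                 # jurisdiction
    X.append(row)
    y.append(d["granted"])

print(judges)   # ['Hale', 'Kerr', 'Mance', 'Toulson']
```

A matrix like this, together with the outcome vector, is what a standard logistic regression routine would be fitted to.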


The results of the model are shown graphically above. The red dots indicate our best guess concerning the effect on the decision of participation by each named judge. Thus, the estimate of 1.2 for Lord Phillips means that a PTA panel with Lord Phillips has 1.2 times the odds of granting permission to appeal, compared to a PTA panel which had Lord Clarke in place of Lord Phillips.

The lines extending to either side of the dot show confidence intervals. Confidence intervals which do not cross the grey line indicate that the relevant judge has an effect which is statistically significantly different from Lord Clarke. Notice that the confidence intervals surrounding Lord Saville are very large, as a result of the relatively few PTA decisions he heard.

From this, we can see that only two judges have effects which are statistically significantly different from Lord Clarke: Lady Hale and Lord Toulson. Generally, comparing across pairs of judges, it is difficult to identify judges for whom the estimates are significantly different.

Three points about these estimates are worth emphasising:

  • These estimates can act as a guide to action. If you see that your PTA is being heard by a panel composed of Toulson, Carnwath, and Kerr, advise your client accordingly.
  • These estimates may not reveal characteristics of the judges, but only characteristics of the panels with these judges. It’s possible that Lord Toulson is actually very willing to grant PTA, but provokes the opposite reaction in the judges with whom he sits. That is, we are technically committing a fallacy of division here.
  • These estimates may not reveal characteristics of the judges, but merely the characteristics of the PTA decisions they hear. Note that the participation of Presidents and Deputy Presidents of the court tends to be associated with a higher success rate. Maybe these judges get assigned the “slam dunk” cases.

I’d be grateful for any other thoughts or caveats which you think worth mentioning.

Would 4 Tory MPs do better by defecting?

October 12, 2014

tl;dr version. If you think Tory defectors would bring 45–55% of their vote with them (alternately: that the Conservative retention ratio is below 45–55%, depending on the seat), then four Tory MPs would do better by defecting. But this is unlikely.

In today’s Financial Times, research by Matt Goodwin is quoted as providing support for the claim that

four Conservative MPs [those in Amber Valley, Cleethorpes, Bury North, and Dudley South] would be more likely to retain their seats at the next election if they defected to the UK Independence party

I had a strong negative reaction to this claim. I find it implausible. This is despite my prior belief that anything Matt says about UKIP is generally worth credence. Matt probably knows more about UKIP than anyone outside the party, and possibly more than some within the party as well.

Part of the reason for my negative reaction is that it’s very hard to provide good evidence-based estimates of the likelihood of something happening, if that thing has never before happened. No MP has defected to UKIP and subsequently contested a general election under that party’s banner. (Bob Spink did defect, but contested Castle Point in 2010 as an “Independent Save Our Green Belt” candidate). Given this, I’m not sure how you can calculate either

  1. the MP’s current probability of winning under the Conservative banner, or
  2. the counterfactual probability of winning under the UKIP banner.

As I’ve learned doing electoralforecast.co.uk, it’s very difficult to calculate the first of these even given a reasonable amount of information.

But this is a philosophical objection. I think there are more important practical objections. Let’s assume a number of things to get some numbers on the table.

First, instead of talking about the probability of retaining one’s seat, we can talk about the expected gap between the incumbent and the nearest challenger (in all cases, Labour). This is really just to avoid talking about probability, which is… tricky.

Second, assume that we can gauge the expected gap between the incumbent and the nearest challenger, in the situation where the incumbent fights under the Conservative banner, by applying a simple uniform swing based on what is currently reported by UK Polling Report. This suggests that the Conservatives will lose 4.1%, that Labour will gain 5%, that the Liberal Democrats will lose 15%, and that UKIP will gain 12.9%.

If you do this, you get expected outcomes like this:

Constituency Cons. Lab. LDem UKIP
Amber Valley 34.5 42.4 -0.6 14.9
Cleethorpes 38 37.6 3.2 20
Bury North 36.1 40.2 2 15.8
Dudley South 39 38 0.7 21.1
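The uniform-swing arithmetic is simple enough to sketch: each party's national change is added to its constituency share. The 2010 baseline shares below are invented for illustration, not the actual results in any of these seats:

```python
# Uniform national swing, using the changes reported by UK Polling Report
# and a hypothetical 2010 baseline for one marginal seat.

swing = {"Con": -4.1, "Lab": +5.0, "LD": -15.0, "UKIP": +12.9}

def apply_uniform_swing(result_2010):
    """Add each party's national change to its constituency share."""
    return {party: round(share + swing[party], 1)
            for party, share in result_2010.items()}

seat = {"Con": 38.6, "Lab": 37.4, "LD": 15.6, "UKIP": 7.1}  # invented baseline
print(apply_uniform_swing(seat))
```

Note that nothing stops a projected share going to zero or below, which is one of the simplifications discussed next.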

If you make this assumption, then these four MPs are indeed in trouble. I am making this assumption for the sake of argument: I do not think it is very plausible. I do not think that the polls reflect the likely outcome in May. In particular, I disagree with Matt when he says

I can see no reason why Ukip’s average poll rating will fall significantly this side of the election.

Now, there’s wiggle room in that quote. But there are reasons why UKIP’s average poll ratings might fall significantly — and that’s simply that, looking at elections since 1979, parties have generally performed closer to their previous general election share than the polls would suggest. So in assuming away any significant decline in UKIP vote share, I’m being charitable to Matt’s argument.

Of course, Matt could equally well argue that the swing towards UKIP in these four constituencies will be greater than the swing nation-wide, because UKIP will concentrate effort in these four constituencies were there a defection. So, depending on your beliefs about concentration of the vote and UKIP fade-off, this assumption may be either favourable or unfavourable to the argument.

Third, we can speculate about what might happen given a defection by keeping the Labour and LibDem votes constant, and transferring increasingly larger shares of the Conservative vote to UKIP. It will be helpful to think of this as the Conservative retention ratio (CRR). If the CRR is zero, then defection is perfectly efficient, and there are no deadweight losses of any kind. If the CRR is one, then defection is a simple substitution. It should be clear that the CRR is unlikely to be zero: that would mean, in effect, that the Conservatives didn’t stand a candidate, something they are hardly likely to do.

Again, this assumption is unrealistic. Even absent a defection, the Labour vote share in any given constituency is not simply what you would expect given a uniform national swing. The Lib Dem vote in Amber Valley can’t be what you would expect given a uniform national swing, since the projected share is negative! But this is a simplifying assumption.

Given these assumptions, I’m going to plot the expected gap to Labour under defection on the vertical axis, as a function of the CRR. That will be in a solid line. To the right of the figure, where the CRR is high, defection becomes a simple substitution. To the left of the figure, where the CRR is low, defection approaches perfect efficiency. As the CRR falls, the expected gap to Labour under defection grows, eventually exceeding the expected gap to Labour without defection. The expected gap to Labour without defection is shown with a dotted line.

Note: No-defection baseline is shown using a dotted line.

These trade-offs are plotted in the figure above. For all four constituencies, there’s a break-even point. It’s 44% in Amber Valley and Bury North; 53% and 54% in Cleethorpes and Dudley South respectively. If the Conservatives retain a greater share of their vote than this, then defection hurts defectors.
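Under the simple transfer model above, the break-even point has a closed form. The defector's gap to Labour is UKIP + (1 − CRR) × Con − Lab, the no-defection gap is Con − Lab, and setting them equal gives CRR* = UKIP / Con. A quick check against the projected shares in the table:

```python
# Break-even Conservative retention ratio for each seat, using the
# uniform-swing projections from the table above. Setting the defection
# gap equal to the no-defection gap gives CRR* = UKIP / Con.

seats = {
    "Amber Valley": {"Con": 34.5, "Lab": 42.4, "UKIP": 14.9},
    "Cleethorpes":  {"Con": 38.0, "Lab": 37.6, "UKIP": 20.0},
    "Bury North":   {"Con": 36.1, "Lab": 40.2, "UKIP": 15.8},
    "Dudley South": {"Con": 39.0, "Lab": 38.0, "UKIP": 21.1},
}

def breakeven_crr(seat):
    return seat["UKIP"] / seat["Con"]

for name, s in seats.items():
    print(name, round(100 * breakeven_crr(s)))
```

Run on the rounded table values, this gives roughly 43–44% for Amber Valley and Bury North and 53–54% for Cleethorpes and Dudley South, matching the figures quoted above to within rounding.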

So the question then becomes: is it reasonable to suppose that the Conservatives will retain less than 45–55% of their vote?

Imperfect evidence comes from three sources:

  • Bob Spink’s re-election campaign in 2010
  • the Clacton by-election
  • SDP defections in 1981

Let’s take these one by one.

The Spink spat

In 2005, Bob Spink won Castle Point with 48.3% of the vote. In 2010, the Conservatives polled up 3.7% on their 2005 performance. Thus, we might have expected that had Spink not defected, the Conservatives would have polled 52%. In fact, with Rebecca Harris they polled 44%, which equals a retention rate of 84.6%.

The Carswell coup

In 2010, Douglas Carswell won Clacton with 53% of the vote. At the time of writing, the Conservatives are down 4.1% on their 2010 vote share. Thus, we might have expected that had a by-election been forced for reasons other than defection, the Conservatives would have polled 48.9%. In fact, with Giles Watling they polled 24.6%, which equals a retention rate of just over 50%.

SDP defections

Thanks to Wikipedia, I identified twenty-eight Labour defectors to the SDP. For eighteen of these, I was able to calculate the Labour share of the vote in 1979 and 1983. Note that in 1983, Labour’s share of the vote dropped by 9.3 percentage points. The average Labour retention rate was 87.2%. The lowest Labour retention rate was achieved in Caithness and Sutherland, where Robert MacLennan (SDP) did rather well. The next-lowest retention rate was 55%, in David Owen’s seat of Plymouth Devonport.
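The retention-rate arithmetic in these three comparisons is the same throughout: the actual share divided by the counterfactual share (previous share plus the party's national change). The Spink and Carswell figures from above:

```python
# Retention rate: actual vote share divided by the counterfactual share
# (previous share plus the party's national change).

def retention_rate(prev_share, national_change, actual_share):
    counterfactual = prev_share + national_change
    return 100 * actual_share / counterfactual

# Spink / Castle Point: 48.3% in 2005, Conservatives up 3.7 nationally,
# Rebecca Harris polled 44% in 2010.
spink = retention_rate(48.3, +3.7, 44.0)

# Carswell / Clacton: 53% in 2010, Conservatives down 4.1 nationally,
# Giles Watling polled 24.6% at the by-election.
carswell = retention_rate(53.0, -4.1, 24.6)

print(round(spink, 1), round(carswell, 1))   # 84.6 50.3
```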

Now, for the reasons noted above, none of these is an exact comparator. Generally, retention rates amongst parties that suffer a defection are north of 55%. It therefore seems that the above claim — that four Conservative MPs would do better electorally by defecting — is only true if Carswell is the rule rather than the exception. I rather think that it would be foolish to take either (a) a generally respected constituency MP, or (b) the most Eurosceptic constituency in the country, as the rule for Conservative defections.

Potentially risky departures from the status quo: the #indyref and #Brexit

October 7, 2014

This weekend I attended the Pontignano conference in Siena. The conference (theme: Better Together, or Better Apart?) neatly encapsulated two British concerns — namely, the Scottish independence referendum and the possibility of Brexit — alongside sadly perennial concerns about Italy’s economic position within the Eurozone.

Scottish independence and Brexit were both discussed in the public session on the Friday night (the remaining sessions were held under the Chatham House Rule). In that session, David Willetts suggested (and I paraphrase) that

  1. the Scottish referendum outcome was largely a result of uncertainty concerning the possible consequences of independence, and that
  2. insofar as the Scottish referendum campaign had any lessons for a Brexit referendum campaign, it would be to teach campaigners not to try and create an emancipatory groundswell of popular political participation, but rather to play (and prey) on people’s doubts.

I think Willetts is absolutely right. I think that referendum voting behaviour can largely be understood by thinking about how much people tolerate risk and uncertainty when considering departures from the status quo, and that this is a more useful way of understanding behaviour than simply stating that negative advertising works.

Some evidence for this comes from Wave 2 of the British Election Study, conducted in May and June of this year. The BES included a module on the Scottish referendum alongside a further module on financial literacy. One question in the financial literacy module asked respondents about their risk profile — whether they would describe themselves as very willing, somewhat willing, somewhat unwilling or very unwilling to take risks. That’s important information if you’re trying to decide how to invest someone’s money, but it also means that we have a measure of risk that we can cross-tabulate with support for independence. Here’s the breakdown by risk profile:


As you can see, although there’s a slight exception for the most risk-averse category, more risk-averse respondents were much less likely to support independence — suggesting that risk and uncertainty mattered, at least considering stated vote intention in the first half of the year.

The size of the gaps (north of 20 percentage points) is comparable to the size of the gaps between age groups and social classes.

Understanding the independence referendum by thinking about people’s willingness to tolerate risk and uncertainty when considering departures from the status quo helps us in two ways.

First, it helps to explain some of the demographic differences in vote choice. Generally, men were more likely to support independence than were women, and younger age groups were more likely to support independence than were older age groups. Younger people, and younger men in particular, are known for tolerating higher levels of risk — not just concerning financial investments, but across a range of activities (think fast cars, drugs, and sex). These groups might have benefited from independence in other ways, or might have been more receptive to certain positive arguments put across by the Yes campaign — but it wouldn’t be a surprise if the BES post-election wave showed that these groups had been less scared by some apocalyptic warnings before the vote.

(Note however that the above differences with respect to risk remain after controlling for these demographic characteristics).

Second, focusing on departures from the status quo helps us to partially explain the Yes campaign’s surge in the week before the vote. Alan Renwick had explained, quite some time in advance of the vote, that referendums for change can be won if campaigners redefine the status quo. Better Together talked repeatedly about one status quo: sterling. Departure from sterling was described as risky, and a No vote avoided that risk. In the two weeks running up to the vote, the Yes campaign tried to redefine the status quo in terms of a fully public NHS and continued membership of the European Union. Departure from this status quo was described as unappealing, and here, crucially, a Yes vote avoided the risk by severing Scotland from a Conservative–UKIP coalition.

What does this imply for a Brexit referendum campaign? It suggests three things:

  • First, creating fear, uncertainty and doubt (FUD) about the consequences of Brexit may be an effective strategy, even if it is not an attractive or appealing one.
  • Second, a Brexit campaign will have to minimize the perceived departure from the status quo by emphasizing bits of the EU that we will keep — through, for example, EFTA membership.
  • Third, status quo claims will be perceived differently by different age groups. Voters old enough to cast a vote in the 1975 referendum (now 57+) remember the status quo ex ante. For them, leaving the EU is a return to something they have already known. A FUD strategy will be much more effective on younger voters.

Each of these is subject to qualification. Emphasizing continuity may require acknowledging that most EU regulation will stay in place. A FUD strategy might work on older voters if Brexit could be (imaginatively) tied to lower pension payments. But thinking about the campaign in this way already suggests that it may not inspire voters in the same way that the Scottish independence referendum did.

Voter switches, 2005 to 2010

October 6, 2014

James Kirkup asked:

The following table shows proportions from the post-election BES data, questions PostQ12_2 and PostQ46_1. 2005 vote is down the rows; 2010 vote across the columns.

Making the heroic assumptions that (a) people perfectly recall 2005 vote choice, and (b) people perfectly recall 2010 vote choice, just over 11% of 2005 Labour voters voted Conservative.

2005 vote \ 2010 vote | BNP | Conservatives | Don’t Know | Green Party | Labour | Liberal Democrats | Other | Plaid Cymru | Refused | SNP | UKIP | Total
BNP | 94.04 | 0.00 | 0.00 | 0.00 | 0.00 | 5.96 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.00
Conservative | 0.15 | 83.81 | 0.13 | 0.10 | 6.36 | 6.49 | 0.35 | 0.00 | 0.35 | 0.05 | 2.21 | 100.00
Did not vote | 4.27 | 39.45 | 4.10 | 0.00 | 20.18 | 19.21 | 0.00 | 0.35 | 5.36 | 2.72 | 4.37 | 100.00
Don’t Know | 10.49 | 37.65 | 1.63 | 1.02 | 16.29 | 10.54 | 0.00 | 0.00 | 18.80 | 3.58 | 0.00 | 100.00
Green Party | 0.00 | 42.09 | 0.00 | 15.98 | 4.68 | 27.17 | 0.00 | 0.00 | 10.07 | 0.00 | 0.00 | 100.00
Labour | 0.15 | 11.14 | 0.03 | 2.45 | 68.62 | 15.18 | 0.00 | 0.00 | 0.66 | 0.70 | 1.08 | 100.00
Liberal Democrat | 0.00 | 13.08 | 0.00 | 0.00 | 8.89 | 75.16 | 0.00 | 0.45 | 0.00 | 0.77 | 1.65 | 100.00
Not eligible/too young to vote | 1.61 | 15.05 | 0.00 | 6.84 | 38.71 | 31.82 | 0.00 | 0.00 | 5.97 | 0.00 | 0.00 | 100.00
Other | 0.00 | 0.00 | 0.00 | 0.00 | 1.80 | 24.89 | 61.17 | 0.00 | 12.15 | 0.00 | 0.00 | 100.00
Plaid Cymru | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.00 | 0.00 | 0.00 | 0.00 | 100.00
Refused | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.00 | 0.00 | 0.00 | 100.00
SNP | 0.00 | 4.79 | 0.00 | 0.00 | 13.82 | 1.74 | 1.93 | 0.00 | 4.64 | 73.09 | 0.00 | 100.00
UKIP | 0.00 | 17.49 | 0.00 | 8.14 | 0.00 | 14.72 | 0.00 | 0.00 | 9.30 | 0.00 | 50.34 | 100.00
Total | 1.28 | 36.02 | 0.46 | 0.99 | 30.74 | 22.51 | 0.38 | 0.44 | 3.65 | 1.62 | 1.90 | 100.00

Update: Note that the base for this table is 2010 voters. Any claim based on this table should probably begin with “Of those who voted in 2010…”. Thanks to Anthony Wells for the prompt.

Update (2): Note also that some of these percentages are based on very small numbers of respondents. The unweighted number of respondents who voted Green in 2005, for example, is 14.

Italian polling update, September 2014

September 26, 2014

Graphs below. Click on the links to show trend-lines for different parties.

Forza Italia (ex PDL in data) | PD | M5S | Lega | SEL | NCD

Infrequently asked questions

Where do these polling figures come from? Here and here.

What are these trend lines? They’re estimates from a model which treats latent party support as something which evolves smoothly over time, and is made manifest through particular potentially biased polling snapshots.

How are the effects of the different polling companies identified? By assuming that on average, polling companies are unbiased.
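That identifying assumption can be shown in miniature. In the sketch below, latent support is estimated as the cross-house average at each time point, and each company's house effect is its mean residual; constraining the effects to average zero is what "unbiased on average" buys. (The real model evolves latent support smoothly over time; the figures here are invented.)

```python
# Identification of house effects, in miniature. Latent support is the
# cross-house average each week; a house's bias is its mean residual.
# Poll figures are invented.

polls = [  # (week, house, reported share)
    (1, "A", 32.0), (1, "B", 30.0),
    (2, "A", 33.0), (2, "B", 31.0),
    (3, "A", 34.0), (3, "B", 32.0),
]

weeks = sorted({w for w, _, _ in polls})
latent = {w: sum(s for w2, _, s in polls if w2 == w)
             / sum(1 for w2, _, _ in polls if w2 == w)
          for w in weeks}

houses = sorted({h for _, h, _ in polls})
bias = {h: sum(s - latent[w] for w, h2, s in polls if h2 == h)
           / sum(1 for _, h2, _ in polls if h2 == h)
        for h in houses}

print(latent)   # {1: 31.0, 2: 32.0, 3: 33.0}
print(bias)     # {'A': 1.0, 'B': -1.0} -- sums to zero by construction
```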

Campaign contact in the #indyref

September 22, 2014

@loveandgarbage speculated that the Yes campaign’s much-vaunted ground game wasn’t in fact that good.

Until the 3rd wave of the British Election Study comes out, it’ll be difficult to tell. But here’s some data from their second wave, conducted between May and June of this year. Note that this data is based on weighted sample means, rather than some raw sample means I quoted on Twitter. (The effect of weighting is to reduce rates of contact for both campaigns, by roughly similar amounts).

  • 21.5% reported contact from Better Together in the past 4 weeks; 28.3% reported contact from any pro-Union group
  • 28.5% reported contact from the Yes campaign in the same period; 34% contact from any pro-independence group

The graphs below show rates of contact according to local authority, ordered from authorities with highest rates of contact to authorities with lowest rates.

On the face of it, there’s no obvious relationship with past turnout, or with the ratio of Yes to No contacts, plotted in the next graph.


The high ratio of independence to union contacts in Aberdeenshire and the Shetlands comes as a bit of a surprise, given the large No votes in those areas. This data comes from June, and it’s very likely that campaign activity changed (in level, if not in distribution) between June and September.


September 19, 2014

Although the No campaign won a clear and decisive victory, it didn’t win everywhere. Important parts of Scotland voted for independence.
As Naomi O’Leary put it:

It would take very large shifts of votes to swing the result — or very large selective counting. The map below shows, in blue, the most populous contiguous area that would have voted by a majority for Yes. In other words, if you just count the blue area, there’s a majority for independence (50.04%). Let’s call it Salmondland (those who dislike Salmond will wish to interpret this as a reference to Rick Perlstein’s book).

Areas in Salmondland:

  • Dundee City (57.3% Yes)
  • Glasgow City (53.4%)
  • Inverclyde (49.9%)
  • North Lanarkshire (51%)
  • Perth & Kinross (39.7%)
  • Renfrewshire (47.2%)
  • West Dunbartonshire (53.9%)
  • Highland (47%)
  • Argyll & Bute (41.4%)
  • North Ayrshire (51%)

Areas outside Salmondland:

  • Aberdeen City
  • Aberdeenshire
  • Angus
  • Clackmannanshire
  • Dumfries & Galloway
  • East Ayrshire
  • East Dunbartonshire
  • East Lothian
  • East Renfrewshire
  • Edinburgh, City of
  • Eilean Siar
  • Falkirk
  • Fife
  • Midlothian
  • Moray
  • Orkney Islands
  • Scottish Borders
  • Shetland Islands
  • South Ayrshire
  • South Lanarkshire
  • Stirling
  • West Lothian

Here, areas like Argyll & Bute and Perth & Kinross are being drowned out by the much larger number of voters in Glasgow. Perth & Kinross in particular is necessary to create a land bridge to Dundee.

Technical details:

I generated random contiguous agglomerations of local authorities of random size between two and 32 areas. I subsetted the analysis to the agglomerations with a majority vote in favour of Yes, and then identified the agglomeration with the largest total number of votes (serving as a proxy for population). Drop me a line if you’d like the code.
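The sampling step can be sketched as a random walk on the adjacency graph of local authorities. The adjacency map below is a toy stand-in, not the real Scottish geography, and the real code also filters the sampled agglomerations to Yes majorities before keeping the one with the most votes:

```python
# Random contiguous agglomerations of local authorities, grown by
# repeatedly absorbing a random neighbour of the current set.
# The adjacency map is a toy example, not real geography.

import random

adjacency = {
    "Glasgow": ["Renfrewshire", "North Lanarkshire", "West Dunbartonshire"],
    "Renfrewshire": ["Glasgow", "Inverclyde"],
    "Inverclyde": ["Renfrewshire"],
    "North Lanarkshire": ["Glasgow"],
    "West Dunbartonshire": ["Glasgow"],
}

def random_agglomeration(adjacency, size, rng=random):
    """Grow a contiguous set of areas up to the requested size."""
    areas = [rng.choice(list(adjacency))]
    while len(areas) < size:
        frontier = [n for a in areas for n in adjacency[a] if n not in areas]
        if not frontier:          # nothing left to absorb
            break
        areas.append(rng.choice(frontier))
    return set(areas)

agg = random_agglomeration(adjacency, size=3)
# Contiguity check: every member borders another member.
assert all(any(n in agg for n in adjacency[a]) for a in agg)
print(agg)
```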
