Independent Catherine Connolly is right now fighting tooth and nail to overhaul FG’s Sean Kyne, who beat her to the last seat in Galway West by 17 votes in the first recount last night, prompting her to call a second full recount which is continuing as I type. But a thought struck me in the car on the way home this evening – STV as practised in the RoI is not deterministic, as there is a random element in the distribution of elected candidates’ surpluses. Surely 17 votes is less than a standard deviation? I had to find out.

STV in NI is deterministic: all the second preferences of an elected candidate’s votes are counted and then scaled down by the surplus fraction before being transferred, resulting in fractional votes for the remaining candidates. For example, say that the quota is 900, and candidate A is elected with 1000 votes. The next preferences of all A’s votes are counted and the totals scaled by a factor of (1000-900)/1000 = 1/10 before being added to the appropriate candidates’ totals. A is left holding the balance, which equals the quota, and the total number of votes in play at any stage thus remains constant. In this way, each vote for A is treated identically.
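The fractional transfer is simple enough to sketch in a few lines of Python (the candidate names and next-preference tallies are invented for illustration):

```python
# Sketch of the NI-style (fractional) transfer: every next-preference total
# is scaled by surplus/votes. Candidate names and next-preference tallies
# here are invented for illustration.

def transfer_surplus_fractional(votes, quota, next_pref_counts):
    factor = (votes - quota) / votes  # the surplus fraction
    return {cand: n * factor for cand, n in next_pref_counts.items()}

# A is elected with 1000 votes against a quota of 900; the next preferences
# of all 1000 ballots are counted, then scaled by (1000 - 900)/1000 = 1/10.
transfers = transfer_surplus_fractional(1000, 900, {"B": 600, "C": 300, "D": 100})
print(transfers)  # {'B': 60.0, 'C': 30.0, 'D': 10.0}
```

Every ballot contributes the same fraction, so the result is the same no matter how many times it is counted.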

By contrast, in RoI general elections only surpluses attained on the first count are scaled. Subsequent surpluses are transferred using random selection. Instead of counting all votes and scaling down, a random sample of votes equal to the surplus is counted and then distributed at full value. Furthermore, only the last batch of votes given to the candidate is eligible for selection. For example, say candidate A has 800 of the necessary 900 quota, and candidate B with 200 votes is eliminated. A is elected with 1000 votes, a surplus of 100. To distribute this surplus, 100 of the 200 votes which were transferred from B are randomly selected (A's other 800 votes are ignored). These are then counted and transferred accordingly. Again, A retains 900 votes (the quota) and the total votes in play remain constant; however, not all of A's votes are used.
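The random-selection transfer can be sketched the same way, representing each ballot simply by its next preference (again, all names and numbers are invented for illustration):

```python
import random

# Sketch of the RoI-style transfer of a consequential surplus: a random
# sample equal in size to the surplus is drawn from the LAST parcel only
# and passed on at full value.

def transfer_surplus_random(last_parcel, surplus, rng=random):
    sample = rng.sample(last_parcel, surplus)  # only the last parcel is eligible
    counts = {}
    for next_pref in sample:
        counts[next_pref] = counts.get(next_pref, 0) + 1
    return counts

# A reaches 1000 votes (quota 900) via a parcel of 200 transfers from the
# eliminated B; only 100 of those 200 ballots are selected and distributed.
last_parcel = ["C"] * 120 + ["D"] * 80
print(transfer_surplus_random(last_parcel, 100))  # split varies run to run
```

Unlike the fractional version, running this twice can give different results, which is exactly the sampling error discussed below.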

This random element introduces sampling errors – a different choice of 100 random ballots may well produce a different result, and even get a different candidate elected. We can use standard statistical methods to estimate the errors in these processes and determine how much the candidates owe to the voters, and how much to chance.

Consider count 11, the distribution of Nolan’s surplus of 326. We pick 326 ballots from O’Clochartaigh’s transfers to him of 1015, as those transfers were the ones that pushed Nolan over quota. Now, the p=.95 error in a random sample of 326 out of 1015 is 4.47%, and 4.47% of 326 is approximately 15. Therefore we can expect a 15-vote variation either way in the distribution of Nolan’s surplus. The equivalent error for O’Cuiv’s surplus is (1034 of 2101) -> 22 and for Walsh it is (116 of 2706) -> 10. Assuming that each random choice of ballots is independent, the expected error in the final count is sqrt(15^2+22^2+10^2) =~ 28. We can see that a victory in the final count by 17 votes is not a statistically significant result, and therefore has more to do with what order the ballot papers fell out of the boxes than how many went into them in the first place.
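These figures can be reproduced with the standard 95% margin-of-error formula at worst-case proportion p = 0.5, with the finite-population correction (I'm assuming this is what the error-bar utility computes, since it matches the numbers quoted above):

```python
from math import sqrt

# 95% margin of error for a random sample of n ballots from a parcel of N,
# at worst-case proportion p = 0.5, with the finite-population correction
# sqrt((N - n)/(N - 1)).

def margin_of_error(n, N, z=1.96, p=0.5):
    fpc = sqrt((N - n) / (N - 1))
    return z * sqrt(p * (1 - p) / n) * fpc

surpluses = [(326, 1015), (1034, 2101), (116, 2706)]  # Nolan, O'Cuiv, Walsh
errors = [round(margin_of_error(n, N) * n) for n, N in surpluses]
print(errors)  # [15, 22, 10]

# Assuming independence, the errors combine in quadrature:
combined = sqrt(sum(e * e for e in errors))
print(round(combined))  # 28
```

A 17-vote margin sits comfortably inside that 28-vote error bar.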

What does this mean for the candidates? Not much, as the legal method has been followed. It does however show that haggling over low double-digit margins of victory has nothing to do with the will of the electorate. They might as well just toss a coin for it.

The vote totals for eliminated candidates are assumed to be error-free, even though prior surplus transfers will introduce small errors. These errors make little difference to the results as the error bar formula is relatively insensitive to population size.

Numbers were taken from the first recount data in @misteil’s spreadsheet here. Error bars were calculated using the utility here. The rules for STV in RoI general elections are here. Thanks also to @garygillanders for pointing out a mistake in my original calculation.

Very interesting. I was always worried that the use of sampling introduced an unfair – and unnecessary – element of chance into the electoral process. It could be done deterministically. Or if they really think it’s too much trouble for all elections, perhaps recounts like these over tight margins should be done deterministically.

More ammunition for the electronic voting argument I guess.

(I’m worried too about them only redistributing a sample taken from the last batch, a fact I’d forgotten. That surely introduces another unnecessary element of chance.)

A question: When they recount, do they use the same samples of transferred votes as they did the previous time, or do they resample?

The sampling method is both error-prone and volatile. I have concentrated in this post on the sampling error, but you are right to point out the deterministic effects. I’m not sure these can be called ‘errors’ so much as ‘design flaws’.

For example, nobody got elected on the first count, so all the subsequent surpluses were from people who voted first for an unsuccessful candidate. The danger is that this can introduce a systemic bias, but one could conceivably claim this is deliberate.

However, this approach leads to volatility. Say two candidates are eliminated in consecutive counts, and together they push a third candidate over quota. The order in which they are eliminated determines which ballots are then distributed as the successful candidate’s surplus. In a tight race, this means a single vote could effectively control a block of votes as large as the second candidate’s total. If the subsequent transfers are not independent of which candidate they originally came from (which we would expect in a real instance), then changing a single vote could alter all the subsequent results, even if the two eliminated candidates were separated by only a single vote.

But that’s for another time.

On the issue of resampling, yes it is essentially the same sample that is recounted each time, barring the odd ballot here or there to make up the numbers if a surplus changes. Once the random sample is taken, it is kept physically separate even if a full recount is ordered.

The solution is to count all the second preferences in the RoI as well. The hassle, cost and time delay of doing so is the price of having such a complicated voting scheme.

Very interesting blog – and a great start to what may be a much longer, and broader, conversation about electoral reform. Correct me if I’m wrong, but my understanding – from your blog – is that when all the recounts take place they do so on the basis of re-examining the same set of ballot papers previously counted. This includes the “randomly selected” bundle of transferred or surplus votes? If this sample bundle of papers was drawn from a last batch transferred, or indeed from any particular box or batch of votes, how can the sample be deemed to be random? Surely one would – under a manual voting system anyway – need to literally shuffle the total population of votes before drawing the requisite sample?

The Seanad election rules for STV-PR remove the random selection element by implementing fractional vote transfers but without fractional calculations (as recommended by JB Gregory in 1880). The Seanad election rules, like the Dáil rules and the NI rules, keep the “last parcel” provision for the transfer of consequential surpluses. There is an underlying logic to this approach, but it can produce anomalies.

To remove those anomalies you could use the Weighted Inclusive Gregory Method (WIGM) of transferring surpluses, in which ALL ballot papers are transferred, each weighted correctly. This version of STV-PR is used in Scotland for local government elections. In Scotland the preferences are recorded on conventional ballot papers, but the votes are counted electronically. WIGM STV-PR could be counted manually, but in Scotland at least, the time taken would not be acceptable.
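A minimal sketch of a WIGM surplus transfer, with invented figures: every paper the elected candidate holds is passed on, at its current weight multiplied by surplus/total.

```python
# Minimal sketch of a WIGM surplus transfer: ALL of the elected candidate's
# ballot papers are transferred, each at its current weight multiplied by
# surplus/total. Ballots are (next_preference, weight) pairs; the names and
# numbers are invented for illustration.

def wigm_transfer(ballots, total, quota):
    factor = (total - quota) / total  # transfer value applied to every paper
    out = {}
    for next_pref, weight in ballots:
        out[next_pref] = out.get(next_pref, 0.0) + weight * factor
    return out

# A candidate holds papers worth 1000 votes against a quota of 900:
# 700 full-weight papers marked B next, 600 half-weight papers marked C next.
ballots = [("B", 1.0)] * 700 + [("C", 0.5)] * 600
result = wigm_transfer(ballots, 1000, 900)
print({c: round(v, 6) for c, v in result.items()})  # {'B': 70.0, 'C': 30.0}
```

Because every paper participates at its correct weight, no “last parcel” selection arises and the count is fully deterministic.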

Yes, entering all the data from every ballot paper and counting electronically seems like the obvious solution to me. The party reps overseeing the process can enter the data at the same time into their own laptops and run their own software to check the result if they wish.

Not sure how that could be done. In Scotland the STV ballot papers were scanned and OCR software used to prepare the ‘vote vector’ for each ballot paper (subject to later correction and adjudication). So in our system no-one ever saw the face of every ballot paper. In fact, very few people saw the face of any ballot paper.

The preference data could be captured by key-punch data entry, but I doubt if the tally-men (and tally-women) could keep pace with professional key-punch operators.

Once the count is over, the full ballot data (as anonymous preference profiles) should be published on the website along with the full results. In 2007 the Returning Officer for Glasgow City did that for the 21 wards in the Glasgow LGA. The Scottish Government plans to do that for all wards in all 32 council areas, and down to polling station level (subject to a minimum number of voters, when adjacent areas would be amalgamated).

Doing each ballot by hand (by tally-folk or professional key-punch operators) would be slow and tedious, but the task itself is simple enough.

“There is an underlying logic to this approach, but it can produce anomalies.” What is that logic?

If you look at the whole spectrum of STV-PR counting rules (at least seven versions), especially with regard to the method of transferring surpluses, you will see there are underlying philosophical differences in the approaches to “representation” and in the assumed significance of different preferences, especially the first preferences. At one end of the spectrum (original version of STV) the system maximises the diversity of representation – by keeping the voters in discrete groups and moving the smallest possible numbers of ballot papers to effect a transfer. At the other end of the range (Meek STV), the system maximises the consensus of representation – by bringing as many voters as possible together and moving the greatest possible numbers of ballot papers to effect a transfer.

The earliest versions of STV-PR counting rules survive for elections in Cambridge, Massachusetts, closely followed by the election rules for Dáil Éireann. In these rules the primacy of the first preference is very evident and everything possible is done to keep a ballot paper with the voter’s first-preference candidate for as long as possible.

Thus, under Dáil rules, when a surplus of first preferences is transferred, the non-transferable papers are excluded from the calculations. They stay where the voter said: “For my first preference and ONLY my first preference”.

It is by an extension of that logic that when a consequential surplus has to be transferred, only the ballot papers that gave rise to that surplus are examined and transferred, i.e. the last parcel received. The logic is that the other voters supporting that candidate were “more attached” to the candidate (by showing a higher preference) and so they did not want their votes taken away from that candidate.

Of course, others point to the anomalies those rules create, and some consider the “last parcel” approach to be “unfair”. Hence the development of progressively “inclusive” rules, culminating in the approach of Meek STV where votes are transferred to already elected candidates and the quota is reduced every time more non-transferable votes are encountered.

Thanks for that! The explanation brought me right back to the lectures of Michael Laver, over twenty years ago. Perhaps I should have paid more attention…

Very interesting post. We’ll never know if Kyne would have edged Connolly if the NI system applied, but there’s no doubting that he was blessed in the way the votes tumbled out on Saturday morning. Even taking into account the good transfer rates from FG candidates to their colleagues in earlier counts (ballpark 80%), am I right in thinking that the percentage he got from such a small surplus is amazing? 92 votes out of 122 on a 13th count, pushing him ahead for the first time, was a good break to catch. Factor in the earlier FG predictions that Walsh wouldn’t make the quota and it’s clear they got a serious rub of the relic in Galway West.

Andrew

I am currently working on a research paper on the phenomenon of political blogging in Northern Ireland to be presented at a conference in September. As part of my research I’m conducting interviews with bloggers across the political spectrum. Further details are available at this link: http://www.ccsr.cse.dmu.ac.uk/conferences/ethicomp/ethicomp2011/abstracts/ethicomp2011_35.php

If you would like to participate in the survey can you drop me an e-mail? Many thanks.