
An Exit Poll Primer? The Tag Team in action.. beDeviled by some who AskQuestions



TruthIsAll
03-27-2009, 10:53 AM
http://www.dailykos.com/story/2006/11/4/135126/905

Story Updated. How to read exit polls: a primer
by Febble
Sat Nov 04, 2006 at 11:48:26 AM PDT
(From the diaries - I was pleased to see how many posters here recognized an instant classic when they saw one - DemFromCT)

We will all be avidly watching the exit polls on Tuesday night. Some of us will simply be avid to know what they can tell us about who won. Others will be avid to parse them for evidence of skulduggery. This is an attempt to sort out fact from fiction, and help all of us understand what is going on.

First of all: there will be more than one exit poll exercise on Tuesday, and some of the smaller independent exit polls will be specifically designed to shed light on the integrity (or otherwise) of the vote-counting process. But the big one will be the Edison-Mitofsky poll for the NEP (National Election Pool), so this diary is about that.

Purpose of the NEP exit poll

The NEP exit polls are designed primarily to answer these questions:


http://www.exit-poll.net/...

* WHO voted for each candidate
* WHY voters in your area made critical choices
* WHERE geographical differences on candidates and issues were a factor

They are also designed to allow NEP members to "call" state results (Senate and Governor in 2006), once it is unambiguously clear (with 99.5% confidence) who is ahead.

Note that "verifying the integrity of the election" isn't one of the goals of the survey, and this is important, because, whether we wish it were otherwise or not, the exit pollsters assume that the vote count is correct.

The poll questions are addressed by means of a substantial questionnaire completed by what is designed to be a representative sample of voters. By cross-tabulating characteristics of the voters (age, race, sex, etc) with their answers to questions regarding their vote, and their reasons for their vote, a picture emerges as to who, where, voted for whom and what, and why. This is extremely interesting information.

Getting a representative sample: sampling error

But the accuracy of the information depends on how truly representative the sample is. And, unfortunately, it is remarkably difficult to get a truly representative sample of anything, let alone people. Ideally, one would put every voter into an enormous barrel, shake the barrel, get a designated toddler to pull out a large number of voters at random, and administer the questionnaire under duress, and with the aid of a truth drug. If this were done, the only "error" in the poll would be "sampling error" - the variability you'd get if you repeated the exercise several times with several toddlers. And from this variability, the "Margin of Error" (MoE) would be computed. The MoE is the range of results you'd expect to get, say, 95% or 99.5% of the time if your sample each time was completely random (the MoE will be wider for greater degrees of "confidence"; e.g. the MoE for 99.5% confidence will be greater than the MoE for 95% confidence).
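
To make the MoE arithmetic concrete, here is a minimal sketch in Python, assuming pure random sampling of individual voters (which, as explained below, is not how exit polls actually work); the 52% share and 1,500 respondents are invented for the example:

import math

def margin_of_error(p, n, z):
    # Half-width of the confidence interval for a proportion p
    # estimated from a simple random sample of n voters.
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.52, 1500
print(margin_of_error(p, n, z=1.96))  # ~0.025 (2.5 points) at 95% confidence
print(margin_of_error(p, n, z=2.81))  # ~0.036 at 99.5% confidence - wider, as stated above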

But, clearly, pollsters can't select that way. They can't even sample individual voters at random, as that would involve being, potentially, everywhere. So what they do is select a sample of precincts in such a way that each voter in a state has an equal chance of being selected, and then try to interview a similarly sized sample of voters from each of the sampled precincts. A small precinct will have a lower chance of selection than a large precinct, but IF your small precinct is selected, you yourself will have a higher chance of being selected than you would if you voted in a large precinct, so the net result is that everyone has an equal chance of being selected. However, this form of sampling ("cluster sampling") means that there is less variability in the data than there would be if the sampling were truly random. Exit polls are, in effect, a large number of small polls, nested in states. Each precinct in the sample is a mini-poll, with a small sample size (and a large MoE), and only a relatively small number (in the tens, not hundreds) of precincts are sampled in each state. In order to give an estimate for the entire state, however (or for the nation, in the National precinct sample), the voters are considered as though randomly sampled (to give more statistical power), and the MoE is increased to compensate for the reduced observed variability.
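
As an illustration of that compensation (the design-effect value of 1.7 below is a placeholder, not the NEP's actual figure), a cluster sample can be treated as a simple random sample of a reduced "effective" size:

import math

def clustered_moe(p, n, z, design_effect):
    # Apply the simple-random-sampling MoE formula to an
    # effective sample size shrunk by the design effect.
    n_effective = n / design_effect
    return z * math.sqrt(p * (1 - p) / n_effective)

# Hypothetical state poll: 40 precincts x ~50 interviews = 2,000 respondents.
print(clustered_moe(0.52, 2000, z=1.96, design_effect=1.0))  # ~0.022 if truly random
print(clustered_moe(0.52, 2000, z=1.96, design_effect=1.7))  # ~0.029 after inflation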

Getting a representative sample: non-sampling error

Unfortunately, we can't stop there. Not all "error" in polls is "sampling error", even after we've allowed for "cluster sampling". There are many sources of "non-sampling" error, and these include factors that may systematically bias the polls. And to make it worse, unlike the sampling error that is expressed in the MoE, this error can't be quantified in advance. Some sources of "non-sampling" error will tend to cancel out across precincts. For example, "measurement error" - simple mistakes - or "coverage error" - a group of voters missed because the interviewer was taking a break - may produce error that favors one candidate or party in one precinct, but is just as likely to result in an error in the opposite direction in another. So while each precinct might have a "biased" result, the precinct "biases" should cancel out - sum to zero - overall. However, other sources of non-sampling error may not do so. If, for example, in a particular state, voters for one party tend to vote at crowded times, when voters are more likely to be "missed" (selected, but not interviewed because the interviewer is busy), then that party's voters may be systematically under-represented in the poll, across precincts. This will result in a discrepancy between poll and count that may well be statistically "significant".

Non-response bias

More serious still is "non-response bias". Participation in the poll is entirely voluntary; selected voters are free to refuse if they do not want to participate. A problem therefore arises if there is any tendency for one group of voters to be less willing to participate than another (and we can never rule this out: there is no reason to think that people's attitude to political choices is unrelated to their attitude to pollsters, particularly pollsters sponsored by the news media). If this happens, then the pollsters are, in effect, sampling from a "different population" from the total population of voters - they are sampling from that subset of voters who are willing to participate in polls. And that subset may be more Democratic - or more Republican - than the total population.

There are measures the pollsters can - and do - take to compensate for non-response bias. Interviewers are asked to note the age, race and sex (by visual estimate) of all those who are selected for participation but who do not take part. Data analysts can then compare the age, race and sex ratios among the non-respondents to those in the respondent sample, and if a particular demographic group appears under-represented, they know that some form of "non-response bias" has occurred - and can re-weight the data to compensate. Indeed, it is because of these kinds of data that we know a great deal about non-response bias. Unfortunately, only non-response bias by visible characteristics can be observed. The pollsters cannot know whether Republican or Democratic voters are over- or under-represented in their samples. And "non-response bias" may be subtle - if those who are reluctant to participate manage to avoid actually being selected, we will tend to get what is called "selection bias".
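
A minimal sketch of that re-weighting, with invented numbers (the real Edison-Mitofsky scheme is more elaborate): the visible-demographic shares of all selected voters (respondents plus noted refusals) serve as the target, and under-represented groups are weighted up:

respondents = {"18-29": 150, "30-59": 500, "60+": 350}  # completed interviews
refusals    = {"18-29": 150, "30-59": 100, "60+": 50}   # age noted by interviewer

selected = {g: respondents[g] + refusals[g] for g in respondents}
total_selected = sum(selected.values())
total_resp = sum(respondents.values())

weights = {}
for g in respondents:
    target_share = selected[g] / total_selected    # share among all selected voters
    observed_share = respondents[g] / total_resp   # share among respondents only
    weights[g] = target_share / observed_share     # >1 for under-represented groups

print(weights)  # here the youngest group, which refused most, gets weight ~1.54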

Selection bias

Interviewers are given an "interviewing interval" that is designed to net a consistent sample size across precincts, for example 100 voters. So large precincts will be given a longer interviewing interval than small precincts. For example, interviewers may be asked to interview every 10th voter in a large precinct, while those interviewing in a small precinct may be asked to interview every second, or even every, voter. However, when the interviewing interval is large, and especially when a polling place is crowded, strict "Nth voter" protocol may be more difficult to adhere to, and interviewers may find themselves unconsciously selecting the (N+1)th or (N-1)th voter if the Nth looks likely to refuse. Indeed, a large interval may also simply make it easier for unwilling voters to evade the selection process. In the 2004 poll data, bias was significantly greater where N was large, and/or where the interviewer had to stand more than 25 feet from the polling place. Note that where selection bias occurs, response rate may actually be enhanced - if you tend to select voters who are more likely to agree to participate, then your completion rate will go up. Unfortunately, so may your bias.
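
The interval itself is straightforward arithmetic - the sketch below is illustrative, using the 100-interview target mentioned above; the point is that N grows with precinct size, and with it the room for slippage:

def interviewing_interval(expected_turnout, target_interviews=100):
    # Approach every Nth exiting voter; N depends on expected precinct turnout.
    return max(1, expected_turnout // target_interviews)

print(interviewing_interval(1000))  # large precinct: every 10th voter
print(interviewing_interval(200))   # small precinct: every 2nd voter
print(interviewing_interval(90))    # very small precinct: every voter
# In the 2004 data, bias was greater where this N was large.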

In short, therefore: exit polls are surveys, and they are subject to both sampling and non-sampling error, and the sources of non-sampling error include sources of bias. In addition, and increasingly, the pollsters need to use telephone samples for absentee and early voters, and for all these samples - absentee, early, and voters exiting the polls on election day - they need to make guesstimates about the likely turnout, based on past results, which may or may not extrapolate correctly to the current election.

Compensating for non-sampling error

For these reasons, the pollsters have a number of data sources that they use to corroborate (or not) the results they get from the actual polls. One of these sources is pre-election polls - if the exit poll responses diverge greatly from pre-election polls, then they have reason to regard their exit poll responses as potentially biased. The pollsters became aware of such a divergence on election day 2004 (as we know, from a leak by Wonkette), before a single result was available. Another indicator is the vote-returns themselves. Again, if the incoming vote returns indicate a systematic divergence from the exit poll response, the pollsters have reason to suspect bias in their sample. For this reason, they dynamically reweight their estimates of the final result as first the precinct and then the county tabulations are reported, and only when they are sufficiently confident that their estimate is correct (statistically confident, that is) do they recommend "calling" a state for one candidate or another. They also dynamically reweight their cross-tabulations from the same sources, to correct for what they assume is biased voter representation in their data. on edit: cross-tabulations are not reweighted dynamically; the first reweighting generally occurs two to three hours after state close of poll, and further reweighting is done thereafter as necessary.
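
A toy version of that dynamic reweighting (this is only a sketch of the idea, not the NEP's actual estimator): weight shifts from the poll to the returns as more of the vote is counted:

def composite_estimate(poll_share, returns_share, fraction_reported):
    # Blend poll and count, trusting the count more as it accumulates.
    w = fraction_reported
    return (1 - w) * poll_share + w * returns_share

poll = 0.54  # hypothetical close-of-poll estimate for candidate A
for frac, counted in [(0.1, 0.50), (0.5, 0.51), (0.9, 0.51)]:
    print(frac, round(composite_estimate(poll, counted, frac), 3))
# 0.1 0.536 / 0.5 0.525 / 0.9 0.513 - the estimate converges on the count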

So: if you want to know what the pollsters' estimate of the results is, independently of reweighting by vote-returns, what you want are the cross-tabulations at close of poll in each state. These were provided by CNN in 2004 (not "leaked", as widely believed - simply posted as an early stab at the numbers) and there is no reason that I know of, despite rumors to the contrary, why close-of-poll cross-tabulations won't be posted again on Tuesday. They may well change as the night wears on; this won't be because anyone is trying to "cover up" a "leak", but simply because, rightly or wrongly, the official result is assumed to be correct, and any discrepancy between poll and count is attributed to error in the poll.

And now to dispel a few myths:

"'Uncorrected' poll results won't be released in 2006"

Joe Lenski, in an interview with Andrew Kohut from Pew, said that "We're going to put in place systems in which no one, even at the networks, can view any of this data before 5 p.m. on Election Day." This should reduce the chance of information from incomplete samples being disseminated. An incomplete sample is highly likely to be biased, because voters do not arrive randomly. If you miss a late Republican rush, or late Democratic rush, you will get the wrong picture. Whether the networks will post close-of-poll cross-tabulations remains to be seen, but I have seen nothing to indicate that they won't.

"There is no evidence that Republicans are less likely to respond to exit polls"

There is plenty of evidence for pro-Democratic bias in exit polls. The best kind of evidence is experimental evidence, where the experimenter actually controls a variable that is randomly allocated. The random allocation ensures that it will be "orthogonal" to any other factor that might affect the phenomenon you are interested in - in this case, the discrepancy between poll and count. In two experiments that I know of, methodological factors were manipulated in order to try to increase response rates (one involved giving free folders; another involved experimenting with shorter questionnaires). In both cases, the manipulated condition did result in different response rates, but also, surprisingly, in increased apparent Democratic bias. As the difference in bias cannot have been due to fraud (there would be no way any fraudsters could have known which precincts had free folders), we know that the manipulated condition must have been causal - that methodological factors differentially affected the participation rate of Democrats versus Republicans.

"There is no evidence that Republicans were less likely to respond to pollsters in 2004"

There are strong correlations in the exit poll data between methodological factors (such as interviewing rate) and the magnitude of the precinct-level discrepancy. The more such factors were present - factors that would have made it easier for unwilling voters to avoid being polled (or for eager voters to volunteer) - the greater the observed discrepancy.

"'rBr' ['reluctant Bush responders'] is disproved by the fact that response rate in 2004 was higher in Republican states/precincts"

Because selection bias may result in high response rates, this "proof" is somewhat flawed from the outset. However, more importantly, it is when Democratic response rates differ from Republican response rates that bias will occur, whether the response rates are 15% and 20% or 60% and 80%. Overall response rates can tell us very little about bias.
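
A quick worked example of that point, for a hypothetical precinct that is truly 50/50: the bias depends on the ratio of the two response rates (20:15 and 80:60 are the same ratio), not on their overall level:

def polled_dem_share(true_dem_share, dem_rate, rep_rate):
    # Dem share among completed interviews, given differential response rates.
    dem = true_dem_share * dem_rate
    rep = (1 - true_dem_share) * rep_rate
    return dem / (dem + rep)

print(polled_dem_share(0.5, dem_rate=0.20, rep_rate=0.15))  # ~0.571
print(polled_dem_share(0.5, dem_rate=0.80, rep_rate=0.60))  # ~0.571 - identical bias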

"The 2004 exit polls indicate that many millions of votes were stolen"

There is absolutely no correlation between the magnitude of the precinct-level discrepancies in 2004 and the change in Bush's vote share relative to 2000 (what UK commentators call "swing"). If a single factor, e.g. fraud, was responsible both for the discrepancy and for inflating Bush's vote, then you would expect the two to be positively correlated. In fact, the correlation is slightly, but insignificantly, negative. If fraud was responsible for the discrepancy in 2004, then either it was absolutely uniform (which is not the case normally made) or it was carefully targeted in precincts (not states) where Bush was expected to do badly. Either way, the data does not support the inference that the discrepancy was due to fraud; heroic assumptions need to be made to make it even consistent with fraud on a very large, nationwide scale.
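
The test described here is just a correlation across polled precincts. The sketch below runs it on fabricated data (independent draws, so the true correlation is zero by construction); it illustrates the method only, not the 2004 numbers:

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_precincts = 1250
red_shift = rng.normal(6.5, 8.0, n_precincts)  # poll-minus-count margin gap, in points
swing = rng.normal(3.0, 5.0, n_precincts)      # change in Bush share vs 2000, in points

r, p_value = pearsonr(red_shift, swing)
print(f"r = {r:.3f}, p = {p_value:.3f}")  # near zero: no fraud signal in these data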

"Exit polls are uncannily accurate"

The precinct-level data has shown a consistent Democratic bias over the last 5 presidential elections, and the causes of the bias have been well-researched. In 1992, the discrepancy was almost as large as in 2004. The reputation for "uncanny accuracy" probably, ironically, derives from pollsters' extreme caution about calling states unless they are very sure they are right - and they make sure they are sure by incorporating vote-returns into the estimates in all but the most slam-dunk of races. In the UK, where, for all our faults, we conduct pretty transparent elections, the exit polls are regarded as a bit of a joke (a cruel joke in 1992). Peter Snow, the BBC poll presenter on election night, has as his catchphrase "it's all a bit of fun".

"Exit polls are used to monitor election integrity around the world"

Not as far as I know. In Ukraine, there was direct evidence of blatant fraud (acid in ballot boxes; candidate poisoned with disfiguring, potentially lethal poison). Sure, fraud will tend to play havoc with exit polls, but given that exit polls can play havoc with themselves, they can never be a primary instrument for monitoring election integrity. Indeed, here are some cautions:

http://www.cartercenter.org/...

http://www.cartercenter.org/...

http://www.cartercenter.org/...

http://jimmycarter.org/...

Take home message:

The early exit polls will give you a reasonable idea of who is winning on election night, but there is no point in expecting the results to be within any calculated "Margin of Error", as MoE calculations assume random sampling error only, and do not reflect non-sampling error, which polls inevitably contain. Therefore, even if the final results diverge "significantly" from the early poll, it won't necessarily mean there is fraud. Exit polls have too many potential sources of bias for bias ever to be ruled out. If you want to find fraud, don't try and find it in the NEP exit polls. Independent polls designed for the purpose may tell you more, but they are unlikely to be much more immune from bias than the NEP exit polls, and may be more vulnerable.

The Edison-Mitofsky FAQ is here:

http://www.exit-poll.net/...

**********************************************************************************
It could have been the COUNT that was wrong (1+ / 0-)
Recommended by:bdevil89
Your discussion presumes that the official count was accurate.

No good scientist would presume something that major.

Your analysis of 2004 must take into account 2 possibilities:
either the exit polls were wrong OR the official count was corrupted.

You must examine both possibilities in order to have a solid analysis.

Thus this sentence from your diary presumes that the "incoming vote" was accurate:
Again, if the incoming vote returns indicate a systematic divergence from the exit poll response, the pollsters have reason to suspect bias in their sample.

What if it was not?

The fatal flaw in the [Evaluation of Edison-Mitofsky www.exit-poll.net/election-night/] is that they refused to even acknowledge the possibility that the official count was corrupted - anywhere.

by AskQuestions on Sat Nov 04, 2006 at 03:13:01 PM PDT
**********************************************************************************


*****************************************************************************

Satisfied that you mention you worked for Mitofsky (0+ / 0-)
That was the point I was trying to make: that you are or have been allied with Mitofsky, who was a premier pollster, and I am sorry that he disavowed his 2004 results.

He did refuse to acknowledge the possibility that a corrupted count was a cause of the discrepancy, instead blaming it on the awkward and unproven reluctant Bush respondent theory, which is a theory not borne out by his own data.

He did also document the much higher accuracy in the paper-ballot districts, though there were extremely few of these, alas. I am only sorry that this finding was not explored further, at least by anyone that I know of.

by AskQuestions on Sat Nov 04, 2006 at 04:33:21 PM PDT
**************************************************************************

Did you actually read my diary? (2+ / 0-)
Recommended by:AnonymousArmy, Demi Moaned
Mitofsky did not "refuse to acknowledge the possibility that a corrupted count was a cause of the discrepancy" - why do you think he tested the hypothesis? And why do you think he contracted me to do more?

And the "theory" that Bush voters participated at a lower rate than Kerry voters was indeed borne out by his own data. It was borne out by his own pre-election data on election day, and it was borne out by his (and my) analyses of the precinct level data. It's in his report. Asserting otherwise is simply to mis-state facts.

As for the paper-ballot finding, he did explore it further, and subjected it to analysis by size-of-place, which was important, as almost all paper-ballot precincts were in rural districts. But then I went even further, and found that when similar-sized places were compared, there was no significant difference in discrepancy between paper-ballot precincts and precincts with other technologies. Interestingly, when I looked at large urban precincts serving communities of more than 50,000, where there were no paper-ballot precincts, I did find a significant difference between technologies. The discrepancy was significantly greater in precincts using older technology (lever machines; punchcards) than in precincts using digital technology (DREs; optical scanners). So although the difference was significant, it doesn't actually support the inference of electronic vote-switching.

And BTW - if you were trying to make the point that I was "allied with Mitofsky" earlier, you made it singularly badly. You implied that Edison-Mitofsky had "input" to my diary. Well, Mitofsky is dead, and while I exchanged emails with Joe Lenski after Mitofsky's death, and I have met him, my contract was not with his firm, but a private contract with Mitofsky. And in any case - I explained my relationship with Mitofsky and the data in my first post.

Talk Rational for rational talk.

by Febble on Sat Nov 04, 2006 at 04:53:53 PM PDT


Controlling for technologies (0+ / 0-)
I'm very interested in your analysis of divergence controlling for technologies. Did your analysis also control for supervisory factors? That is, if one was to postulate that one side (Republicans in this case) could use computerized voting machines to switch votes, then it might only be reasonable to do so in precincts, counties or states where they controlled the vote-counting apparatus. (Now, the question of whether they would need control of only the precinct, only the state, or both would depend on what their actual method of vote-switching was, and would need to be tested separately...)
I remember scanning the results from Florida compared to exit polls and comparing divergence based on technology, and seeing (to the untrained eye) what appeared to be a correlation. However, I can imagine that if you threw in results from across the country, these discrepancies might be drowned out.
So my question is: was an analysis done that also took into account the party in control of the various vote-counting methods combined with the technologies involved? If so, what did you find? If not, why not?

by Dan D on Sun Nov 05, 2006 at 08:21:27 AM PDT


No, that data was not available (1+ / 0-)
Recommended by:Dan D
but there would have been problems even if it had been. One thing that I think is not clear to a lot of people (understandably) is that the number of precincts polled in each state is very small. So drilling down into within-state factors is just not likely to yield any statistically meaningful finding. There is, on average, less than one polled precinct per county in each state.

That's why I argue that actual vote-return data is potentially much more informative - you just have to find a different baseline, one of which might be divergence from previous elections; another is divergence from party registration (I too had a look at that in Florida, and though at first there seemed to be something interesting, in the end it was inconclusive).

So that's why my approach was to do analyses that made sense across the whole sample. My approach was twofold: first, to find methodological correlates of discrepancy (and these were clear, and together actually accounted for all the net redshift); and secondly, to determine whether the magnitude of the discrepancy was correlated with change in Bush's vote share - and it wasn't, not even slightly.

This last finding alone makes it very unlikely that fraud was a major contributor to the discrepancy; coupled with the finding that methodological factors were a major contributor, it seems clear that it is highly unlikely that any fraud effects would be found at the lower statistical power which any within-state analysis is bound to have. Election Science Institute looked at precinct-level discrepancies in Ohio, and while they found what I found (i.e. failed to find any correlation between change in Bush's vote share and discrepancy), the statistical power was so low that the confidence limits would have been fairly wide. By the same token, the statistical power of any analysis with only 49 data points is going to be extremely low.
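
To put a rough number on that power problem (a back-of-envelope Fisher-z approximation, not Febble's calculation): with 49 data points, only quite large correlations reach significance at all:

import math

n = 49
critical_z = 1.96 / math.sqrt(n - 3)  # 5% two-sided threshold on Fisher-transformed r
critical_r = math.tanh(critical_z)
print(round(critical_r, 2))  # ~0.28: weaker true correlations will usually go undetected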

So the short answer is: you need a lot more polled precincts in any given state to do really meaningful analysis that might point to fraud. And of course, all my other caveats about polls would still apply.

Talk Rational for rational talk.

by Febble on Sun Nov 05, 2006 at 10:00:24 AM PDT


re the rBr data (2+ / 0-)
*********************************************************************************
For the alternate view, read the book by Freeman (1+ / 0-)
Recommended by:bdevil89
WAS THE 2004 PRESIDENTIAL ELECTION STOLEN? by Steven F. Freeman and Joel Bleifuss.

This book is based on a close analysis of the Evaluation of Edison-Mitofsky, which was the report that Edison-Mitofsky prepared in Nov and Dec 2004 and released to the media on Inauguration Eve in Jan 2005 (at a moment when the media would have soooo much time to deal with it). It was 77 pages long and complex, and the media ran their reports on it the very next day, cribbing straight from the summary.

Edison-Mitofsky had never done such a report before; it was their attempt to defend their polls - and get hired to poll again - when there was such a pronounced difference between the exit poll results and the official count.

All is not as Febble says: this diary was written with the input and approval of Edison-Mitofsky and is their defense of their methods.

They do not want you to question their exit polls . . .

by AskQuestions on Sat Nov 04, 2006 at 02:52:50 PM PDT
***********************************************************************************
by AskQuestions on Sat Nov 04, 2006 at 04:55:08 PM PDT
*****************************************************
Might as (2+ / 0-)
Recommended by:StupidAsshole, We hold these truths
well call it the Daily Kool-Aid.

I am a physician, and when a person comes in with a problem, I ask and look for all the evidence (symptoms, history, physical) as well as the context of the problem. I then formulate what the possible problems could be, based on the evidence and the likelihood (epidemiology, evidence-based medicine).

The correlating analogy is that the voting technology that has been counting our votes for the past 6 years or so (i.e. central tabulators, and now more local e-voting machines) runs on software that is proprietary (i.e. privately owned, or private property). We must place trust in a machine owned by corporations and pretty much certified by the manufacturers themselves. We have no way to verify the vote except by trusting that a black box is registering and counting our votes correctly. Again, where is the basis of the trust coming from? From the corporations, of course.

Now, exit polls (not necessarily opinion polls) have had a history of serving as a sign of whether the official vote count is being counted correctly or not in many parts of the world. Of course, they have been very off in the US in recent years, thanks in part to? (rBr, false recall, etc.) Now, is it coincidence that the exit polls have been so far off since the Bush administration came into office? Perhaps. Or is there more evidence pointing to this administration being up to committing more serious and unscrupulous actions in order to maintain power?

I could list the numerous scandals, disastrous actions, and outright lies this administration has stated, but I don't want to depress people. That being said, are they above rigging elections? Is that a line they wouldn't cross, or have they shown time and time again that there is no bar too low to stoop under in order to stay in power? And more importantly, is there an avenue to allow that malfeasance to occur?

Let's see:
Exit polls widely discrepant from computerized vote count: Check

Numerous and unprecedented historical precedents and odds that Bush overcame to win the election, baffling many pundits and poli-scientists: Check

Vote counting done in secret by partisan corporate software and without a hard paper ballot to verify: Check

Massive retail vote suppression done by a Repub party that touts democracy as essential for peace in the world (esp. in battleground states): Check

Amazing Repub GOTV (again, unprecedented, but verified by the computerized vote count) that won the election for Bush in spite of all the negatives against Bush: Check

Mitofsky and Edison not releasing the raw data for statisticians and congressmen to analyze and give validity to the election, because it's private property: Check

Numerous anecdotes of vote count manipulation by partisan election boards (e.g. Warren County in Ohio), fraudulent recounts, overvotes (more votes than registered voters), undervotes (thousands of votes for a local judge, but none for the President), vote flipping on touch screens all biased toward the Repub candidate, co-campaign chairs in charge of state elections, etc.: Check, in fact, double check.

There's more evidence, but as you can see, the evidence gathered (incl. exit polls) points to some rather grave and disturbing conclusions, or as we say in medicine, a differential or impression.

Conspiracy theory? I think not; you would really have to be blind not to see the outright fraud, in light of the context (corrupt administration, computerized vote counting) and all the evidence gathered, that has been perpetrated on the American people. This is not tin-foil hat territory, this is reality, folks. Wake up. This administration is not above doing anything to stay in power; to dismiss or trivialize such things is a disservice to this country and the world.

The antidote we need is hand-marked paper ballots, counted by hand, in front of all parties involved, and non-partisan independent exit polling for all major and close races, a huge dose of a wake-up call, and a return of the vote counting and the government to the American people.

That being said, vote and GOTV, but be prepared to march on DC on the 8th to rectify the fraud that will happen (I hope not, but based on the history and the evidence gathered from this administration, it's clear what has happened and will happen again on Election Day).

by bdevil89 on Sat Nov 04, 2006 at 01:54:54 PM PDT
**********************************************************

I think Republicans would rig the election if.... (1+ / 0-)
Recommended by:highacidity
they could. And Republicans think Democrats would rig the election if they could. That's the basic reason why we need to switch to the most verifiable and secure form of casting and counting votes.

To me, that means paper ballots.

"Our programs are as lawful as they are valuable." -Michael Hayden

by smintheus on Sat Nov 04, 2006 at 02:10:26 PM PDT


the biggest problem with paper ballots (0+ / 0-)
is they become unwieldy with multiple candidates/multiple races and props like in CA.

"Politics is the art of looking for trouble, finding it everywhere, diagnosing it incorrectly and applying the wrong remedies." - Groucho Marx

by DemFromCT on Sat Nov 04, 2006 at 02:12:02 PM PDT


your epidemiology is ragged (0+ / 0-)
"Now is it coincidence that the exit polls have been so far off since the Bush administration has come into office, perhaps...."

Well, the first step is to ask, is it fact? Certainly it is true that the 2004 exit poll was far off, but we know that earlier U.S. exit polls have been off. So, that's one problem up front.

Setting that aside, one might ask, if we are considering the exit polls as diagnostic, where do they indicate that the largest problems or shenanigans or crimes in 2004 were? In Vermont, Delaware, New York, New Hampshire, and Mississippi. As a denizen of New York, I'm starting to feel quite sulky that two years later, all the folks who purport to have great confidence in exit polls have shown so very little interest in exposing the massive fraud here. It's not good enough to say that the exit polls and the other evidence 'all point to fraud.' They should, minimally, point to fraud in the same places, shouldn't they?

How many "poli-scientists" have professed bafflement at "historical precedents and odds that Bush overcame"? Not many that I can find. Bush was ahead in most of the polls, and the predictive models indicated that the economy was good enough for Bush to win easily.

If you would care to winnow and to marshal the evidence systematically, that would be useful.

by HudsonValleyMark on Sat Nov 04, 2006 at 02:30:52 PM PDT

********************************************************************
The discrepancy was large in NY 2004 (0+ / 0-)
Exit polls had Bush at 34.4% and Kerry at 64.1%

Official count was Bush at 40.1% and Kerry at 58.4%

But the difference would not have affected the outcome in NY which was never really in doubt, so it was largely overlooked.

by AskQuestions on Sat Nov 04, 2006 at 04:04:12 PM PDT


I think you've missed his point. (0+ / 0-)
That was the point he was making. If the people (like Freeman) who are so sure that the exit poll discrepancies must indicate fraud are right, why is no-one trying to figure out how massive fraud was implemented on New York levers?

Do you think New York levers were hacked? And how?

Talk Rational for rational talk.

by Febble on Sat Nov 04, 2006 at 04:07:53 PM PDT

****************************************************************
There's a good book on NY elections (0+ / 0-)
and the history of fraud here.

It's by Ron Hayduk, a professor at Borough of Manhattan Community College, CUNY, published by Northern Illinois Univ. Press, GATEKEEPERS TO THE FRANCHISE: SHAPING ELECTION ADMINISTRATION IN NY. Unfortunately it was published in 2005, too soon to really cover the 2004 election in depth, although he does discuss some of the differences between counties in different areas in NYC in 2004.

Again, Democratic undercount as revealed by the exit polls does not need to be accounted for solely by fraud. It is a fact that more Democratic votes are discarded, as undercounts, overcounts and provisional ballots.

But who knows what happened in NY? Who has investigated? The point is that no one really has.

by AskQuestions on Sat Nov 04, 2006 at 04:42:26 PM PDT
*****************************************************************

Well, seeing as the result (0+ / 0-)
was absolutely in line with pre-election polls, it's not terribly surprising it wasn't high on anyone's To Do list.

*************************************************************************
Mitofsky-Edison doesn't release precinct data (0+ / 0-)
This is a huge problem for people who really want to analyze the exit polls and figure out whether they do indicate actual fraud.

This is also the huge problem with media exit polls. They are commissioned by the media - the consortium is ABC, NBC, CBS, CNN, AP and FOX - to be used by the media in their commentary (I think they care more about the "voter values" and demographic analysis than the outcome). And the basic data is owned by the media and not released publicly.

We need independent election verification exit polls, which are designed somewhat differently. We need completely transparent polls where the data is actually released.

by AskQuestions on Sat Nov 04, 2006 at 04:10:12 PM PDT
************************************************************************

***********************************************************************
Further question (0+ / 0-)
what do you mean by "I did figure 'it' out and 'it' didn't"? What is "it"?

And yes, a true election verification exit poll would have to be designed differently, with more precincts sampled within individual states. One way to do it would be to concentrate on battleground states and/or states with high levels of complaints and machine malfunctions in prior elections, and forego states where the result is a foregone conclusion.

by AskQuestions on Sat Nov 04, 2006 at 04:46:34 PM PDT
*****************************************************************************

Yes, the best kind of exit poll (0+ / 0-)
to try to detect fraud would be one that had a clear a priori hypothesis. I actually recommended that to one of the independent groups.

You wrote:

This is a huge problem for people who really want to analyze the exit polls and figure out whether they do indicate actual fraud.
That was the "it" I figured out. I analyzed the exit polls to "figure out whether they do indicate actual fraud" and they didn't.

My main findings were:

* Redshift was strongly correlated with methodological factors likely to be associated with departures from random sampling (e.g. a long interviewing interval).
* A fairly small number of methodological factors together accounted for all the net redshift.
* There was absolutely no hint of any correlation between the magnitude of the discrepancy and change in Bush's vote share.
* When similar-sized precincts were compared, discrepancies were similar in precincts serving smaller communities regardless of technology used. In precincts serving larger communities, the discrepancy was greater in precincts in which older (non-digital) technology was used.

Talk Rational for rational talk.

by Febble on Sat Nov 04, 2006 at 05:02:01 PM PDT

***************************************************************************
Specifically what factors (1+ / 0-)
Recommended by:StupidAsshole
what were the methodological factors that together accounted for all the red shift?

As for the lack of correlation between the magnitude of the discrepancy and the change in Bush's vote share:

It is very interesting that the states in which the polls predicted a Kerry win but the official count went to Bush were Colorado, Florida, Iowa, New Mexico, Nevada and Ohio.

Victory in Ohio, Nevada and New Mexico - or just in Ohio - would have given Kerry the presidency.

by AskQuestions on Sat Nov 04, 2006 at 05:16:51 PM PDT
*************************************************************************

not so very interesting, actually (0+ / 0-)
If there is bias favoring Kerry across a wide range of states, it stands to reason that the states where Kerry leads in the poll but Bush wins in the count will be battleground states.

That doesn't address the lack of correlation between red shift and swing: it simply changes the subject.

by HudsonValleyMark on Sun Nov 05, 2006 at 06:13:43 AM PDT


Making unnatural data look natural is hard (0+ / 0-)
It is incredibly hard - much harder than most people imagine. Fudged data tends to be chock full of red flags ... and that's if you have ideal control over its production.

This is just one of several formidable barriers to successful large-scale election tampering.

If you want to tip an election, it usually has to be very close. Most elections are mismatches.

You either have to tip a lot of votes in a few places (big red flags, getting easier to spot in the info age), or tip a few votes in a lot of places.

Unless "a lot of places" means "everywhere", there will be patterns that stand out against background, in addition to the foundational risks.

Fudging data in a whole lot of places means executing a one-time caper (no practice opportunities) in a changing environment of diverse players and protocols, distributed both geographically and across lines of communication.

This ordinarily means you'd need a big operations center, with a large number of trusted confederates, and you still risk being exposed on account of something broken or incorrectly documented in the target environment, just to win something you have a 50-50 a priori chance of winning anyway.

The Great Obama might saw the lady in half, but he won't make the elephant disappear. The Confluence

by RonK Seattle on Sat Nov 04, 2006 at 01:37:03 PM PDT


There was a red flag in 2004 (1+ / 0-)
Recommended by:StupidAsshole
*************************************************************************
The red flag was that the discrepancy between the exit poll results and the official count was almost always in favor of Bush.

That is, Bush almost always did better in the official count than in the exit poll - this was so in 10 of the 11 battleground states.

There were only 6 states - Kentucky, ND, Oklahoma, SD, West Virginia, Montana - where Bush did worse in the official count than in the exit poll. In 3 of those states the difference was under 2%, so within an acceptable margin of error. The highest # of electoral votes in these 6 states was 8 (Kentucky). And these were all states where Bush won anyway.

For a statistician, there were red flags all over the place then. Error should be random - there should have been roughly the same number of states where Bush did better in the polls than in the official count as states where Bush did better in the official count than in the polls. 50/50, instead of the statistically improbable 6/44.
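
That 6-out-of-50 claim can be checked with a simple sign test - the sketch below assumes the commenter's split and, crucially, that state-level errors are independent and unbiased, which is exactly the assumption the diary above disputes:

from scipy.stats import binomtest

result = binomtest(k=6, n=50, p=0.5, alternative="two-sided")
print(result.pvalue)  # on the order of 1e-8: not plausibly chance IF errors are independent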

Also - the discrepancy doesn't need to be caused by intentional fraud. It is a fact that more Democratic votes than Republican votes end up as provisional ballots and undercounts/overcounts. So Democratic bias in the exit polls is the same thing as Democratic undercount (or Democratic "spoiled ballots", as Greg Palast calls them).
And Democratic undercount is documented; see Palast's work for starters.

by AskQuestions on Sat Nov 04, 2006 at 04:28:21 PM PDT
*************************************************************************************