Archive for category politics

Probability of vote being pivotal is so small it’s not worth voting

That was the view of second-year PhD student Douglas VanDerwerken up until this Presidential election. He abstained on the grounds that the time spent voting offers no return on investment when a single vote almost certainly cannot change the outcome. VanDerwerken lays it all out for the statistics magazine Significance in an article in their current (October) issue.* By his reckoning, there is less than one chance in a million (4.5×10^-7, to be precise) of any person’s vote having an impact: that would require a voter living in a swing State whose election comes down to a dead heat.
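
VanDerwerken’s actual model isn’t reproduced here, but the flavor of such a calculation is easy to sketch. Below is a minimal version assuming the simplest possible setup: your vote is pivotal only if the other voters in your State split exactly evenly. (The turnout figure and the helper function are illustrative assumptions, not his published method.)

```python
from scipy.stats import binom

def pivotal_probability(n_voters: int, p: float = 0.5) -> float:
    """Chance the other n_voters split exactly evenly, so one ballot
    breaks the tie; each voter independently picks candidate A with
    probability p (a toy binomial model)."""
    if n_voters % 2:      # a tie requires an even number of other voters
        n_voters -= 1
    return binom.pmf(n_voters // 2, n_voters, p)

# With ~2.9 million other voters in a mid-sized swing State:
print(pivotal_probability(2_900_000, 0.50))   # ~4.7e-4 at a perfect toss-up
print(pivotal_probability(2_900_000, 0.51))   # ~0 once the race tilts at all
```

Averaging over the uncertainty in the split and over the States is presumably what drives the headline number down to the order of 10^-7.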

Fortunately (in my opinion, as one who views voting as a civic duty) VanDerwerken had an epiphany on moral grounds, so he will vote after all. Thank goodness!

“If you think about it, voting in a large national election – such as the US Presidential election – is a supremely irrational act, because the probability that your vote will make a difference in the outcome is infinitesimally small.”

– Satoshi Kanazawa, rational choice theorist**

* “The next President: Will your vote decide it?”

**See Kanazawa’s three-part series “Why Do People Vote?” on his blog “The Scientific Fundamentalist”, hosted by Psychology Today. Start with Part 1, posted here, and continue to the end for the answer.


USA unemployment statistic creates a sensation

“Unbelievable jobs numbers..these Chicago guys will do anything..can’t debate so change numbers.”

– Jack Welch

Thursday morning I attended a briefing on the economy by an expert from Wells Fargo bank. Looking over the trends in USA unemployment rates, he noted that no incumbent since World War II has achieved re-election when joblessness exceeded 8 percent. Friday the Bureau of Labor Statistics (BLS) announced that the national unemployment rate is now 7.8%, an improvement from 8.1% last month. How accurate is this number, and is it precise enough that a 0.3-point difference can be considered significant? I agree with the conclusion of this critique posted by the Brookings Institution that “a large part of monthly unemployment fluctuations are spurious.” So all the fuss over 8.1 versus 7.8 percent is really quite silly from a statistical point of view. However, it is entertaining!
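
The arithmetic behind that skepticism is worth a quick sketch. If each monthly rate carries its own margin of error, the month-to-month change is noisier still. (The ±0.2-point MOE below is an assumed illustrative figure, not the official BLS value, and treating consecutive months as independent ignores the overlap in the survey’s rotating panel.)

```python
import math

# Is a 0.3-point drop (8.1% -> 7.8%) distinguishable from sampling noise?
moe_month = 0.2                          # assumed 90% MOE of one monthly rate
moe_change = math.sqrt(2) * moe_month    # MOE of the month-to-month change

change = 8.1 - 7.8
print(f"observed change: {change:.1f} points")
print(f"MOE of the change: +/-{moe_change:.2f} points")
print("outside the noise?", abs(change) > moe_change)   # just barely
```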


The next bubble that’s bound to burst: college tuition

I am glad to have seen the youngest of my 5 children through college; she is now self-sustaining in Ohio State University’s biochemistry PhD program. Even taxpayer-subsidized State-school students like her can easily pile up tens of thousands of dollars in debt to cover ever-growing tuition. Those attending private institutions are likely to finish their studies owing far more.

One year ago my high-school classmate Mark Perry, now a professor of economics and finance in the School of Management at the University of Michigan’s Flint campus, warned of a Higher Education Bubble. Under the bombshell blurb “That’s a jump of 1,120%” [from the cost of college in 1978], the latest (August 27) issue of Bloomberg Businessweek extends the Bureau of Labor Statistics data in Perry’s scary chart to 2012. Given what happened in housing, this is becoming extremely alarming!
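
For scale, “a jump of 1,120%” means tuition reached roughly 12.2 times its 1978 level. Spread over the 34 years to 2012, that works out to about 7.6 percent compounded annually, as this quick check confirms:

```python
# Compound annual growth rate implied by a 1,120% jump from 1978 to 2012
years = 2012 - 1978
cagr = (1 + 11.20) ** (1 / years) - 1    # +1,120% => 12.2x the 1978 level
print(f"{cagr:.1%} per year")            # ~7.6%
```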

Students are paying less and less of direct college costs, relying more on government grants and loans. That has encouraged universities to jack up tuition expenses, fueling a vicious circle reminiscent of the housing bubble.

– David Hogberg, Investor’s Business Daily

A more graphic illustration is provided by this glimpse at a Broadside by Glenn Reynolds. View his video and weep if you have children heading for college. It is hard to imagine that graduates paying hundreds of dollars per month to service their debt can go on much longer. Such meager educational returns on massive investments (financed by huge debts) just do not seem sustainable.


“Randomistas” building steam for government to do better by designed experiments

“Businesses conduct hundreds of thousands of randomized trials each year. Pharmaceutical companies conduct thousands more. But government? Hardly any.”

– David Brooks, The New York Times, 4/26/12 editorial, seen here

For those of us in the know about statistical tools, this statement provides light at the end of a long tunnel. However, this columnist gets a bit carried away by the idea of an FDA-like agency injecting controlled experiments throughout government.

Although it’s great to see such enthusiasm for proactive studies based on sound statistical principles, I prefer the lower-profile approaches documented by Boston Globe Op-Ed writer Gareth Cook in this May 2011 column.  He cites a number of examples where rigorous experiments solved social problems, albeit by baby steps.  Included in his cases are “aggressively particular” successes by a group of MIT economists who are known as the “randomistas”—a play on their application of randomized controlled trials.

Evidently the success of Google (12,000 randomized experiments in 2009, according to Brooks) and of others reaching out over the internet has caught the attention of the mass media. Provided the enthusiasm doesn’t send randomistas running wild, some good will come of this, I feel sure.
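
For anyone who hasn’t seen one up close, the web-scale trials Brooks cites are at bottom simple two-group comparisons. Here is a minimal sketch of how one such experiment might be analyzed (the conversion counts are invented, and this standard two-proportion z-test is one common choice, not necessarily what Google uses):

```python
import math
from scipy.stats import norm

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: lift of B over A and a two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_b - p_a, 2 * (1 - norm.cdf(abs(z)))

lift, p_value = ab_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"lift: {lift:.2%}, p-value: {p_value:.3f}")   # ~0.60% lift, p ~ 0.05
```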


Supreme Court overturns tyranny of statistical significance

In today’s Wall Street Journal, The Numbers Guy (Carl Bialik) reports on a unanimous Supreme Court ruling that companies cannot hide behind statistical significance (or the lack thereof, in this case) as an excuse for not disclosing adverse research results. He passes along this practical advice:

“A bigger effect produced in a study with a big margin of error is more impressive than a smaller effect that was measured more precisely.”

— Stephen Ziliak, economics professor
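
Ziliak’s point is subtle enough to deserve a worked illustration (the effect sizes below are invented): statistical significance measures precision, not importance.

```python
# Invented numbers: a big, noisy effect vs. a tiny, precise one.
studies = {
    "big effect, noisy":    {"effect": 0.30, "moe": 0.35},
    "tiny effect, precise": {"effect": 0.02, "moe": 0.01},
}
for name, s in studies.items():
    lo, hi = s["effect"] - s["moe"], s["effect"] + s["moe"]
    print(f"{name}: 95% CI [{lo:+.2f}, {hi:+.2f}], "
          f"statistically significant: {lo > 0}")
# The noisy study "fails" the significance test yet points to a far
# larger (and possibly more material) effect than the precise one.
```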

However, this legal analysis of the ruling cautions that statistical significance remains relevant for assessing the materiality of an adverse event.

Given all this, we can be certain of only one thing – more lawsuits.


Election day pits pollsters as well as politicians

Sunday’s St. Paul Pioneer Press reported* an astounding range of predictions for today’s election for Governor of Minnesota. The Humphrey Institute poll showed Democrat Dayton leading Republican Emmer by 41 to 29 percent, whereas Survey USA (SUSA) respondents favored Dayton by only 1 point: 39 to 38! The SUSA survey included cell-phone-only (CPO) voters for the first time, one of many methodological differences from its competitor in predicting the gubernatorial race.

What I always look for alongside such predictions is the margin of error (MOE). The Humphrey Institute pollsters provide these essential statistical details: “751 likely voters living in Minnesota were interviewed by telephone. The margin of error ranges between +/-3.6 percentage points based on the conventional calculation and +/-5.5 percentage points, which is a more cautious estimate that takes into account design effects, in accordance with professional best practices.”** Note that even the more conservative MOE (5.5%) still left Dayton with a statistically significant lead, but just barely: his 12-point lead exceeds the 11 points (2 × 5.5%) at which the two candidates’ intervals would begin to overlap.
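
The “conventional calculation” they mention is easy to reproduce: for a sample of n respondents, the worst-case (p = 0.5) 95 percent margin of error is 1.96 × sqrt(p(1-p)/n). The quick check below also backs out the design effect implied by the more cautious figure:

```python
import math

def moe(n, p=0.5, z=1.96):
    """Conventional 95% margin of error for an estimated proportion."""
    return z * math.sqrt(p * (1 - p) / n)

conventional = moe(751)                       # Humphrey Institute sample size
print(f"conventional MOE: +/-{conventional:.1%}")          # ~3.6%, as quoted
print(f"implied design effect: {(0.055 / conventional) ** 2:.1f}")  # ~2.4
```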

Survey USA, on the other hand, states their MOE as +/- 4%.  They provide a very helpful statistical breakdown by CPO versus landline, gender, age, race, etc. at this web posting.  They even include a ‘cross-tab’ on Tea Party Movement – a wild card in this year’s election.

By tomorrow we will see which polls got things right. Also watching the results with keen interest will be the consultants who advise politicians on how to bias voters their way. Sunday’s New York Times offered a somewhat cynical report on how these wonks “Nudge the Vote”. For example, political consultant Hal Malchow developed a mailer that listed each recipient’s voting history (whether they bothered to vote, or not), along with that of their neighborhood (as a whole, I presume). Evidently this created a potent peer pressure that proved to be 10 times more effective at turning non-voters into voters! However, these non-intuitive approaches stem from randomized experiments, which require a control group that receives no contact (could I volunteer for this group?). That creates a conundrum for political activists: they must forgo trying to influence these potential voters as the price paid for unbiased results!

“It’s the pollsters that decide. Well, a poll can be skewered [sic #]. I can go out and get you a poll on anything you want and probably get the results that I want just in how I conduct it.”

— Jesse Ventura, professional wrestler (“The Body”) and former governor of Minnesota

# Evidently a Freudian slip – him being skewered on occasion by biased polls. 😉

* “Poll parsing” column by David Brauer, page 15B.

** From this posting by Minnesota Public Radio


Minnesota’s ’08 Senate race dissed by math master Charles Seife

Sunday’s New York Times provided this review of Proofiness: The Dark Arts of Mathematical Deception, due for publication later this week. The cover, seen here on Amazon, depicts a stats wizard conjuring numbers out of thin air.

What caught my eye in the critique by Steven Strogatz, an applied mathematics professor at Cornell, was the deception caused by “disestimation” (as Proofiness author Seife terms it) in the results of Minnesota’s 2008 Senate race, which Al Franken won by a razor-thin 0.0077 percent margin (225 votes out of some 2.9 million counted) over Norm Coleman. Disestimation is the act of taking a number too literally, understating or ignoring the uncertainties that surround it; in other words, giving too much weight to a measurement relative to its inherent error.

“A nice anecdote I like to talk about is a guide at the American Museum of Natural History, who’s pointing at the Tyrannosaurus rex.  Someone asks, how old is it, and he says it’s 65 million and 38 years old.  Sixty-five million and 38 years old, how do you know that?   The guide says, well, when I started at this museum 38 years ago, a scientist told me it was 65 million years old. Therefore, now it’s 65 million and 38.  That’s an act of disestimation.  The 65 million was a very rough number, and he turned it into a precise number by thinking that the 38 has relevance when in fact the error involved in measuring the dinosaur was plus or minus 100,000 years.  The 38 years is nothing.”

– Charles Seife (Source: this transcript of an interview by NPR.)

We Minnesotans would have saved a great deal of money if our election officials had simply tossed a coin to determine the outcome of the Franken-Coleman contest. Unfortunately, disestimation is embedded in our election laws, which are bound and determined to make every single vote count, even though many thousands of ballots in a statewide race prove very difficult to decipher.
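
The coin-toss quip can be made semi-quantitative. Suppose (an invented figure) that 1 percent of ballots are ambiguous enough to be misrecorded, each shifting the net margin by roughly one vote in a random direction; the resulting counting noise swamps a 225-vote margin:

```python
import math

ballots = 2_900_000
error_rate = 0.01          # assumed share of ballots misread (illustrative)
# The net counting error is then approximately normal with standard deviation:
sd_error = math.sqrt(ballots * error_rate)
print(f"sd of net counting error: ~{sd_error:.0f} votes")   # ~170 votes

margin = 225
print(f"margin in sd units: {margin / sd_error:.1f}")  # ~1.3: within the noise
```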


Ink made to last and fonts that minimize its consumption

Over the past few weeks, I’ve come across a number of interesting inkles about ink.

  1. A team of U.S.-British researchers announced earlier this month that they had deciphered previously illegible scrawling by African explorer David Livingstone, which he made 140 years ago under desperate circumstances using the juice of local berries.  See the image enhancement in this article by New Scientist Tech.  Given the depressing content of Livingstone’s laments, it may be just as well that he used such ephemeral ink.
  2. The Dead Sea Scrolls, now on exhibit at the Minnesota Science Museum (see this picture, for example), were written with an extremely durable black ink (well over 2000 years old!) composed of lamp black (soot), gum arabic and flaxseed oil.  According to this Numerica entry on the chemistry of ink, a red version was made by substituting cinnabar (mercury sulfide, HgS).  That must have been the ink used by the editor overseeing publication of the Scrolls. ;)
  3. Printer.com suggests that we all save ink by favoring certain fonts over others.  For example, Century Gothic* uses 30 percent less ink than Arial.  As a general rule, serif fonts do better than sans serif ones.  An Associated Press article by Dinesh Ramde (4/7/10) reported that a school of 6,500 students, such as the University of Wisconsin-Green Bay, can save up to $10,000 per year by switching to an ink-stingy font (the arithmetic behind that figure is sketched below).  To really make a statement about supporting the Earth, UW-GB ought to go with the “holey” ecofont.  Rather than adopting something so ugly, though, perhaps the best course for everyone concerned about going green would be to prohibit printing altogether and hand-write only what absolutely must go on paper (or papyrus).
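
The AP’s $10,000 figure is simple arithmetic once you assume a per-student ink budget. A sketch with an invented cost of $5 of printer ink per student per year lands in the same ballpark:

```python
# Invented per-student ink cost; the 30% saving is Printer.com's figure.
students = 6_500
ink_cost_per_student = 5.00      # assumed annual ink spend per student, USD
savings = students * ink_cost_per_student * 0.30
print(f"~${savings:,.0f} per year")   # ~$9,750, near the quoted $10,000
```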


A breadth of fresh error

This weekend’s Wall Street Journal features a review by Stats.org editor Trevor Butterworth of a new book titled Wrong: Why Experts Keep Failing Us – And How to Know When Not to Trust Them.  The book takes aim at scientists, as well as financial wizards, doctors and all the other experts who feel they are almost always right and thus never in doubt.  In fact, it turns out that such experts may be wrong nearly as often as they are right in their assertions.  Butterworth prescribes as a remedy the tools of uncertainty that applied statisticians employ to good effect.

Unfortunately the people funding consultants and researchers do not want to hear any equivocation in stated results.  However, it’s vital that experts convey the possible variability in their findings if we are to gain a true picture of what may, indeed, transpire.

“Error is to be expected and not something to be scorned or obscured.”

— Trevor Butterworth



Over-reacting to month-to-month economic statistics

In his column this weekend, the Numbers Guy at the Wall Street Journal, Carl Bialik, notes* how uncertain monthly statistics for unemployment and the like can be.  For example, the Census Bureau reported that sales of new single-family homes fell to a record low last month.  However, if anyone (other than Bialik) read the fine print, they’d see that the upper end of the 90 percent confidence interval allows for an increase in sales!

“Most of the month-to-month changes are not only nonsignificant in a statistical way, but they are often straddling zero, so you can’t even infer the direction of the change has been accurately represented.”

– Patrick O’Keefe, economic researcher

The uncertainty stems from the use of sampling as a cost-saving measure for government agencies and, ultimately, us taxpayers.  For example, the field representatives covering 19,000 geographical units throughout the U.S. sample only 1 out of every 50 houses to see whether it has been sold.
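
A sketch (with invented counts) shows why a 1-in-50 sample leaves so much noise in a monthly change. Treating the sampled sales as a Poisson count, each monthly estimate and its standard error scale up by the factor of 50:

```python
import math

sample_rate = 1 / 50
sold_now, sold_prev = 540, 560     # houses found sold in the sample (assumed)

est_now, est_prev = sold_now / sample_rate, sold_prev / sample_rate
se_now = math.sqrt(sold_now) / sample_rate      # Poisson-style standard error
se_prev = math.sqrt(sold_prev) / sample_rate

change = est_now - est_prev
se_change = math.sqrt(se_now**2 + se_prev**2)
print(f"estimated change: {change:+,.0f} +/- {1.645 * se_change:,.0f} (90% CI)")
# e.g. -1,000 +/- ~2,700: the interval straddles zero, so even the
# direction of the change is in doubt -- exactly O'Keefe's point.
```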

The trouble with all this uncertainty in statistics is that it ruins all the drama of simply reporting the point estimate. ;)

*(See “It Is 90% Certain That Unemployment Rose. Or Fell.” and a related blog post, “What We Don’t Know About the Economy”.)
