Posts Tagged politics

Gerrymanderers may soon be sent packing for doing too much cracking

Wisconsin Governor Scott Walker and his cohort of Republicans might have gone too far in redrawing their state’s political boundaries to their advantage. Last November, a federal district court declared these maneuvers, called gerrymandering,* unconstitutional. However, as discussed in this Chicago Tribune article, the Supreme Court might overturn the ruling, since these gerrymanders are partisan rather than racially discriminatory.

One of the most infamous of all gerrymandered districts, 1992’s 12th Congressional District in North Carolina, is pictured here. It became known as the “I-85 district” because for long stretches it was no wider than the freeway connecting the desired populations of voters.

North Carolina’s 12th was a kind of in vitro offspring of an unromantic union: Father was the 1980s/1990s judicial and administrative decisions under the Voting Rights Act, and Mother was the partisan and personal politics that have traditionally been at redistricting’s core. The laboratory that made this birth possible was the computer technology that became available for the 1990s redistricting cycle. The progeny won no Beautiful Baby contests.

— North Carolina Redistricting Cases: the 1990s, posted at the Minnesota Legislature Web Site

You may wonder, as I did, how gerrymandering works. The latest issue of Nature explains it with its graphic on “packing and cracking” here. Also, see the figures on measuring compactness. Mathematicians approach this in various ways, e.g., comparing the area of the district with that of the smallest convex polygon that surrounds it (called the convex hull). Quantifying the fairness of boundaries creates a great deal of contention, with the measure chosen for the greatest advantage of whoever is wielding the figures.
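
For the curious, the convex-hull measure is easy to compute. Here is a minimal sketch in Python; it assumes the shapely package is available, and the district coordinates are invented purely for illustration:

```python
# A minimal sketch of the convex-hull compactness measure: the district's area
# divided by the area of the smallest convex polygon that encloses it.
# Requires the shapely package; the coordinates are invented for illustration.
from shapely.geometry import Polygon

# A C-shaped "district" that reaches around to grab voters on two sides
district = Polygon([(0, 0), (4, 0), (4, 1), (1, 1), (1, 3), (4, 3), (4, 4), (0, 4)])

compactness = district.area / district.convex_hull.area   # 1.0 = perfectly compact
print(f"Convex-hull compactness: {compactness:.2f}")       # prints 0.62 for this shape
```

A compact, convex district scores close to 1.0; the more a boundary snakes around to capture voters, the lower the ratio falls.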

Partisan gerrymandering, if not outlawed, will be catalyzed by the 2020 census. Keep an eye on this.

*A word coined in 1812 when Massachusetts’s Governor Gerry redrew a district north of Boston into the shape of a salamander.


Jittery gauges making people crazy on election night

Early last Tuesday evening I went to the New York Times elections website to check on the Presidential race.  It had Clinton favored, but not by much—just a bit over 50% at the time, with the needle wavering alarmingly (by my reckoning) toward the Trump side.  A few hours later I was shocked to see it at plus 70% for Trump.  By the time I retired for the night the Times had him at near 100%, which of course turned out to be the case, to the surprise of me and many others, even President-elect Trump himself, I suspect.

Being a chemical engineer, I like the jittery gauge display—it actually is less unsettling for me than a needle that is fixed (which usually happened only when a measuring instrument failed).  Even more important, from my perspective as an aficionado of statistics, is the way this dynamic graphic expressed uncertainty—becoming less jittery as the night went on and returns came in.  However, the fluctuating probabilities freaked out a lot of viewers, leading to this explanation by NYT as to Why we used jittery gauges.
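
Mechanically, a jittery gauge can be nothing fancier than random draws from the forecast’s current uncertainty band, so the needle settles down as the band narrows. The sketch below is only my illustration, not the Times’ actual code, and the numbers are hypothetical:

```python
import random

def needle_position(win_probability, uncertainty):
    """Return a jittered needle reading: the current point estimate plus
    random noise scaled to the forecast's remaining uncertainty."""
    jittered = win_probability + random.uniform(-uncertainty, uncertainty)
    return min(max(jittered, 0.0), 1.0)   # keep the needle on the dial

# Early evening: wide uncertainty, so the needle swings a lot (hypothetical values)
print([round(needle_position(0.55, 0.20), 2) for _ in range(5)])

# Late night: returns are in, uncertainty is small, needle nearly steady
print([round(needle_position(0.95, 0.03), 2) for _ in range(5)])
```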

For an unbiased, mainly positive review of the Times’ controversial graphical approach to reporting election results, see this Visualizing Data blog.

“Negativity expressed towards the jitter was a visceral reaction to the anguish caused by the increasing uncertainty of the outcome, heightened by the shocking twist in events during the night, [but] I found it an utterly compelling visual aid.”

— Andy Kirk, author of Visualizing Data

P.S. Here’s a new word that I picked up while researching this blog: “skeuomorphism”, meaning the design of graphics to resemble their real-world counterparts, for example, the Apple Watch’s clock-like interface.  Evidently battles have been raging for years in the tech world over using this approach versus flat, minimalist designs.  I had no idea!


Statistician mines poll results to come up with odds-on fav for President

CBS News this morning reported the prediction by New York Times statistician Nate Silver on who will be our next President.  OK, now that you know (presuming you could not resist following the link), how sure are you that it’s accurate?  After all, Silver is the author of The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t—published only a month or so ago.  My hunch is that Silver does as well as anyone—given so many unknowns that cannot be known, not the least of which is the fickle nature of undecided voters who might switch allegiance en masse on election day.  Anyway, I view his prediction the same way I view a weather forecast two days out, that is, with a good deal of skepticism but, nevertheless, appreciation for the science behind the modeling.*

P.S.  A friend asked me this week whether averaging polls is really valid.  I suppose so, given that Silver does it.  See how he does it in this detailed post on his “538” blog (538 is the number of electors in the United States Electoral College).
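
I won’t pretend to reproduce Silver’s model, but the basic idea of a poll average can be sketched in a few lines, here weighting each poll by its sample size. The polls below are hypothetical and the weighting scheme is only a toy illustration, not Silver’s actual method:

```python
# A back-of-the-envelope poll average, weighting each poll by its sample size.
# Toy illustration only -- not Silver's weighting scheme; the polls are hypothetical.
polls = [
    {"candidate_share": 0.51, "sample_size": 800},
    {"candidate_share": 0.48, "sample_size": 1200},
    {"candidate_share": 0.52, "sample_size": 600},
]

total_n = sum(p["sample_size"] for p in polls)
weighted_average = sum(p["candidate_share"] * p["sample_size"] for p in polls) / total_n
print(f"Sample-size-weighted average: {weighted_average:.3f}")
```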

*For example, within 72 hours of a hurricane’s landfall, meteorologists now predict the bulls-eye within a 100-mile radius—compared to 350 miles 25 years ago.  They did really well forecasting Sandy as reported here by The Washington Post.


Probability of vote being pivotal is so small it’s not worth voting

That was the view of 2nd-year PhD student Douglas VanDerwerken up until this Presidential election.  He had abstained on the grounds that spending the time to vote offers no return on investment when a single vote really cannot make a difference.  VanDerwerken lays it all out for the statistics magazine Significance in an article for their current (October) issue.*  According to his reckoning, there is less than one chance in a million (4.5×10^-7, to be precise) of any person’s vote having an impact.  This would require the voter to live in a swing state and the election to come down to a dead heat.
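
To get a feel for why such probabilities are so tiny, consider the textbook calculation (not VanDerwerken’s actual model): your vote matters only if everyone else splits exactly evenly, and with millions of voters that binomial probability is minuscule. The sketch below assumes every other voter flips a fair coin, which is actually far more generous than the article’s 4.5×10^-7, since it ignores how unlikely a dead heat is in the first place; the turnout figure is hypothetical.

```python
# Toy calculation of the chance that the other voters split exactly evenly,
# computed in log space to avoid overflow. Not VanDerwerken's model.
from math import exp, lgamma, log

def prob_exact_tie(n_other_voters, p=0.5):
    """Probability that n other voters split exactly half and half."""
    k = n_other_voters // 2
    log_prob = (lgamma(n_other_voters + 1) - lgamma(k + 1) - lgamma(n_other_voters - k + 1)
                + k * log(p) + (n_other_voters - k) * log(1 - p))
    return exp(log_prob)

# Hypothetical swing state with 3 million other voters, each a 50/50 coin flip:
print(prob_exact_tie(3_000_000))   # roughly 4.6e-04, and this is the generous version
```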

Fortunately (in my opinion—being one who views it as a civic duty) VanDerwerken had an epiphany based on moral reasons, so he shall vote.  Thank goodness!

“If you think about it, voting in a large national election – such as the US Presidential election – is a supremely irrational act, because the probability that your vote will make a difference in the outcome is infinitesimally small.”

– Satoshi Kanazawa, rational choice theorist**

* “The next President: Will your vote decide it”

**See Kanazawa’s three-part series on “Why Do People Vote” for his blog “The Scientific Fundamentalist” hosted by Psychology Today. Start with Part 1 posted here and continue on to the end for the answer.


USA unemployment statistic creates a sensation

“Unbelievable jobs numbers..these Chicago guys will do anything..can’t debate so change numbers.”

– Jack Welch

Thursday morning I attended a briefing on the economy by an expert from Wells Fargo bank.  Looking over the trends in USA unemployment rates, he noted that no incumbent since World War II has achieved re-election when joblessness exceeded 8 percent.  Friday the Bureau of Labor Statistics (BLS) announced that the national unemployment rate is now 7.8%, an improvement from 8.1% last month.  How accurate is this number, and is it precise enough that a 0.3-point difference can be considered significant?  I agree with the conclusion of this critique posted by the Brookings Institution that “a large part of monthly unemployment fluctuations are spurious.”  So all this fuss about it being 8.1 versus 7.8 percent is really silly from a statistical point of view.  However, it is entertaining!
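
For a sense of scale, here is a back-of-envelope calculation, not the BLS’s actual methodology; the sample size is just a hypothetical stand-in for the Current Population Survey, and the simple-random-sampling assumption is rosier than the real survey design:

```python
# Back-of-envelope check on whether a 0.3-point move is outside sampling noise.
# Treats the survey as a simple random sample (it is not); the sample size is a
# hypothetical stand-in for the CPS, so this only gives a sense of scale.
from math import sqrt

p = 0.078        # reported unemployment rate
n = 100_000      # hypothetical number of labor-force respondents

se = sqrt(p * (1 - p) / n)             # standard error of the estimated rate
moe_rate = 1.645 * se                  # 90% margin of error on the rate itself
moe_change = 1.645 * sqrt(2) * se      # the monthly change combines two noisy estimates

print(f"90% MOE on the rate:           +/- {moe_rate * 100:.2f} percentage points")
print(f"90% MOE on the monthly change: +/- {moe_change * 100:.2f} percentage points")
```

Even this generous calculation puts the noise band on the monthly change at roughly two-thirds of the reported 0.3-point drop; the survey’s clustered design and non-sampling errors only widen it further.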

“Randomistas” building steam for government to do better by designed experiments

“Businesses conduct hundreds of thousands of randomized trials each year. Pharmaceutical companies conduct thousands more. But government? Hardly any.”

–David Brooks, The New York Times, 4/26/12 editorial seen here

For those of us in the know about statistical tools, this statement provides light at the end of a long tunnel.  However, this columnist gets a bit carried away with his idea of an FDA-like agency injecting controlled experiments throughout government.

Although it’s great to see such enthusiasm for proactive studies based on sound statistical principles, I prefer the lower-profile approaches documented by Boston Globe Op-Ed writer Gareth Cook in this May 2011 column.  He cites a number of examples where rigorous experiments solved social problems, albeit by baby steps.  Included in his cases are “aggressively particular” successes by a group of MIT economists who are known as the “randomistas”—a play on their application of randomized controlled trials.

Evidently the obvious success of Google (12,000 randomized experiments in 2009, according to Brooks) and others reaching out over the internet has caught the attention of the mass media.  Provided they don’t promote randomistas running wild, some good will come of this, I feel sure.
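
For readers who have never run one, the core of a randomized trial is disarmingly simple: flip a coin for each unit, apply the program to the heads, and compare outcomes. Here is a toy sketch with synthetic data and an invented effect size, purely to show the mechanics:

```python
# Toy sketch of a randomized controlled trial: randomly assign units to a
# program, then compare mean outcomes. The data are synthetic and the
# "program effect" is invented purely for illustration.
import random
from statistics import mean

random.seed(1)
n = 1_000
treated = [random.random() < 0.5 for _ in range(n)]                   # coin-flip assignment
outcomes = [random.gauss(10 + (2 if t else 0), 5) for t in treated]   # invented +2 effect

treated_mean = mean(y for y, t in zip(outcomes, treated) if t)
control_mean = mean(y for y, t in zip(outcomes, treated) if not t)
print(f"Estimated program effect: {treated_mean - control_mean:.2f}")
```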


Election day pits pollsters as well as politicians

Sunday’s St. Paul Pioneer Press reported* an astounding range of predictions for today’s election results for Governor of Minnesota.  The Humphrey Institute showed Democrat Dayton leading Republican Emmer by 41 to 29 percent, whereas Survey USA (SUSA) respondents favored Dayton by only 1 percent – 39 to 38!  The SUSA survey included cell-phone-only (CPO) voters for the first time – one of many factors distinguishing it from its competitor’s poll of the gubernatorial race.

What I always look for along with such predictions is the margin of error (MOE).  The Humphrey Institute pollsters provide these essential statistical details: “751 likely voters living in Minnesota were interviewed by telephone. The margin of error ranges between +/-3.6 percentage points based on the conventional calculation and +/-5.5 percentage points, which is a more cautious estimate that takes into account design effects, in accordance with professional best practices.”**  Note that the more conservative MOE (5.5%) still left Dayton with a statistically significant lead, but just barely: his 12-point margin only slightly exceeds the 11 points spanned by two MOEs (5.5% x 2 = 11%).
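
The “conventional calculation” is easy to reproduce: for 751 respondents, the worst-case margin of error at 95% confidence works out to the quoted 3.6 points, and the “more cautious” 5.5 points amounts to inflating that by a design effect. In the sketch below the design effect is simply back-calculated from the two published figures, not something the pollsters reported as such:

```python
# Reproducing the quoted margins of error for the Humphrey Institute poll.
# The conventional MOE assumes a simple random sample at the worst case p = 0.5;
# the "cautious" figure inflates it by a design effect (the factor below is
# back-calculated from the two published numbers, 5.5/3.6).
from math import sqrt

n = 751
conventional_moe = 1.96 * sqrt(0.5 * 0.5 / n)           # ~0.036, i.e. +/- 3.6 points
design_effect = (0.055 / 0.036) ** 2                    # implied inflation of the variance
cautious_moe = conventional_moe * sqrt(design_effect)   # ~0.055, i.e. +/- 5.5 points

print(f"Conventional MOE: +/- {conventional_moe * 100:.1f} points")
print(f"Cautious MOE:     +/- {cautious_moe * 100:.1f} points")
```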

Survey USA, on the other hand, states its MOE as +/- 4%.  It provides a very helpful statistical breakdown by CPO versus landline, gender, age, race, etc. at this web posting, and even includes a ‘cross-tab’ on the Tea Party Movement – a wild card in this year’s election.

By tomorrow we will see which polls get things right.  Also watching results with keen interest will be the consultants who advise politicians on how to bias voters their way.  Sunday’s New York Times offered a somewhat cynical report on how these wonks “Nudge the Vote”.  For example, political consultant Hal Malchow developed a mailer that listed each recipient’s voting history (whether they bothered to do so, or not), along with their neighborhood (as a whole, I presume).  Evidently this created a potent peer pressure that proved to be 10 times more effective in turning non-voters into voters!  However, these non-intuitive approaches stem from randomized experiments, which require a control group who get no contacts (Could I volunteer to be in this group?).  This creates a conundrum for political activists – they must forego trying to influence these potential voters as the price paid for unbiased results!

“It’s the pollsters that decide. Well, a poll can be skewered [sic #]. I can go out and get you a poll on anything you want and probably get the results that I want just in how I conduct it.”

— Jesse Ventura, professional wrestler (“The Body”) and former governor of Minnesota

# Evidently a Freudian slip – he having been skewered on occasion by biased polls. 😉

* “Poll parsing” column by David Brauer, page 15B.

** From this posting by Minnesota Public Radio


Minnesota’s ’08 Senate race dissed by British math master Charles Seife

Sunday’s New York Times provided this review of Proofiness – The Dark Arts of Mathematical Deception – due for publication later this week.  The cover, seen here on Amazon, depicts a stats wizard conjuring numbers out of thin air.

What caught my eye in the critique by Steven Strogatz, an applied mathematics professor at Cornell, was the deception caused by “disestimation” (as Proofiness author Seife terms it) of the results of Minnesota’s 2008 Senate race, which Al Franken won by a razor-thin 0.0077 percent margin (225 votes out of some 2.9 million counted) over Norm Coleman.  Disestimation is the act of taking a number too literally, understating or ignoring the uncertainties that surround it; in other words, giving too much weight to a measurement relative to its inherent error.

“A nice anecdote I like to talk about is a guide at the American Museum of Natural History, who’s pointing at the Tyrannosaurus rex.  Someone asks, how old is it, and he says it’s 65 million and 38 years old.  Sixty-five million and 38 years old, how do you know that?   The guide says, well, when I started at this museum 38 years ago, a scientist told me it was 65 million years old. Therefore, now it’s 65 million and 38.  That’s an act of disestimation.  The 65 million was a very rough number, and he turned it into a precise number by thinking that the 38 has relevance when in fact the error involved in measuring the dinosaur was plus or minus 100,000 years.  The 38 years is nothing.”

– Charles Seife (Source: This transcript of an interview by NPR.)

We Minnesotans would have saved a great deal of money if our election officials had simply tossed a coin to determine the outcome of the Franken-Coleman contest.  Unfortunately, disestimation is embedded in our election laws, which are bound and determined to make every single vote count, even though many thousands of ballots in a statewide race prove very difficult to decipher.
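
To see why, here is a toy simulation of counting noise. Suppose, hypothetically, that 10,000 ballots are ambiguous enough that a counting panel effectively assigns each one at random; two independent counts of the very same election would then disagree by an amount comparable to Franken’s entire margin. The ambiguous-ballot count is an invented assumption, not a measured figure from the recount:

```python
# Toy illustration of "disestimation": treating a 225-vote lead as exact when
# the count itself is noisy. The 10,000 ambiguous ballots are a hypothetical
# assumption, not a measured figure from Minnesota's 2008 recount.
import numpy as np

rng = np.random.default_rng(0)
ambiguous_ballots = 10_000    # hypothetical number of hard-to-decipher ballots
trials = 10_000

# Net margin contributed by the ambiguous ballots in two independent counts,
# assuming each such ballot is effectively assigned at random each time.
count_a = 2 * rng.binomial(ambiguous_ballots, 0.5, size=trials) - ambiguous_ballots
count_b = 2 * rng.binomial(ambiguous_ballots, 0.5, size=trials) - ambiguous_ballots
disagreement = count_a - count_b

print(f"Typical disagreement between two counts: +/- {disagreement.std():.0f} votes")
print("Reported margin: 225 votes")
```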


A breadth of fresh error

This weekend’s Wall Street Journal features a review by Stats.org editor Trevor Butterworth of a new book titled Wrong: Why Experts Keep Failing Us – And How to Know When Not to Trust Them.  The book undermines confidence in scientists, as well as financial wizards, doctors, and all others who feel they are almost always right and thus never in doubt.  In fact, it turns out that these experts may be wrong nearly as often as they are right in their assertions.  Butterworth prescribes as a remedy the tools of uncertainty that applied statisticians employ to good effect.

Unfortunately the people funding consultants and researchers do not want to hear any equivocation in stated results.  However, it’s vital that experts convey the possible variability in their findings if we are to gain a true picture of what may, indeed, transpire.

“Error is to be expected and not something to be scorned or obscured.”

— Trevor Butterworth



Over-reacting to month-to-month economic statistics

In his column this weekend, the Numbers Guy at the Wall Street Journal, Carl Bialik, notes* how uncertain the monthly statistics for unemployment and the like can be.  For example, the Census Bureau reported that sales of new single-family homes fell to a record low last month.  However, if anyone (other than Bialik) read the fine print, they’d see that the upper end of the 90 percent confidence interval indicates an increase in sales!

“Most of the month-to-month changes are not only nonsignificant in a statistical way, but they are often straddling zero, so you can’t even infer the direction of the change has been accurately represented.”

– Patrick O’Keefe, economic researcher

The uncertainty stems from the use of sampling as a cost-saving measure for government agencies and, ultimately, us taxpayers.  For example, field representatives covering 19,000 geographical units throughout the U.S. sample only 1 out of 50 houses to see whether they’ve been sold.
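
As a toy illustration of a change whose confidence interval straddles zero, here are the numbers worked through in Python; the figures below are hypothetical, not the Census Bureau’s:

```python
# Toy illustration of a month-to-month change whose 90% confidence interval
# straddles zero. The figures are hypothetical, not the Census Bureau's.
from math import sqrt

reported_change = -0.016     # e.g., sales "fell" 1.6% from last month
standard_error = 0.012       # sampling error of the estimated change

z_90 = 1.645
low = reported_change - z_90 * standard_error
high = reported_change + z_90 * standard_error
print(f"90% CI for the change: {low:+.1%} to {high:+.1%}")
# A headline says sales fell, yet the interval runs from roughly -3.6% to +0.4%:
# even the direction of the change cannot be inferred with any confidence.
```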

The trouble with all this uncertainty in statistics is that it ruins all the drama of simply reporting the point estimate. ; )

*(See “It Is 90% Certain That Unemployment Rose. Or Fell.” and a related blog on “What We Don’t Know About the Economy”)
