Archive for August, 2014

The most useless sports stat I’ve seen yet

When Minnesota Twin Trevor Plouffe came up to bat last night at Target Field, they flashed this totally irrelevant stat on the scoreboard: “Through July Plouffe is the only Major Leaguer to have at least 35 at bats vs 1 team.”  I wondered how anyone could come up with such obscure information.  This XKCD cartoon explains it.


Shocking research—young men prefer a jolt of electricity over doing nothing

Two-thirds of University of Virginia male students preferred a shock to doing nothing, whereas only one-quarter of the women did.  This finding by psychologist Tim Wilson, which I read about in the Wall Street Journal,* does not surprise me in the least: young fellows always seek excitement, even when it brings immediate pain or threatens life and limb.  The more micromorts (each equal to a one-in-a-million chance of death), the better, at least so far as men are concerned.

According to the WSJ’s 7/18/14 article “Risk Is Never a Numbers Game,” micromorts (MM) were devised in the 1970s by Stanford’s Ronald A. Howard to quantify the chance of death from any particular activity.  Each day, on average, the typical American faces a 1.3 MM probability of a sudden end from external causes, that is, not a natural demise.  The authors, Michael Blastland and David Spiegelhalter, bring up all sorts of morbid statistics.  What interested me was not the murders and other deadly events brought on through little or no fault of the individual, but rather the discretionary doings such as horseback riding (~1 MM) and mountain climbing (12,000 MMs!).  If you like heights but the latter sport exceeds your tolerance for risk, consider parachuting at a far safer 7 MMs, or be really conservative and simply go on a roller coaster at 0.0015 MMs.
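To put those micromort figures side by side, here is a quick back-of-the-envelope calculation in Python.  The activity values are simply the ones quoted above from the WSJ piece, not an authoritative table.

```python
# Back-of-the-envelope micromort comparison using only the figures quoted above.
# One micromort (MM) = a one-in-a-million chance of death.

MICROMORT = 1e-6  # probability of death represented by one micromort

activities = {
    "average day, external causes": 1.3,
    "horseback riding": 1.0,
    "parachute jump": 7.0,
    "roller coaster ride": 0.0015,
    "mountain climbing": 12000.0,
}

baseline = activities["average day, external causes"]

for name, mm in sorted(activities.items(), key=lambda kv: kv[1]):
    prob = mm * MICROMORT
    print(f"{name:30s} {mm:>10.4g} MM  "
          f"(p = {prob:.2e}, ~{mm / baseline:.3g}x an average day)")
```

By this crude reckoning, a single mountain climb carries roughly the same external-cause risk as 25 years of ordinary days, while a roller coaster ride barely registers.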

Whenever I see statistics like this, I wonder if one shouldn’t just strap on a helmet, grab a mattress, blanket and pillow, go down into the basement with the supplies left over from the millennium Armageddon and curl into the fetal position over in the southwest corner where tornadoes do the least damage.  That being very boring, I’d first set up a battery with leads for giving myself a shock now and then.


How to weigh risks of medical tests and treatments

After suffering a heart attack in 2004, I was directed by my first cardiologist to go in annually for nuclear imaging.  Test after test showed near-normal function despite the noticeable but relatively minor muscle damage.  My last scan, in 2009, came back with a twist, though: the heart looked good, but a white spot that looked like cancer showed up in my chest.  That sent me in for a CT scan, which came back negative (no cancer).  Nevertheless, I spent a couple of weeks in a state of high suspense.

Now I’m undergoing underwriting for a key-man insurance policy for my company, which may mean going back in for another round of nuclear imaging.  Although it will probably be useless to push back, my current cardiologist says there’s no need to spend the money and expose me to the radiation so long as my heart health isn’t changing (it is not).

My point is that one should not assume it’s always good to get tested, both because of the inherent dangers and because of the chance of false-positive results, mistakes that can be very costly to the patient’s psyche.  The outright costs of over-testing cannot be overlooked either.  In my case I got irradiated twice over: first by the nuclear imaging and then by the CT’s x-ray bombardment.  (Is it irony that a test for cancer increases one’s chance of cancer?)

My friend Rich put me on to this very informative podcast by Minnesota Public Radio (MPR), featuring a talk, aired on June 30, by Dr. Jerome Groopman and his wife, Dr. Pamela Hartzband, on how to make medical decisions.  Not only are both partners in this couple outstandingly qualified to speak on these matters, but they also bring quite different perspectives on how to weigh the risks of tests and treatments against the potential benefits.

For example, a middle-aged woman is told that her cholesterol, being above 200, exceeds the level considered safe for her heart health.  She is advised to go on statins to reduce her risk of heart attack by 30 percent.  However, some research by this patient uncovers the statistic that women of her age face only a 1% risk of such a catastrophic event, whereas 1 to 10% of those going on statins suffer the side effect of myopathy, a painful and debilitating muscle disease.  Having done her homework, she decides not to take the doctor’s advice.  Does that make sense?
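The key bit of arithmetic, which often gets lost, is that the advertised 30 percent is a relative reduction: applied to a 1% baseline risk, it shaves off only about 0.3 percentage points.  Here is a minimal sketch of that calculation in Python, using only the numbers from the scenario above (this is an illustration of the arithmetic, not clinical advice).

```python
# Rough absolute-risk arithmetic behind the statin example above.
# All numbers come straight from the scenario, not from a clinical source.

baseline_risk = 0.01          # ~1% chance of a heart attack for a woman this age
relative_reduction = 0.30     # "reduces risk by 30 percent" (a relative figure)
myopathy_risk = (0.01, 0.10)  # quoted range for statin-induced myopathy

absolute_benefit = baseline_risk * relative_reduction  # 0.003, i.e., 0.3 points

print(f"Absolute risk drops from {baseline_risk:.1%} to "
      f"{baseline_risk - absolute_benefit:.1%} "
      f"(a {absolute_benefit:.1%} absolute benefit)")
print(f"Chance of myopathy: {myopathy_risk[0]:.0%} to {myopathy_risk[1]:.0%}")

# Number needed to treat to prevent one heart attack at this baseline risk:
print(f"Number needed to treat: about {1 / absolute_benefit:.0f}")
```

Seen that way, roughly 333 such women would need to take statins to prevent one heart attack, while somewhere between 3 and 33 of them would develop myopathy, which is why her decision is at least defensible.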

There really is no right answer for any of these medical decisions, but it surely is worth pressing your physician for data on risks versus benefits, doing your own research and, if it’s important enough, getting a second opinion.


Not the usual boring statistics conference—attendees called to duty for developing an optimal blend of beers

Earlier this month I attended the 5th European Design-of-Experiment User Meeting in Cambridge, England, which, given that the topic was statistical design of experiments, turned out not to be as dull as one might think.  All the credit for the pizzazz goes to our colleagues across the Atlantic at PRISMTC, in particular Paul Nelson and Andrew Macpherson.  They conjured up an in-conference experiment to develop an optimal blend of three local beers, all made by Milton Brewery and sold in bulk (polypins of 36 pints from £56, firkins of 2 polypins from £84): a pale ale called Cyclops (30-80%), a bitter under the label Justinian (20-70%) and a dark mild named Medusa (0-50%).  Prior testing by these two boffins of stats and zymurgy (that is, the study of yeasty concoctions) led them to constrain the three brews to the ranges shown in parentheses.
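To give a feel for what that constrained mixture region looks like, here is a small Python sketch that enumerates feasible blends on a 5% grid, using the component ranges quoted above.  It is purely illustrative; it is not the candidate set Paul and Andrew actually used.

```python
# Enumerate feasible beer blends on a 5% grid within the constrained mixture
# region described above: the three fractions must sum to 100% and each
# component must stay within its allowed range.
# (Illustration only -- not the actual design used at the conference.)

ranges = {                      # percent of the blend
    "Cyclops (pale ale)": (30, 80),
    "Justinian (bitter)": (20, 70),
    "Medusa (dark mild)": (0, 50),
}

step = 5
feasible = []
for a in range(30, 81, step):           # Cyclops
    for b in range(20, 71, step):       # Justinian
        c = 100 - a - b                 # mixture constraint: parts sum to 100%
        if 0 <= c <= 50:                # Medusa within its range
            feasible.append((a, b, c))

print(f"{len(feasible)} feasible blends on a {step}% grid, for example:")
for blend in feasible[:5]:
    print(dict(zip(ranges, blend)))
```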

Paul and Andrew laid out a clever design that, via balanced incomplete blocking, restricted any one taster to only 4 blends while still covering enough combinations, each tasted often enough, to provide adequate power for discerning just the right formula.  The fun bit was that they asked us conference-goers to provide the necessary data prior to an atmospheric dinner at Magdalene College (for some reason pronounced in English as “maudlin”).
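For readers who haven’t met balanced incomplete block designs before, here is a toy example in Python with 7 blends and 7 tasters (the complement of the Fano plane, a textbook BIBD).  It is not the actual conference layout, just an illustration of how each taster can rate only 4 blends while every pair of blends still gets compared equally often.

```python
from itertools import combinations
from collections import Counter

# A classic balanced incomplete block design (BIBD): v = 7 treatments (blends),
# b = 7 blocks (tasters), block size k = 4, each blend appearing r = 4 times,
# and every pair of blends meeting in lambda = 2 blocks.  This is the
# complement of the Fano plane -- a textbook example, not the conference design.

blocks = [
    {3, 4, 5, 6}, {1, 2, 5, 6}, {1, 2, 3, 4}, {0, 2, 4, 6},
    {0, 2, 3, 5}, {0, 1, 4, 5}, {0, 1, 3, 6},
]

# Verify the defining balance properties.
replication = Counter(t for block in blocks for t in block)
pair_counts = Counter(pair for block in blocks
                      for pair in combinations(sorted(block), 2))

assert all(count == 4 for count in replication.values())   # r = 4
assert all(count == 2 for count in pair_counts.values())   # lambda = 2

for taster, block in enumerate(blocks, start=1):
    print(f"Taster {taster} rates blends {sorted(block)}")
```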

This limitation on beer was one departure from a similar mixture experiment on beers that I ran* with my two sons and son-in-law as the tasters (little chance of them going along with such a sensible restriction).  The other wrinkle was that Paul and Andrew required all of us to taste a strip of paper, which ferreted out about a third of the tasters as “super tasters”: those who immediately recoiled from the bitter taste (many thought it just tasted like paper).**

It turned out that the bitterest blend, in contrast to the mildest of the beer mixtures, was not greatly liked.  I think this must be an acquired taste!  You can see this on the triangular, 3D response surface graph of the predicted response, where the lowest corner is B:Bitter.  Surprisingly, mixing in some A:Ale makes a relatively tasty brew; these two beers synergize, that is, they produce much better results than either one alone.  But the tastiest blend of all is the peak at the C:Mild corner: 30% Cyclops, 20% Justinian and 50% Medusa.  Some blends on a ridge through the middle of the triangular mixture space also look promising.
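For the record, “synergize” has a precise meaning here: in the Scheffé quadratic model commonly fitted to mixture designs, a positive cross-product coefficient means two components blend better together than their linear average would predict.  The Python sketch below uses invented coefficients purely to show the idea; the real fitted model is in Paul and Andrew’s report.

```python
# A Scheffe quadratic blending model for three components:
#   y = b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3,
# where the proportions sum to 1 and a positive cross term (e.g., b12 > 0)
# means two components blend better together than either does alone.
# The coefficients below are invented for illustration only.

def predicted_taste(x_ale, x_bitter, x_mild,
                    b=(5.0, 3.0, 7.0),           # pure-component taste scores
                    b12=6.0, b13=0.0, b23=0.0):  # b12 > 0: Ale x Bitter synergy
    assert abs(x_ale + x_bitter + x_mild - 1.0) < 1e-9, "proportions must sum to 1"
    return (b[0] * x_ale + b[1] * x_bitter + b[2] * x_mild
            + b12 * x_ale * x_bitter + b13 * x_ale * x_mild
            + b23 * x_bitter * x_mild)

# A 50/50 Ale-Bitter blend scores above the linear average of the two:
print(predicted_taste(0.5, 0.5, 0.0))   # 5.5 with the synergy term vs 4.0 without
print(predicted_taste(0.3, 0.2, 0.5))   # the C:Mild-corner blend from the post
```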

[Graph: 3D response surface of PRISMTC beer-blend taste]

Three cheers for three beers and hats off to the brilliance of Paul and Andrew of PRISMTC for pulling off this fun, clever and informative taste test.  See their full, illustrative report here.

*See Mixture Design Brews Up New Beer Cocktail—Black & Blue Moon

**Check out this BBC report and short video on testing for super tasters
