Finns giving about $600 per month at random to 2000 jobless citizens
It seems too good to be true, but the Finnish government started off the new year by giving 2,000 unemployed Finns between 25 and 58 years of age an income of about $600 (€560) per month, according to this New York Times report. The government will monitor these lucky recipients to see how many squander the money on vodka versus investing it in something productive, such as a business startup. The experiment will run for two years.
This initiative for a universal basic income has broad appeal, from leftist liberals to libertarians, who see it as a way to shrink stifling social services. According to this CBS News report, an experiment in Manitoba that provided a “mincome” produced positive results. It will be interesting to see whether the Finns also find that it pays to provide free money.
“It always should be worth taking the job rather than staying home and taking the benefits. We have to take the risk to do this experiment.”
– Pirkko Mattila, Finland’s Minister of Social Affairs and Health
Arctic vortex delivers an impressive “Cold Force” for mid-December
As you can see, I awoke this morning to an outside temperature of minus 20.2 degrees F, which comes to precisely minus 29.0 on the Celsius scale according to this metric converter. When I opened the window, the air delivered an impressive slap to the face; no need for coffee as an eye-opener. However, I quickly had to shut out the cold before it gave me a brain freeze.
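For anyone who wants to skip the online converter, the conversion is one line of arithmetic. Here is a minimal Python sketch of my own (the function name is mine, not anything from the converter site):

```python
def fahrenheit_to_celsius(temp_f):
    """Convert degrees Fahrenheit to degrees Celsius: C = (F - 32) * 5/9."""
    return (temp_f - 32) * 5 / 9

# This morning's reading of -20.2 F lands right at -29 C
print(round(fahrenheit_to_celsius(-20.2), 2))  # -29.0
```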
The iconic fellow pictured on my La Crosse Technology Wireless Weather Station, whom I generally find a good indicator of the temperature, did not dress warmly enough today. He needs a mask to avoid a frostbitten nose and frozen ears. When the Arctic express whistles down into our mid-continent winter wasteland, I fall back on the Anderson Cold Scale, which came up just shy of Freezing Force 6 in the predawn hour. That tells me to don six layers before venturing out for my morning walk. I start with long johns. If this is your first winter up north and you need a warm undergarment, check out this traditional one from the Gentleman’s Emporium: fire-engine red with a rear fireman’s flap.
Beware: eating like a king is not healthy
With Thanksgiving coming up, I am looking forward to a feast beyond all others in the year. Therefore, I did not want to learn that eating like a king has been demonstrated to be unhealthy. I came across this while reading The Seven Pillars of Statistical Wisdom by Stephen M. Stigler, one of the world’s foremost experts on the history of statistics. In his chapter on the pillar of Design, he relates (p. 150) a story from the Old Testament of how Daniel eschewed the rich diet of meat and wine offered by King Nebuchadnezzar. Daniel proposed what may be the earliest clinical trial: he and his three companions would eat only pulse* and water for 10 days, while several followers of the King enjoyed his fare for the same period. In the end, Daniel and his friends fared better, at least on the basis of health.
The lesson here is to polish off the bounty of Thanksgiving before 10 days are up; in other words, do not lay off those lovely leftovers! Then eat like Daniel for a few weeks in preparation for the year-end holiday feasts. That will keep you healthy, by my interpretation of Daniel’s pioneering study on diet. ; )
*Dried beans and peas (yuk!) as seen here.
Jittery gauges making people crazy on election night
Posted by mark in Graphics, Uncategorized on November 15, 2016
Early last Tuesday evening I went to the New York Times elections website to check on the Presidential race. It had Clinton favored, but not by much: just a bit over 50% at the time, with the needle wavering alarmingly (by my reckoning) toward the side of Trump. A few hours later I was shocked to see it at plus 70% for Trump. By the time I retired for the night, the Times had him at near 100%, which of course turned out to be the case, to the surprise of me and many others, even President-elect Trump himself, I suspect.
Being a chemical engineer, I like the jittery gauge display. It actually is less unsettling for me than a needle that sits fixed, which usually happened only when a measuring instrument failed. Even more important, from my perspective as an aficionado of statistics, is the way this dynamic graphic expressed uncertainty, becoming less jittery as the night went on and returns came in. However, the fluctuating probabilities freaked out a lot of viewers, leading to this explanation by the NYT as to “Why we used jittery gauges.”
For an unbiased, mainly positive, review of this controversial graphical approach by the Times to report election results see this Visualizing Data blog.
“Negativity expressed towards the jitter was a visceral reaction to the anguish caused by the increasing uncertainty of the outcome, heightened by the shocking twist in events during the night, [but] I found it an utterly compelling visual aid.”
— Andy Kirk, author of Visualizing Data
P.S. Here’s a new word that I picked up while researching this blog: “skeuomorphism”, meaning the designing of graphics to resemble real-world counterparts, for example, Apple Watch’s clock-like interface. Evidently battles have been raging for years in the tech world over using this approach versus flat, minimalist designs. I had no idea!
Scary statistics about Halloween
Posted by mark in pop, Uncategorized on October 27, 2016
I am torn over whether it would be scarier to dress up as the nightmarish Freddy Krueger from A Nightmare on Elm Street or as a statistics instructor. Which would you rather be locked in a windowless room with? Hmmm… best you not answer that.
Anyways, here are some frightful facts about the upcoming holiday reported in yesterday’s USA Today:
- 171 million Americans plan to partake in Halloween festivities. Crazy!
- On average, women will pay double for “non-sexy” Halloween costumes. The “sexy” costumes cost on average around $30, while the demure ones (boo!) go for near $60.
- Witch and pirate are the top two costume choices, followed by Trump and Clinton. Hmmm… is this a case of perfectly negative correlation?
Happy Halloween!
Obscurity does not equal profundity
Posted by mark in Uncategorized on October 9, 2016
In 1989 I attended a debate where George Box defended the standard approach for design of experiments against the Taguchi method. In summary, he simply put up, on three scraps of transparency, a mathematical statement projecting “Obscurity” “not equal to” “Profundity”. This created a memorable uproar from the Taguchi disciples in the audience.
I am reminded of this upon the news that the winner of the 2016 Ig Nobel Peace Prize is this paper by University of Waterloo psychology Ph.D. candidate Gordon Pennycook et al., On the Reception and Detection of Pseudo-Profound Bullshit. This treatise sorts out what is serious bullshit versus simply nonsense or harmless mundanity. It provides this example of pseudo-profundity from an actual tweet sent by a well-known New Age healer and advocate of alternative medicine:
Attention and intention are the mechanics of manifestation.
Evidently many people are not only prone to eating up stuff like this, but they also lack the ability to sniff it out. The Waterloo researchers tested a large number (280) of undergrads on a Bullshit Receptivity (BSR) scale. The researchers then completed several follow-up studies, going all out to shovel the BSR. ; )
It composts down to bullshit being not only more ubiquitous than ever before (it is a big part of the internet) but also increasingly popular. The authors hope that their study, by improving detection of obscure pseudo-profundities, will reduce receptivity to bullshit and thereby its generation. That would be good!
A curve in the road to grade inflation
The New York Times Sunday Review features an opinion by Wharton School Professor Adam Grant as to Why We Should Stop Grading Students on a Curve. He asserts that his peers now give over 40% of their grades at the A level, a percentage that has grown steadily for the last 30 years, as detailed in this March 2016 report by GradeInflation.com. I am not surprised to see my alma mater, the University of Minnesota, near the top of the chart of Long Term Grade Inflation by Institution, because, after all, we pride ourselves on being nice.
During my years at the “U” most classes were graded on the curve, which Prof. Grant abhors for creating too much competition among students. However, it worked for me. I especially liked this system in my statistical thermodynamics class, where my final score of 15 out of 100 came out second highest among all the students, that is, grade A. Ha ha. Just this last week President Obama chastised the U.S. press for giving Trump a pass by grading him on a curve. I see no problem with that. ; )
I do grant Grant an A for creativity in coming up with a lifeline for struggling students. He allows them to write down the name of a brighter classmate on one multiple-choice question. If this presumably smarter student gets it right, that question earns full credit. My only suggestion is that whoever gets called on the most for providing lifelines should be graded A for being on top of the curve. But then I see nothing wrong with rewarding the best and the brightest.
The increasing oppression of soul-less algorithms
As I’ve blogged before*, algorithms for engineering and statistical use are near and dear to my heart, but not when they become tools of unscrupulous or naïve manipulators. Thus an essay** published on the first of this month by The Guardian, “How algorithms rule our working lives,” gave me some concern: employers who rely on mathematically modeled ways of sifting through job applications tend to punish the poor.
“Like gods, these mathematical models are opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, are beyond dispute or appeal. And they tend to punish the poor and the oppressed in our society, while making the rich richer.”
– Cathy O’Neil
Of course we mustn’t blame the algorithms per se, but rather those who write them and/or put them to wrong use. The University of Oxford advises mathematicians not to write evil algorithms; this October 2015 post passes along seven utopian principles for ethical code. Good luck with that!
P.S. A tidbit of trivia that I report in my book RSM Simplified: “algorithm” is an eponym of al-Khwarizmi, a ninth-century Persian mathematician who wrote the book on “al-jabr” (i.e., algebra). The algorithm may turn out to be the most destructive weapon for oppression ever to emerge from the Middle East.
* Rock on with algorithms? October 2, 2012
** Adapted from Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy — a new book on business statistics coming out tomorrow by “Math Babe” Cathy O’Neil.
“Bright line” rules are simple but not very bright
Just the other day a new term came to light for me: a “bright line” rule. Evidently this is commonplace legal jargon that traces back to at least 1946, according to this Language Log post. It refers to “a clear, simple, and objective standard which can be applied to judge a situation,” per this USLegal.com definition.
I came across the term in this statement* on p-values and statistical significance from the American Statistical Association (ASA):
“Practices that reduce data analysis or scientific inference to mechanical ‘bright-line’ rules (such as ‘p < 0.05’) for justifying scientific claims or conclusions can lead to erroneous beliefs and poor decision-making.”
The ASA goes on to say:
“Researchers should bring many contextual factors into play to derive scientific inferences, including the design of the study, the quality of the measurements, the external evidence for the phenomenon under study, and the validity of assumptions that underlie the data analysis.”
It is hard to argue against the precept that if the p-value is high, the null will fly, that is, results cannot be deemed statistically significant. However, I’ve never bought into 0.05 being a bright-line rule. It is good to see the ASA dulling down this overly simplistic statistical standard.
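One way to see why no single cutoff deserves bright-line status: under a true null hypothesis, p-values are (approximately) uniformly distributed, so roughly 5% of null experiments cross the 0.05 line by chance alone. Here is a minimal Python simulation of my own (a two-sample z-approximation; not anything from the ASA statement):

```python
import math
import random
import statistics

random.seed(1)

def two_sample_p(x, y):
    """Two-sided p-value from a normal (z) approximation; fine for n = 50."""
    se = math.sqrt(statistics.variance(x) / len(x) +
                   statistics.variance(y) / len(y))
    z = (statistics.mean(x) - statistics.mean(y)) / se
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| > z) for a standard normal

# Both samples come from the SAME population, so every "significant"
# result below is a false alarm.
trials = 2000
false_alarms = sum(
    two_sample_p([random.gauss(0, 1) for _ in range(50)],
                 [random.gauss(0, 1) for _ in range(50)]) < 0.05
    for _ in range(trials)
)
print(false_alarms / trials)  # hovers near 0.05, the bright line's own error rate
```

Draw the line at 0.04 or 0.06 instead and the false-alarm rate dutifully follows, which is exactly why the cutoff is a convention, not a law of nature.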
I can see the value of “bright line” rules in legal processes, a case in point being the requirement that the Miranda warning be given to advise people of their rights when they are arrested. However, it is ludicrous to apply such dogmatism to statistics.
*(The American Statistician, v70, #2, May 2016, p131)
Models responsible for whacky weather
Posted by mark in Basic stats & math, pop, science on August 14, 2016
Watching Brazilian supermodel Gisele Bündchen sashay across the Olympic stadium in Rio reminded me that, while these fashion plates are really dishy to view, they can be very dippy when it comes to forecasting. Every time one of our local weather gurus says that their models are disagreeing, I wonder why they would ask someone like Gisele. What do she and her like know about meteorology?
There really is a connection between fashion models and statistical models: the random walk. However, this movement is more like that of a drunken man than a fashionably calculated stroll down the catwalk. For example, see this video by an MIT professor showing 7 willy-nilly paths from a single point.
Anyways, I am wandering all over the place with this blog. Mainly I wanted to draw your attention to the Monte Carlo method for forecasting. I used it for my MBA thesis in 1980, burning up many minutes of very expensive mainframe computer time in the late ’70s. What got me going on this whole Monte Carlo meander is this article from yesterday’s Wall Street Journal. Check out how the European models did better than the American ones at predicting the path of Hurricane Sandy. Evidently the Euros are on to something, as detailed in this Scientific American report from the end of last year’s hurricane season.
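For readers who have never seen the Monte Carlo method in action, here is a minimal Python sketch of my own (toy unit-step random walks, nothing like an actual weather model): simulate thousands of willy-nilly “futures” from a single starting point, then summarize the spread of where they end up.

```python
import random
import statistics

random.seed(2016)

def random_walk_endpoint(start, steps):
    """One simulated future: a drunken-man path of unit steps up or down."""
    position = start
    for _ in range(steps):
        position += random.choice([-1, 1])
    return position

# Monte Carlo forecast: run 10,000 possible futures, then report the spread
endpoints = sorted(random_walk_endpoint(0, 30) for _ in range(10_000))
print("median outcome:", statistics.median(endpoints))
print("middle 90% of outcomes:", endpoints[500], "to", endpoints[9500])
```

The hurricane forecasters do the same thing in spirit: many perturbed model runs fan out into a “cone of uncertainty,” and the spread of the simulated paths is the forecast’s honesty about what it does not know.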
I have a random thought for improving the American models—ask Cindy Crawford. She graduated as valedictorian of her high school in Illinois and earned a scholarship for chemical engineering at Northwestern University. Cindy has all the talents to create a convergence of fashion and statistical models. That would be really sweet.