Posts Tagged design of experiments

Data detectives keep science honest

An article in the Wall Street Journal last week* drew my attention to a growing number of scientists who moonlight as data detectives, sleuthing out fraudulent studies. Thanks to their work, the number of faulty papers retracted increased from 119 in 2002 to 5,500 last year. These statistics come from Retraction Watch, which provides a better, graphical perspective on the increase based on percent retractions per annual science and engineering (S&E) publication. Viewed that way the rise is not nearly as dramatic, given the explosion in publications over the last 20 years, but it remains very alarming.

“If you take the sleuths out of the equation it’s very difficult to see how most of these retractions would have happened.”

Ivan Oransky, co-founder of Retraction Watch, a blog dedicated to tracking and investigating retractions of academic research.

Coincidentally, I just received this new cartoon from Professor Nadeem Irfan Bukhari. (See my all-time favorite from him in the April 27, 2007 StatsMadeEasy blog Cartoon quantifies commitment issue.)

It depicts statistics as the proverbial camel that, once allowed to put its nose in the tent occupied by the science disciplines, becomes completely entrenched.

Thank goodness for scientists like Nadeem who embrace statistical tools for design and analysis of experiments. And kudos to those who guard against faulty or outright fraudulent scientific publications.

*“The Band of Debunkers Busting Bad Scientists,” by Nidhi Subbaraman, Wall Street Journal, 9/24/23


Crater Experiment Makes a Big Impact

Craters are crazy and cool.  One quite amazing example was created by the Barringer Meteorite, which crashed into Arizona about 50,000 years ago with an explosion equal to 2.5 megatons of TNT.  Based on this detailing of what a 2 MT bomb would do, I figure that Barringer would have completely wiped out my home town of Stillwater, Minnesota, and its 20,000 or so residents, plus far more beyond us.  The picture my son Hank took of the 1-mile-wide, 570-foot-deep crater does not do justice to its scale.  You really need to go see Meteor Crater for yourself, as the two of us did.

Because of my enthusiasm, making craters ranks high on my list of fun science projects in DOE It Yourself.  As noted there, members of the Salt Lake Astronomical Society wanted to drop bowling balls from very high altitudes onto the salt flats of Utah, but workers from the U.S. Bureau of Land Management stationed in the target area objected to the experiment.

Kudos to science educator Andrew Temme for leading students through a far more manageable experiment, shown in this video.  When I asked his permission to link to his fantastic impact movies, Andrew gave me this heads-up: “I attended a NASA workshop to get certified to handle real moon rocks and meteorites at the NJ State Museum in Trenton.  This lab in the educator guide suggested mixing up your own lunar powder and throwing objects to simulate impact craters.  When I got home I ran the lab with a few of my classes and then made the video.  I used a Sony handheld camera that had a slow motion setting (300 fps).”  Awesome!

The other day I went up to the 9th floor of my condo building in Florida and tossed a football down onto the parking lot.  I am warming up to heaving a 15-pound mushroom anchor onto the beach side from atop one of the far pricier high-rises along the Gulf.  However, I have to wait until the turtle nesting season is over.


A helpful hierarchy for statistical analysis spells out how deep to drill

Fred Dombrose, a force for the use of statistical design of experiments in biomedical research, alerted me to an enlightening article on statistics asking “What is the question?” in the March 20 issue of Science magazine.  It lays out these six types of data analysis, defined by biostatistician Jeffrey Leek:

  • Descriptive
  • Exploratory
  • Inferential
  • Predictive
  • Causal
  • Mechanistic

For the distinguishing details going up this ladder see this Data Scientist Insight blog.  However, the easiest way to determine where your study ranks is via the flowchart provided in the Science publication.  There you also see four common mistakes that stem from trying to get too much information from too little data.
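To make the lower rungs concrete, here is a minimal sketch in Python (my own illustration with made-up measurements, not an example from the Science article) contrasting a descriptive summary with an inferential test on the same sample:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.normal(50, 5, 30)  # hypothetical measurements from one process

    # Descriptive: summarize this sample, nothing more
    print("mean:", round(x.mean(), 1), "sd:", round(x.std(ddof=1), 1))

    # Inferential: generalize beyond the sample, e.g., test whether the
    # process mean differs from a target value of 48
    t_stat, p_value = stats.ttest_1samp(x, popmean=48)
    print("t:", round(t_stat, 2), "p:", round(p_value, 3))

The higher rungs demand more of the data: predictive claims call for validation on held-out results, and causal claims call for randomized assignment of the factors.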

“Poor design of experiments is one of, if not the most, common reason for an experiment to fail.”

– Jeff Leek, “Great scientist – statistics = lots of failed experiments”, simplystats blog of 4/12/13


Conqueror paper dominates in flight test

After seeing this record-breaking airplane flight, I bought a ream of the Conqueror® CX22 paper used to construct the amazing flying machine.  Would it produce the same outstanding results for weekend warriors?

I put this to the test on Sunday with my son-in-law Ryan, my son Ben and his friend Josh.  Of course, none of them could throw like the champion “pilot” and Arena Football League quarterback Joe Ayoob, who vaulted the hand-folded paper aircraft 226 feet, 10 inches on Feb. 26, 2012, at McClellan Air Force Base in California.  Also, the simple dart template used for making the airplanes could not compete with the design of “the paper airplane guy” John Collins.  However, after blocking out the difference between throwers (Ryan being the standout), I found a significant advantage for the heavier (26.6-pound) and stiffer Conqueror paper over the standard 24-pound stock (made by Navigator) that we use at Stat-Ease.

The picture tells the story (click it for a close-up view): the Conqueror results, shown in red, far exceed those from the standard stock (black points), with one exception highlighted at the upper left.  It turns out that Ben ‘accidentally’ spilled beer on his buddy Josh’s airplane.  That’s the way things go in these weekend competitions: whatever it takes to win.
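For anyone curious how “blocking out the difference between throwers” works, here is a minimal sketch in Python (the distances below are made up purely for illustration; the real data are in the plotted figure) that fits the paper effect with thrower included as a blocking factor:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical flight distances in feet; each thrower tosses each paper twice
    data = pd.DataFrame({
        "paper":    ["Conqueror", "Navigator"] * 6,
        "thrower":  ["Ryan", "Ryan", "Ben", "Ben", "Josh", "Josh"] * 2,
        "distance": [62, 48, 51, 40, 47, 38, 65, 50, 54, 42, 44, 36],
    })

    # Blocking: include thrower in the model so the paper effect is judged
    # against run-to-run noise rather than thrower-to-thrower differences
    model = smf.ols("distance ~ C(paper) + C(thrower)", data=data).fit()
    print(model.summary())

Leaving the thrower term out would lump Ryan’s stronger arm into the residual error and could mask the paper effect entirely.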


Experiments by Incan agronomists

Earlier this week I visited Machu Picchu in Peru, which features extensive terracing for crop-growing. Five hundred years ago the Incas took full advantage of a uniquely temperate microclimate on sites like this along the border of the Amazon. First they engineered a drainage system out of rocks brought up from the river below. Then they covered it with dirt laboriously hauled from fertile plains at lower elevations. Next they evidently experimented on different crops at different levels to get the best interaction with the varying temperatures at each step of the way down from the peak. According to a team of agronomists and archeologists from U Penn who reproduced the Incan farming conditions, yields of potatoes came in at two or even four times what would be expected. All this is quite impressive: building a self-sustaining city at nearly 8,000 feet on a peak with only a few small, flat areas.


Design of experiments (DOE) most important for optimizing products, processes and analytical technologies

According to this February 2014 Special Report on Enabling Technologies, two-thirds of BioProcess readers say that DOE makes the most impact on their analytical work.

 “The promise of effective DOE is that the route of product and process development will speed up through more cost-effective experimentation, product improvement, and process optimization. Your ‘batting average’ will increase, and you will develop a competitive advantage in the process.”

–Ronald Snee


Must we randomize our experiment?

In the early 1990s I spoke at an applied statistics conference attended by DOE gurus George Box and Stu Hunter.  This was a time when Taguchi methods had taken hold; engineers liked them because the designs eschewed randomization in favor of ordering by convenience, with the hardest-to-control factors changed only once during the experiment.  I might have fallen for this as well, but in my early days in R&D I worked on a high-pressure hydrogenation unit that, due to risks of catastrophic explosion, had to be operated outdoors and well away from any other employees.  (Being only a summer engineer, I seemed to be disposable.)  Naturally the ambient conditions varied quite dramatically at times, particularly in the fall season when I was under pressure (ha ha) to wrap up my project.  Randomization of my experiment designs provided insurance against the time-related lurking variables of temperature, humidity and wind.

I was trained to make runs at random and never questioned the importance of doing so.  Thus I was really surprised when Taguchi disciples attending my talk picked on me for bothering with it.  But, thank goodness, Box had already addressed this in his 1989 report “Must We Randomize Our Experiment?”  He advised that experimenters:

  1. Always randomize in those cases where it creates little inconvenience.
  2. When it is impossible to subject an experiment to randomization:
    • if you can safely assume your process is stable, that is, any chance variations will be small compared to factor effects, then run it as you can in non-random order;
    • but, if due to process variation the results would be “useless and misleading” without randomization, abandon the experiment and first work on stabilizing the process;
    • or consider a split-plot design.

I am happy to say that Stat-Ease, with the release of version 9 of its DOE programs, now provides the tool for what Box deems the compromise between randomizing or not: split plots.  For now this capability is geared to factorial designs, but that covers a lot of ground for dealing with hard-to-change factors such as oven temperature in a baking experiment.*  Details on v9 of Design-Expert® software can be found at http://www.statease.com/dx9.html, along with a link to a 45-day free trial.  Check it out!

*For a case study on a split-plot experiment that can be easily designed, assessed for power and readily analyzed with the newest version of Stat-Ease software, see this report by Bisgaard et al. (colleagues of Box).
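To illustrate the structure with the baking example above, here is a minimal sketch in Python (my own toy layout with hypothetical factor levels, not how Design-Expert builds its designs): the hard-to-change factor is randomized once across whole plots, and the easy-to-change factor is randomized within each one.

    import random

    temperatures = [325, 350, 375]   # whole-plot factor: hard-to-change oven setting
    recipes = ["A", "B", "C", "D"]   # subplot factor: easy-to-change recipe

    random.shuffle(temperatures)     # randomize the whole-plot order once
    for temp in temperatures:
        batch = recipes[:]           # fresh copy for this whole plot
        random.shuffle(batch)        # randomize subplots within the plot
        for recipe in batch:
            print(f"bake at {temp} F, recipe {recipe}")

The payoff is far fewer oven changes than a fully randomized plan, at the cost of a split error structure that the analysis must account for.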


George Box–a giant in the field of industrial design of experiments (DOE)

George Box passed away this week at 94.  Having a rare combination of numerical and communication skills along with an abundance of common sense, this fellow made incredible contributions to the cause of industrial experimenters.  For more about George, see this wonderful tribute by John Hunter.

My memorable stories about Box both relate to his way with words that cut directly to a point:

  • In 1989, at the Annual Quality Congress in Toronto, I saw him open his debate with competing guru Genichi Taguchi by throwing two words on an overhead projector, “Obscurity” and “Profundity”, and then, after a dramatic pause, adding the not-equal sign between them.  This caused Taguchi’s son Shin to leap up from the front row and defend his father, and it drew from the largest crowd I have ever seen at a technical conference the kind of collective gasp that one only rarely experiences.
  • In 1996, at a DOE workshop in Madison, Wisconsin, I enjoyed his comeback to a very irritating disciple of Taguchi who kept interrupting the lecture: “If you are going to do something, you may as well do it right.”

Lest this give the impression that Box was mean-spirited, see this well-reasoned white paper that provides a fair balance of praise and criticism of Taguchi, who created a huge push forward for the cause of planned experimentation for quality improvement.

The body of work by George Box in his field is monumental.  It provides the foundation for all that we do at Stat-Ease.  Thank you, George; may you rest in peace!


Random thoughts

The latest issue of Wired magazine provides a great heads-up on random numbers by Jonathan Keats.  Scrambling the order of runs is a key to good design of experiments (DOE)—this counteracts the influence of lurking variables, such as changing ambient conditions.

Designing an experiment is like gambling with the devil: only a random strategy can defeat all his betting systems.

— R.A. Fisher

Along those lines, I watched with interest when weather forecasts put Tampa at the bulls-eye of the projected track for Hurricane Isaac.  My perverse thought was that this might be the best place to be, at least early on when the cone of uncertainty is widest.

In any case, one does best by expecting the unexpected.  That gets me back to the topic of randomization, which turns out to be surprisingly hard to do deliberately, considering the natural capriciousness of weather and life in general.  When I first got going on DOE, I pulled numbered slips of paper out of my hard hat.  Then a statistician suggested I go to a phone book and cull the last 4 digits of the numbers on whatever page opened up haphazardly.  Later I graduated to a table of random numbers (an oxymoron?).  Nowadays I let my DOE software lay out the run order.
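At its simplest, that software is doing something like this minimal Python sketch (my own illustration, not the Design-Expert algorithm), which scrambles the standard order of a hypothetical two-level, three-factor design:

    import random

    # All eight combinations of three two-level factors (a 2^3 factorial)
    runs = [(a, b, c) for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)]

    random.shuffle(runs)  # scramble the standard order into a random run order

    for run_number, levels in enumerate(runs, start=1):
        print(run_number, levels)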

Check out how Conjuring Truly Random Numbers Just Got Easier, including the background by Keats on pioneering work in this field by British (1927) and American (1947) statisticians.  Now the Australians have leap-frogged (kangarooed?) everyone, evidently, with a method that produces 5.7 billion “truly random” (how do they know?) values per second.  Rad mon!


Strategy of experimentation: Break it into a series of smaller stages

Tia Ghose of The Scientist provides a thought-provoking Q&A with biostatistician Peter Bacchetti on “Why small is beautiful” in her June 15th column, seen here.  Peter’s message is that you can learn from a small study even though it may not provide the holy grail of at least 80 percent power.*  The rule of thumb I worked from as a process development engineer is to put no more than 25% of your budget into the first experiment, thus allowing the chance to adapt as you work through the project (or abandon it altogether).  Furthermore, a good strategy of experimentation is to proceed in three stages:

  • Screening the vital few factors (typically 20%) from the trivial many (80%)
  • Characterizing main effects and interactions
  • Optimizing (typically via response surface methods).

For a great overview of this “SCO” path for successful design of experiments (DOE), see “Implementing Quality by Design” by Ronald D. Snee in Pharm Pro Magazine, March 23, 2010.

Of course, at the very end, one must not overlook one last step: confirmation and/or verification.

* I am loath to abandon the 80% power “rule”**; rather, increase the size of effect that you screen for in the first stage, that is, do not use too fine a mesh.

** For a primer on power in the context of industrial experimentation via two-level factorial design, see these webinar slides posted by Stat-Ease.
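To see why a coarser mesh rescues power, here is a minimal sketch in Python (my own normal-approximation calculation, not the Stat-Ease routine) for a two-level factorial, where an effect estimated from N runs has standard error 2σ/√N:

    from scipy.stats import norm

    def factorial_effect_power(delta_over_sigma, n_runs, alpha=0.05):
        """Approximate power to detect a two-level factorial effect.

        delta_over_sigma: effect size in units of the process standard deviation.
        n_runs: total runs, split evenly between low and high levels,
                giving SE(effect) = 2*sigma/sqrt(n_runs).
        """
        z_crit = norm.ppf(1 - alpha / 2)
        z_effect = delta_over_sigma * n_runs**0.5 / 2
        return norm.cdf(z_effect - z_crit)

    # Screening for a bigger effect raises power fast (16 runs here):
    for d in (1.0, 1.5, 2.0):
        print(f"{d} sigma effect: power = {factorial_effect_power(d, 16):.2f}")

With 16 runs, a 1-sigma effect is a coin flip to detect while a 2-sigma effect is nearly a sure thing: roughly 0.52, 0.85 and 0.98 power for the three effect sizes above.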
