Archive for category Uncategorized

Illuminating results from sparkler experiment

This video, concluding with the obligatory lighting up of multiple sparklers, lays out the results of another fun and educational experiment by Chemical and Biological Engineering (CBE) students at South Dakota School of Mines and Technology (SDSMT) for their Applied Design of Experiments for the Chemical Industry class.

The testers, Anthony Best, Henry Brouwer, and Jordyn Tygesen, uncovered significant interactions of wind, water, and lighting position on burn time, as illustrated by the Pareto chart of effects from Design-Expert software.

I expect these three experimenters will be enjoying extremely sparkly celebrations this summer!

No Comments

Killer reveals contents of International Shark Attack File

I will bet that headline caught your attention. It did mine, but for reasons more benign than my sensational blog title indicates: The report comes from an outdoors columnist for Florida’s Treasure Coast Newspapers named Ed Killer. He passed along the latest statistics on shark attacks released Monday by the Florida Museum of Natural History. It turns out that “interactions” with these dreaded aquatic carnivores decreased by nearly 10% to 129 worldwide in 2020. Unfortunately, deaths increased to 13, up by 2 from 2019, including the first ever in Maine. Australia led the world for shark fatalities and came in second to the USA for bites.

“When a surfer gets bit in New Smyrna Beach [Florida], it’s often by a blacktip and requires some stitches to recover from. But when a surfer gets bit in Australia, it’s by a 2000-pound 15-foot-long great white shark. A nibble from a white shark can take off a leg.”

– Gavin Naylor, Director, Florida Program for Shark Research*

All this talk about sharks makes me feel a lot better about being homebound in Minnesota for the time being. In 1975 my wife and I moved to California just in time for the premiere of Jaws at the local drive-in movie theater. I suffered twitchy-legged nightmares for some time afterwards, imagining a great white shark lurking at the foot of my bed. Watching this Danish advertisement provides an antidote to my now-revived shark fears.

*See details on the data science behind their International Shark Attack File (ISAF), and a fascinating animated graphic showing attacks by location worldwide over 50 years, here.

No Comments

Experiment reveals secret to maximizing microwave popcorn—Part two: Results

Nothing beats microwave popcorn for snacking. That’s what makes unpopped kernels (UPK) so aggravating—not just for the loss of yummy yield, but also for the pain from accidentally biting down on them. Therefore, I am quite pleased to report significantly reduced UPK discovered by my designed experiment detailed in part 1 of this blog.

The big reveal comes from the interaction plot showing that the effect of preheating depends on the timing method.

First off, look at the upper left of the graph and notice that the default GE timing, done by a humidity sensor, creates significantly more UPKs—the lower end of its least significant difference (LSD) bar (p<0.05) falls above the higher ends of all the other LSD bars. The actual results using my GE microwave popcorn button, shown by the red (no preheat) and green (yes-preheat) circles on the left, ranged from 41 to 92—far too many UPKs per bag.

Next, see how the combination of GE++ (adding time) with no preheating wins out overall. The actual counts, shown by the red circles at middle bottom, ranged from 23 to 34—far fewer UPKs than before.

Life is good: No need to put in 1 cup of water and wait for a minute; also, no complications from setting up my cell phone, quieting the household, and standing by to turn off the microwave when alerted by Popcorn Expert. All I need to do is press the popcorn button and then 9 twice for the extra time. Easy! And, by the way, the popcorn tastes great—no burning!

I never would have made this significant improvement without the more-precise:

  • measurement of UPK counts (versus weight) and
  • Poisson-regression (versus ordinary least squares) modeling*
    *(available in the newly released version 13 of Design-Expert® software)

I encourage you to do your own microwave popcorn experiments, ideally multifactor ones using Design-Expert version 13, now available as a free, fully functional, 14-day trial. Many factors can be tested—first and foremost being brand of popcorn and time in the microwave. Two ‘hacks’ posted to the question-and-answer website Quora intrigue me:

Another hack botched by me (as confessed in part 1) is pouring the popcorn into a vented microwave container. Throw one or more of these factors into your design of experiment (DOE) and please let me know the statistical outcome along with the raw data.

I remain a few dozen kernels short of the perfect microwave popcorn: Zero UPK with every exploded morsel being incredibly delicious.

Every once in a while, someone will mail me a single popcorn kernel that didn’t pop. I’ll get out a fresh kernel, tape it to a piece of paper and mail it back to them.

Orville Redenbacher

No Comments

Experiment reveals secret to maximizing microwave popcorn—Part one: Setup

Energized by a new tool in Design-Expert® software (DX) for modeling counts (to be discussed in Part 2—Analysis of results), I laid out a design of experiment (DOE) aimed at reducing the number of unpopped kernels (UPK) from microwaved popcorn. I figured that counting the UPKs would be a far more precise measure of popcorn loss than weighing them, as done in this prior study by me and my son Hank.

My new experiment varied the following two factors in a replicated, full, multilevel, categorical design done with my General Electric (GE) Spacemaker microwave oven:

A. Preheat with 1 cup of water for 1 minute on high, No [L1] vs Yes [L2]

B. Timing, GE default [L1] vs GE++ [L2] vs Popcorn Expert app [L3]

I tested the preheating (factor A) before and found it to be unproductive. However, after seeing it on this list of microwave ‘hacks’, I decided to try again. Perhaps my more precise measuring of UPK might show preheating to be of some help after all.

The timing alternatives (factor B) came about when I discovered Popcorn Expert AI Cooking Assistant for systematically applying the #1 hack—the two-second rule: When this much time passes between pops, stop.

By the way, I also tried the third hack—pouring the popcorn into a covered glass bowl, but that failed completely—causing a very alarming “SENSOR ERROR”. It turns out that the GE Spacemaker uses humidity to determine when your popcorn is done. The plastic cover prevented moisture from escaping. Oops! Next time I try this it will be with a perforated lid.

While reading the user manual for the first time since buying the Spacemaker 15 years ago (engineers rarely read instructions), I learned about the humidity angle and also found out that pressing 9 twice after beginning the popcorn cook adds 20 and then 10 more seconds (++) at the end.

The original experiment design of 12 runs (2×3, replicated) was laid out in a randomized recipe sheet by DX, all of them done using 3-ounce bags of Jolly Time Simply Popped Sea Salt microwave popcorn. Due to a few mistakes by the machine operator (me) misreading the run sheet, two extra runs got added—no harm done: more runs being better for statistical power.

Part 2 of this two-part blog will delve into the analysis details, but it became readily apparent from a one-to-one comparison that the default popcorn setting of my GE microwave came up far short of Popcorn Expert for reducing UPK. However, the “++” adjustment closed the gap, as you will see.

To be continued…

No Comments

Moving averages creating coronavirus confusion

The statistics being reported on Covid-19 keep pouring in—far too much information by my reckoning. Per the nation’s top infectious disease expert, Dr. Anthony Fauci, I focus on positivity rates as a predictor of the ups and downs of the coronavirus. However, the calculations for even this one statistic cause a great deal of controversy, especially in times like now with rising cases of Covid-19.

For example, as reported by The Las Vegas Review-Journal last week, positivity rates for Nevada now vary by an astounding five-fold range depending on the source of the statistics. It doesn’t help that the State went from 7-day to 14-day moving averages, thus dampening an upsurge.

“We’re trying to get that trend to be as smooth as possible, so that an end user can look at it and really follow that line and understand what’s happening.”

– Kyra Morgan, State of Nevada Chief Biostatistician, in “Nevada changed how it measures COVID’s impact. Here’s why.”, The Las Vegas Review-Journal, 10/22/20

My preference is 7 days over 14 days, but, in any case, I would always like to see the raw data graphed along with the smoothed curves. The Georgia Rural Health Innovation Center provided an enlightening primer on moving averages this summer just as State Covid-19 cases spiked. Notice how the 7-day averaging takes out most of the noise in the data. The 14-day approach goes a bit too far in my opinion—blunting the spike at the end.

I advise that you pay attention to the nuances behind Covid-19 statistics, in particular the moving averages and how they get shifted from time to time.

P.S. My favorite method for smoothing is the exponentially weighted moving average. See it explained in this NIST Engineering Statistics Handbook post. It is quite easy to generate with a simple spreadsheet. With a smoothing constant of 0.2 (my preference) you get an averaging similar to a 5-period moving average, but far more responsive to recent results.
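For the spreadsheet-averse, here is a minimal sketch of that exponentially weighted moving average (the daily case counts are invented for illustration):

```python
def ewma(values, smoothing=0.2):
    """Exponentially weighted moving average, NIST-handbook style:
    s[t] = smoothing * y[t] + (1 - smoothing) * s[t-1],
    seeded with the first observation."""
    s = [values[0]]
    for y in values[1:]:
        s.append(smoothing * y + (1 - smoothing) * s[-1])
    return s

# Made-up daily counts with a spike at the end: the EWMA damps the
# day-to-day noise yet still turns upward promptly when the spike hits.
cases = [100, 95, 110, 105, 98, 102, 97, 140, 180, 220]
smoothed = ewma(cases)
print([round(s, 1) for s in smoothed])
```

Notice that every past observation still contributes, but with geometrically fading weight, which is what makes the EWMA more responsive than a flat moving-average window.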

No Comments

Engineer detects “soul crushing” patterns in “A Million Random Digits”

Randomization provides an essential hedge against time-related lurking variables, such as increasing temperature and humidity. It made all the difference in the success of my first designed experiment, done on a high-pressure reactor placed outdoors for safety reasons.

Back then I made use of several methods for randomization:

  • Flipping open a telephone directory and reading off the last four digits of listings
  • Pulling numbered slips of paper out of my hard hat (easiest approach)
  • Using a table of random numbers.

All of these methods seem quaint given the ubiquity of random-number generators.* However, this past spring at the height of the pandemic quarantine, software engineer Gary Briggs of Rand combated boredom by bearing down on his company’s landmark 1955 compilation of “A Million Random Digits with 100,000 Normal Deviates”.**

“Rand legend has it that a submarine commander used the book to set unpredictable courses to dodge enemy ships.”

Wall Street Journal

As reported here by the Wall Street Journal (9/24/20), Briggs discovered “soul crushing” flaws.

No worries, though: Rand promises to remedy the mistakes in their online edition of the book — worth a look if only for the enlightening foreword.

* Design-Expert® software generates random run orders via code based on the Mersenne Twister. For a view of leading-edge technology, see the report last week (9/21/20) by HPC Wire on IBM, CQC Enable Cloud-based Quantum Random Number Generation.
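As it happens, Python’s standard random module is also built on the Mersenne Twister, so a randomized run order can be sketched in a few lines (an illustration of the idea, not Design-Expert’s code):

```python
import random

# Randomize the run order of a 12-run design using Python's
# Mersenne Twister generator. A fixed seed makes the recipe
# sheet reproducible.
rng = random.Random(2020)
run_order = list(range(1, 13))  # standard order 1..12
rng.shuffle(run_order)          # Fisher-Yates shuffle under the hood
print(run_order)
```

Every run still appears exactly once; only the order in which you perform them is scrambled, which is what defeats time-related lurking variables.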

**For a few good laughs, see these Amazon customer reviews.

No Comments

Magic of multifactor testing revealed by fun physics experiment: Part Three—the details and data

Detail on factors:

  1. Ball type (bought for $3.50 each from Five Below (www.fivebelow.com)):
    • 4 inch, 41 g, hollow, licensed (Marvel Spiderman) playball from Hedstrom (Ashland, OH)
    • 4 inch, 159 g, energy high bounce ball from PPNC (Yorba Linda, CA)
  2. Temperature (equilibrated by storing overnight or longer):
    • Freezer at about -4 F
    • Room at 72 to 76 F with differing levels of humidity
  3. Drop height (released by hand):
    • 3 feet
    • 6 feet
  4. Floor surface:
    • Oak hardwood
    • Rubber, 3/4″ thick, Anti Fatigue Comfort Floor Mat by Sky Mats (www.skymats.com)

Measurement:

Measurements were done with the Android PhyPhox app “(In)Elastic”. Record T1 and H1, the time and (calculated) height of the first bounce. As a check, note H0, the estimated drop height, which is already known (specified by the low and high levels of factor C).

Data:

Std #  Run #  A: Ball type  B: Temp (deg F)  C: Height (feet)  D: Floor type  Time (seconds)  Height (centimeters)
1 16 Hollow Room 3 Wood 0.618 46.85
2 6 Solid Room 3 Wood 0.778 74.14
3 3 Hollow Freezer 3 Wood 0.510 31.91
4 12 Solid Freezer 3 Wood 0.326 13.02
5 8 Hollow Room 6 Wood 0.829 84.33
6 14 Solid Room 6 Wood 1.119 153.54
7 1 Hollow Freezer 6 Wood 0.677 56.17
8 4 Solid Freezer 6 Wood 0.481 28.34
9 5 Hollow Room 3 Rubber 0.598 43.92
10 10 Solid Room 3 Rubber 0.735 66.17
11 2 Hollow Freezer 3 Rubber 0.559 38.27
12 7 Solid Freezer 3 Rubber 0.478 28.03
13 15 Hollow Room 6 Rubber 0.788 76.12
14 11 Solid Room 6 Rubber 0.945 109.59
15 9 Hollow Freezer 6 Rubber 0.719 63.43
16 13 Solid Freezer 6 Rubber 0.693 58.96

Observations:

  • Run 7: The first drop produced a result >2 sec with a height of 494 cm. That is >16 feet! Obviously something went wrong. My guess is that the mic on my phone had trouble picking up the sound of the softer solid ball and missed a bounce or two. In any case, I redid the bounce.
    • Starting run 8, I will record Height 0 in Comments as a check against bad readings.
  • Run 8: Had to drop 3 times to get time registered due to such small, quiet and quick bounces.
    • Could have tried changing setting for threshold provided by the (In)Elastic app.
  • Run 14: Showing as an outlier for height, so it was re-run. Results came out nearly the same: 1.123 s (vs 1.119 s) and 154.62 cm (vs 153.54). After a square-root transformation these results fell into line. This makes sense physically, because distance fallen is a function of time squared.
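A side note on the height readings: assuming the app times the interval between impacts, simple projectile physics recovers the heights in the table, and it also explains why the square-root transformation linearizes things (h is proportional to t squared, so the square root of h is proportional to t):

```python
# After leaving the floor the ball is airborne for t seconds,
# rising for t/2 of that, so its peak height is
#   h = (1/2) * g * (t/2)**2 = g * t**2 / 8.
# Spot-checking table rows suggests (In)Elastic uses this relation.
G = 9.81  # gravitational acceleration, m/s^2

def bounce_height_cm(t_seconds):
    """First-bounce height (cm) implied by the time between impacts."""
    return G * t_seconds**2 / 8 * 100  # metres -> centimetres

# Std 6 (solid, room, 6 ft, wood): 1.119 s was recorded as 153.54 cm
print(round(bounce_height_cm(1.119), 1))  # ~153.5
```

The match within a fraction of a centimetre on multiple rows is reassuring, though the exact constant the app uses is my assumption.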

Suggestions for future:

  • Rather than dropping the balls by eye from a mark on the wall, release them from a mechanism that makes the drop height more consistent and precise
  • Adjust up for 3/4″ loss in height of drop due to thickness of mat
  • Drop multiple times for each run and trim off outliers before averaging (or use median result)
  • Record room temp to nearest degree

No Comments

Magic of multifactor testing revealed by fun physics experiment: Part Two—the amazing results

The 2020 pandemic provided a perfect opportunity to spend time doing my favorite thing: Experimenting!

Read Part One of this three-part blog to learn what inspired me to investigate the impact of the following four factors on the bounciness of elastic spheroids:

  A. Ball type: Hollow or Solid

  B. Temperature: Room vs Freezer

  C. Drop height: 3 vs 6 feet

  D. Floor surface: Hardwood vs Rubber

Design-Expert® software (DX) provides the astonishing result: Neither the type of ball (factor A) nor the differing surfaces (factor D) produced significant main effects on first-bounce time (directly related to height per physics). I will now explain.

Let’s begin with the Pareto Chart of effects on bounce time (scaled to t-values).

First observe the main effects of A (ball type) and D (floor surface) falling far below the t-Value Limit: They are insignificant (p>>0.05). Weird!

Next, skipping by the main effect of factor B (temperature) for now (I will get back to that shortly), notice that C—the drop height—towers high above the more conservative Bonferroni Limit: The main effect of drop height is very significant. The orange shading indicates that increasing drop height creates a positive effect—it increases the bounce time. This makes perfect sense based on physics (and common knowledge).

Now look at a multi-view Model Graphs for all four main effects.

The plot at the lower left shows how the bounce time increased with height. The least-significant-difference ‘dumbbells’ at either end do not overlap. Therefore, the increase is significant (p<0.05). The slope quantifies the effect—very useful for engineering purposes.

However, as DX makes clear by its warnings, the other three main effects, A, B and D, must be approached with great caution because they interact with each other. The AB and BD interactions will tell the true story of the complex relationship of ball type (A), their temperature (B) and the floor material (D).

See by the interaction plot how the effect of ball type depends on the temperature. At room temperature (the top red line), going from the hollow to the solid ball produces a significant increase in bounce time. However, after being frozen, the balls behaved completely opposite—hollow beating solid (bottom green line). These opposing effects caused the main effect of ball type (factor A) to cancel!
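To put rough numbers on the cancellation, here is the arithmetic on the bounce times from the Part Three data table (a sketch of the conditional effects, not Design-Expert’s coded-effect calculation):

```python
# Bounce times (s) from the Part Three data table, grouped by ball
# type and temperature (collapsing over drop height and floor type):
times = {
    ("hollow", "room"):    [0.618, 0.829, 0.598, 0.788],
    ("solid",  "room"):    [0.778, 1.119, 0.735, 0.945],
    ("hollow", "freezer"): [0.510, 0.677, 0.559, 0.719],
    ("solid",  "freezer"): [0.326, 0.481, 0.478, 0.693],
}
mean = {k: sum(v) / len(v) for k, v in times.items()}

# Effect of switching hollow -> solid, conditional on temperature:
effect_room    = mean[("solid", "room")]    - mean[("hollow", "room")]     # positive
effect_freezer = mean[("solid", "freezer")] - mean[("hollow", "freezer")]  # negative

# The overall main effect of ball type averages the two conditional
# effects, so the opposing signs nearly cancel it out.
main_effect = (effect_room + effect_freezer) / 2
print(f"{effect_room:+.3f} {effect_freezer:+.3f} {main_effect:+.3f}")
```

A solid ball bounces about 0.19 s longer at room temperature yet about 0.12 s shorter out of the freezer, leaving an overall effect of only about 0.03 s—lost in the noise.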

Incredibly (I’ve never seen anything like this!), the same thing happened with the floor surface: The main effect of floor type got washed out by the opposite effects caused by changing temperature from room (ambient) to that in the freezer (below 0 degrees F).

Changing one factor at a time (OFAT) in this elastic spheroid experiment leads to a complete fail. Only by going to the multifactor testing approach of statistical DOE (design of experiments) can researchers reveal breakthrough interactions. Furthermore, by varying factors in parallel, DOE reveals effects far faster than OFAT.

If you still practice old-fashioned scientific methods, give DOE a try. You will surely come out far ahead of your OFAT competitors.

P.S. Details on elastic-spheroid experiments procedures will be laid out in Part 3 of this series.

No Comments

Being kind pays off—wear a mask for the sake of others and earn positive returns

Last month I reported the positive news that people really do like to help others. I figured it would be best to focus on the kind behavior seen even in the most troubling times of tensions here in Minneapolis and around the world.

Since then the coronavirus flared up across the USA. Despite this, many Americans remain adamant against wearing masks, even though this would be kind to their fellow citizens.

I get it—no one likes to be told what to do, and the face coverings are a hot bother. My approach, being committed to kindness, is to always wear a mask in public indoor spaces while steering clear of anyone going without one, choosing times and stores that provide plenty of maneuvering room.

Two books coming out this month provide some hope that mask-averse people may come around to kindly covering up on Covid-19: Survival of the Friendliest and The Kindness of Strangers. They generated a buzz for kindness that got amplified by the Associated Press last week in their report on Not so random acts: Science finds that being kind pays off.

“Doing kindness makes you happier and being happier makes you do kind acts.”

Economist Richard Layard

For those of you who seek data on why people are kind or unkind, check out Oliver Scott Curry’s Kindlab. I love the graphic showing the scientist measuring the height of the “K”. Check it out for laughs! Then follow the link to “doing a kind act has a significant effect on well-being” for results gleaned from 27 experimental studies.

There are some caveats, however. The effects reported by Curry et al are small. Also, the individual studies tend to be underpowered—averaging only about a third of the number of subjects needed to detect effects of interest.

Furthermore, it’s clear from Kindlab and other sources (for example, my prior blog noted at the outset of this post), that many people lack a motivation to be kind.

For example, a twenty-something bar-hopper is very unlikely to wear an unfashionable, drink-inhibiting mask. Why bother to protect his or her peers from a disease that probably won’t kill them anyway (never mind the grandparents)?

How can this dangerously unkind behavior be turned around?

No Comments

Fun with colors

Download one of these color-identifier apps to your cell phone for some summer ‘staycation’ fun. Stop and measure the roses!

I did so with the top-rated Color Grab. It reported “Brilliant Rose” and “Golden Yellow” for the flowers in my vase.

The ‘heads-up’ about Color Grab came from Oliver Thunich—a master statistician who teaches DOE for our German affiliate Statcon. He came up with an innovative way to demonstrate mixture design for optimal formulation by blending three juices: clear apple, passion fruit, and pink grapefruit.

Using Design-Expert® software Oliver developed an experiment with 20 recipes that varied the ingredients in an optimal way to model the resulting color in RGB (three responses).
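To illustrate the constraint that makes mixture designs special, here is a sketch of a simple three-component simplex lattice (Oliver’s computer-optimized 20-recipe design would differ; this just shows the sum-to-one structure):

```python
from itertools import product

# A {3, 4} simplex lattice for three juices: every blend of apple (x1),
# passion fruit (x2), and grapefruit (x3) in steps of 1/4 whose
# fractions sum to exactly 1 -- the defining mixture constraint.
blends = [(a / 4, b / 4, c / 4)
          for a, b, c in product(range(5), repeat=3)
          if a + b + c == 4]

# Each RGB response would then be fit with a Scheffe mixture model,
# e.g. quadratic:
#   y = b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3
print(len(blends))  # 15 candidate blends
```

Because the fractions must total one, the factors cannot be varied independently, which is exactly why mixture designs replace ordinary factorials for formulation work.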

Based on the results, I came up with the ideal formulation (flagged on the 3D graph) to produce a Pure Red color with as little of the expensive passion fruit as possible.

My high point in coloring came in kindergarten when the teacher sent me home after coloring with a black crayon on black paper—just too dark by her reckoning. However, now that I know that color can be engineered, I may pick it up again. In any case, I do appreciate an array of red, green and blue (i.e., RGB) and all that’s in between, especially in a floral display.

P.S. A hummingbird just flew up to my home-office screen window—just a foot away from where I sit.  It would be interesting to see what the color identifier comes up with for this iridescent-feathered friend.

No Comments