Scroll sawers put blades to the statistical test by cutting out ducks


Years ago I helped Quality Assurance Manager John Engler solve a tricky issue at Robinson Rubber via design of experiments (DOE). He contacted me last fall to help him apply DOE to a nagging question about scroll sawing: Does it pay to buy pricier blades?

We worked together to design a simple-comparative randomized-block experiment on 10 competing blades. John enlisted 20 fellow hobbyists in his NorthStar Scrollers club; each of the 21 scrollers (John included) cut out a duck from pine (see pattern below) using the selected blades (such as the one taped to the board) in a random order.
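For readers curious how such a run order can be laid out, here is a minimal Python sketch of a randomized-block plan: every scroller (block) cuts with every blade, but in an independently shuffled order. The blade and scroller counts come from the post; the seed and naming are illustrative, and this is not the actual Stat-Ease design layout.

```python
import random

BLADES = list(range(1, 11))   # the 10 competing blades
SCROLLERS = 21                # John plus 20 club members

random.seed(42)  # fixed seed so the plan is reproducible

# Randomized-block design: each scroller gets an independently
# shuffled blade order, washing out learning and fatigue effects.
plan = {f"scroller_{s:02d}": random.sample(BLADES, len(BLADES))
        for s in range(1, SCROLLERS + 1)}

for scroller, order in plan.items():
    print(scroller, order)
```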

They then rated the results on a 1-to-9 scale (the higher the better) for speed of cut, blade jumpiness, fuzzies (undesirable!), edge smoothness, burns, and line following.

[Photo: scroll saw ready to cut out a duck]

The blades differed significantly on all attributes at p < 0.0001, except for line following (p = 0.3419). For the most critical measure, speed of cut, blades 3, 8, and 9 stood above all the others on average.

The power of running 21 replicates (widely spread, as indicated by the red dots on the plot) and, furthermore, of blocking out the scroller-to-scroller differences shows up in the narrowness of the least significant differences (based on p = 0.05).
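To show the shape of this analysis, here is a hedged Python sketch of a randomized-block ANOVA plus an LSD calculation, using statsmodels and scipy. The ratings are randomly generated stand-ins, not the club's data, so the printed p-values and LSD will not match the results above.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one speed-of-cut rating (1-9) per
# scroller x blade combination, standing in for the club's ratings.
rng = np.random.default_rng(1)
df = pd.DataFrame(
    [(s, b, rng.integers(1, 10)) for s in range(1, 22) for b in range(1, 11)],
    columns=["scroller", "blade", "speed"],
)

# Blade is the treatment, scroller is the block; both enter as categorical terms.
model = ols("speed ~ C(blade) + C(scroller)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Least significant difference between two blade means at p = 0.05:
#   LSD = t(0.975, df_resid) * sqrt(2 * MSE / n_blocks)
n_blocks = 21
lsd = stats.t.ppf(0.975, model.df_resid) * np.sqrt(2 * model.mse_resid / n_blocks)
print(f"LSD at p = 0.05: {lsd:.2f}")
```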

Accounting for all the attributes via Stat-Ease software's multiple-response optimization, these three blades held up overall, with number 3 the winner because it costs less than the other two.
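Multiple-response optimization of this kind is commonly built on desirability functions (the Derringer-Suich approach), and a bare-bones version of the idea is easy to sketch in Python. The mean ratings below are invented for illustration only; the actual Stat-Ease optimization also lets you weight attributes and set per-response goals.

```python
import numpy as np

# Hypothetical mean ratings per blade on each attribute, all on the
# 1-9 "higher is better" scale used in the experiment.
attributes = ["speed", "jumpiness", "fuzzies", "smoothness", "burns", "line"]
means = {
    3: [8.1, 7.0, 6.5, 7.2, 6.8, 5.9],
    8: [7.9, 6.4, 7.0, 6.9, 7.1, 6.0],
    9: [8.0, 6.8, 6.2, 7.5, 6.6, 5.7],
}

def desirability(rating, lo=1.0, hi=9.0):
    """Linear 'larger is better' desirability: 0 at the scale floor, 1 at the ceiling."""
    return (rating - lo) / (hi - lo)

# Overall desirability D = geometric mean of the per-attribute desirabilities,
# so a blade that fails badly on any one attribute scores poorly overall.
for blade, m in means.items():
    d = np.array([desirability(r) for r in m])
    print(f"blade {blade}: overall D = {d.prod() ** (1 / len(d)):.3f}")
```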

After I reported my findings to the group, John laid out a number of caveats:

  • Experience of the scroll sawers
  • Type of wood (e.g., something much harder than pine)
  • The life of the blades (important to consider for the cost)

All in all, this planned experiment proved to be a big hit with the NorthStar Scroller hobbyists. What impressed me was their depth of knowledge about scroll-saw blades, including why tooth patterns and orientation produce such significant differences. I was also struck by how some individuals could tell right away which blades worked best, even before seeing the full set of data. This reinforces my feeling that laying out and analyzing experiments works best by combining the know-how of a DOE expert (like me) with that of subject matter experts (not me in this case, far from it!).

“This went much better for me than I thought it might and I learned some things about blades along the way. This was fun!”

–Helen (a NorthStar Scroller blade-tester)