Griggs: Chaos, butterflies and numbers that lie
In the 1960s, MIT professor and meteorologist Edward Lorenz found something strange when he rounded the number .506127 to .506 in a weather modeling program — it shifted his weather predictions over the following two months. This change of just 0.000127 led to his “Butterfly Effect,” the idea that a tornado in Texas could be set off by a butterfly flapping its wings in Brazil — yes, the one mentioned in Jurassic Park.
Lorenz taught us that precision matters. Trouble starts when “fake” precision uses exact-looking numbers to dress up something that cannot honestly be measured that precisely. The confusion: an extremely precise reported number can be true or misleading, and we often cannot tell which.
• Video games boost well-being by 18%.
• 55% with credit cards are missing out on free rewards.
• 44% say they have reduced food waste in 2020.
• [Our product] kills 99% of germs left after brushing.
• 91% less back pain [with our product]
• 62% reduction in symptoms [with our drug]
• 87% increase in sales [after our training]
Chaos Theory (deterministic chaos) is the study of seemingly random behavior inside systems that are governed by definite laws. It is a more formal take on Lorenz’s butterfly effect, built on feedback loops, interconnectedness and underlying patterns. The professor would cringe at the sloppy, misdirected and misunderstood use of statistics in general circulation today.
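Lorenz’s rounding accident is easy to reproduce. Here is a minimal sketch (my own illustration, not Lorenz’s original program) that steps his famous three-variable system forward twice with simple Euler integration, once starting from .506127 and once from the rounded .506, and watches the tiny difference explode:

```python
# Sensitive dependence on initial conditions in the Lorenz system,
# using the classic parameters (sigma=10, rho=28, beta=8/3) and a
# basic Euler integrator. Illustrative only.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one Euler step of size dt."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def simulate(x0, steps=3000):
    """Run from (x0, 1, 1) and return the x value at every step."""
    x, y, z = x0, 1.0, 1.0
    xs = []
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
        xs.append(x)
    return xs

xs_full = simulate(0.506127)   # full-precision start
xs_round = simulate(0.506)     # rounded start, as in Lorenz's rerun

# The largest gap between the two runs dwarfs the 0.000127 we changed.
gap = max(abs(a - b) for a, b in zip(xs_full, xs_round))
print(gap)
```

The two runs track each other closely at first, then diverge completely — the same behavior that convinced Lorenz long-range forecasts were doomed.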
How can you and your business avoid lyin’ statistics? The responsibility is twofold. Researchers and marketers can report results honestly, with enough context to show what the numbers actually imply. The public and consumers can slow down long enough to dissect and study the numbers.
“There are three types of lies — lies, damn lies, and statistics.”
“A single death is a tragedy; a million deaths is a statistic.”
“The average American has one breast and one testicle.”
A larger sample size usually means more reliable results. But larger samples require more time and money, and that sizzling press release or social media post has to wait. The lie comes when research that doesn’t meet the “truth” criteria gets popularized and is later debunked. Bad published statistics have made people question whether exercise, sunshine, coffee or even oatmeal is good or bad for them.
Face or logical validity asks whether an idea or a suggested causal relationship (A leads to B) passes the smell test. Did grapefruit, chocolate or tapeworm pills cause people to lose weight? Face validity is the more subjective of the four types of research validity (construct, content, face, criterion). The purpose is to see if the test or study actually measures what it claims to measure — customer migration, regional political preferences, measures of depression in teenagers, nightlight impacting wildlife.
Levels of significance are reported in most peer-reviewed, journal-published studies. Authors list the level of significance used to judge whether the idea (treatment) had a real effect on the outcome. These thresholds are usually .10, .05 or .01, meaning results at least this strong would show up by chance alone about 1 time in 10, 1 in 20 or 1 in 100, respectively. For you and me, the question is whether the researchers (or marketers) truly have something that works (.01) or whether they ‘loosened the screws’ to get the results they wanted (.10).
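That “1 in 20” idea can be seen in a toy simulation (my own made-up example, not from any study): flip a fair coin 100 times, over and over, and count how often pure chance produces a lopsided result that a loose threshold would call “significant.”

```python
# Even when nothing is going on (a fair coin), a cutoff near the .05
# level flags a "significant" result in roughly 1 experiment in 20.
# The 60-heads / 40-heads cutoff below is an illustrative choice that
# chance crosses about 5-6% of the time for 100 fair flips.
import random

random.seed(42)  # fixed seed so the sketch is repeatable

def experiment(flips=100):
    """Flip a fair coin `flips` times; return the number of heads."""
    return sum(random.random() < 0.5 for _ in range(flips))

trials = 10_000
false_alarms = sum(1 for _ in range(trials)
                   if not 41 <= experiment() <= 59)
rate = false_alarms / trials
print(rate)  # roughly 1 in 20
```

Run enough underpowered studies and some will clear the bar by luck alone — which is exactly why a .10 threshold should make you suspicious.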
At times we begin with marvelously accurate measures and then apply mean, median and mode (average, midpoint, most occurring number) to alter the meaning into something we want or plan to use for shock value. A Psychology Today article said that over the past 157 years of measurement, average human body temperature has dropped from 98.6º F to 97.9º F (Walter Veit) — that’s true precision with no rounding errors. Another researcher bemoans the loose ends in psychology research: “[since the 2011 study] many of our findings are insufficiently precise and insufficiently generalizable.” (Hans Rocha IJzerman)
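A quick sketch with made-up salary numbers shows how the choice of “average” changes the story — one outlier drags the mean while the median barely notices:

```python
# Toy illustration (invented numbers): the same "average salary"
# question, three different answers depending on the statistic chosen.
from statistics import mean, median, mode

salaries = [42_000, 45_000, 45_000, 48_000, 52_000, 55_000, 1_000_000]

print(mean(salaries))    # pulled far upward by the single outlier
print(median(salaries))  # the middle value: 48_000
print(mode(salaries))    # the most frequent value: 45_000
```

A press release quoting the mean here could honestly claim an “average salary” more than triple what the typical person earns.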
Don’t be tricked into believing something that is “statistically” true yet false on the face validity scale — those butterflies are flapping furiously. Besides, I got some concrete data from a fortune cookie that read, “42.7% of all statistics are made up.”
Rick Griggs is a former Intel Corp. training manager and inventor of the rolestorming creativity tool. He runs the 10-month Leadership Mastery Academy. rick.griggs83@gmail.com or 970-690-7327.