A Quick Thought on Analytical Uncertainty

When I first started working for Big Aerospace, I was shocked at the standard approach to presenting experimental or test results. A single data point at a particular frequency or temperature was taken as gospel. Every distribution was assumed to be normal. Steps were taken to avoid multiple measurements. And I have yet to see, in my four and a half years of being surrounded by rocket scientist types, a plot, graph, or PowerPoint presentation with error bars. I find that hilarious, since it seems like every single engineer I meet has one color of belt or another in Six Sigma-jitsu.
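Here's a minimal sketch (in Python, with made-up numbers) of what I mean: take a handful of repeated measurements at each test condition, then report the mean with error bars instead of a lone point taken as gospel.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical example: five repeated readings at each test frequency,
# rather than a single measurement per condition.
rng = np.random.default_rng(0)
freqs_hz = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
readings = 10.0 + 0.5 * rng.standard_normal((5, freqs_hz.size))  # 5 trials x 5 freqs

means = readings.mean(axis=0)
# Standard error of the mean: sample standard deviation / sqrt(n)
sems = readings.std(axis=0, ddof=1) / np.sqrt(readings.shape[0])

plt.errorbar(freqs_hz, means, yerr=sems, fmt="o", capsize=4)
plt.xscale("log")
plt.xlabel("Frequency (Hz)")
plt.ylabel("Measured response (arbitrary units)")
plt.title("Means of repeated measurements with +/- 1 SEM error bars")
plt.show()
```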

A mathematician friend of mine from undergrad wound up applying to medical school a couple of years ago and is about halfway through his first year at a school just down the street from me (probably my first choice, too). We met up for lunch a couple of weeks ago, and he talked about how a lot of medical research takes, at best, a shoddy approach to statistics, although his description was a bit more florid. I had to suppress a small laugh when I told him the same sort of thing happens all over the place in engineering companies too. I have to admit that a lot of scientific research gets a pass on statistical and logical rigor. It shouldn’t happen. The scientific community really should police itself, but it doesn’t. Too often, purported research from people like Hwang Woo-Suk or Andrew Wakefield gets through the filter for various reasons. I’m getting a little off-track here, and I don’t want this to turn into a rant about the failures of the gatekeepers of scientific fact – and I have to get back to work.

Inspired by this post by Petulant Skeptic on the perils of the p-value, I decided to start teaching myself statistics at work. This is partly because I don’t want to be a moron about statistics as a physician someday, and partly because I’m working on a couple of projects at work that really do require a statistical approach. Sadly, everyone I’ve turned to at my company for a basic discussion of statistics, particularly measurement and experimental uncertainty, hasn’t really had a clue what I was asking about or why it was important. That, I suppose, brings me to the real point of this post, which is to share a useful reference on uncertainty from NIST that I ran across. I found it a useful read and figured others might be interested.
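The core of the NIST-style Type A evaluation is simple enough to sketch in a few lines: the standard uncertainty of the mean of repeated readings is the sample standard deviation divided by √n, and the expanded uncertainty multiplies that by a coverage factor (k = 2 gives roughly 95% coverage for near-normal data). A quick illustration with made-up readings:

```python
import math

# Minimal sketch of a Type A uncertainty evaluation, in the spirit of the
# NIST guidance (hypothetical repeated readings of a single quantity).
readings = [9.98, 10.02, 10.01, 9.97, 10.03, 10.00]  # made-up data

n = len(readings)
mean = sum(readings) / n
# Sample standard deviation (n - 1 in the denominator)
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
# Standard uncertainty of the mean (Type A)
u = s / math.sqrt(n)
# Expanded uncertainty with coverage factor k = 2 (~95% for near-normal data)
k = 2
U = k * u

print(f"result = {mean:.3f} +/- {U:.3f} (k = {k})")
```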


One Response

  1. I like confidence intervals better than p-values. It also annoys me to no end that a paper on a smallish sample will get rejected over a p-value of 0.06. Most reviewers have forgotten what a p-value actually is. It makes me sad.
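To illustrate the commenter’s point with a quick sketch (hypothetical data, using scipy): a p-value just above 0.05 and a 95% confidence interval that barely includes zero are the same finding, but the interval also shows the range of plausible effect sizes.

```python
import numpy as np
from scipy import stats

# Hypothetical small sample: does the effect differ from zero?
rng = np.random.default_rng(1)
sample = rng.normal(loc=0.8, scale=1.5, size=12)  # made-up effect data

# One-sample t-test against a null mean of zero
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)

# 95% confidence interval for the mean, via the t distribution
mean = sample.mean()
sem = stats.sem(sample)
ci_low, ci_high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)

print(f"p = {p_value:.3f}")
print(f"95% CI for the mean: [{ci_low:.2f}, {ci_high:.2f}]")
# The interval shows the plausible range of effect sizes,
# which the p-value by itself hides.
```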
