My previous comments on problem solving led me to think about how I might illustrate the use of statistical methods in directly solving engineering problems. I have been involved in many interesting and challenging problems in my 30 years in the automotive industry. The recent media coverage of both the Toyota problem with sticking accelerator pedals and the BP oil spill in the Gulf of Mexico caused me to think back to my involvement in a similarly high profile case – the Firestone tyre crisis of 2000/1, which resulted in around 300 fatalities and a $3Bn recall of 20 million tyres. There are many similarities in all three of these cases (not least the role of the media and government agencies), but in the case of Firestone, I will show how a range of statistical methods was used (from simple EDA methods like box plots, to more sophisticated methods such as competing risk proportional hazard regression) to get to the root cause of the problem, get ahead of the game, and decide what actions to take before the regulatory authorities told us what to do.
Statistically based initiatives aimed at solving engineering and quality problems often over-emphasise empirical methods at the expense of deductive logic; the Six Sigma movement is a good example of this – its problem solving algorithm of Define, Measure, Analyze, Improve, and Control (see for example http://en.wikipedia.org/wiki/Six_Sigma for some background to what these steps entail) puts great store by solving problems through measuring lots of characteristics and analysing the resulting data. However, Six Sigma has nothing to say about eliminating hypotheses through deductive logic. In my Brighton talk, I will introduce a simple method to facilitate this step in problem solving and root cause determination, so that an empirical approach using statistical methods can then be better targeted. This method is not taught in Six Sigma classes (groping around in Minitab output for “significant” p-values seems to be the preferred approach), or even referenced in statistical texts – which is strange, given the central role of statistical methods in problem solving generally.
I have started to think about what I want to say on Statistical Engineering at the RSS conference in Brighton next month. I have been thinking a lot about the iterative learning cycle involving the interchange between inductive and deductive logic; as statisticians, do we pay enough attention to this distinction? It seems to me that the scientific context of the problems we are involved in solving should play a central role in this iteration. I will say something about this with regard to engineering problems… but what do statisticians working in other fields think about it? Your thoughts ahead of conference would be welcome…
Looking forward to the YSS pre-conference training day. This year we have presentations on
- A young statistician's experience of the RSS conference
- Creating posters with impact
- How to market yourself and network at the conference
In the afternoon there will be a workshop on how to write a good conference abstract. The session aims to
- provide examples of good and bad abstracts
- explain the conference committee guidelines for accepting abstracts
- involve working in groups to create an abstract for an article from Significance, which will then be reviewed against the committee guidelines.
Prizes will be awarded for the best abstract!
See you there,
Paul Baxter (YSS Secretary)