Well, I wondered what Taguchi Optimization was...and sure enough, Wikipedia had an entry that seems apt.
Taguchi argued that quality engineering should start with an understanding of the cost of poor quality in various situations. In much conventional industrial engineering, the cost of poor quality is simply represented by the number of items outside specification multiplied by the cost of rework or scrap. However, Taguchi insisted that manufacturers broaden their horizons to consider cost to society. Though the short-term costs may simply be those of non-conformance, any item manufactured away from nominal would result in some loss to the customer or the wider community through early wear-out; difficulties in interfacing with other parts, themselves probably wide of nominal; or the need to build in safety margins. These losses are externalities and are usually ignored by manufacturers; in the wider economy, the Coase theorem predicts that they prevent markets from operating efficiently. Taguchi argued that such losses would inevitably find their way back to the originating corporation (in an effect similar to the tragedy of the commons), and that by working to minimise them, manufacturers would enhance brand reputation, win markets and generate profits.
Such losses are, of course, very small when an item is near to nominal. Donald J. Wheeler characterised the region within specification limits as that where we "deny that losses exist". As we diverge from nominal, losses grow until the point where they are too great to deny, and it is there that the specification limit is drawn. All such losses are, as W. Edwards Deming would describe them, "unknown and unknowable", but Taguchi wanted to find a useful way of representing them within statistics. Taguchi specified three situations:
- Larger the better (for example, agricultural yield);
- Smaller the better (for example, carbon dioxide emissions); and
- On-target, minimum-variation (for example, a mating part in an assembly).
The first two cases are represented by simple monotonic loss functions. In the third case, Taguchi adopted a squared-error loss function (sketched in code after this list) on the grounds that:
- It is the first symmetric term in the Taylor series expansion of any reasonable, real-life loss function, and so is a "first-order" approximation;
- Total loss is measured by the variance; as variance is additive, it is an attractive model of cost; and
- There was an established body of statistical theory around the use of the least squares principle.
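To pin the third case down, here is a minimal sketch in Python of the quadratic loss L(y) = k * (y - m)^2 and its average over a sample, which decomposes as k * (variance + squared bias); the variance term is what makes the second ground above work. The function names, the constant k, and the sample data are my own illustrative choices, not anything prescribed by Taguchi.

```python
import statistics

def taguchi_loss(y, target, k):
    """Quadratic loss for one item: L(y) = k * (y - target)^2."""
    return k * (y - target) ** 2

def expected_loss(values, target, k):
    """Average loss over a sample: k * (variance + (mean - target)^2)."""
    mean = statistics.fmean(values)
    var = statistics.pvariance(values, mu=mean)
    return k * (var + (mean - target) ** 2)

# Illustrative data: shaft diameters (mm) against a 10.00 mm nominal,
# with k scaled so that a 0.05 mm deviation costs one currency unit.
diameters = [9.98, 10.02, 10.01, 9.97, 10.03]
k = 1 / 0.05 ** 2

# The two computations agree: averaging per-item losses gives the same
# number as the variance-plus-bias decomposition (0.216 here).
print(statistics.fmean(taguchi_loss(d, 10.0, k) for d in diameters))
print(expected_loss(diameters, 10.0, k))
```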
The squared-error loss function had been used by John von Neumann and Oskar Morgenstern in the 1930s. There is a theorem here, I think - help appreciated.
Though much of this thinking is endorsed by statisticians and economists in general, Taguchi extended the argument to insist that industrial experiments seek to maximise an appropriate signal-to-noise ratio, representing the magnitude of the mean of a process compared to its variation. Most statisticians believe Taguchi's signal-to-noise ratios to be effective over too narrow a range of applications, and they are generally deprecated.
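For concreteness, the three signal-to-noise ratios usually attributed to Taguchi (smaller-the-better, larger-the-better and nominal-the-best) can be sketched as follows. The formulas are the textbook versions; the function names and the sample run are my own illustrative choices.

```python
import math
import statistics

def sn_smaller_is_better(values):
    """S/N = -10 * log10(mean of y^2); larger when responses sit near zero."""
    return -10 * math.log10(statistics.fmean(y ** 2 for y in values))

def sn_larger_is_better(values):
    """S/N = -10 * log10(mean of 1/y^2); larger when responses are uniformly big."""
    return -10 * math.log10(statistics.fmean(1 / y ** 2 for y in values))

def sn_nominal_is_best(values):
    """S/N = 10 * log10(mean^2 / variance): the mean is the 'signal',
    the variation around it the 'noise'."""
    mean = statistics.fmean(values)
    return 10 * math.log10(mean ** 2 / statistics.variance(values))

# Illustrative runs of a hypothetical experiment near a nominal of 10.0:
print(sn_nominal_is_best([10.1, 9.9, 10.0, 10.2, 9.8]))  # about 36 dB
```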
I would say the optimization criteria will in general vary - that seems obvious. The question of "loss to society" has political and economic implications, and I would say that, when those implications are examined, they indict the libertarian perspective and validate in practice what we anecdotally know to be true.