A lot has been discussed about feedback regarding behavior that could and should change. What kind of feedback should be given to somebody who makes a small mistake that causes a big problem? I find that many times I am asked to correct an issue that basically happened only because somebody forgot to dot an “i” or cross a “t”. I feel it is wrong to “pile it on”; the person usually feels bad enough because they are already catching it from the people who have to deal with the mistake’s effects. I also feel feedback should not be handed out based on how severe the effect was, or simply as a reaction to whatever is hot. I find that top performers make more mistakes because they take on the most tasks and the hardest tasks. Should they get feedback based on their mistakes?
I would also like a little more discussion on this type of behavior. You describe behavior as “what people say and how they say it.” Is behavior in the model also “the parts they make” or “the code they write” or “the software they install” or “the burgers they fry”?

Re: Feedback on human errors.
[quote="jkanold"]What kind of feedback should be given to somebody who makes a small mistake that causes a big problem? [/quote]
The first thing to keep in mind is that feedback is not about the past. Feedback is about encouraging future effective behavior. Feedback is not a reprimand.
Second, my guess is that in this kind of situation you don't need to give feedback. It sounds like the person who caused the problem is already aware that they screwed up. In that case, the best thing to do might be to catch them doing something right, and go with "Hey, I know you're having a tough week, but can I give you some feedback? Getting that bugfix checked in and working with the test team to make sure it fixed the problem went a long way toward helping us recover. It shows me that you're still concerned, still working toward making us successful. Thanks, and please keep it up."
Third, if you've been giving feedback regularly, it's no big deal. If you need to say "Not following the test procedure causes problems for our customers, creates more work for the release team, and gives the development team a bad reputation" and you regularly give feedback, they'll be expecting it. It's likely they'll have already thought about how to respond with what they can do differently.
Fourth, if a small error can create a major problem, it may be that the individual contributor isn't the problem, but rather the whole process. Diligence is certainly important, [u]and[/u] critical processes should have a quality control step. Somebody should at least count the number of items in the box, to see if it matches the packing list before the box is sealed. Somebody should check the bottle against the prescription to make sure the right medication was dispensed. Somebody should check the closeout photos to make sure the foam is correctly bonded. Somebody should independently regression test a new system (or better, review the automated test results every morning from the continuous integration regression suite) before it even goes to the installation team.
The QC should scale to criticality: If somebody gets the wrong burger, that's not a big issue. Just make a new one, quick. If somebody gets the wrong medication, that's potentially fatal, so both the dispenser and the checker need to be diligent about getting the right meds.
Finally, behavior is stuff you can observe that a person does. The parts they make and the code they write are a product, not behavior. The things they do to write code or fab parts are behavior, and that includes things like not reaching across the safety bar to peel that part out. Telling somebody "That part is bad" is not feedback in the MT model. Telling somebody "When you reach past the safety bar on that machine, I get scared that we're going to have to make a huge insurance payment to your widow" is great feedback to give, if it's delivered with a smile. Talk about what they [u]did[/u] as the behavior; talk about the fact that the part is unusable as one of the results.
Is that helpful?
tc>
Feedback on human errors.
Yes Tom, that was very helpful. I guess it also ties into "don't play whack-a-mole" and "focus on the important, not the urgent." I do have a hard time explaining to peers that it is not worth spending fifty hours in meetings and hundreds of hours on band-aid solutions for a one-of-a-kind small error. I can ignore it for a few days until the next urgency comes up, but I sorta feel that is not quite right either. On the other hand, if the whack-a-mole solution is not going to happen anyway, what's the difference? I should stick to my own priorities and the model.