We’ve seen that the natural human tendency to avoid blame, and organisational responses that suppress reporting, can affect how useful our incident data set is. Another aspect affects the quality of the analysis itself – how the review team is built or selected.
A further dimension – one that shapes what the analysis of an incident actually produces – is the way Incident Forms are structured and teams are formed. There is a long history of wanting to compare incident records between business units and organisations. This has led to standards that, by and large, focus on defining categories, with the categories themselves drawn from input across many diverse industries and businesses.
Coming from this are forms and guidance for incident analysis that are:
- Category based – focused on the injury sustained, to support comparative reporting;
- Standards driven – drawing on sources such as AS1885.1 – which “locks in” a requirement for a lot of data, slowing down form completion. These standards also use general terms, so Supervisors trying to fill out the form end up selecting “Other” a lot, because they are unfamiliar with what the categories are meant to mean (they aren’t terms usually included in a firm’s language);
- Strictly channelled through a workflow so that, based on severity, certain levels of the Company must be notified within set time frames; and
- Harder to follow up and build upon.
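The severity-based escalation described above is essentially a lookup: severity in, notification list and deadline out. A minimal sketch, purely illustrative – the severity levels, roles, and time frames below are assumptions for the example, not taken from AS1885.1 or any other standard:

```python
# Hypothetical escalation rules: who must be notified, and how quickly,
# for each incident severity. All values here are illustrative assumptions.
ESCALATION_RULES = {
    "first_aid": {"notify": ["Supervisor"], "within_hours": 24},
    "medical":   {"notify": ["Supervisor", "Site Manager"], "within_hours": 8},
    "lost_time": {"notify": ["Site Manager", "HSE Manager"], "within_hours": 4},
    "serious":   {"notify": ["HSE Manager", "Executive"], "within_hours": 1},
}

def escalation_for(severity: str) -> dict:
    """Return the notification list and deadline for a given severity."""
    return ESCALATION_RULES[severity]

print(escalation_for("serious"))
# e.g. a "serious" incident requires the HSE Manager and an Executive
# to be notified within one hour.
```

Even a simple table like this makes the point: the rules themselves are easy to state, but the rigidity they impose on the form and the workflow is where the friction comes from.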
When the process requires a team to form – as it does for more serious incidents – it is very difficult to find a volunteer. Most members of the firm don’t want to be the one to lay blame on a colleague. Similarly, when the incident occurred in your area of control, you may not want to be exposed to ridicule during an analysis of what went wrong.
Complex analysis methods can also generate problems. There are some great, technically sound investigation “tools” commercially available – but (and this is a big one) if you don’t use them often they can prove very confusing, and the team will “bog down” trying to work out how to use the tool. The truth gets lost in the process, and the recommendations end up based on the language in the tool rather than the reality of what went wrong.
Ask yourself these questions:
- Are your Incident Report forms based on an external standard, and as such have they kept the language and terms present in that standard?
- Is there plenty of space, and encouragement, for providing a good description of the incident, with some guidance about the key points to include for it to be a good description?
- Does the Incident Report require quick escalation (see the earlier post on organisational response) that requires use of a complex tool?
For the final post in this series – click here.