Abstract

Objectives

...

Methods

A naturalistic, descriptive, mixed-methods study was conducted in three stages:

...

Descriptive statistics were used to analyse the audit data; the Kappa statistic was used to measure inter-observer agreement as a test of reliability. Qualitative description was used for the qualitative analyses.

Results

Content validity was supported by both the literature and expert review. Face validity and usability were also supported, although both were affected by the volume of items and the relevance of the exemplar behaviours to the local context of practice. Use of the Kappa statistic (K) to measure inter-observer agreement revealed that 30 illustrative items were acceptable, with ‘moderate’ or ‘good’ agreement (K 0.41 or higher); 14 items had ‘fair’ agreement (K 0.40 to 0.00); of the remaining items, 4 had poor inter-observer agreement (K < 0.00), and K could not be calculated for 4 items. Inter-observer agreement was acceptable for 70.5% (n = 11) of content items, 58.8% (n = 10) of process items and 44.4% (n = 8) of environment items.
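For readers unfamiliar with the agreement statistic used above, Cohen's kappa compares observed agreement between two raters with the agreement expected by chance. The abstract does not include the study's computation, so the following is a minimal illustrative sketch; the rating data are invented for the example, not taken from the study:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same set of items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each rater's marginal frequencies.
    """
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("raters must score the same non-empty item set")
    n = len(rater_a)
    # Observed proportion of items where the raters agree
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of the raters' marginal proportions per category
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(freq_a) | set(freq_b))
    if p_e == 1.0:  # both raters used a single identical category
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical binary ratings (1 = behaviour observed, 0 = not observed)
obs_1 = [1, 1, 0, 1, 0, 0, 1, 1]
obs_2 = [1, 1, 0, 0, 0, 1, 1, 1]
k = cohen_kappa(obs_1, obs_2)
print(round(k, 3))  # 0.467 -> 'moderate' agreement under the K >= 0.41 threshold
```

Under the thresholds reported above, this illustrative pair of observers (K ≈ 0.47) would fall in the ‘moderate’ band, so the item would be retained as acceptable.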
