
dc.contributor.author  Catts, Stanley V.  en_US
dc.contributor.author  Frost, Aaron D.J.  en_US
dc.contributor.author  O'Toole, Brian I.  en_US
dc.contributor.author  Carr, Vaughan J.  en_US
dc.contributor.author  Lewin, Terry  en_US
dc.contributor.author  Neil, Amanda L.  en_US
dc.contributor.author  Harris, Meredith G.  en_US
dc.contributor.author  Evans, Russell W.  en_US
dc.contributor.author  Crissman, Belinda  en_US
dc.contributor.author  Eadie, Kathy  en_US
dc.date.accessioned  2017-04-24T09:57:05Z
dc.date.available  2017-04-24T09:57:05Z
dc.date.issued  2011  en_US
dc.identifier.issn  0004-8674  en_US
dc.identifier.doi  10.3109/00048674.2010.524621  en_US
dc.identifier.uri  http://hdl.handle.net/10072/44421
dc.description.abstract  Aim: Clinical practice improvement carried out in a quality assurance framework relies on routinely collected data using clinical indicators. Herein we describe the development, minimum training requirements, and inter-rater agreement of indicators that were used in an Australian multi-site evaluation of the effectiveness of early psychosis (EP) teams. Methods: Surveys of clinician opinion and face-to-face consensus-building meetings were used to select and conceptually define indicators. Operational definitions were iteratively refined until clinicians could be quickly trained to code the indicators reliably. Percentage agreement with expert consensus coding was calculated from ratings of paper-based clinical vignettes embedded in a 2-h clinician training package. Results: Consensually agreed conceptual definitions for the seven clinical indicators judged most relevant to evaluating EP teams were operationalized for ease of training. Brief training enabled typical clinicians to code the indicators with acceptable percentage agreement (60% to 86%). For indicators of suicide risk, psychosocial function, and family functioning, this level of agreement was only possible against less precise 'broad range' expert consensus scores. Estimated kappa values indicated fair to good inter-rater reliability (kappa > 0.65). Inspection of contingency tables (coding category by health service) and modal scores across services suggested consistent, unbiased coding across services. Conclusions: Clinicians are able to agree on what information is essential for the routine evaluation of clinical practice. Simple indicators of this information can be designed, and their coding rules can be reliably applied to written vignettes after brief training. The real-world feasibility of the indicators remains to be tested in field trials.  en_US
dc.description.peerreviewed  Yes  en_US
dc.description.publicationstatus  Yes  en_US
dc.language  English  en_US
dc.language.iso  en_US
dc.publisher  Informa Healthcare  en_US
dc.publisher.place  United Kingdom  en_US
dc.relation.ispartofstudentpublication  N  en_US
dc.relation.ispartofpagefrom  63  en_US
dc.relation.ispartofpageto  75  en_US
dc.relation.ispartofissue  1  en_US
dc.relation.ispartofjournal  Australian and New Zealand Journal of Psychiatry  en_US
dc.relation.ispartofvolume  45  en_US
dc.rights.retention  Y  en_US
dc.subject.fieldofresearch  Health, Clinical and Counselling Psychology  en_US
dc.subject.fieldofresearchcode  170106  en_US
dc.title  Clinical indicators for routine use in the evaluation of early psychosis intervention: development, training support and inter-rater reliability  en_US
dc.type  Journal article  en_US
dc.type.description  C1 - Peer Reviewed (HERDC)  en_US
dc.type.code  C - Journal Articles  en_US
gro.date.issued  2015-06-01T23:35:36Z
gro.hasfulltext  No Full Text
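
The abstract above reports two agreement statistics: raw percentage agreement with expert consensus coding (60% to 86%) and estimated kappa values (kappa > 0.65). As a minimal sketch of how such statistics are computed, assuming nothing about the paper's actual scoring procedures, the Python below derives both from paired vignette ratings; the percent_agreement and cohens_kappa helpers and the example ratings are hypothetical.

    # Minimal sketch (not from the paper): percentage agreement with an
    # expert consensus code, and Cohen's kappa, which corrects that
    # agreement for chance.
    from collections import Counter

    def percent_agreement(clinician_codes, expert_codes):
        """Share of vignettes where the clinician matches the expert consensus."""
        matches = sum(c == e for c, e in zip(clinician_codes, expert_codes))
        return matches / len(clinician_codes)

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
        agreement and p_e is the agreement expected by chance from each
        rater's marginal category frequencies."""
        n = len(rater_a)
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a = Counter(rater_a)
        freq_b = Counter(rater_b)
        p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical ratings of ten vignettes on a four-point indicator scale.
    clinician = [1, 2, 2, 3, 4, 1, 2, 3, 3, 4]
    expert    = [1, 2, 3, 3, 4, 1, 2, 3, 2, 4]
    print(percent_agreement(clinician, expert))  # 0.8
    print(cohens_kappa(clinician, expert))       # ~0.73

Because kappa discounts the agreement expected by chance from each rater's marginal category frequencies, it runs below raw percentage agreement: on this hypothetical data, 80% raw agreement corresponds to kappa of about 0.73.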


Files in this item


There are no files associated with this item.

This item appears in the following Collection(s)

  • Journal articles
    Contains articles published by Griffith authors in scholarly journals.
