|dc.description.abstract||Research serves to increase the existing stock of knowledge (Salter & Kothari, 2016). Research may seek to address goals such as establishing or confirming facts, reaffirming results, solving new or existing problems, or examining, supporting, and developing theorems (Antwi & Hamza, 2015). The outcomes obtained through research aid decision makers, who in turn can act upon the knowledge generated (Kothari, MacLean, & Edwards, 2009). In seeking further knowledge, a variety of methodologies are available to researchers (Bayley & Phipps, 2019; Karami, Rowley, & Analoui, 2006).
The literature acknowledges that research can fall short of delivering ongoing impact (Greenhalgh, Raftery, Hanney, & Glover, 2016; Grimshaw, Eccles, Lavis, Hill, & Squires, 2012). From a scientific viewpoint, improperly produced research findings affect the knowledge subsequently generated, potentially misdirecting the action needed to achieve desired outcomes (Althubaiti, 2016). From a practical viewpoint, research can fail to address or rectify the specific problems being investigated (Gerhard, 2008). While data is always generated as part of a research process, the capacity for that data to be relied on for decision-making purposes can be limited. Taken together, imperfect knowledge has empirical consequences, impacting interpretation, validation, and research replication (Allison, Shiffrin, & Stodden, 2018; Becker, 2005), as well as future research directions (Ryan & Daly, 2019; Wang, Yan, & Katz, 2018).
Data collection forms an important part of the research process, as data represents the fixed component of information which undergoes analysis to create subsequent knowledge (Cooper, 2017). Examining data collection practices is an important undertaking given that data influences the knowledge generated (Eisend, 2015; Gerrits et al., 2019). Calls within the marketing literature highlight the need for research practice to bridge the gap between research that engages people's hearts and minds while giving consideration to the naturalistic settings in which the research is conducted (Steils & Hanine, 2016). Challenges with the management of research gaps stem from the need to discuss research findings within the contexts of sampled populations and the data collection methodologies available, taking their specific advantages and disadvantages for knowledge generation into account (Karami et al., 2006; Queirós, Faria, & Almeida, 2017). Data collection also entails interaction between subjects and researchers, presenting another element to be considered when identifying shortcomings during data collection, with the potential for misreporting through distortion or oversight in the information reported (Boutron & Ravaud, 2018).
Taken together, an approach to addressing gaps in the knowledge captured during data collection must account for the complexities which data collection presents, while also considering the engagement that exists between researchers and subjects during data collection. Systems thinking is proposed as a framing to address this research goal: the process of studying and understanding how systems and the actors within a system behave, interact, and influence one another (French & Gordon, 2015). When practitioners employ various systems ideas and techniques, they can be said to be applying a 'systems methodology' (Jackson & Keys, 1984). This applied systems thinking serves as a link between the philosophy of systems thinking, its operationalisation for practitioners, and an examination approach (Checkland, 1985). Subsequently, the purpose of this thesis was:
To examine the utility of applying a broader systems evaluation procedure to data collection, in order to identify shortcomings in gathered data used to generate knowledge and to understand how to improve outcomes in future data collections.
A sequential study design involving two studies forms the basis for this program of research. The first study in this thesis sought to answer the following research questions: 1) What biases are evident during the data collection processes of market research and product evaluation research? and 2) What strategies are presented in the literature to mitigate biases? Study 1, a narrative review of five systematic literature reviews, was undertaken to review decision-making processes with a focus on heuristics, the use of which may allow biases to emerge and influence the information captured and reported (Gerhard, 2008). This narrative review summarises 30 potential systematic errors, both intentional and unintentional, which may occur during data collection. The findings provide an overview for researchers seeking to identify potential impacts on the knowledge generated during data collection.
The second study in this thesis sought to answer the following research questions: 3) Are system elements present? and 4) How does a systems thinking lens extend understanding of deficiencies in data collection? The narrative review identified a broad range of biases and heuristics, which were synthesised into a shortlist used to examine research practice undertaken in a naturalistic setting. Guided by a systems perspective, an observational evaluation was applied within one Australian government study that involved 20 participants over a period of five days. Results from the observation identified potential factors at play during evaluations among participants; the reporting of these factors (or the lack thereof) potentially influences subsequent interpretation. Utilising the marketing system conceptualised by Layton (2019), the study observed participants, social interactions, immediate research outcomes, and environmental factors to identify a range of behaviours which could potentially lead to heuristic usage or biased decision making across the study. Observation of the government study identified factors such as (but not limited to) a strict social hierarchy among members of the organisation, variances in participants' levels of experience with certain tasks, and minority opinions being given lower priority for further in-depth discussion with participants. Taken together, such factors may have influenced how data was collected during this specific study, with flow-on effects on the knowledge sought.
In combination, this thesis illustrates the need to incorporate an evaluation process into data collection so that any shortcomings within a study can be appropriately reported. Such examinations allow further exploration of the limitations of the knowledge captured. This thesis demonstrates the application of a systemic lens to evaluate research practice, delivering one case example focussed on identifying biases within one government evaluation study and identifying potential gaps within the knowledge captured. Lastly, this thesis extends practice by proposing the need to consciously examine and report research limitations and gaps in the knowledge generated, allowing added discussion around such shortcomings and furthering the progression of knowledge generation.||