Interdisciplinary Open Science: What are the implications for educational technology research?

Evidence-based educational practice and policy depend on educational research being accessible and reliable. For educators, creating the next generation of critical thinkers, collaborators, and effective communicators is a complex educational problem, requiring a delicate marriage of methods and approaches for understanding the mind, behaviour, and social context of the learner in the digital age. As such, educational technology research plays an important role in informing practice and policy. However, reaching across the boundaries of research, policy, and practice is inherently challenging and can invoke unintended consequences. Miscommunications and mistakes are inevitable in interdisciplinary and applied science, but advances in technology now make it possible to openly share and translate educational technology research for policy and practice. Our aim in this paper is to describe how the emerging set of practices and philosophies within the Open Science movement can make educational technology research more transparent and aid its translation into practice.


Interdisciplinary educational technology research and the translation problem
Translating educational technology research when designing for learning in varied, contextualised settings is an intrinsic challenge in educational practice. The problem of designing effective instruction that will enable better critical thinking, problem solving, collaboration, and communication in the digital age is as complex and wicked as they come (Rotherham & Willingham, 2010). A basic understanding of cognition (and metacognition) inferred from conventional laboratory-based experiments can fall short when the findings are applied in the physical or virtual classroom. Likewise, general inferences about how people learn drawn from in vivo studies, thematic analyses of survey responses, or behavioural patterns uncovered in big data can fail to generalise when tested experimentally across materials, contexts, or tasks (Lodge, Alhadad, Lewis, & Gašević, 2017). Educational researchers have argued that interdisciplinary collaboration is crucial for making real progress (McNamara, 2006; Palghat, Horvath, & Lodge, 2017) towards discovering general principles of learning across levels of granularity, and designing genuinely effective learning and instructional methods for use by learners, educators, and policy makers at the chalkface.
Interdisciplinarity can be viewed as a defining characteristic of educational technology research: a delicate marriage of methods and approaches to examining the mind, behaviour, and socio-technological context of the learner that is considered greater than the sum of its disciplinary parts. But effective communication is fundamental to the success of any scientific enterprise, particularly interdisciplinary work. The different languages across the family of academic disciplines and applied practices making up the educational technology community can obscure the methods, findings, and "modi operandi" of our peers (Dudai, Roediger, & Tulving, 2007), and inevitably result in miscommunication. This translation problem is exemplified by the propagation and persistence of educators' misconceptions about learning and the brain, and of neuromyths in the classroom (e.g., learning styles, hemispheric dominance; Howard-Jones, 2014; Pasquinelli, 2012). The inherent fogginess of bringing qualitatively different fields together to answer common questions about learning is also evidenced by the divided discourse around neuroscience and education as uncomfortable bedfellows (Ansari & Coch, 2006; Bruer, 1997).
More cooperation and collaboration between scientists, practitioners, and policy makers is a commonly proposed antidote to the (side) effects of interdisciplinary and applied research. There are also lessons to borrow from medicine, where seemingly simple life-saving interventions, such as pre-surgery checklists, initially failed on implementation without "user buy-in" or engagement with local hospitals (Anthes, 2015). Without sharing the crucial elements of our research (materials, protocols, code, data, and so on), it is impossible to properly assess and appreciate the contributions of our interdisciplinary peers, or to effectively synthesise and translate our collective findings for use by educators and instructional designers. Transparency in how evidence is produced is a sine qua non for effective translation across disciplinary and implementation boundaries.

Interpretation, transparency and metascience
Research translation in educational technology is not helped by the opaque nature of research practices, which are themselves moulded by the interpretations of individual researchers. Humans are notoriously expert at seeing patterns in noisy, even random, data. Our chronic apophenia is illustrated by our tendency to expect a fair coin toss to come up tails if it follows a series of heads (i.e., the gambler's fallacy; Bar-Hillel & Wagenaar, 1991), or to favour a basketball shooter's chances of scoring a free throw if they have been successful in previous shots during a game (i.e., the hot-hand illusion; Gilovich, Vallone, & Tversky, 1985). We find illusory faces in pure-noise images (i.e., pareidolia; Liu et al., 2014), and we hear illusory backward messages in rock music when told to listen for specific phrases (i.e., expectancy effects and confirmation bias; Vokey & Read, 1985). Such regularities, heuristics, or biases in judgements and decisions are shaped by our experiences; we rely on them automatically and with little awareness, and crucially depend on them to make sense of the vast amount of information and complexity in our environment (Nisbett & Wilson, 1977). Scientists are trained to tell the difference between genuine systematic variation and randomness that merely looks systematic. Across disciplines, we deploy a range of methods to safeguard against our tendency to over-interpret random data (e.g., randomised controlled trials, blinding, reliability measures, meta-analysis, peer review). But, as acknowledged in a landmark report responding to the alarmingly high number of preventable medical errors in the US healthcare system (Kohn, Corrigan, & Donaldson, 2001), "to err is human," and even the most rigorous methods, frameworks, coding schemes, and statistical models are inescapably in the hands of human researchers.
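To make the pull of apophenia concrete, consider how often pure chance produces a convincing "streak". The following simulation (an illustrative sketch of our own, not part of the studies cited above) tosses a fair coin 100 times and checks for a run of five or more identical outcomes:

```python
import random

random.seed(7)

def longest_run(tosses):
    """Length of the longest run of identical consecutive outcomes."""
    best = run = 1
    for prev, cur in zip(tosses, tosses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# How often does a fair coin produce a "streak" of 5+ in 100 tosses?
trials = 2000
hits = sum(
    longest_run([random.random() < 0.5 for _ in range(100)]) >= 5
    for _ in range(trials)
)
print(f"{hits / trials:.0%} of sequences contain a run of 5 or more")
```

Streaks of this length occur in the large majority of such random sequences, yet observers readily read them as evidence of a "hot hand" or a biased coin; this is exactly the kind of over-interpretation that the methodological safeguards listed above are designed to catch.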
Metascience, or the science of science, has emerged in response to these issues. Metascience is concerned with issues of reproducibility and rigour and, much like interdisciplinary and implementation science, it is firmly embedded in the new zeitgeist sweeping across disciplines. In their manifesto for reproducible science published in Nature Human Behaviour, Munafò et al. (2017) argued that the combined effects of apophenia, confirmation bias, and hindsight bias (also known as the knew-it-all-along effect, whereby we are influenced by knowledge of an outcome; Fischhoff & Beyth, 1975) can easily lead to unconscious self-deception when performing research of all kinds, and ultimately to false conclusions, biased research syntheses, and faulty applications. Demonstrations of difficulties in reproducing research findings can be found in psychology (Open Science Collaboration, 2015), neuroscience (Poldrack et al., 2017), biomedical science (Ioannidis, 2005), economics (Camerer et al., 2016), and education (Makel & Plucker, 2014). A sobering proportion of researchers across disciplines have reported failures to reproduce the results of other scientists (Baker, 2016). Makel and Plucker (2014) analysed the entire history of research articles published in the top 100 education journals, finding just 63 direct replications (with a 71.4% success rate) and 153 conceptual replications (with a 66% success rate).
Among the reasons put forward for the high prevalence of irreproducible findings are insufficient power (i.e., using small sample sizes to find small true effects; Button et al., 2013), selective reporting and publication bias (i.e., preferentially reporting and publishing positive or surprising results; Simonsohn et al., 2014), data dredging (i.e., confirmation bias; Head et al., 2015), hypothesising after the results are known (i.e., hindsight; Kerr, 1998), and poor transparency, including the unavailability of a priori research plans, materials, protocols, code, and data. Methods with greater flexibility, or degrees of freedom, in how they can be used (e.g., exploratory multivariate analyses, use of pre-existing datasets, document analysis) may be particularly vulnerable to errors in research synthesis resulting from unconscious cognitive influences (Dawson & Dawson, 2016). These issues are also commonplace in educational research. Polanin et al. (2016) reviewed 383 meta-analyses of intervention studies in top-tier education and psychology journals, finding just 81 with sufficient information to compute an average effect size, and substantially inflated effects in the published literature. This publication bias is indicative of the interpretative nature of science and its cumulative negative effects on meta-analytic synthesis, and of the need for greater transparency in how research across education, including educational technology, is carried out.
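The combined effect of low power and selective reporting on the published record can be illustrated with a small simulation (a hypothetical sketch, not drawn from the studies cited above, with all parameter values chosen for illustration): many underpowered two-group experiments are run on a small true effect, only the statistically significant ones are "published", and the published average effect size ends up far larger than the truth.

```python
import math
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2   # small true standardised effect (Cohen's d)
N_PER_GROUP = 20    # an underpowered sample size for an effect this small
T_CRIT = 2.02       # approx. two-tailed critical t for df = 38, alpha = .05

def run_study():
    """One two-group experiment; returns (observed d, significant?)."""
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_GROUP)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_GROUP)]
    pooled_sd = math.sqrt((statistics.variance(control)
                           + statistics.variance(treated)) / 2)
    d = (statistics.mean(treated) - statistics.mean(control)) / pooled_sd
    t = d * math.sqrt(N_PER_GROUP / 2)  # t statistic for equal-n groups
    return d, abs(t) > T_CRIT

studies = [run_study() for _ in range(5000)]
all_effects = [d for d, _ in studies]
published = [d for d, sig in studies if sig]  # selective reporting

print(f"true effect:              {TRUE_EFFECT}")
print(f"mean effect, all studies: {statistics.mean(all_effects):.2f}")
print(f"mean 'published' effect:  {statistics.mean(published):.2f}")
print(f"'publication' rate:       {len(published) / len(studies):.0%}")
```

Under these assumptions the full set of studies averages close to the true effect, while the "published" subset averages several times larger, mirroring the kind of inflation observed in meta-analyses of the published literature.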

The emergence of Open Science
In response to the findings of metascience, the Open Science movement has emerged, transforming scientific practice in an ever-increasing number of fields (Nosek et al., 2018). While openness has long been a key principle of scientific practice, as an enabler of practical and societal change, "Open Science" refers to an emergent set of research practices and recommendations intended to improve the quality and integrity of research through more transparent and robust practices. The current conceptualisation of "open" in Open Science is that openness exists on a continuum: associated practices can be more or less transparent, or more or less accessible. Broadly, the recommended evidence-based practices of Open Science are grounded in common goals of more transparent, shareable, and honest conduct and communication of research, and include access to, dissemination of, review of, and reuse of publications, data, materials, and methodology. These goals are underpinned by the desire to improve the quality of science in general, and can be implemented by, and affect, the various members of the educational technology research and practice community: researchers, institutions, journals, funding agencies, practitioners, and industry.
The Open Science academic community has thus far been working on recommendations to resolve normative issues in research methods and publishing cultures, such as publication bias (Simonsohn et al., 2014), replicability (Open Science Collaboration, 2015), and questionable research practices (Banks et al., 2016). Open Science is not just a collection of research practices but also a mindset: with the practice comes a democratised way of doing science and, critically at the present stage of the movement, an openness to examine and improve one's own research practices. This constitutes a cultural and transformative change, bringing important shifts in systemic ways of thinking, doing, and knowing. Such implementations are accomplished not only through changes in practices but also in culture at the different levels of research endeavours, and as such can be revolutionary. This major systemic culture change is affecting fields such as psychology and ecology most significantly, though it is starting to make its way into education. The current landscape of Open Science in education includes some evidence of emergent discourse and changing practice. This includes two journal special issues calling for research papers on the subject, with one invited review paper (see van der Zee & Reich, 2018), and an explicit attempt at seeking reproducibility (Beardsley, Hernández-Leo, & Ramirez-Melendez, 2018), alongside the adoption of some Open Science practices by educational researchers on the ground (e.g., a pre-registered research plan, MacQuarrie et al., 2018; a pre-print, Selwyn, 2017). The British Journal of Educational Psychology has also recently (13 July 2018) announced that it is now accepting registered reports.
For researchers and practitioners in educational technology, this presents an opportunity to lead, and participate in, this transformative change in the broader interdisciplinary Open Science discourse.

Open Science and educational technology research
The adoption of Open Science practices promises to help disseminate and translate educational technology research for use in practice. The problem of translation in educational technology research, and the issues associated with opaque research practices, can partially be solved by openly sharing more elements of our research process in easily accessible online repositories. Transparency and openness are cornerstone features of science, crucial for advancing knowledge. We are hindered in our ability to evaluate, reproduce, and extend the research findings of our peers (and ourselves) if the necessary detail about how they were produced is missing or not freely available. Open Science is about making these details (research methods, materials, data, code, workflows, and pre-specified or a priori research and analysis plans) publicly available to other scientists and end-users. Figure 1 illustrates eight prototypical elements of the research process that can be made open and accessible to others. Historically, many of these elements (those shaded grey) have not been open, with few options to pre-register research plans or archive vital content and data.
Advances in technology have helped buoy several promising Open Science initiatives, such as the Transparency and Openness Promotion (TOP) guidelines (Nosek et al., 2015), which encourage journals to incentivise open research practices (e.g., by publishing pre-registered reports, and by awarding badges to authors for posting methods, materials, and data in trusted online repositories; see Kidwell et al., 2016, for early signs of success). Several free and easy-to-use online platforms, including the Open Science Framework (http://osf.io/), have been created to support individuals in making their research-related content openly available. These platforms are purposefully designed as tools for increasing transparency: they are built to facilitate pre-registration of all kinds of research and analysis plans (confirmatory or exploratory), and are capable of storing vast swathes of data.
There is, of course, nuance to how research is conducted across fields and methods of inquiry, and this applies equally to research conducted in the educational sector and in industry. Not all research projects contain all eight elements outlined in Figure 1, and there are some cases where it is not possible to make all elements publicly available. Sharing raw data files, for instance, may be particularly tricky in areas of educational technology research where studies involve working with children or at-risk populations and data cannot be anonymised. The issue of consent, and strategies to ensure data are not identifiable post-anonymisation, then become a key part of the a priori research plan. Sharing other elements, such as tools and materials (e.g., survey items, concept inventories, examples, stimuli), measures of learning (e.g., test items, behavioural data), or code (for statistical analyses or presentation of materials), may present challenges for projects where there are legal and/or financial barriers (e.g., research using costly equipment, or lab- and classroom-based studies using copyrighted materials). Seeking normative methods of transparent practice is nontrivial: there are many nuanced challenges for applied research that will need collective efforts to resolve. Research evidence is not value-free; it is subject to processes of judgement and interpretation, as is the process of translating and using research evidence to inform practice. Greater transparency should, in theory, aid these processes of judgement and inference in both contexts, and it will serve the community, both researchers and practitioners, to collectively consider how transparency can support this.
There are gradations to transparency and openness, however, and, working within the constraints at hand, even modest individual efforts to make more elements of our research process transparent and available (towards becoming open educational technology researchers) could aid translation across disciplinary boundaries, and ultimately benefit our cumulative efforts to better understand how people learn and how best to teach. These efforts should complement, rather than replace, existing efforts to aid translation. For instance, proactive journal initiatives such as the inclusion of "Practitioner notes" for research papers (e.g., British Journal of Educational Technology, Journal of Learning Analytics), or, vice versa, "Researcher notes" for practitioner papers (see Journal of Learning Analytics), are commendable practices for bridging translation across disciplines and roles. In many ways, we have already become more open in our practices. Whether we like it or not, social media platforms are changing the nature of scientific communication and practice; this mode of communicating research is the field adapting to new means of sharing. Educational technology researchers often translate their research in order to affect practice and policy, and social media platforms connect us directly to practitioners and to society at large. Researchers are increasingly using Twitter to communicate research (Côté & Darling, 2018), and educators are increasingly using Twitter for professional learning (McPherson et al., 2015). These platforms thus also provide a means to connect and build a community of practitioners and researchers, increasing the potential for collaboration and co-creation, and allowing a stronger connection to the source of the societal issues they may be working to address. As McKenney (2018, p. 6) states: "The past two decades have witnessed a blurring of traditional distinctions between science and society. This is happening through increased societal participation in the mechanisms that guide research … Furthermore, society's voice is being heard through the increase in researcher engagement with practice."
This blurring of the traditional borders of research and practice in today's society means that critical reflection in scientific practice can now also include dialogue with practitioners and other researchers in the community in more open, transparent ways. Translation is therefore an important aspect of educational technology research at every level. How educational technology researchers share and converse about their research influences how practitioners and the public engage with and understand it, and reflects on the field as a whole (Adams et al., 2017).

Conclusion
Educational technology research and practice are deeply intertwined. Learning and teaching in the digital age is a complex and changing problem. Designing effective instructional methods for developing learners' critical thinking skills, as one example, has proven to be quite the challenge (Van Gelder, 2005). More interdisciplinary and translational research has been called for to help develop innovative and effective teaching and learning interventions, and to implement them in practice. These efforts can be hindered, however, by unconscious cognitive influences and a lack of transparency in how research is conducted. Improving transparency can help researchers communicate more accurately and efficiently with one another, better synthesise and evaluate educational science, and better translate those findings for use by learners, educators, and policy makers. Our hope is that this paper will serve as a catalyst for further discussion towards this goal.

[Figure 1. Sharable elements of the research process, including the literature review, a priori research plan, measures of learning, and computer code.]