
Hacking the Literature Review: Opportunities and Innovations to Improve the Research Process

Jennifer Lubke, Virginia G. Britt, Trena M. Paulus, and David P. Atkins

Jennifer Lubke (Jennifer-Lubke@utc.edu) is Assistant Professor, the University of Tennessee at Chattanooga, Chattanooga, Tennessee; Virginia G. Britt (virginia.britt@wcs.edu) is Library Media Specialist, Nolensville High School, Nolensville, Tennessee; Trena M. Paulus (tpaulus@uga.edu) is Professor, University of Georgia, Athens, Georgia; David P. Atkins (david.atkins@utk.edu) is Professor, the University of Tennessee Libraries, Knoxville, Tennessee.

Thanks to Ann Viera, Elizabeth Pope, and the QUAL8545 students at the University of Georgia for their helpful comments on early drafts of this article.

Research outputs across the academic disciplines are almost exclusively published electronically. Organizing and managing these digital resources for review, and acquiring the technical savvy to do so, are now essential skills for graduate study and life in academia. Paradoxically, digital and web-based technologies make it easier and more efficient to gather massive amounts of information while presenting new challenges for reading, analyzing, organizing, and storing resources. Students, scholars, and the librarians who support them must adopt and refine practices to convert from paper-full to paperless literature reviews. This article proposes a methodical, reproducible, three-stage process that harnesses the power digital tools bring to the research cycle, regardless of the user’s preferred platform or operating system. Focusing on the literature review phase alone, we develop a conceptual framework, illustrated with concrete tips and advice for storing and organizing, reading and annotating, and analyzing and writing. We demonstrate how a researcher’s self-selected suite of tools may complement and even overcome the limitations of comprehensive academic literature and composition platforms such as Docear and F1000Workspace, especially by applying qualitative data analysis software to the analysis and coding of research literature. Using these techniques, librarians can become teachers and research partners supporting the skill development of faculty and students.

A decade ago, Boote and Beile lamented the quality of dissertation literature reviews in educational research, suggesting that the criteria for a quality review are part of the “hidden curriculum” and “tacit knowledge passed on from mentors to candidates.”1 As educational researchers, research methodologists, and librarians, we understand Boote and Beile’s arguments, as we have tried to engage our students and ourselves in strategies for a more “systematic literature review.”2 Consistent with the growing interest in digital tools to support research,3 library scholarship has investigated the role of digital tools in scholarly publication workflows,4 including “personal digital libraries,”5 “personal information management,”6 tools for discovery,7 and collaborative practices for information management.8 Digital tools are changing the nature of the research process, including the literature review, and have the potential to improve the quality of the outcomes by creating an entirely paperless process.

“Paper-full” literature reviews, characterized by numerous print copies of publications, sheets of hand- and typewritten notes, and lists of bibliographies, can pose several problems for scholars. Many publications are distributed in digital formats, and the scholar’s final product is submitted in digital form. Moving between the print and digital environments, and managing and annotating journal articles, books, and other sources in a methodical way, can prove cumbersome. For born-digital materials, creating a paper version eliminates many advantages of the electronic versions. Hard copies are difficult to transport among office, home, and the road, and they are not easy to share with collaborators. Librarians and the educators they serve see that many of today’s students are already paperless in how they interact with the published literature. While mid- to late-career scholars may find the idea of going paperless daunting, this may be less true of those early in their careers. Librarians have an opportunity to leverage the technical affordances of born-digital composition, data management, and publishing to support the creation and use of wholly digital literature review processes by scholars in all phases of their careers.

Most of the scholarship in this area has described various aspects of the literature review process, such as organizing and downloading PDFs;9 describing various citation management systems (CMSs) like Zotero,10 Mendeley,11 EndNote,12 and RefWorks;13 reporting on surveys of CMS users;14 comparing CMS features,15 including those for collaboration and social networking;16 and reporting the accuracy of “cite as you write” features.17 What is still missing, though, is guidance on how to reenvision or hack the literature review using new tools to create an entirely paperless workflow—including organizing relevant sources, annotating them, and synthesizing ideas across the sources to create an academic argument. Librarians can step into this void, imparting both guidance and instruction during the research process. Librarians can teach practical skills and offer advice regarding technologies that leverage the potential within electronic collections, within analytical software applications, and within composition and publication tools to create effective literature reviews.

In this paper we describe an innovative, purely digital, “paperless literature review” workflow18 that will lead to improved outcomes for scholars who must store, organize, read, annotate, and analyze the work of others before writing their own. Librarians can incorporate these workflows into their own publishing and outreach practices, adding value to the research process that goes beyond simply connecting researchers with information resources. Using these techniques, librarians can become teachers and research partners supporting the skill development of faculty and students.

Literature Review

In a user-needs study conducted in a collaboration between Cornell University Library and Columbia University Libraries, doctoral students in the humanities “questioned the continued utility of librarians, asking why a librarian’s assistance is still needed, given the convenience of online research tools and availability of faculty expertise.”19 In the past, libraries were responsible for storing information, and users were on their own to build knowledge from it. Now, however, as Favaro and Hoadley have noted, the Internet plays the information storage and retrieval role, requiring libraries and librarians to take on new roles, one of which is helping patrons effectively manage their personal information collections.20 As Favaro and Hoadley asked, “How can we envision the role of libraries in a much broader process of knowledge building that extends beyond a trip to the library or constructing a bibliography?”21 These authors argued that tools for information discovery and access can no longer be segregated from those used in the subsequent steps of the research process and that librarians need a better understanding of digital workflows in order to support early-career scholars. These scholars need assistance selecting the best tools for various phases of the research process—from information retrieval to knowledge building—be they library-provided or freely available.

These digital workflows build primarily on PDF texts such as scholarly articles and book chapters. Research outputs, most commonly in the form of PDFs, are readily accessible online via Open Access or a library’s collections. Coupled with the advent of social media and mobile computing, this suggests new routes for researcher workflow, beginning with the literature review.22 For example, in a two-part cross-disciplinary study of nearly two hundred Pennsylvania State University (Penn State) faculty, Antonijević and Cahoy found that participants preferred commercial search engines over academic databases for discovery.23 A large proportion of the study’s respondents reported that they work predominantly with PDFs, which they organize, back up, and archive in numerous ways, including hard drives, thumb drives, cloud-based solutions, and “old-fashioned” paper printouts and files. Old-fashioned data management practices persist for a number of reasons but are becoming increasingly cumbersome as PDFs dominate as the preferred digital format for publishing texts.24

Scholars’ PDF annotation practices are increasingly idiosyncratic and hybridized, owing to the proliferation of digital tools that enable researchers to create searchable and shareable notes.25 For instance, Bjarnason described his process of “marking up PDFs and writing corresponding notes in a single text file, used for all note keeping, and using keywords to allow topic searches and visualization.”26 One tool that has an expanded role to play is the digital citation manager or citation management system (CMS). Childress noted: “It could be said that citation management is the foundation for scholars to begin collecting, managing, and archiving their research findings as well as their own scholarly output.”27 Antonijević and Cahoy reported that a large number of social science and humanities faculty do, in fact, manage citations with EndNote and Zotero. Yet, some participants questioned the robustness of CMS for indexing and filing, opting instead for cloud-based solutions—even as they voiced privacy and security concerns about using nonuniversity servers.28

As the Penn State study illustrated, integrating any new tool, such as CMS, into a paperless workflow is not without its challenges. These challenges are not limited to CMS and include: developing and maintaining requisite technology skills,29 staying organized and avoiding information overload,30 adapting to changing file formats,31 choosing from a multiplicity of storage options,32 overcoming the familiarity bias of manual bibliographies,33 and managing discipline-specific concerns.34

A persistent challenge is integrating search and retrieval with organizing and archiving practices to avoid losing track of resources. Researchers report a sense of disintegration caused by the proliferation of digital tools for data management.35 Antonijević and Cahoy suggested this finding “implies that although institutional support and training programs are vital for the uptake of digital tools, such programs are not necessarily sufficient for effective integration of those tools into scholarly practice.”36 This lack of integration might be viewed as a two-pronged challenge of overcoming “supply-side influences,” or the lingering effect of practices cultivated in analog environments, and “demand-side characteristics,” or the unique habits and dispositions imposed by the scholar’s home discipline.37 For example, participants in Antonijević and Cahoy’s study reported that “integration of digital tools into their search activities resulted in a complete breakdown of their systems for organizing information, which were developed for print-based materials.”38 They went on to conclude that “while implementation of digital tools into one phase of the workflow might be rewarding, it might also become a challenge in other phases of the work.”39 The breakdown seems most acute where CMS are involved, as their capacities and functionalities are not fully trusted or understood by librarians40 or by the broader academic community.41

Citing “ubiquitous disconnects” between tools, Favaro and Hoadley wrote, “imagine the loss of focus and the distraction created whenever a student needs to move from a highlighted PDF to typing the metadata into EndNote or Zotero . . . to moving the document and inviting collaborators into a space such as Google Docs or Dropbox.”42 As researchers continue to seek a transformational killer app, a “one-stop shop” for the literature review workflow, the potential for information discovery through tagging, sharing, and networking groups available in Mendeley,43 Zotero,44 and CiteULike45 remains an area of active inquiry.46

Librarians must also consider disciplinary differences when helping their patrons convert to a paperless workflow.47 For example, at Penn State, humanities and social sciences faculty are chiefly concerned with archiving publications, whereas faculty in the hard sciences also want to archive their process.48 Disciplines that value qualitative methods of research are familiar with the utility of qualitative data analysis software (QDAS) for documenting the process,49 a functionality that we argue can be applied to analysis of literature, regardless of discipline or research paradigm.

The Paperless Literature Review Workflow

Scholars’ publishing suites such as Docear, which brands itself as an open source “literature suite that cuts the (paper) clutter,” and F1000Workspace, which supports collecting, writing, and collaborating among scientists, both demonstrate potential as a transformational “one-stop shop” to support the writing workflow. However, they do not necessarily provide the most robust tools for literature reviews. To illustrate these workflows and concepts, we focus solely on the literature review, proposing paperless workflows that are not dependent on specific tools or platforms. We illustrate the process with the tools we use in our own work, describe innovative open source alternatives, and propose models for librarians and researchers alike.

We use the topic of crowdfunding (the practice of fundraising on the Internet for personal needs or professional projects, on sites such as GoFundMe, Kickstarter, or Indiegogo) to contextualize our discussion of the literature review process. Figure 1 provides an overview of a paperless workflow. Stage 1 begins with selecting a digital tool to store and organize sources—either a CMS or a cloud storage service. The choice may, in part, depend on whether the selected CMS provides: (1) robust PDF annotation tools and (2) a mobile app to be used during the reading phase. If so, sources can be both organized and annotated thoroughly within the CMS. If not, it may be better to opt for cloud storage and then move to a digital reader on a mobile device in Stage 2 to annotate the sources. Finally, in Stage 3, annotated sources can be uploaded to a QDAS program to synthesize and begin writing the literature review.

Stage 1. Store and Organize Sources

With a paperless workflow, the lifecycle of a literature source begins at harvesting and continues with storage. Born-digital publications fit readily into this process. Paper-based sources, and digital files that lack a machine-readable text layer, must first be converted through optical character recognition (OCR), using a scanning/conversion application such as the one built into Adobe Acrobat Pro or the PDF conversion available in Google Drive. Another option is to take notes from a book in a word processing program, convert the notes into a PDF, and include the file with the others for the literature review.
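For those comfortable scripting this step, batch conversion is also possible. The sketch below uses the open source OCRmyPDF package to add a searchable text layer to a folder of scanned PDFs; the folder names are placeholders of our own, and OCRmyPDF is only one of several tools that could fill this role.

```python
# A minimal sketch of batch OCR conversion, assuming the open source
# OCRmyPDF package (pip install ocrmypdf; requires Tesseract installed).
# Folder names are hypothetical placeholders.
from pathlib import Path

import ocrmypdf

source_dir = Path("scanned_sources")      # born-print scans awaiting conversion
output_dir = Path("searchable_sources")
output_dir.mkdir(exist_ok=True)

for pdf in source_dir.glob("*.pdf"):
    # skip_text leaves pages that already contain a text layer untouched
    ocrmypdf.ocr(pdf, output_dir / pdf.name, skip_text=True)
    print(f"Converted {pdf.name} to searchable PDF")
```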

A CMS can provide a robust hub through which all paperless activity may flow. Examples of CMS include EndNote, Mendeley, RefWorks, Zotero, and Papers. A CMS serves as a personal database of resources that can be organized, searched, and shared. Figure 2 shows the Mendeley interface, with organization tools in the left-hand pane, including search and filtering tools, organizational folders, and groups. Sources within the crowdfunding folder appear in the middle pane and can be sorted by author, title, year, journal name, etc. Details of the selected source appear in the right-hand pane, and the PDF can be read and, to varying degrees, annotated within the CMS as well.

All CMS tools import PDFs and citation metadata directly from publisher databases and even web browsers. Literature searches can be conducted from within the CMS and/or sources can be downloaded directly into the software. For example, figure 3 shows Zotero, which shares a split screen with the web browser. Within the notes editor interface, new sources can be recorded, tagged, and linked to related resources in the Zotero database.

Each CMS has a note-taking feature, similar to the one seen in figure 3. When a new source is added to the library, a brief annotation about when and where the item was located, how the item will be used in the research project, and so on can be included, which helps ensure methodical tracking. The notes field can support more substantial annotations and memos, upwards of several pages in length. These annotations are searchable, and, while sharing utilities vary from system to system, they typically can be exported to the desktop or emailed to collaborators.

CMS selection depends on the needs and preferences of the user. Factors to consider include availability, cost, online storage capacity, collaboration features, and PDF annotation support (important in Stage 2). Discipline-specific concerns also will influence the user’s choice of CMS, so it is advisable to consult with collaborators and others in the field when making a selection. The Penn State University Libraries maintain a current and comprehensive online comparison chart of the major citation managers (http://guides.libraries.psu.edu/CitationStyles/Tools).

Cloud-based content management services (for example, Box, Dropbox, or Google Drive) can be used as an alternative to a CMS. Cloud storage services provide browser-based interfaces, downloadable desktop clients, and iOS- and Android-compatible mobile apps that enable drag-and-drop movement of files. This seamless integration can easily support a collaborative workflow and allows working with files on mobile devices. Built-in utilities allow easy sharing of files with collaborators as well as file integration with other desktop and mobile apps. Within the Dropbox iOS app, for example, the user can transfer a PDF to his or her preferred PDF reader or citation manager using the “Open In…” button (see figure 4).

Whether a CMS or a cloud storage system is selected, storing and organizing documents is an important first step in a paperless literature review workflow. Creating a system for naming files and folders is also important. We recommend naming files by author and publication date, for example, “Smith 2016,” so that files can be sorted alphabetically for fast retrieval. Because cloud storage lacks the CMS functionality of filing the same citation in multiple libraries for different projects, sources will need to be organized into separate folders by project, topic, or some other scheme. Finally, it is important to ensure that the system is backed up, preferably in the cloud.
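Even a short script can audit such a scheme. The following sketch, with a folder name and naming pattern of our own choosing rather than any tool’s convention, flags files in a cloud-synced folder that do not follow the “Author Year” pattern:

```python
# A minimal sketch, assuming a flat, cloud-synced folder of PDFs named
# "AuthorSurname YYYY.pdf"; lists conforming files in sorted order and
# flags the rest for manual renaming. The folder name is a placeholder.
import re
from pathlib import Path

NAME_PATTERN = re.compile(r"^[A-Z][A-Za-z'’-]+ (19|20)\d{2}[a-z]?\.pdf$")
library = Path("crowdfunding_sources")

nonconforming = []
for pdf in sorted(library.glob("*.pdf")):
    if NAME_PATTERN.match(pdf.name):
        print(pdf.name)                 # e.g., "Smith 2016.pdf"
    else:
        nonconforming.append(pdf.name)  # flag for manual renaming

if nonconforming:
    print("Rename to 'Author Year.pdf':", ", ".join(nonconforming))
```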

Stage 2. Read and Annotate Sources

Once sources have been organized and stored, the next step is to thoroughly read the literature, annotating as necessary, in preparation for synthesizing the sources and writing an academic argument. As Bjarnason has noted, the “physical filing cabinet for paper journal articles has been replaced by an online repository of marked up PDFs.”50 This repository can be either the CMS or a collection of sources in cloud storage. In the ideal scenario, the selected CMS provides an app so that sources can be read and annotated on a mobile device, as with Mendeley’s iPad app, illustrated in figure 5.

Mendeley provides highlighting and note-taking features, but not underlining or some of the more robust features available in other PDF reading applications or devices (see figure 6). In this case, users should probably choose to annotate PDFs outside of the CMS and then import them.

For scholars who are used to reading and annotating on paper and find it too cumbersome to do so on a desktop or laptop computer, we recommend using a tablet device (such as an iPad) in conjunction with a digital reader. The GoodReader app (iOS) is one such digital reader; others include iAnnotate (iOS), Mac Preview (Mac), and Adobe Acrobat PDF Reader (Windows, Mac, iOS, Android, Windows Phone). Tablets can make the transition to a paperless workflow less distressing because it is possible to curl up with the readings, use a finger or stylus to annotate much as one would with a highlighter or pen, and otherwise physically engage with the texts in ways similar to reading on paper. Mobile devices also allow the text to be enlarged for visibility and for focusing on the most meaningful segments of the source for deeper engagement. By using a lightweight tablet reader or a mobile device synchronized either with the CMS or with cloud storage, resources become much more portable and comfortable to review.

Digital readers can easily be synchronized with cloud storage; for example, GoodReader can be connected and synchronized with Dropbox. PDFs can be uploaded and downloaded between the two programs. Sources that need to be read are downloaded from Dropbox into GoodReader, organized into folders as needed, annotated, “flattened” (to retain the annotations), and then uploaded back into Dropbox (see figure 7). In a collaborative project, this step of the process also allows any member of the team to pull down a source, create an annotated version, and easily return the file to the cloud for others on the team to read and view annotations and notes. GoodReader works similarly with Google Drive and other cloud storage services.
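The same round trip can also be scripted. As a hedged illustration (the access token, file names, and paths below are placeholders, and this is separate from GoodReader itself), the official Dropbox Python SDK can pull a source down for annotation and return the annotated copy to the shared folder:

```python
# A sketch of the download/annotate/upload round trip that GoodReader
# automates, using the official Dropbox Python SDK (pip install dropbox).
# The access token, file names, and folder paths are placeholders.
import dropbox
from dropbox.files import WriteMode

dbx = dropbox.Dropbox("YOUR_ACCESS_TOKEN")  # generated in the Dropbox App Console

# Pull a source down locally for reading and annotation.
dbx.files_download_to_file("Smith 2016.pdf", "/literature/Smith 2016.pdf")

# After annotating and flattening in a PDF reader, return the copy
# so collaborators can see the notes.
with open("Smith 2016 annotated.pdf", "rb") as f:
    dbx.files_upload(
        f.read(),
        "/literature/Smith 2016 annotated.pdf",
        mode=WriteMode("overwrite"),  # replace any earlier annotated copy
    )
```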

When the annotation feature is first used, digital readers typically offer to create a second copy of the file for annotation purposes. This way the original, clean copy of the source is retained while the annotated copy is stored in GoodReader. Annotations can include marginalia or collapsible “sticky notes,” which allow for more detailed responses and memos on the text. In addition, these readers often support searching for keywords, highlighting (in a variety of colors), underlining, adding shapes and arrows, and a variety of other annotation tools, as illustrated in figure 6.

After reading is complete, all notes and annotations are saved and can be revisited at any time. Once the source is ready for further analysis in QDAS or by other research collaborators, the app provides the option to send the PDF with all annotations “flattened.” Flattening saves the annotations within the PDF, ensuring they are readable on other devices and in other applications, such as the QDAS introduced in Stage 3. The flattened source may be emailed, returned to the cloud storage program, or, at this point, put into a CMS.

Stage 3. Upload, Analyze and Write

Once sources have been stored, organized, read and annotated, it is time to analyze, synthesize themes, and create arguments in preparation for writing. Qualitative data analysis software programs are highly useful in this stage. First developed in the 1980s, QDAS programs were initially designed by researchers to help organize and analyze the massive amounts of data typically generated in a qualitative study, such as interview recordings and transcripts, observational field notes, digital photographs, and journal entries. Performing a literature review is in many ways a type of qualitative data analysis—with the literature serving as the data that needs to be described, analyzed, and interpreted.51 Just as QDAS programs such as QSR NVivo, ATLAS.ti and MAXQDA can make the analysis of research data more systematic and transparent, they can do the same for literature reviews.52 Here librarians can make real contributions to improve the quality of literature review workflows by ensuring that scholars understand what QDAS programs are and the tools and techniques they offer. QDAS developers themselves have taken up this application of their software: QSR NVivo, for example, has partnered with EndNote and offers webinars on how to conduct literature reviews. MAXQDA version 11 allows users to import RIS data from EndNote, Citavi, and Zotero; and Pope (2016) recently published her strategies for conducting literature reviews in ATLAS.ti. QDAS programs are quite complex, but, for those conducting qualitative studies, the time invested in learning the software will pay off again at the data analysis stage. To select a QDAS program, we recommend checking which ones are supported at your institution with site licenses and training, as well as which ones are being used by colleagues. There are also free webinars and video tutorials online for most of the programs.
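One of the interchange points mentioned above is the RIS format that CMSs export and some QDAS packages import. Because RIS is plain tagged text, its contents are easy to inspect. The sketch below shows a single invented record and a minimal parser of our own; real exports contain many more tags, and production code should prefer a maintained RIS library.

```python
# A minimal sketch of the RIS exchange format that CMSs such as EndNote
# and Zotero export. The sample record and the parser are illustrative
# only.
SAMPLE_RIS = """\
TY  - JOUR
AU  - Smith, Jane
PY  - 2016
TI  - Crowdfunding the academy
JO  - Journal of Examples
ER  -
"""

def parse_ris(text):
    """Collect tag/value pairs per record; an 'ER' line closes a record."""
    records, current = [], {}
    for line in text.splitlines():
        if line.startswith("ER"):
            records.append(current)
            current = {}
        elif "  - " in line:
            tag, value = line.split("  - ", 1)
            current.setdefault(tag.strip(), []).append(value.strip())
    return records

for record in parse_ris(SAMPLE_RIS):
    print(record.get("AU"), record.get("PY"), record.get("TI"))
```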

In this workflow, QDAS provides a digital workspace to aggregate, code, organize, and compose all elements of the literature review. For example, QSR NVivo supports the importation of entire citation manager libraries, including both bibliographic metadata and the associated annotated PDFs. Alternately, it is possible for the researcher to upload entire folders of PDFs or simply to drag and drop all the annotated sources from a computer or cloud-based folder into the QDAS program to begin analysis. Uploading all the annotated sources into a QDAS package not only allows quick clicking from one source to the next for easy retrieval and review, but also provides tools and features to further organize the sources by any criteria relevant to the literature review. In ATLAS.ti, for example, the “documents family” feature allows grouping of sources according to characteristics such as source type or review status. In figure 8, the sources are organized by type: journal articles, popular press articles, and theses and dissertations. Clicking on the journal articles family will immediately display only those sources for further review. Document families can also be used to organize sources by the major themes of the literature review.

Each source that has been uploaded into the software can be described in the comment field. In figure 8, for example, the full reference citation for the source has been entered for ease of retrieval. Unfortunately, at the moment, QDAS programs do not have a “cite as you write” feature, so reference lists cannot be generated in the same way they are in a CMS. This makes it necessary to use both a CMS and a QDAS program for an ideal paperless workflow. Alternately, Docear and F1000Workspace provide CMS features, some limited analysis of sources through memos, and reference list generation.

One advantage of reading and annotating the sources prior to uploading them into a QDAS program is that after the initial reading, the major categories and themes of the literature review might be relatively clear to the researcher as the analysis phase begins. In other words, the beginnings of the argument might already be crystallizing. The highlights and annotations made while reading can guide the coding and retrieving of relevant sections of the sources. “Coding” is relatively easy to do in all the programs and simply means tagging a particular sentence, paragraph or entire section of a source with a label created by the researcher. This way, when it is time to write, all the other sections of the sources related to that topic can be retrieved at once. With source materials coded in the software, it is possible to retrieve all the literature sections on the same topic, reread them, and begin writing.

To illustrate, figure 9 shows yellow highlights that were made in GoodReader while reading the source on an iPad. During the initial reading and annotating it became clear that the various sources about crowdfunding often mentioned the variety of fields (journalism, filmmaking, science) that engage in the practice. This would no doubt be important to mention in the literature review. The sources also noted that successful crowdfunding efforts are often due to the strong existing social networks of those requesting the funds. Codes were then created in ATLAS.ti for both of these ideas (“fields using crowdfunding” and “existing social networks—building an audience”). These codes were attached to relevant sections of the source (called “quotations”) for easy retrieval when it was time to write. Such codes can be created before the sources are reviewed in QDAS or created along the way. QDAS keeps track of how often each code has been used so that it becomes clear which topics are being discussed most often in the literature. In figure 10, “criteria for success” (of crowdfunding efforts) was used across the sources sixteen times, and “existing social networks” thirteen times. “Fields using crowdfunding” was used eight times. This provides an initial idea of how much support there is in the literature for each topic to be covered in the review.
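Code counts of this kind can also be tallied outside the QDAS. As a hedged sketch (export formats vary by package and version, so the CSV layout assumed here is hypothetical), a few lines of Python reproduce the frequency view:

```python
# A minimal sketch of tallying code frequencies, assuming a hypothetical
# CSV export of coded quotations with a "code" column; real QDAS export
# formats differ by package and version.
import csv
from collections import Counter

counts = Counter()
with open("quotation_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):   # one row per coded quotation
        counts[row["code"]] += 1

# Mirrors the in-package view: which topics dominate the literature?
for code, n in counts.most_common():
    print(f"{code}: {n} quotations")
```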

Once the sources have been coded, scholars may retrieve all the coded sections of the literature on each topic and view them together. To review all the literature that was coded as “criteria for success,” the code is clicked and a window appears with all the quotations on that topic across the literature sources. In figure 11, the sixteen quotations for “criteria for success” have been retrieved and are visible in the new window. Clicking on each quotation will make it appear in the context of the full source so it can be reviewed for additional insights.

It is easy to uncode, change code names, merge codes, or reorganize codes while reviewing them to be sure they reflect the findings of the literature review and will be helpful in creating the argument. QDAS tools also provide features that will run reports displaying all the quotations for each code, as illustrated in figure 12 with the ten quotations coded “definitions of crowdfunding.” The quotations can then be reviewed or copied and pasted right into a word-processing document as the literature review is being written.

Once codes have been finalized it can be helpful to visually display them to organize them into an argument structure in preparation for writing. In figure 13, codes were dragged and dropped into a network view in ATLAS.ti and meaningful links were created. The researcher used network links to show that the literature review would begin by defining “crowdfunding” and that this definition would include a discussion of its history, how it works, and which fields are using it. Another section of the review would discuss the criteria that make crowdfunding efforts successful. A major factor in successful crowdfunding efforts is having existing social networks that can build a strong audience for the crowdfunding request. By using the network view, the argument of the literature review is visible.

Another useful feature for writing is the memo tool provided by each QDAS package. This tool is a text editor that can be used to capture insights and ideas and even to compose initial drafts of the argument while staying spatially close to the literature being reviewed. In figure 14, some initial thoughts about “criteria for success” of crowdfunding campaigns are captured in a memo while looking at a section of one of the sources. It is possible to write entire sections of the literature review in the memo tool before bringing the text back into word processing software to ready it for publication.

Finally, QDAS saves all this analysis in an archival and shareable file. At a later date, a researcher can update the literature review or extract portions to create new literature reviews or bibliographies. Biomedical researchers compiling systematic literature reviews and meta-analyses of literature could use QDAS as an efficient platform for managing these very large and very complex reviews. QDAS tools support many of the steps required in completing systematic reviews such as appraising the literature, extracting data from the literature, analyzing results across publications, and composing the written findings. While we have illustrated Stage 3 using ATLAS.ti, other QDAS programs such as QSR NVivo and MAXQDA provide very similar features and functionality.

At this point in the process the workflow moves back to the CMS. CMSs were designed to extract metadata from sources, which is then used to generate a list of references in whatever citation style the researcher prefers (APA, MLA, Chicago) through a plug-in for word processing programs. It is often this “cite as you write” feature that motivates researchers to adopt a CMS tool, in their effort to more easily generate accurate bibliographies and reference pages from their cited sources. One way to choose between CMS tools is to compare how well each one performs metadata extraction. In practice, however, a CMS offers only varying degrees of formatting accuracy, which depends heavily on how precisely the metadata was entered.53 As we have illustrated, the power of the CMS resides in a host of other functions to support literature reviews (for example, organizing, sharing, annotating sources), of which outputting references is but one.
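To make that dependence on metadata concrete, the sketch below formats a single journal-article record in rough APA style. The record is invented, and the function only approximates what a real CMS style engine does; any error in the stored fields flows straight into the output, which is why formatting accuracy hinges on metadata quality.

```python
# A minimal sketch of what "cite as you write" automates: turning stored
# metadata into a formatted reference. The record is hypothetical and the
# output only approximates APA style; a real CMS handles far more source
# types and edge cases.
def format_apa_article(meta):
    # Garbage in, garbage out: any typo in these fields appears verbatim.
    return (f"{meta['author']} ({meta['year']}). {meta['title']}. "
            f"{meta['journal']}, {meta['volume']}({meta['issue']}), "
            f"{meta['pages']}.")

record = {
    "author": "Smith, J.",
    "year": 2016,
    "title": "Crowdfunding the academy",
    "journal": "Journal of Examples",
    "volume": 12, "issue": 3, "pages": "45-67",
}
print(format_apa_article(record))
# Smith, J. (2016). Crowdfunding the academy. Journal of Examples, 12(3), 45-67.
```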

Conclusion

With born-digital literature sources eclipsing paper-based sources, a paperless workflow seems inevitable and preferable. In this paper we have illustrated one such workflow to support a more methodical approach to creating literature reviews, including using digital tools for storing and organizing sources for the literature review, reading and annotating the sources, analyzing the sources, and writing up the arguments. While tools such as Docear and F1000Workspace offer comprehensive platforms to support an integral part of this workflow, neither yet matches the analytic robustness of QDAS for systematically coding and retrieving the parts of the literature that support the overall argument.

This workflow was developed based on our own literature review processes, and we continue to learn about new strategies and tools from our colleagues and students. There is a great opportunity for librarians to collaborate with and provide guidance to scholars and researchers, both new and established, who are interested in adopting paperless workflows, helping them identify the tools that will be of greatest value. Librarians can build upon their expertise and experience with collecting literature and move beyond just helping scholars create the corpus of literature for review. Librarians are well situated to learn how the collections we build and curate can best be used by those we serve. Just as librarians embraced CMS as transformational tools for citation management, librarians can embrace and advocate for QDAS and paperless review processes. Librarians can develop the skills, create the training, and foster the collaborations required to become more fully embedded in the research process.

Librarians can expand their own research instruction portfolios to use article discovery and citation management as springboards into innovative literature review practices. Instruction doesn’t have to end with “…and now you can import your sources into EndNote.” Instruction can move into the next phase, exploring how a researcher can manage digital bibliographies and apply digital tools, perhaps the same tools used in their data analysis, to create literature reviews. Librarians can also partner with academic departments or research technology support operations that are invested in teaching research methods. At the University of Tennessee, Knoxville, librarians partner with instructors in the campus’s Office of Information Technology to offer training sessions that bridge citation management tools with NVivo, a campus-supported QDAS application. For librarians at institutions that lack this level of institution-wide research support, learning the technologies used at each step of the research process and stepping in to fill the need for research methods support is an outreach opportunity, one that demonstrates the library’s worth and adds value to the institution. To better explore issues and options, it may be worthwhile to replicate Antonijević and Cahoy’s ethnographic study of the information management practices of scholars in the context of literature reviews to discover whether the workflow we propose here is similar to what scholars are currently doing.54 Also, the Kramer and Bosman survey of researchers’ tools and workflows provides valuable data and promotes innovation by illustrating technologies and workflows that support open source, open access, and open science.55

Through researching and exploring the literature review process, we have proposed a methodical, reproducible, paperless literature review process that harnesses the power of digital tools. We also see potential in the expanding role of librarians as both teachers and research partners, supporting faculty and students in their research and publishing skills. Our hope is that, by presenting this information, we might influence information behaviors and encourage updates to the scholarly workflow through the use of digital tools.

References

  1. David N. Boote and Penny Beile, “Scholars before Researchers: On the Centrality of the Dissertation Literature Review in Research Preparation,” Educational Researcher 34, no. 6 (August 2005): 13.
  2. Sebastian K. Boell and Dubravka Cecez-Kecmanovic, “On Being ‘Systematic’ in Literature Reviews in IS,” Journal of Information Technology 30, no. 2 (June 2015): 1.
  3. Trena Paulus, Jessica Lester, and Paul Dempster, Digital Tools for Qualitative Research (Thousand Oaks, CA: Sage, 2014).
  4. Sharon Favaro and Christopher Hoadley, “The Changing Role of Digital Tools and Academic Libraries in Scholarly Workflows: A Review,” Nordic Journal of Information Literacy in Higher Education 6, no. 1 (2014); Thomas L. Mead and Donna R. Berryman, “Reference and PDF-Manager Software: Complexities, Support and Workflow,” Medical Reference Services Quarterly 29, no. 4 (2010); Bianca Kramer and Jeroen Bosman, “Innovations in Scholarly Communication—Global Survey on Research Tool Usage [version 1; referees: awaiting peer review],” F1000Research 5 (2016).
  5. Navid Aghakhani, Fatemeh Lagzian, and Bidyut Hazarika, “The Role of Personal Digital Library in Supporting Research Collaboration,” Electronic Library 31, no. 5 (2013): 548–60.
  6. Ina Fourie, “Collaboration and Personal Information Management (PIM),” Library Hi Tech 30, no. 1 (2012): 186–93; Steve Whittaker, “Personal Information Management: From Information Consumption to Curation,” Annual Review of Information Science and Technology 45, no. 1 (2011): 1–62; Antoinette Lush, “Fundamental Personal Information Management Activities—Organisation, Finding and Keeping: A Literature Review,” Australian Library Journal 63, no. 1 (2014): 45–51.
  7. Favaro and Hoadley, “Changing Role of Digital Tools.”
  8. Fourie, “Collaboration and Personal Information Management.”
  9. Vedana Vaidhyanathan et al., “Making Bibliographic Researchers More Efficient: Tools for Organizing and Downloading PDFs, Part 1,” Journal of Electronic Resources in Medical Libraries 9, no. 1 (2012): 47–55.
  10. Khue Duong, “Rolling Out Zotero across Campus as a Part of a Science Librarian’s Outreach Efforts,” Science & Technology Libraries 29, no. 4 (2010): 315–24; Peter Fernandez, “Library Values That Interface with Technology: Public Service Information Professionals, Zotero, and Open Source Software Decision Making,” Library Philosophy and Practice (e-journal), 2012, http://digitalcommons.unl.edu/libphilprac/803.
  11. Alison Hicks and Caroline Sinkinson, “Examining Mendeley: Designing Learning Opportunities for Digital Scholarship,” portal: Libraries and the Academy 15, no. 3 (2015): 531–49.
  12. Anya McKinney, “EndNote Web: Web-Based Bibliographic Management,” Journal of Electronic Resources in Medical Libraries 10, no. 4 (2013): 185–92.
  13. Mariela Hristova, “RefWorks Usage Patterns: Exploring the First Four Semesters of Use by Faculty, Graduate Students, and Undergraduates,” Internet Reference Services Quarterly 17, no. 2 (2012): 45–64; Cynthia Vaughn, “Citation Management: RefWorks,” Journal of Electronic Resources in Medical Libraries 10, no. 1 (2013): 25–31.
  14. Jenny Emanuel, “Users and Citation Management Tools: Use and Support,” Reference Services Review 41, no. 4 (November 25, 2013): 639–59; Jamie Salem and Paul Fehrmann, “Bibliographic Management Software: A Focus Group Study of the Preferences and Practices of Undergraduate Students,” Public Services Quarterly 9, no. 2 (2013): 110–20; Hannah Gascho Rempel and Margaret Mellinger, “Bibliographic Management Tool Adoption and Use: A Qualitative Research Study Using the UTAUT Model,” Reference & User Services Quarterly 54, no. 4 (Summer 2015): 43–53.
  15. Dawn Childress, “Citation Tools in Academic Libraries: Best Practices for Reference and Instruction,” Reference & User Services Quarterly 51, no. 2 (Winter 2011): 143–52; Merinda Kaye Hensley, “Citation Management Software: Features and Futures,” Reference & User Services Quarterly 50, no. 3 (Spring 2011): 204–8; Michael Steeleworthy and Pauline Theresa Dewan, “Web-Based Citation Management Systems: Which One Is Best?,” Partnership: The Canadian Journal of Library Information Practice and Research 8, no. 1 (2013): 1–8; Mangesh S. Talmale and Surya Nath Singh, “Social Citation 2.0 Tools: An Overview,” International Journal of Information Dissemination and Technology 3, no. 2 (2013): 80–85.
  16. Amit K. Mahajan and D. Kyle Hogarth, “Taking Control of Your Digital Library: How Modern Citation Managers Do More Than Just Referencing,” CHEST: Topics in Practice Management 144, no. 6 (2013): 1930–33; Holt Zaugg et al., “Mendeley: Creating Communities of Scholarly Inquiry through Research Collaboration,” TechTrends 55, no. 1 (Jan. 2011): 32–36.
  17. Lindley Homol, “Web-Based Citation Management Tools: Comparing the Accuracy of Their Electronic Journal Citations,” The Journal of Academic Librarianship 40, no. 6 (Nov. 2014): 552–57.
  18. Thorarin Bjarnason, “Digital to-Do: Paperless Literature Review,” IEEE Potentials 30, no. 4 (July 2011): 27–30.
  19. Gabriela Castro Gessner et al., Supporting Humanities Doctoral Student Success: A Collaborative Project between Cornell University Library and Columbia University Libraries (Washington, DC: Council on Library and Information Resources, 2011): 11, http://academiccommons.columbia.edu/item/ac:140506.
  20. Favaro and Hoadley, “The Changing Role of Digital Tools.”
  21. Ibid., 30.
  22. Ellen Collins, Monica E. Bulger, and Eric T. Meyer, “Discipline Matters: Technology Use in the Humanities,” Arts and Humanities in Higher Education 11, no. 1–2 (2012): 76–92.
  23. Smiljana Antonijević and Ellysa Stern Cahoy, “Personal Library Curation: An Ethnographic Study of Scholars’ Information Practices,” portal: Libraries and the Academy 14, no. 2 (2014): 287–306.
  24. Bjarnason, “Digital to-Do.”
  25. Bjarnason, “Digital to-Do”; Favaro and Hoadley, “Changing Role of Digital Tools”; Gessner et al., Supporting Humanities.
  26. Bjarnason, “Digital to-Do,” 28.
  27. Childress, “Citation Tools in Academic Libraries,” 150.
  28. Antonijević and Cahoy, “Personal Library Curation.”
  29. Favaro and Hoadley, “Changing Role of Digital Tools”; Gessner et al., Supporting Humanities.
  30. Antonijević and Cahoy, “Personal Library Curation”; Favaro and Hoadley, “Changing Role of Digital Tools.”
  31. Ibid.
  32. Antonijević and Cahoy, “Personal Library Curation.”
  33. Antonijević and Cahoy, “Personal Library Curation”; Childress, “Citation Tools in Academic Libraries”; Gessner et al., Supporting Humanities.
  34. Antonijević and Cahoy, “Personal Library Curation”; Childress, “Citation Tools in Academic Libraries”; Collins, Bulger, and Meyer, “Discipline Matters.”
  35. Collins, Bulger, and Meyer, “Discipline Matters”; Favaro and Hoadley, “Changing Role of Digital Tools.”
  36. Antonijević and Cahoy, “Personal Library Curation,” 302.
  37. Collins, Bulger, and Meyer, “Discipline Matters,” 78.
  38. Antonijević and Cahoy, “Personal Library Curation,” 303.
  39. Ibid., 302–3.
  40. Childress, “Citation Tools in Academic Libraries.”
  41. Antonijević and Cahoy, “Personal Library Curation”; Gessner et al., Supporting Humanities.
  42. Favaro and Hoadley, “Changing Role of Digital Tools,” 27.
  43. Emanuel, “Users and Citation Management Tools”; Hicks and Sinkinson, “Examining Mendeley.”
  44. Emanuel, “Users and Citation Management Tools.”
  45. Bjarnason, “Digital to-Do”; Emanuel, “Users and Citation Management Tools.”
  46. Antonijević and Cahoy, “Personal Library Curation.”
  47. Ibid., 303.
  48. Ibid., 299.
  49. Ibid.
  50. Bjarnason, “Digital to-Do,” 28.
  51. Harry F. Wolcott, Transforming Qualitative Data: Description, Analysis, and Interpretation (Thousand Oaks, CA: Sage, 1994).
  52. Paulus, Lester, and Dempster, Digital Tools.
  53. Childress, “Citation Tools in Academic Libraries.”
  54. Antonijević and Cahoy, “Personal Library Curation.”
  55. Kramer and Bosman, “Innovations in Scholarly Communication.”

Figure 1. A paperless workflow

Figure 2. The Mendeley desktop interface

Figure 3. The Zotero notes editor

Figure 4. The Dropbox iOS interface

Figure 5. Mendeley’s app-supported annotation feature

Figure 6. Annotated PDF in GoodReader

Figure 7. GoodReader synchronizing with Dropbox

Figure 8. ATLAS.ti document families

Figure 9. Using codes in ATLAS.ti to tag literature review themes

Figure 10. Number of ATLAS.ti codes used in the crowdfunding literature

Figure 11. Retrieving coded quotations in ATLAS.ti

Figure 12. An output of all sections of the literature that include a definition of crowdfunding

Figure 13. Network display of the argument structure in ATLAS.ti

Figure 14. Using memos to write up the findings in ATLAS.ti

Abstract

Background

Reflection on experience is an increasingly critical part of professional development and lifelong learning. There is, however, continuing uncertainty about how best to put principle into practice, particularly as regards assessment. This article explores those uncertainties in order to find practical ways of assessing reflection.

Discussion

We critically review four problems: 1. Inconsistent definitions of reflection; 2. Lack of standards to determine (in)adequate reflection; 3. Factors that complicate assessment; 4. Internal and external contextual factors affecting the assessment of reflection.

Summary

To address the problem of inconsistency, we identified processes that were common to a number of widely quoted theories and synthesised a model, which yielded six indicators that could be used in assessment instruments. We concluded that, until further progress has been made in defining standards, assessment must depend on developing and communicating local consensus between stakeholders (students, practitioners, teachers, supervisors, curriculum developers) about what is expected in exercises and formal tests. The major factors that complicate assessment are the subjective nature of reflection's content and the dependence on learners' own descriptions of their reflection processes, which cannot be verified objectively. To counter these validity threats, we suggest that assessment should focus on generic process skills rather than the subjective content of reflection and, where possible, draw on objective information about the triggering situation to verify described reflections. Finally, internal and external contextual factors such as motivation, instruction, the character of the assessment (formative or summative), and the capacity of individual learning environments to stimulate reflection should be considered.

Background

Physicians and other healthcare workers act in challenging professional environments. There is an exponential growth in knowledge and treatment options, patients are becoming more articulate and demanding, and inter-professional collaboration is the rule rather than the exception. Lifelong learning is, consequently, crucial to the provision of up-to-date healthcare services [1]. Rather than just attending conferences, lifelong learning today is seen as a continuous process, embedded in everyday professional practice. At its core lies practitioners' ability to reflect upon their own actions, continuously reviewing the processes and outcomes of treatments, defining new personal learning objectives, and planning future actions in pursuit of excellence [2-5]. Hence, the ability to reflect is an important outcome parameter for health care professionals [6-9]. As a result, many educational institutions incorporate the ability to reflect as an objective of their vocational programs, premised on a belief that reflective thinking is something that can be developed rather than a stable personality trait [4,10,11].

There is, however, uncertainty about how best to help people develop their ability to reflect [11]. Lack of an agreed way of assessing reflection is a particular obstacle to progress because assessment is needed to identify the effectiveness of educational strategies and for research purposes [3]. Assessment also has a motivational influence, serving as a source of feedback (formative assessment) and as a means of judging whether requisite levels of competence have been attained (summative assessment) [3,4,12]. The persisting lack of clarity about how to operationalise reflective learning is symptomatic of an even deeper problem. Different, widely accepted theories define reflection in different ways, consider different outcomes as important, define different dimensions along which reflection could be assessed and point towards different standards [11]. Consequently, research findings are hard to compare. This unsatisfactory state of affairs leaves curriculum leaders without practical guidelines, ways of identifying and supporting students who are weak reflectors, and ways of judging whether interventions are improving learners' ability to reflect. The purpose of this article is to review four factors that confound the assessment of reflection:

1. Non-uniformity in defining reflection and linking theory with practice.

2. A lack of agreed standards to interpret the results of assessments.

3. Threats to the validity of current methods of assessing reflection.

4. The influence of internal and external contextual factors on the assessment of reflection.

Our approach was to identify all widely quoted theories, read them in depth, and triangulate them against one another to find where they agreed, where they disagreed, and where gaps remained. The result of this exercise was an interpretive framework, which we used to structure the 'Discussion' section. A test of the framework is beyond the scope of this article, whose aim is to make the framework and guidelines available to other people interested in implementing and/or assessing reflection in education.

Discussion

1. Defining reflection

Studies about reflection in professional practice and education are widespread in the literature; however, their results are hard to generalise or compare because they conceptualise reflection in such different ways. Boenink et al [10] described reflection in terms of the number of different perspectives a person used to analyse a situation. Reflection ranged from a single perspective to a balanced approach considering multiple relevant perspectives. Aukes et al [13] emphasised emotional and communication components when they conceptualised personal reflection as a combination of self-reflection, empathic reflection, and reflective communication. Sobral's [14] emphasis on reflection-in-learning approached reflection from a learning perspective.

If those three perspectives exemplify inconsistency in the field, the work of Dewey, Boud, Schön, Kolb, Moon, and Mezirow exemplifies shared ground between reflection theories and used terms. Dewey is usually regarded as the founder of the concept of reflection in an educational context. He described reflective thought as "active, persistent, and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it, and the further conclusions to which it tends" [15]. He saw reflective thinking in the education of individuals as a lever for the development of a wider democratic society.

In line with his work, Boud et al emphasised reflection as a tool to learn from experience in experiential learning [16]. They identified reflection as a process that looks back on experience to obtain new perspectives and inform future behaviour. A special feature of their description of reflection in three stages - 1. returning to an experience; 2. attending to feelings; and 3. re-evaluating the experience - was the emphasis it placed on the role of emotions.

Moon described reflection as an input-outcome process [17]. She identified reflection as a mental function transforming factual or theoretical, verbal or non-verbal knowledge, and emotional components generated in the past or present into the output of reflection (e.g. learning, critical review or self-development).

Schön's concept of the reflective practitioner identified reflection as a tool to deal with complex professional situations [18,19]. Reflection in a situation (reflection-in-action) is linked to practitioners' immediate behaviour. Reflection after the event (reflection-on-action) provides insights that can improve future practice. Those two types of reflection together form a continuum for practice improvement.

The term 'reflective learning' describes reflection in the context of experiential learning. Kolb's widely accepted experiential learning cycle describes four stages of learning: 1. having an experience (concrete experience), 2. reflective observation (reflecting on this experience), 3. abstract conceptualisation (learning from the experience) and 4. active experimentation (trying out what you have learned) [20]. These four stages are conceptualised as a spiral, each of whose turns is a step forward in a person's experiential learning.

Lifelong learning is considered today as essential for maintaining a high standard of professional practice. Mezirow's transformative learning theory described lifelong learning in terms of learners' transforming frames of reference, in which reflection is the driving force [21].

Towards an 'eclectic model' of common elements

Although contemporary reflection models build on those theories, the diversity between them is a cause of continuing uncertainty. In response, we have assembled a simple comprehensive model from their common parts (table 1). Atkins and Murphy [22] identified reflection as: 1. 'awareness of uncomfortable feelings and thoughts', resulting in 2. an 'analysis of feelings and knowledge', finally leading to 3. 'new perspectives'. They described self-awareness, critical analysis, synthesis, and evaluation as requisite skills for this process. Since those three phases are common to the work of previous authors, they provided a logical starting point for our model. We complemented Atkins and Murphy's phases with insights from other models. Korthagen's ALACT model ('Action, Looking back on action, Awareness of essential aspects, Creating alternative methods of action, and Trial') [23] describes the first phase of 'becoming aware' in two steps: a general retrospective action and a more interpretive action. Integrating those two theories resulted in a first phase ('reviewing an experience') with two subcomponents: 1. generally describing what happened and 2. identifying essential aspects by considering thoughts, feelings, and contextual factors.

Table 1

Overview of theories/models/findings integrated into the model of common elements

Just reviewing an experience, however, does not necessarily lead to effective reflection. For Bourner [24], using searching questions to interrogate an experience was the key difference between reflecting and thinking, and he saw 'reflective inquiry' as a crucial component of reflection. This aspect of reflection was also represented in Mamede and Schmidt's proposed structure of reflective practice as 'openness to reflection' [25]. Bourner only emphasised posing searching questions, however, not answering them. Korthagen's approach supplements Bourner's by contributing 'creating alternative methods of action' as a process of answering questions. This addition is compatible with Boud's characterisation of analysis as a combination of association, integration, validation and appropriation. The internal dialogue that results is conducted within a 'personal frame of reference' that, according to Mezirow, directs the analysis and represents "the structure of assumptions and expectations through which we filter sense impressions" [21]. This personal perspective, made up of our perceptions, cognitions, feelings and dispositions (intentions, expectations and purposes), creates a context in which we give meaning to our sensory experiences. If the first phase of reflection, then, is identified as the description of an experience and the awareness of feelings, thoughts, and other essential aspects, our second phase of reflection is analysing experiences by reflective inquiry, which triggers a process of analysis within a person's unique frame of reference.

Moon's input-outcome model emphasises that reflection is purposeful [17]. This purpose is identified by Atkins and Murphy in the third phase of reflection as the 'identification of new perspectives' [22]. Both Korthagen and Boud, however, included an additional stage - the conversion of those new perspectives into actions that become the starting point for new reflective cycles [16,23]. The 'reconstruction phase' of Stockhausen's clinical learning spiral, a model of reflective practice among undergraduate nursing students in clinical practice settings, served the same function [26]. During this phase, reflective insights were transformed into plans for future actions. Since those actions could lead to further reflection, reflecting on experiences was identified as a cyclic process that transforms significant experiences into deliberate, well-informed practical actions. We incorporated those insights into the eclectic model by defining the outcome of a reflection process as the identification of new perspectives, which leads to future actions informed by reflection. Stockhausen also described a preparatory phase for establishing objectives for a new clinical experience. This phase, which other authors have labelled reflection-before-action [27,28], is incorporated into the eclectic model by representing reflection as a cyclical process. It allows reflection to be informed by learning goals arising from past reflections and stresses the importance of reflection as a developmental process. Both Korthagen and Stockhausen have highlighted this process with the term 'reflection spiral', in which each winding leads to a higher order of understanding, practice, or learning [23,26].

Figure 1 shows the complete eclectic model, which describes reflection in three phases: 1. 'Reviewing the experience', 2. 'Critical analysis', and 3. 'Reflective outcome'. Reflection, according to the model, is a cyclical process that originates from experience and informs future behaviour. Each phase has two items, described in practical terms so that the model can be put into practice (an illustrative encoding is sketched after Figure 1). Reviewing the experience has two components: 'description of the experience as a whole', and 'awareness of essential aspects based on the consideration of personal thoughts, feelings, and important contextual factors'. Critical analysis starts with 'reflective inquiry' - posing searching questions about an experience - and progresses to 'searching for answers' while remaining aware of the 'frame of reference' within which the inquiry is being conducted. Reflective outcome comprises the 'new perspectives' resulting from phase two, and the 'translation of those perspectives into behaviour that has been informed by reflection'. This behaviour generates new experiences, and so a new reflection cycle begins.

Figure 1

Model of common elements describing the reflection process.
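For readers who want to operationalise the model, the following minimal sketch encodes its phases and items as a simple data structure; the cyclic character is captured by wrapping from the last phase back to the first. This is our own illustrative encoding, not part of the model itself: the phase and item labels come from Figure 1, while the Python names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    items: tuple[str, str]  # each phase of the eclectic model has two items

# Illustrative encoding of the model of common elements (labels from Figure 1).
ECLECTIC_MODEL = (
    Phase("Reviewing the experience",
          ("Description of the experience as a whole",
           "Awareness of essential aspects (thoughts, feelings, context)")),
    Phase("Critical analysis",
          ("Reflective inquiry: posing searching questions",
           "Searching for answers within a frame of reference")),
    Phase("Reflective outcome",
          ("Identification of new perspectives",
           "Translation of perspectives into reflection-informed behaviour")),
)

def next_phase(i: int) -> int:
    """Reflection is cyclical: the phase after 'Reflective outcome' wraps
    back to 'Reviewing the experience' of a new cycle."""
    return (i + 1) % len(ECLECTIC_MODEL)
```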

From model building to developing indicators for assessment of reflection

The aim of identifying common elements was to ground the assessment of reflection in existing, widely used theories. It is also practically useful, because each of the six items in the three-phase model can be translated into an indicator of the adequacy of a reflection process (Table 2). Together, the indicators provide a comprehensive overview of a person's ability to go through the process, and they are in line with the reflective skills identified by Duke and Appleton [29]. Taken individually, they can provide specific feedback about components of reflection, which makes it possible to give structured, focused feedback and to direct training towards aspects of reflection that the indicators have identified as insufficient (a minimal scoring sketch follows Table 2). Such training could, for example, provide exercises in describing personal thoughts and feelings or in identifying learning goals. In summary, the modular nature of the model and its indicators makes it possible to tailor education to individual needs. Doing so, however, requires criteria for judging someone competent in reflection.

Table 2

Operational indicators of the reflection process
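As a minimal sketch of how the operational indicators in Table 2 could support structured, focused feedback, the snippet below scores each of the six indicators separately and flags those judged insufficient. The three-level scoring and the adequacy threshold are assumptions made purely for illustration; the model itself does not prescribe them.

```python
# Hypothetical indicator scores for one learner: 0 = absent, 1 = partial,
# 2 = adequate. The six indicators mirror the six items of the model.
INDICATORS = [
    "Describes the experience as a whole",
    "Identifies essential thoughts, feelings and contextual factors",
    "Poses searching questions about the experience",
    "Searches for answers, aware of the frame of reference",
    "Identifies new perspectives",
    "Translates perspectives into plans for future action",
]

def feedback(scores: dict[str, int], threshold: int = 2) -> list[str]:
    """Flag indicators scored below the (hypothetical) adequacy threshold,
    so training can be directed at the weakest components of reflection."""
    return [name for name in INDICATORS if scores.get(name, 0) < threshold]

# Example: a learner who reviews and questions experiences well but does not
# yet translate insights into learning goals.
scores = dict.fromkeys(INDICATORS, 2)
scores["Translates perspectives into plans for future action"] = 0
for item in feedback(scores):
    print("Needs training:", item)
```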

2. Standards for interpreting assessments of reflection

Here, again, there is a lack of consensus in the published literature. A few researchers have attempted to rank reflections. Wong et al [30] evaluated reflection in written papers by identifying reflective activities using two coding schemes. One, based on Boud's theory, had six items: attending to feelings, association, integration, validation, appropriation, and outcome of reflection. The other, based on the work of Mezirow, labelled students as non-reflectors (no evidence of reflective thinking), reflectors (evidence of relating experience to learning opportunities), or critical reflectors (evidence of integrating reflective outcomes in professional behaviour). The researchers found Boud's categories hard to apply to written materials, resulting in less reliable coding than with Mezirow's scheme. With only three categories, however, the latter scheme had a limited capacity to discriminate between people. Kember et al [31] addressed this issue with a more finely tuned coding scheme, also based on the work of Mezirow. Its seven categories ranged from unreflective thinking (habitual action, introspection, and thoughtful action) to reflective thinking (content reflection, process reflection, content and process reflection, and premise reflection). They dealt with the complexity of the coding scheme by providing guidelines for assessors, which resulted in an acceptable interrater reliability (Cronbach alpha 0.74). Boenink et al [10] used an alternative approach, ranking reflections on a scale from 1 to 10. Their scale was based on the number of perspectives students described in short written reflective reactions to a case vignette depicting a challenging situation. The advantage of this approach was the limited need for interpretation when identifying the perspectives; the scale was limited, however, by measuring only one aspect of reflection (awareness of the frame of reference used). Duke and Appleton [29] developed a broader marking grid to score reflective reports. It assessed eight reflection-supporting skills, identified through a literature review, on anchored five-level scales, each level linked to a grade (A, B+, B, C, and F). By providing grades, these authors were the first to set standards for reflective skills. Although the skills included in the grid were grounded in a literature review, the authors did not disclose how the levels were linked to grades.

Coding schemes have also been used to evaluate reflection in interviews. Boyd [32] assessed reflective judgement using a coding scheme based on seven stages of intellectual development described by King and Kitchener: Pre-reflective thinking (stages 1-3); quasi-reflective thinking (stages 4 and 5); and reflective thinking (stages 6 and 7). Measurements made with the scale had an interrater reliability of 0.76 (Cronbach alpha).
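For readers unfamiliar with the reliability coefficient reported in these studies, the following sketch shows the conventional computation of Cronbach's alpha when raters are treated as 'items'. The formula is standard; the ratings matrix is invented for illustration and does not reproduce data from any of the cited studies.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a (subjects x raters) score matrix, treating each
    rater as an 'item': alpha = k/(k-1) * (1 - sum(var_i) / var_total)."""
    _, k = ratings.shape
    rater_vars = ratings.var(axis=0, ddof=1)       # variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)    # variance of summed scores
    return (k / (k - 1)) * (1 - rater_vars.sum() / total_var)

# Toy example: three raters scoring five written reflections on a 7-level scheme.
ratings = np.array([
    [2, 3, 2],
    [5, 5, 6],
    [4, 4, 3],
    [6, 7, 6],
    [1, 2, 1],
])
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # high agreement -> alpha near 1
```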

The coding schemes described above can be divided into two groups according to their approach. The first ranks reflections by level, ranging from descriptive and/or unreflective to reflective or critically reflective, depending on the underlying theory [30-32]. The second identifies phases in the reflection process - reviewing an experience, analysis, and reflective outcome - depending on the underlying model of reflection [29,30]. This discrepancy complicates the interpretation of results, because levels and phases are not comparable.

Although the variety of scales and models of reflection limits comparison of findings across the reported studies, their results share a common feature: within its own scale, each study found learners to have very limited mastery of reflection, indicating clear room for improvement. Inadequate reflection has a negative effect on practice [3,18,33], presumably because learners with a limited ability to reflect let 'tunnel vision' stop them from questioning their behaviour in response to significant positive and negative experiences [18,34]. That situation need not go unchallenged, because research shows that reflection can be improved by training [14,32,35]; the minimum level of reflection needed to have a positive effect on practice, however, remains to be defined.

Until standards have been formulated that can identify practitioners whose level of reflection is adequate, it seems reasonable to clarify to stakeholders (curriculum developers, students, practitioners, assessors) what reflection skills are expected, and to urge learners to develop them as far as possible. We offer the model of common elements presented here as a way of doing that. In promoting reflective learning, however, a balance has to be struck between developing the ability to reflect and increasing the frequency of reflection. It has been argued that critically analysing personal practice after every experience can cause a disabling level of uncertainty [36,37]. Future standards will therefore have to balance the quality of reflection against its efficient and systematic application in practice, without pushing reflection to the point of becoming counterproductive.

3. Factors that complicate assessment

The metacognitive nature of reflection is an important complicating factor in its assessment [4]. It implies a thinking process accessible only to the reflecting person and hence observable by assessors only through that person's interpretative descriptions. Subjects are most often asked to 'translate' their reflections into written words, which are assessed against coding schemes or scoring grids [29-31,38-40]. Other suggested methods to 'visualise' reflections include verbalisation in interviews [32,41,42], written responses to vignettes [10], and reflective writing in portfolios [34,43]. Assessors' dependence on a person's interpretative description is a serious threat to the validity of assessments of reflection, because assessors have to judge selective descriptions without being able to verify their adequacy. Accordingly, this approach cannot detect bias caused by (un)intentional hindsight and limited introspective ability [44,45], by reflections being shaped by the requirements of the assessment, or by the selectivity and/or incompleteness of the aspects they portray [44]. Interviews have the advantage that assessors can pose clarifying questions and monitor a reflecting person's reactions, but they still leave assessors to ground their judgements in potentially subjective and selective narrative accounts of reflective activity. Two related problems follow. First, although the semantic skill of describing reflections is considered integral to effective reflection [46], skills beyond purely reflective ones are needed to turn reflection into writing and/or speech, which inevitably shapes reflective narratives [44]. Second, motivation decreases when the written approach to assessment does not align with a learner's preferred learning style [12]. The findings of Sandars and Homer [47] point to a discrepancy between 'net generation' students' preference for group-based, technology-rich multimedia activities (blogs, social networks, digital storytelling) and text-based approaches to reflective learning. Moreover, supporting learners to reflect through the creative use of multimedia is likely to increase their commitment to reflection and to stimulate even more effective reflection [48].

Self-assessment questionnaires have the advantage of circumventing indirect observation [13,14,49,50], but their reliance on accurate introspection introduces another validity threat [22,51]: it becomes unclear whether reflection or the ability to introspect is being tested. Eva and Regehr [45] concluded that it is best not to build solely on self-assessment approaches, as these tend to be inaccurate, and recommended triangulating introspection with other forms of feedback. Assessor-based methods could meet this requirement, provided assessors can be relied upon to give valid feedback.

Given such serious validity threats, the question remains whether it is possible to assess reflection at all. Two elements appear to be important. In the search for a valid approach, Bourner [24] suggested that the content and the process of reflection be viewed as two separate entities. While the content is a barrier to assessment because of its subjective nature, the process has a more general character. He transferred this approach from the assessment of critical thinking, where the use of questions to analyse ideas, evidence, and assertions demonstrates a person's capacity for critical thinking [24]. Similarly, Bourner proposed that observable items, such as the ability to formulate learning goals, be used to demonstrate a person's capacity for reflection. This approach shows some parallels with the content specificity of clinical reasoning [52]. However, in contrast to elements of reflections such as learning goals or plans for future action, whose meaning for the learner is subjective, content-specific knowledge has a more objective character.

Furthermore, reflections are intimately linked to their triggering situations [16,18,19,53], so information about the initial event can provide an objective frame of reference against which to verify elements of the reflection. For example, when someone describes their communication as good, the real-time presence of an assessor or a video recording of the event could provide corroborating information [54]. Finding a feasible way of obtaining a rich picture of the events that precede a reflection to be assessed is an important topic for future development.

4. Internal and external contextual factors affecting reflection assessment

The results of assessments of reflection are influenced by contextual factors as well as by people's ability to reflect. Our argument now turns to those modulating factors. Motivation is considered an important mediator of learning and achievement in medical education [55,56]. The expectancy-value model proposed by Wigfield and Eccles identifies the subjective value of a task to a person, and their expectation of performing it successfully, as the main predictors of task performance [57]. Applied to reflection, it predicts that the perceived importance of reflection for (professional) practice will determine the time and effort a person is willing to invest in it; those who do not expect a positive return are unlikely to reflect profoundly and critically [4]. The model also explains how personal factors, such as prior experience of reflective learning and a person's understanding of the reflection process, influence motivation and consequently reflective behaviour. Hence introductory sessions are important for framing the value and intended outcomes of reflection [4]. Furthermore, the expectancy-value model stresses external variables, which might include aspects of teaching and/or assessment, and it is reciprocal in nature: if involvement in reflective activities results in perceived better performance (internal) and/or in external appraisal, rewards, or reinforcement, a feedback loop starts to operate.
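To make the motivational prediction concrete, the toy sketch below uses the classic multiplicative reading of expectancy-value theory - motivation as the product of expectancy and subjective value - together with a simple feedback update in which perceived success raises future expectancy. This formalisation and all of its names and coefficients are our own illustrative assumptions, not equations given by Wigfield and Eccles.

```python
def motivation(expectancy: float, value: float) -> float:
    """Toy multiplicative expectancy-value term: both factors in [0, 1];
    if either is near zero, little effort is invested in reflection."""
    return expectancy * value

def update_expectancy(expectancy: float, perceived_success: float,
                      learning_rate: float = 0.3) -> float:
    """Hypothetical feedback loop: perceived success (or external reward)
    nudges expectancy, which in turn raises future motivation."""
    return expectancy + learning_rate * (perceived_success - expectancy)

# A learner who values reflection (0.8) but doubts their ability (0.3):
e, v = 0.3, 0.8
for cycle in range(3):
    print(f"cycle {cycle}: motivation = {motivation(e, v):.2f}")
    e = update_expectancy(e, perceived_success=0.9)  # supportive feedback
```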

Whereas reflection was traditionally conceived of as a strictly individual process, ideas are shifting towards conceptualising it as a process facilitated by social interaction [4,45]. A stimulating environment in which supervisors and peers give learners regular feedback and ask thought-provoking questions can, from that point of view, be expected to improve reflection. With non-judgemental questions, facilitators can encourage learners to explore the situation fully, to consider alternative perspectives and solutions, and to uncover taken-for-granted assumptions [3]. Furthermore, situations and reflection upon them can provoke strong emotions and negative thoughts, which can form a barrier to effective reflection. A facilitator can help learners assimilate these emotions and refocus on the reflection process [12,16]. To explore reflective thoughts, feelings, and emotions fully, it is crucial that a safe environment be established between the reflecting person and the facilitator(s) [3]. Beyond supporting others, acting as a facilitator is reported to be even more effective for a person's own reflection [58]. Schön, however, warned that an unbalanced relationship between learner and coach, and an undue influence of contextual factors, could hinder reflective practice by leading to defensiveness [18]. In line with this emphasis on contextual factors, Schaub et al developed a scale to assess teachers' competence in encouraging reflective learning [59]. It asks learners to identify whether teachers support self-insight, create a safe environment, and encourage self-regulation.

Because of their influence on reflection, contextual factors should be accounted for in both educational and assessment approaches. In education, doing so will help to develop effective strategies and to predict whether their results match the intended outcomes. In assessment, considering contextual factors will aid the interpretation of results and deepen understanding of the reflection process. We therefore suggest that internal and external contextual factors be considered in both education and assessment.

Summary

Whilst it is generally accepted that the ability to reflect is an important attribute for healthcare professionals, there is considerable uncertainty about how best to foster it in educational practice. The lack of an agreed way of assessing reflection is an important contributor to this uncertainty. There is, however, clearly discernible common ground between reflection theories. By defining that common ground, we have been able to assemble an eclectic model, which sees reflection as comprising: 1. reviewing experience; 2. critical analysis; and 3. reflective outcome. A way of reliably measuring reflection is needed so that summative judgements can be made and learners can receive effective feedback, but none has yet been developed. Standards defining an essential minimum level of reflective ability are also needed. Until they are available, we urge stakeholders (students, practitioners, teachers, supervisors, curriculum developers) to develop and communicate a local consensus about what is expected in exercises and formal tests.

Because reflection is a metacognitive process, it can only be assessed indirectly: through written reflections in vignettes or portfolios, or spoken accounts in interviews. These methods do not allow assessors to verify the information related to the reflections reported, which is a serious limitation. The widespread use of self-assessment questionnaires shares that validity problem and adds the inherent limitations of self-assessment. To counter these validity threats, it has been proposed that assessment should focus on the process rather than on the subjectively coloured content of reflection. In addition, because reflections are intimately entangled with their triggering situational context, we suggest that, where possible, objective information about the triggering situation be considered, allowing assessors to verify described reflections. The reflection process is influenced by internal factors (e.g. motivation, expectancy, and prior experience with reflection) and external factors (e.g. the formative or summative character of the assessment, the presence of facilitators, and the introduction given to the assessment). Awareness of these factors is important for developing effective educational strategies, interpreting assessment results, and, ultimately, increasing understanding of the reflection process. Based on the preceding discussion, we offer the following practical guidelines for educating and assessing reflection.

1. Clearly define the concept of reflection and verify that all stakeholders (curriculum developers, students, assessors and supervisors) adopt the same definition and intended outcomes.

2. Be specific about the level of reflection skills expected, identify what distinguishes good from inadequate reflection, and communicate this to all stakeholders.

3. Be aware of possible bias in self-assessment methods, caused by an inadequate ability to introspect.

4. Provide assessors with a perspective on the situation that triggered the reflection, so that the described reflections can be verified against an objective frame of additional information.

5. Consider and report contextual factors when assessing reflection and/or engaging in reflective education, to support interpretation of the outcomes.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

SK conceptualised the idea, and SK and TD wrote the initial drafts. All authors contributed essentially to the revised drafts, critically reviewed the manuscript, and approved the final version.

References

1. Collins J. Education Techniques for Lifelong Learning: Lifelong Learning in the 21st Century and Beyond. Radiographics. 2009;29(2):613–622. doi: 10.1148/rg.292085179.
2. Andersen RS, Hansen RP, Sondergaard J, Bro F. Learning based on patient case reviews: an interview study. BMC Medical Education. 2008;8:43. doi: 10.1186/1472-6920-8-43.
3. Plack MM, Greenberg L. The Reflective Practitioner: Reaching for Excellence in Practice. Pediatrics. 2005;116(6):1546–1552. doi: 10.1542/peds.2005-0209.
4. Sandars J. The use of reflection in medical education: AMEE Guide No. 44. Medical Teacher. 2009;31(8):685–695. doi: 10.1080/01421590903050374.
5. Li STT, Paterniti DA, Co JPT, West DC. Successful Self-Directed Lifelong Learning in Medicine: A Conceptual Model Derived From Qualitative Analysis of a National Survey of Pediatric Residents. Academic Medicine. 2010;85(7):1229–1236. doi: 10.1097/ACM.0b013e3181e1931c.
6. General Medical Council. Tomorrow's Doctors. London: GMC; 2009. http://www.gmc-uk.org/TomorrowsDoctors_2009.pdf_39260971.pdf [accessed October 28th 2010]
7. Scottish Deans' Medical Curriculum Group. Learning Outcomes for the Medical Undergraduate in Scotland: A Foundation for Competent and Reflective Practitioners. 3rd edition. 2007. http://www.scottishdoctor.org/resources/scottishdoctor3.doc [accessed January 17th 2011]
8. Nederlandse Federatie van Universitair Medische Centra. Raamplan artsen opleiding. 2009. http://www.nfu.nl/fileadmin/documents/Raamplan_Artsopleiding_2009.pdf [accessed January 17th 2011]
9. Cowpe J, Plasschaert A, Harzer W, Vinkka-Puhakka H, Walmsley AD. Profile and competences for the European dentist - update 2009. http://www.adee.org/cms/uploads/adee/ProfileCompetencesGraduatingEuropeanDentist1.pdf [accessed March 3rd 2011]
10. Boenink AD, Oderwald AK, de Jonge P, van Tilburg W, Smal JA. Assessing student reflection in medical practice. The development of an observer-rated instrument: reliability, validity and initial experiences. Medical Education. 2004;38(4):368–377. doi: 10.1046/j.1365-2923.2004.01787.x.
11. Mann K, Gordon J, MacLeod A. Reflection and reflective practice in health professions education: a systematic review. Advances in Health Sciences Education. 2009;14(4):595–621. doi: 10.1007/s10459-007-9090-2.
12. Grant A, Kinnersley P, Metcalf E, Pill R, Houston H. Students' views of reflective learning techniques: an efficacy study at a UK medical school. Medical Education. 2006;40(4):379–388. doi: 10.1111/j.1365-2929.2006.02415.x.
13. Aukes LC, Geertsma J, Cohen-Schotanus J, Zwierstra RP, Slaets JPJ. The development of a scale to measure personal reflection in medical practice and education. Medical Teacher. 2007;29:177–182. doi: 10.1080/01421590701299272.
14. Sobral DT. An appraisal of medical students' reflection-in-learning. Medical Education. 2000;34(3):182–187. doi: 10.1046/j.1365-2923.2000.00473.x.
15. Dewey J. How we think. Boston: DC Heath; 1910.
16. Boud D, Keogh R, Walker D. Reflection: Turning experience into learning. London: Kogan Page; 1985.
17. Moon JA. Reflection in learning and professional development: theory and practice. London: Kogan Page; 1999.
18. Schön DA. The reflective practitioner: How professionals think in action. New York: Basic Books; 1983.
19. Schön DA. Educating the reflective practitioner. San Francisco: Jossey-Bass; 1987.
20. Kolb DA. Experiential learning: Experience as a source for learning and development. New Jersey: Prentice Hall; 1984.
21. Mezirow J and Associates. Learning as Transformation: Critical Perspectives on a Theory in Progress. San Francisco: Jossey-Bass; 2000.
22. Atkins S, Murphy K. Reflection - A Review of the Literature. Journal of Advanced Nursing. 1993;18(8):1188–1192. doi: 10.1046/j.1365-2648.1993.18081188.x.
23. Korthagen F, Vasalos A. Levels in reflection: core reflection as a means to enhance professional growth. Teachers and Teaching: theory and practice. 2005;11(1):47–71. doi: 10.1080/1354060042000337093.
24. Bourner T. Assessing reflective learning. Education + Training. 2003;45(5):267–272. doi: 10.1108/00400910310484321.
25. Mamede S, Schmidt HG. The structure of reflective practice in medicine. Medical Education. 2004;38(12):1302–1308. doi: 10.1111/j.1365-2929.2004.01917.x.
26. Stockhausen L. The Clinical Learning Spiral - A Model to Develop Reflective Practitioners. Nurse Education Today. 1994;14(5):363–371. doi: 10.1016/0260-6917(94)90031-0.
27. Greenwood J. Reflective practice: a critique of the work of Argyris and Schön. Journal of Advanced Nursing. 1993;18(8):1183–1187. doi: 10.1046/j.1365-2648.1993.18081183.x.
28. Cheetham G, Chivers G. A New Look at Competent Professional Practice. Journal of European Industrial Training. 2000;24(7):374–383. doi: 10.1108/03090590010349827.
29. Duke S, Appleton J. The use of reflection in a palliative care programme: a quantitative study of the development of reflective skills over an academic year. Journal of Advanced Nursing. 2000;32(6):1557–1568. doi: 10.1046/j.1365-2648.2000.01604.x.
30. Wong FKY, Kember D, Chung LYF, Yan L. Assessing the Level of Student Reflection from Reflective Journals. Journal of Advanced Nursing. 1995;22(1):48–57. doi: 10.1046/j.1365-2648.1995.22010048.x.
31. Kember D, Jones A, Loke A. Determining the level of reflective thinking from students' written journals using a coding scheme based on the work of Mezirow. International Journal of Lifelong Education. 1999;18(1):18–30. doi: 10.1080/026013799293928.
32. Boyd LD. Development of reflective judgement in the pre-doctoral dental clinical curriculum. European Journal of Dental Education. 2008;12(3):149–158. doi: 10.1111/j.1600-0579.2008.00511.x.
33. Evans AW, McKenna C, Oliver M. Self-assessment in medical practice. Journal of the Royal Society of Medicine. 2002;95(10):511–513. doi: 10.1258/jrsm.95.10.511.
34. Pinsky LE, Monson D, Irby DM. How excellent teachers are made: reflecting on success to improve teaching. Advances in Health Sciences Education. 1998;3:207–215. doi: 10.1023/A:1009749323352.
35. Driessen EW, van Tartwijk J, Overeem K, Vermunt JD, van der Vleuten CPM. Conditions for successful reflective use of portfolios in undergraduate medical education. Medical Education. 2005;39(12):1230–1235. doi: 10.1111/j.1365-2929.2005.02337.x.
36. Burnard P. Reflections on reflection. Nurse Education Today. 2005;25(2):85–86. doi: 10.1016/j.nedt.2004.11.001.
37. Aukes LC. Personal reflection in medical education. PhD thesis. University of Groningen; 2008.
38. Carr S, Carmody D. Experiential learning in women's health: medical student reflections. Medical Education. 2006;40(8):768–774. doi: 10.1111/j.1365-2929.2006.02536.x.
39. Brady DW, Corbie-Smith G, Branch WT. "What's Important to You?": The Use of Narratives To Promote Self-Reflection and To Understand the Experiences of Medical Residents. Annals of Internal Medicine. 2002;137(3):220–223.
40. Howe A, Barrett A, Leinster S. How medical students demonstrate their professionalism when reflecting on experience. Medical Education. 2009;43(10):942–951. doi: 10.1111/j.1365-2923.2009.03456.x.
41. Hatton N, Smith D. Reflection in Teacher Education - Towards Definition and Implementation. Teaching and Teacher Education. 1995;11(1):33–49. doi: 10.1016/0742-051X(94)00012-U.
42. Peden-McAlpine C, Tomlinson PS, Forneris SG, Genck G, Meiers SJ. Evaluation of a reflective practice intervention to enhance family care. Journal of Advanced Nursing. 2005;49(5):494–501. doi: 10.1111/j.1365-2648.2004.03322.x.
43. Tigelaar DEH, Dolmans DHJM, de Grave WS, Wolfhagen IHAP, van der Vleuten CPM. Portfolio as a tool to stimulate teachers' reflections. Medical Teacher. 2006;28(3):277–282. doi: 10.1080/01421590600607013.
44. Hargreaves J. So how do you feel about that? Assessing reflective practice. Nurse Education Today. 2004;24(3):196–201. doi: 10.1016/j.nedt.2003.11.008.
45. Eva KW, Regehr G. Self-assessment in the health professions: A reformulation and research agenda. Academic Medicine. 2005;80(10):S46–S54.
46. Pee B, Woodman T, Fry H, Davenport ES. Appraising and assessing reflection in students' writing on a structured worksheet. Medical Education. 2002;36(6):575–585. doi: 10.1046/j.1365-2923.2002.01227.x.
47. Sandars J, Homer M. Reflective learning and the Net Generation. Medical Teacher. 2008;30(9-10):877–879. doi: 10.1080/01421590802263490.
48. Sandars J, Murray C, Pellow A. Twelve tips for using digital storytelling to promote reflective learning by medical students. Medical Teacher. 2008;30(8):774–777. doi: 10.1080/01421590801987370.
49. Mamede S, Schmidt H. Correlates of reflective practice in medicine. Advances in Health Sciences Education. 2005;10(4):327–337. doi: 10.1007/s10459-005-5066-2.
50. Kember D, Leung DYP, Jones A. Development of a Questionnaire to Measure the Level of Reflective Thinking. Assessment and Evaluation in Higher Education. 2000;25(4):381–395. doi: 10.1080/713611442.
51. Kruger J, Dunning D. Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments. Journal of Personality and Social Psychology. 1999;77(6):1121–1134.
52. Wimmers PF, Splinter TAW, Hancock GR, Schmidt HG. Clinical competence: General ability or case-specific? Advances in Health Sciences Education. 2007;12(3):299–314. doi: 10.1007/s10459-006-9002-x.
53. Korthagen F, Lagerwerf B. Reframing the relationship between teacher thinking and teacher behaviour: levels in learning about teaching. Teachers and Teaching: theory and practice. 1996;2(2):161–190. doi: 10.1080/1354060960020202.
54. Liimatainen L, Poskiparta M, Karhila P, Sjogren A. The development of reflective learning in the context of health counselling and health promotion during nurse education. Journal of Advanced Nursing. 2001;34(5):648–658. doi: 10.1046/j.1365-2648.2001.01794.x.
55. Artino AR, La Rochelle JS, Durning SJ. Second-year medical students' motivational beliefs, emotions, and achievement. Medical Education. 2010;44
