Background: The physical therapy profession has been perceived as one that bases its practice largely on anecdotal evidence and that uses treatment techniques for which there is little scientific support. Physical therapists have been urged to increase evidence-based practice behaviors as a means to address this perception and to enhance the translation of knowledge from research evidence into clinical practice. However, little attention has been paid to the best ways in which to support clinicians’ efforts toward improving evidence-based practice.
Objectives: The purpose of this study was to identify, implement, and evaluate the effectiveness of strategies aimed at enhancing the ability of 5 pediatric physical therapists to integrate scientific research evidence into clinical decision making.
Design: This study was a formative evaluation pilot project.
Methods: The participants in this study collaborated with the first author to identify and implement strategies and outcomes aimed at enhancing their ability to use research evidence during clinical decision making. Outcome data were analyzed with qualitative methods.
Results: The participants were able to implement several, but not all, of the strategies and made modest self-reported improvements in evidence-based practice behaviors, such as reading journal articles and completing database searches. They identified several barriers, including a lack of time, other influences on clinical decision making, and a lack of incentives for evidence-based practice activities.
Conclusions: The pediatric physical therapists who took part in this project had positive attitudes toward evidence-based practice and made modest improvements in this area. It is critical for the profession to continue to investigate optimal strategies to aid practicing clinicians in applying research evidence to clinical decision making.
Physical therapists have been criticized for not using available research to inform their clinical decision making.1–12 The profession has been perceived as one that bases its practice largely on anecdotal evidence and that uses treatment techniques for which there is little scientific support.6,8–14 This issue was identified as early as 1969 by Eugene Michels in a presidential address delivered to the membership of the American Physical Therapy Association. Michels called on physical therapists to move away from practice based solely on the suggestions of colleagues or personal experience and toward practice based on scientific research.15 The importance of using research evidence to guide physical therapist practice has received much attention in the decades since Michels’ address. Numerous authors have stated that physical therapists have a moral, professional, and ethical obligation to provide evidence-based service and to move away from interventions based solely on anecdotal testimonies, expert opinion, or physiologic rationale.1–5,8–11,14,16–27 The American Physical Therapy Association has identified evidence-based practice as an important goal for the profession in its “Vision 2020” statement.28
Despite this attention, minimal research has been aimed at identifying optimal ways in which to assist clinicians in translating knowledge generated by scientific research into clinical practice and decision making. Clinicians have been urged to increase evidence-based practice behaviors, such as writing clinical questions, completing database searches, obtaining primary research articles and systematic review articles from peer-reviewed journals, and analyzing those articles for their level of evidence and quality.1,5,7,11,21,29,30 Clinicians have been expected to use a decision-making paradigm that integrates patient preferences, clinical circumstances, personal experience, and scientific evidence into an optimal clinical decision for an individual patient.29,31,32 However, little attention has been paid to the best ways in which to support physical therapist clinicians’ efforts in these areas. It may be simplistic to believe that simply publishing high-quality research will result in knowledge from that research being translated easily into routine practice.29
The concept of knowledge translation provides a framework that may be helpful in considering the challenges that clinicians are likely to face when attempting to implement evidence-based practice. Knowledge translation has been defined as the exchange, synthesis, and ethically sound application of knowledge—within a complex system of interactions among researchers and users—to accelerate capture of the benefits of research.33 A key aspect of this framework is the acknowledgment of the user as an active problem solver and a constructor of his or her own knowledge rather than a passive receptacle of information and expertise.34,35 In addition, behavior change for the user is rarely a linear process that proceeds logically from knowledge dissemination to alterations in behavior and subsequent improved outcomes. Instead, it is much more likely to be dynamic, iterative, nonlinear, and emergent.36 Finally, the level of interaction and trust between the researcher generating knowledge and the clinician using that knowledge, as well as the clinician's perception of the relevance of the research, also are important aspects of the knowledge translation framework.37
Other challenges to the ethically sound application of knowledge through evidence-based practice have been identified. Time constraints are almost universally identified by health care practitioners as a primary limiting factor.5,16,20,38–42 Clinicians across a variety of health care settings refer to the pressures of the health care environment and administrators’ emphasis on productivity as factors that directly inhibit their ability to search for, gather, read, and integrate scientific information relevant to daily practice.16,20,38,39,43,44 Physical therapists in settings not affiliated with teaching or research institutions often face challenges in accessing relevant scientific evidence.16 Older and more-experienced physical therapists may struggle with implementing evidence-based practice behaviors more than their younger counterparts.20 The enormous volume of research literature, which continues to expand, also constitutes a barrier for practitioners. Approximately 30,000 biomedical journals are published each year, and one estimate is that a decision maker needs to read, on average, 19 articles each day to stay up-to-date in his or her field.45 In addition, physical therapists often have difficulty applying research findings to individual patients and are unclear about whether there is research evidence to support or refute the use of therapeutic interventions.20 Negative attitudes about research may further compound the difficulties in the implementation of evidence-based practice in physical therapy.21
The minimal research completed to date on strategies to enhance physical therapists’ evidence-based practice knowledge, attitudes, and behaviors has produced mixed results. Stevenson et al46 found that physical therapists rated taking courses as the most important method of staying up-to-date in clinical practice and that research literature and Web-based information were ranked as least important. This attitude did not change despite the use of a training program focused on educating the participants in evidence-based practice principles.46 Other investigations demonstrated that changes in underlying knowledge about evidence-based practice occur more readily than changes in evidence-based practice behaviors.47,48 For health care practitioners in general, passive dissemination strategies and one-time continuing education sessions generally are ineffective. In contrast, strategies that are interactive, multifaceted, and targeted toward barriers to change are more likely to be successful in eliciting the use of knowledge for behavior change.48–54 Also needed are a supportive organizational culture, commitment, and a credible change agent.36
It is not surprising that the evidence thus far suggests that physical therapists continue to make clinical decisions primarily on the basis of knowledge gained from peers, continuing education conferences, and entry-level education rather than knowledge translated from research evidence.13,14,20,39,55–57 The purpose of this research project was to identify, implement, and evaluate the effectiveness of strategies aimed at enhancing the ability of 5 pediatric physical therapists to integrate scientific research evidence into clinical decision making. The significance of this study is the focus on pediatric physical therapists and the use of interactive, multifaceted, and targeted strategies developed in collaboration with the study participants and aimed toward barriers to change that are unique to this practice setting.
This 3-phase formative evaluation pilot project was designed to evaluate the effectiveness of a program aimed at improving the evidence-based practice of a group of pediatric physical therapists working in school settings. Formative evaluations focus on ways of improving the effectiveness of a program, a policy, an organization, a product, or a staff unit. Such evaluations have a formal design, and the data are collected, analyzed, or both, at least in part, by an evaluator.58(p221) Input on the program was solicited from the 5 participants throughout the present project and specifically during the identification of feasible evidence-based practice strategies and relevant individual and group outcomes. This process was based, in part, on a participatory or action research approach, in which the purpose is to produce new knowledge that is directly pertinent and beneficial to the setting where the investigation takes place. The outcomes may or may not be relevant or transfer to other, similar settings.59–63 Therefore, participatory research emphasizes the production of knowledge to elicit change and improvement in the lives of those involved in the research process and is done with participants, never to or on subjects.62,63
To identify potential participants for this project, we contacted the owner of a physical therapy private practice to gauge interest in identifying ways to enhance employees’ knowledge and use of evidence-based practice. This private practice employed more than 15 physical therapists, most on a part-time basis and most working in a variety of school placements and with minimal regular interaction with other physical therapists. The owner indicated an interest in this topic and collaborated with the first author in using a stratified purposeful sampling strategy58(p240) to identify employees potentially willing and able to take part in the project. The stratification was based on years of work experience as a pediatric physical therapist. Previous research had identified differences in attitudes and beliefs toward evidence-based practice between younger, less-experienced physical therapists and their older, more-experienced colleagues.20
Six potential participants, including the practice owner, were identified initially. When approaching the employees, the owner emphasized the voluntary nature of the project and the fact that there was no requirement to participate. One individual declined, and the remaining 5 are henceforth referred to as “participants.” Table 1 shows the demographic information for the participants. The therapists worked in a variety of pediatric school settings and had no regular interaction with each other during their daily clinical practice. Interaction with the practice owner also was minimal and consisted mainly of phone and e-mail contact. All of the participants read and signed an informed consent form before participating in the project.
Phase 1: Establishment of Strategies and Outcome Measures
During phase 1 of this 3-phase project, the participants completed a self-report evidence-based practice questionnaire that was developed by Jette et al20 and aimed at describing their evidence-based knowledge, beliefs, attitudes, and behaviors. In addition, the participants completed individual and group interviews with the first author; the interviews focused on the constructs of clinical decision making and the use of scientific research to guide clinical practice. The information from these activities was summarized and shared with all of the participants, leading to a team meeting during which specific individual and group strategies for enhancing evidence-based practice and project outcome measures were established. The strategies were intended to enhance each participant's ability to use research evidence in daily clinical practice. The strategies were an evidence-based practice workshop, enhanced practice Web site resources, and an online evidence-based practice exercise.
Three participant outcome measures also were identified; these measures were evidence-based practice ranking, individual goals and goal attainment scaling (GAS), and a self-report survey of baseline and follow-up evidence-based practice knowledge and behaviors. In addition to these outcome measures, the participants completed follow-up group and individual interviews with the first author and the questionnaire developed by Jette et al.20 Both were designated as study outcomes and described the participants’ perceptions of evidence-based practice at the conclusion of the project. Table 2 shows the strategies and outcomes, and the Figure shows an overview of the project.
Phase 2: Implementation of Strategies and Outcome Measures
Evidence-based practice workshop.
At the request of the participants, the first author provided a workshop aimed at improving participants’ skills relating to evidence-based practice. This 4-hour workshop took place during the participants’ non–work time (a Saturday morning). Before the workshop, the participants developed individual goals relating to their knowledge and implementation of evidence-based practice. The objectives of the workshop were developed to address these goals, with an emphasis on accessing and analyzing a variety of research evidence and integrating the knowledge gathered from that evidence into clinical decision making. The objectives of the workshop are shown in Appendix 1.
Enhanced practice Web site resources.
The participants also expressed a desire for additional activities to assist in the application of knowledge and skills acquired during the workshop. Activities included posting clinical questions and case scenarios on the practice Web site and generating critically appraised topics that could also be posted and made available to all practice employees.64 This use of the Web site as an evidence-based practice resource did not exist at the time of the workshop, and the practice owner indicated a willingness to pursue this activity with her Web page consultant.
Online evidence-based practice exercise.
Another strategy that followed the workshop was an online evidence-based practice exercise. The purpose of this exercise was to provide a guided practice opportunity for new skills learned during the workshop. The exercise included several phases separated by 3- or 4-day intervals, was conducted through group e-mail communication with all of the participants, and was led by the first author. During the first phase, the first author created a hypothetical clinical case and developed a clinical question based on that case. During the second phase, the first author explicitly described the search strategies used to gather evidence to answer the clinical question. He then identified the research articles that were most appropriate to obtain and analyze to answer the clinical question. During the final phase, the first author shared his critical analysis of the research articles and his answer to the clinical question, based on the evidence. The participants were encouraged to work along with the first author and to compare their efforts with his. This strategy was designed so that the process would be repeated over the 6-month time frame of the study, with one of the participants taking on the leadership role in identifying the clinical question, performing the search, and generating an evidence-based answer to the question.
Evidence-based practice ranking.
The first outcome measure chosen by the participants was a self-identified evidence-based practice ranking. The baseline ranking was determined during the initial individual interview and was presented as follows: “If you could place yourself on a continuum of evidence-based practice, with 1 being completely not being an evidence-based practitioner and 10 being an optimal evidence-based practitioner, where would you place yourself today?” The participants felt that an increase in this ranking would represent an improvement in knowledge and use of evidence-based practice.
Individual goals and GAS.
Each participant set specific individual goals relating to evidence-based practice. Goals may affect performance by focusing attention, directing effort, increasing motivation, and enabling the development of strategies to achieve objectives. Therefore, in the context of this project, setting individual goals may have served as an intervention strategy in addition to providing an outcome measure. At the suggestion of the first author, the participants used a GAS framework to establish individual goals.65,66 Researchers have used GAS as an option for establishing and monitoring individual goals in a variety of subject areas, including mental health, occupational therapy, physical therapy, special education, professional development, and rehabilitation.67–71 This framework requires that the identified goal be assigned a score of 0. Additional scores of +1 and +2 are assigned to outcomes that represent increases or improvements relative to the score of 0. Conversely, scores of −1 and −2 indicate less than optimal progress and no progress toward the goal, respectively.65,66,68 This process takes goal achievement further by allowing a calibration of the degree of success, recognizing partial completion and additional achievement, as opposed to the “all-or-none” approach of most goal-setting systems.65,66,68 In the present project, each participant identified at least 2 goals that were measurable and attainable within a 6-month time frame. Once those goals were established, the participant worked with the first author to generate outcomes (individual goals) corresponding to the −2, −1, +1, and +2 scores. At the conclusion of the project, each participant reported his or her score for each individual goal.
Self-report survey of knowledge and behaviors.
The participants also identified the importance of establishing a measurable outcome for baseline and follow-up evidence-based practice knowledge and behaviors. A 10-item questionnaire originally developed and applied by Connolly et al56 was used for this purpose. This questionnaire is a self-report measure of knowledge and behaviors relating to research and includes items concerning comfort level and confidence in reading and applying research findings, personal habits in reading professional literature, and beliefs about the importance of research to the profession. The questionnaire also attempts to measure a perceived source of authority for clinical decision making and beliefs about how research is viewed by physical therapist colleagues.56 The authors described a brief validation process for the use of this questionnaire to measure changes over time in entry-level physical therapist students’ attitudes and perceptions about research in physical therapy.56 Baseline and follow-up scores for each participant were compared for differences. On the basis of the categorization of the individual items in the questionnaire, several items were combined so that, for example, the participants’ self-reported knowledge and behaviors relating to research could be evaluated for changes between the beginning and the end of the project.
Phase 3: Follow-up Semistructured Interviews
Finally, semistructured individual interviews and a group interview were conducted during phase 3 of this project, along with readministration of the evidence-based practice questionnaire developed by Jette et al20 to each participant. The purpose of this phase was to provide an opportunity for the participants to reflect on the impact of the project on their professional practice, their participation in the establishment of strategies and outcomes, and future directions for research and practice. Readministration of the questionnaire of Jette et al20 provided additional descriptive data about the participants’ beliefs, attitudes, and behaviors relating to evidence-based practice and allowed a comparison of quantitative data between phase 1 and phase 3.
Because of the small sample size, the quantitative data were analyzed descriptively in conjunction with the qualitative interview data to describe the participants’ outcomes at the end of the 6-month program. The individual and group interviews were transcribed verbatim, and the transcripts for each participant were analyzed by the first author to identify broad, overarching initial impressions. A qualitative data analysis expert (fourth author) worked concurrently to review the interviews and the first author's initial impressions. The interview transcripts and initial impressions were then sent to each participant for review to further ensure the accuracy of this initial stage of analysis. This “member checking”72 represented a first effort toward enhancing the trustworthiness of the data analysis and interpretation.
After the initial member checking, the ATLAS.ti* qualitative data analysis program was used to aid in managing the volume of data. The first author began open coding of the interview transcripts.72 A coded piece of data is the smallest item of analyzed data in qualitative research. The process involved reading each participant's individual interview transcript and the group interview transcript line by line and highlighting phrases, sentences, groups of sentences, small paragraphs, or a combination of these that contained a meaningful, distinct thought pertaining to evidence-based practice. Each distinct thought was labeled with a 1- or 2-word code that enabled the first author to later retrieve, sort, and organize data into larger categories containing similar ideas. The data analysis expert reviewed the coded data and verified agreement with the first author's analysis of the data.
After all of the data had been coded, the first author reassembled the coded data into larger, synthesized units of meaning. The data were grouped into categories of similar information labeled with a phrase or sentence that reflected the content of information in a category. For example, the category “application and utilization of evidence-based practice” included the following codes: EBP-angst, EBP-practices-Internet, EBP-practices, EBP needs, barriers, complacency, and ranking.
Next, the categories for each participant were organized and synthesized to aid in cross-case analysis of the 5 participants. This cross-case analysis and the concomitant analysis of the quantitative data from the other outcomes led to the emergence of several overarching themes for the project.
The themes that emerged at the conclusion of phase 3 included sustained positive attitudes and beliefs about evidence-based practice; variable implementation of the strategies developed during the initial collaboration phase; variable performance for individual goals; persistent barriers, including a lack of time and a lack of incentives for evidence-based practice activities; and a desire for user-friendly evidence-based clinical practice guidelines.
Positive Attitudes and Beliefs About Evidence-Based Practice
Table 3 shows the baseline and final mean scores for items relating to attitudes, beliefs, and knowledge about evidence-based practice on the questionnaire developed by Jette et al.20 The participants sustained very positive attitudes throughout the project. All of the participants believed that research evidence is useful and that it aids in clinical decision making, as indicated by the following comments:
Participant P: “I think (pediatric physical therapists) absolutely need to be (evidence-based practitioners) if they are going to be respected as a profession out there.”
Participant A: “So that's why I think (evidence-based practice) is very important, so that we see outcomes faster and our treatments are better, and that's why I think that it should be important to always keep up on the research.”
Variable Implementation of Strategies
The first author provided a 4-hour evidence-based practice workshop during phase 2 of this project. This workshop was provided at no cost but occurred outside of regular work hours. Despite an effort to schedule the workshop at a convenient time, 2 of the 5 participants were unable to attend because of family obligations (participant K) and illness (participant A). Subsequently, both did participate in a detailed phone conference with the first author to discuss the material covered in the workshop. Specific comments from the participants reflected a general sense that the workshop was helpful but not sufficient to lead to changes in daily practice.
Participant P: “I thought it was a great introduction, but you know there is just too much material to get it in one setting.”
Participant R: “I agree that (the workshop) was helpful, but I still feel lost out there on my own, and more supervised practice is definitely needed.”
Participant K: “I thought it was very helpful, especially for me, who hasn’t had any exposure to that formal training in 5 to 6 years. So that was much more helpful to me as a refresher, but then I, too, see that it was—it answered a lot of my questions, but, even still, when I went to do it myself with all the notes in front of me, it just appeared that I needed some more practice. And finding the time to do that is difficult. But I did find (the workshop) very helpful.”
Upgrade of Web site resources.
The upgrade of the practice Web site to allow postings and online case discussions did not occur because of time constraints for the practice owner.
Online practice exercise.
The online practice exercise after the workshop did occur and was led by the first author. However, there was minimal interaction among the participants during this activity, and there were no subsequent practice exercises, as originally suggested. During the follow-up interviews, the participants attributed this result to several factors. A lack of time was a consistent issue. Also, several of the participants alluded to the fact that the clinical case scenario developed by the first author was not relevant to their practice needs at that time and therefore was not a worthwhile time investment. None of the participants expressed a willingness to take over and lead the online interaction process once the first author completed the first clinical case scenario.
Participant L: “It's still so varied with the presentation that it's very difficult to take a blanket statement about cerebral palsy and apply it to any of the kids or take results from a piece of evidence that you might get and apply it.”
Participant P: “I guess what I find is that the desire is there to do it but the follow through time (was not).”
Variable Performance Relating to Evidence-Based Practice Knowledge and Behaviors
The participants’ rankings and GAS scores are shown in Appendix 2. Overall, the participants reported no progress on 4 goals, minimal progress on 3 goals, and achievement of the remaining 6 goals. Self-reported ranking improved for each of the participants, including an improvement from 1 of 10 to 4.5 of 10 for participant R. In addition to rankings and GAS scores, mean baseline and final quantitative scores for all of the participants on the self-report evidence-based practice questionnaire (Jette et al20) and on the questionnaire about knowledge and behaviors relating to research (Connolly et al56) are shown in Tables 3, 4, and 5. The scores reflected some improvement in knowledge relating to research and evidence-based practice across all participants combined. However, there was minimal improvement in self-reported evidence-based practice behaviors. This result also was reflected in several comments from the participants.
Participant K: “But, you know, I just wonder how long … how many searches do you have to do … how much time does this take … you know … how many times is it going to take me—45 minutes or an hour— to find something when I just don’t have that time to give.”
Participant R: “My lack of comfort with doing it just on my own, too. Like if I knew what I was doing and I thought, ‘OK, I have 20 minutes to sit down and do this,’ and I can do it, but I think [sigh] I have 20 minutes, but I don’t even hardly know where to begin.”
Participant L: “When I sat down and really thought about this, I have looked up a lot of different things; it just doesn’t seem like I have. I mean I was surprised when I wrote down what I could remember of what I had gotten on them, so that was kind of nice to really think about it.”
Other comments reflected the numerous challenges faced by therapists during clinical decision making and clinical practice. In addition, there were general perceptions among the participants that many of their professional colleagues were not regularly using research evidence to guide practice and that there were few incentives from their clinical environment (mainly elementary and secondary schools) to carry out evidence-based practice activities.
Participant A: “Right now—it's not realistic unless you get a ton of cancellations and you’re sitting there and you’re actually caught up with your paperwork, then there's a chunk of time. Right now—where I am just in my family life—I just don’t have the time in the evening.”
Participant P: “So I mean, so on one hand they may verbally encourage it, but there's not that, when they’re looking at that financial—what they’re paying us—they’re not considering that piece into that, you know. They’re still, they want us to come [provide therapy] and go. So you know I think that perhaps adds to the whole complexity of the situation.”
Participant L: “You know there really isn’t any time when that's part of our job. You know that it's considered to be on our time ourselves—it's just time you have to make, and I think that makes it difficult.”
Participant A: “Are they evidence-based practitioners? I don’t think all of them are.”
Participant K: “One of the issues I do see with pediatric therapists is the majority of them are part time … they are off on summers. So that's an issue, too. Working full time versus part time—your whole mind-set, your whole availability.”
Suggestions for Future Directions
Finally, at the conclusion of this project, the participants identified several strategies that may be effective in supporting evidence-based practice by clinicians. These strategies included user-friendly evidence-based practice guidelines, perhaps generated and updated by the professional association. In addition, each of the participants felt strongly that continuing professional education should be mandatory to renew licensure.
Participant P: “I have been an advocate of (mandatory continuing education) since 1981, when I became a therapist [laughter]. I could never understand why this wasn’t mandatory. And one of the things they said was, ‘Well, you are a professional, you should want to do it.’ Well, that's unrealistic. People are only going to do what they have to do.”
Consistent with the results of several other studies, the participants in the present study reported positive attitudes about evidence-based practice throughout the 6-month time frame.12,16,20,39,46,56,73 The participants frequently referred to the benefits of research evidence as a means to provide support for clinical decisions. The participants believed that using research evidence was likely to increase confidence in decision making, improve effectiveness, and enhance the stature of the physical therapy profession.
On the basis of previous research suggesting that an interactive, multifaceted, and targeted approach is more effective in eliciting behavior change, the first author and the participants collaborated to develop strategies and outcomes aimed at improving evidence-based practice knowledge and behaviors.48–54 Despite this collaboration and despite the sustained positive attitudes of the participants toward evidence-based practice, the implementation of the strategies was variable. The participants indicated that the evidence-based practice workshop was helpful but not sufficient for sustaining behavior change. The strategy of updating the Web site was not implemented because of time constraints for the practice owner. The workshop follow-up activity was not sustained beyond the initial effort of the first author.
With regard to outcomes, the results of the present study are similar to those of previous work; that is, knowledge may improve, but changes in clinicians’ behaviors are less likely to occur.46–48 The present project was modestly successful in improving the participants’ evidence-based practice knowledge and behaviors. Five of the 13 GAS goals established by the participants for behavior change were achieved. The participants’ self-reported rankings as evidence-based practitioners all improved as well. The items reflecting evidence-based practice behaviors on the questionnaire developed by Jette et al20 showed minimal improvement. According to the questionnaire developed by Connolly et al,56 the participants’ knowledge and behaviors were improved at the conclusion of the present project.
The knowledge translation framework suggests that a user of knowledge is an active problem solver and a constructor of his or her own knowledge and that behavior change is rarely linear.34–36 All of the participants indicated some satisfaction with their improvements as well as some frustration that more-substantial progress did not occur and that they were unable to take advantage of all of the group strategies. Improvements were attributed to several factors. The evidence-based practice workshop provided baseline knowledge for some of the participants, whereas others described it as more of a refresher. The individual goals that emerged as a result of participation in the research project led to increased attention to and awareness of evidence-based practice issues and to a variety of individual strategies unique to each participant. For example, participant L began bringing research journals to work each day and used spare time to read articles. Participant A indicated that the project provided the impetus to more consistently use the skills that she had learned in her entry-level education. Participant K believed that the project—in particular, the workshop—provided her with some additional tools and an increased willingness to initiate evidence-based practice activities, such as database searches. The self-identified goals and the regular interaction of the participants and the first author served as incentives for all of the participants to focus on and improve evidence-based practice. Previous research indicated that a credible change agent is a critical component of a successful knowledge translation process.36 The first author in this project may have fulfilled the role of a change agent for the participants.
Additional important considerations are the multiple influences on the daily clinical decisions made by the participants in the present study. Clinical decision making relates to the thought processes associated with a clinician's examination and treatment of a patient or client. It is a process in which information is appraised, viable options are identified, and a choice is made. The goal is wise action or the best clinical judgment in a specific context.74 Throughout the present project, the participants reported that multiple influences and constraints on clinical decision making affected their ability to translate research evidence into practice. These constraints and influences, identified mainly through the individual and group interviews, are summarized in Appendix 3.
Clinical decision making for practitioners is extremely challenging. Awareness of available research evidence and insight into the relevance of that research for a particular child are critical, but not sufficient. Skilled pediatric physical therapists must also be able to communicate information effectively to various constituencies, including families, other caregivers, teachers, and other health care providers, and to advocate for an optimal course of action. All of the influences on decision making must be taken into consideration as elements of evidence-based practice and of expert practice in pediatric physical therapy.75–79
Along with these multiple influences on clinical decision making, several barriers to evidence-based practice were identified. A lack of incentives for evidence-based practice was a significant issue for the participants in the present study. Although the lack of incentives was described in different ways, it was clear that this issue had a strong influence on evidence-based practice activities and on the use of research evidence to guide clinical decision making. Despite the effort of the physical therapy profession to move toward evidence-based practice,1,3,4,14,17–22 the participants did not perceive the school setting as supportive of evidence-based practice. A lack of reimbursement for time to complete evidence-based practice activities during the daily routine of physical therapists working part time in school settings was an important factor. In the present project, all of the strategies occurred outside of work hours. As described by participant P, a lack of reimbursement for ongoing professional development and amassed expertise also contributed to this perceived lack of incentives. Finally, several of the participants described a lack of evidence-based practice activities among their physical therapist colleagues. Most of the evidence to date indicates that this observation is consistent with the behavior of many physical therapist clinicians.12–14,20,38,39,46,56,80–82
A recommendation from all of the participants was the development of evidence-based clinical practice guidelines that are available and accessible within their daily routines. All of the participants identified lack of time as a consistent barrier and discussed the positive appeal of a condensed summary of evidence. In a recent review article relating to the practice of medicine, electronic guidelines, also described as decision support systems, were found to significantly improve physicians’ clinical practice in 68% of the research studies reviewed.83 Four features of the guidelines were identified as independent predictors of improved clinical practice: automatic provision of decision support as part of clinician work flow, provision of recommendations rather than just assessments, provision of decision support at the time and location of decision making, and computer-based decision support.83
An additional recommendation from all of the participants was a requirement for mandatory continuing education credits for licensure. Currently, requirements for continuing education for physical therapists vary widely; most states have no mandatory requirements.84 The participants in the present project felt strongly that such a requirement is necessary to ensure that all physical therapists participate actively in ongoing professional development. In addition, several participants emphasized the importance of continuing education conferences that are interactive, clinically relevant, and evidence based.
The nature of this formative evaluation pilot project does not permit generalization to a larger population. Instead, the focus was on describing, in detail, the phenomenon of evidence-based practice, the development and implementation of strategies aimed at improving evidence-based practice skills, and the outcomes of those strategies for a group of pediatric physical therapists. No effort was made to control for extraneous factors that may have affected the participants’ attitudes, beliefs, and behaviors relating to evidence-based practice during the course of this project. A thorough description of the data analysis and interpretation processes and the measures aimed at enhancing the trustworthiness of those processes in this project was provided. This information may or may not be relevant to other circumstances.
Finally, the 6-month time frame may have been inadequate to permit substantial behavior change. Most of the work in this area has measured outcomes at between 3 and 6 months. However, allowing a longer period for the individual and group strategies to take effect may have led to more substantial changes in the outcomes of this project.
Research with physical therapists from different practice areas and work settings is a necessary next step. Further investigations into the use of change agents to facilitate and support ongoing knowledge translation in all physical therapy settings are warranted. Investigations also should integrate the identification and use of incentives for physical therapists as an element of knowledge translation. The development and use of decision support systems and clinical practice guidelines that provide summaries of research evidence may hold great promise, especially in relation to the use of technology as an aspect of routine clinical practice. Evaluation of the effectiveness of continuing education and other professional development activities is needed for the physical therapy profession. Finally, investigating the effects of these various activities—including the use of change agents, incentives, clinical practice guidelines, and continuing professional education—on patient outcomes also is critical.
Evidence-Based Practice Workshop Objectives
After participating in the workshop (including follow-up activities), attendees will be able to:
Define evidence-based practice
Discuss the relevance of “evidence” and evidence-based practice to pediatric physical therapy
Distinguish between a background question and a foreground question
Develop a clinical question in the patient/intervention/comparison/outcome format
Identify and access appropriate resources for obtaining research evidence relating to physical therapist practice
Use American Physical Therapy Association resources, Internet resources, or both to develop an evidence-based answer to a clinical question
Understand basic research and statistics terminology
Use understanding of research and statistics to analyze the strength of evidence, including:
    Diagnosis, prognosis, and intervention evidence
    Levels of evidence and grades of recommendation
    American Academy of Cerebral Palsy and Developmental Medicine ranking system for group and single-subject designs
Formulate the answer to the clinical question into a critically appraised topic document or Matrix spreadsheet
Apply the results of clinical research to physical therapist practice
Constraints and Influences on Clinical Decisions in the School Setting
Input and goals from the child and family
Data collected from the child during the examination and the intervention
Cognitive, behavioral, and motor skill levels of the child
Response of the child to the intervention (trial and error); boredom and motivation (over the course of the entire school year)
School environment (eg, amount of space, equipment, school schedule, and availability of adaptive physical education)
Skills and knowledge of the teacher(s) and classroom staff
Other professionals in the educational setting (eg, occupational therapists and adaptive physical education teachers)
Individualized education plan and related service status
Dr Schreiber, Dr Stern, and Dr Marchetti provided concept/idea/research design and writing. Dr Schreiber provided data collection. Dr Schreiber and Dr Provident provided project management and fund procurement. Dr Marchetti and Dr Provident provided consultation (including review of manuscript before submission). The authors acknowledge Gregory Frazer, PhD, and Paula Sammarone Turocy, EdD, ATC, for their assistance with this project.
Institutional review board approval for this study was obtained through Duquesne University.
* ATLAS.ti Scientific Software Development GmbH, Hardenbergstrasse 7, D-10623 Berlin, Germany.
Received August 27, 2008.
Accepted May 6, 2009.
American Physical Therapy Association