Abstract

Objective. Determine whether lectures by national experts and a publicly available online program with similar educational objectives can improve knowledge, attitudes, and beliefs (KAB) important to chronic pain management.

Design. A pretest–posttest randomized design with two active educational interventions in two different physician groups and a third physician group that received live education on a different topic to control for outside influences, including retesting effects, on our evaluation.

Participants. A total of 136 community-based primary care physicians met eligibility criteria. All physicians attended the educational program to which they were assigned. Ninety-five physicians (70%) provided complete data for evaluation.

Measurements. Physician responses to a standardized 50-item pain management KAB survey before, immediately after, and 3 months following the interventions.

Results. The study groups and the 41 physicians not providing outcomes information were similar with respect to age, sex, race, percent engaged in primary care, and number of patients seen per week. Physician survey scores improved immediately following both pain education programs (live: 138.0→150.6, P < 0.001; online: 143.6→150.4, P = 0.007), but did not change appreciably in the control group (139.2→142.5, P > 0.05). Findings persisted at 3 months. Satisfaction measures were high (4.00–4.72 on a 1–5 scale) and did not differ significantly (P = 0.072–0.893) between groups.

Conclusions. When used under similar conditions, national speakers and a publicly available online CME program were associated with improved pain management KAB in physicians. The benefits lasted for 3 months. These findings support the continued use of these pain education strategies.

Introduction

In his 2005 President's Message to the membership of the American Academy of Pain Medicine (AAPM), Scott Fishman noted that increasing national attention was being focused on chronic pain and called on his fellow pain specialists to “… help train the clinicians at the front line of healthcare who are faced with managing the majority of these (chronic pain) cases.” He specifically referred to the AAPM's “essentials courses,” a lecture-based program geared to meet the needs of primary care (generalist) physicians [1].

It is not known how effective these types of organized training courses are in improving pain management knowledge, attitudes, and other relevant attributes of generalist physicians. In one of the few published evaluations of a pain education program, Elliott developed a sophisticated community-based cancer pain education program and tested it in six Minnesota communities [2]. Following the 15-month intervention, the study found only slight, statistically nonsignificant improvements in physicians' and nurses' knowledge and attitudes [3]. Before embarking on a major educational initiative in chronic pain management, it would be helpful to confirm that existing approaches can succeed.

There are also practical and theoretical reasons to consider pain management training options other than conventional live lectures. We and others have shown that asynchronous, interactive, computer-based education delivered via the Internet (online education) can be educationally effective [4,5]. Online education costs little to distribute, is continuously available, does not require the expense of travel or lost practice time by the attendee, and can be easily monitored and updated as new findings emerge. Several online pain management CME programs are available to physicians, but, despite their potential economic and educational benefits, to our knowledge, none of these have been evaluated for their effectiveness in improving educational outcomes in generalist physicians.

There is also the practical question of what constitutes educational success. Some would argue that the best measure of a successful physician educational intervention is “… an improvement in the health status of a population of patients due to a change in physician behavior” [6]. However, this goal can be difficult to measure and is subject to many outside influences. The same author also notes that “… the ultimate goal for a CME activity is to improve the competence of physicians to manage patients,” defining “competence” as possessing “… the knowledge, skills, and attitudes to perform as expected” [7]. This view is consistent with that of others, including Kirkpatrick [8], that the immediate goal of an educational activity should be to improve knowledge, attitudes, and, possibly, skills. To develop and enhance physician pain management education programs, it would be helpful to have experience with practical measures of physician attributes that are important in pain management.

Therefore, several important questions merit consideration if we wish to improve the pain management training of frontline clinicians. First, given the considerable investment in the “expert speaker” approach, is there reason to believe that this strategy can improve relevant educational outcomes? Second, given the cost and convenience advantages of online education, is there evidence that this strategy can improve educational outcomes in chronic pain management? Finally, what can we learn about outcomes measurement that may help others evaluate and improve their own pain education programs?

Methods

Study Goals and Design

We sought to determine whether pain management expert speakers, presenting their standard lectures in a typical CME setting, could produce measurable and durable improvements in the pain management knowledge, attitudes, and beliefs (KAB) of community-based primary care physicians, as measured by a standardized, self-administered survey. We also sought to determine whether a currently available [9], evidence-based online pain management CME program, developed by a different group of experts and based on current recommendations for online education design, could favorably affect these outcomes.

The primary study hypothesis was that physicians receiving one of two types of pain management CME would show immediate and long-term (3 months) improvement in pain management expertise compared with a control group. We used a pretest–posttest randomized design with two active educational interventions in two different physician groups and a third physician group that received live education on a different topic to control for outside influences, including retesting effects, on our evaluation. We analyzed results from all physicians who completed three required KAB surveys. This project was determined to be exempt from the Federal Policy for the Protection of Human Subjects by Argus IRB.

Participants

Study participants were self-selected from among 228 physicians attending a 2-day Southern California CME meeting on geriatric care in September 2005. To be eligible for the study, physicians had to be in active, nonacademic medical practice (i.e., community-based) and treat patients with chronic pain. Participants were told that they would be randomly assigned to one of three 4-hour programs taking place on the second day of the meeting, that they would receive 4 hours of free CME credit for completing an identical pretest and posttest on the day of the event, and that they would be paid a $200 stipend for completing a second identical posttest 3 months following the event. Apart from the pain management education programs included in this study, physicians attending the meeting received no specific pain management education during the meeting.

Educational Interventions

Following enrollment, participants were randomly assigned, by blind name draw, to one of three adjacent hotel meeting rooms offering either: 1) live lectures on chronic pain management by three national experts (Fishman, McCarberg, and Slatkin; see Acknowledgments); 2) an online CME pain program, administered using laptop computers with headphones connected to the Internet via high-speed access; or 3) live presentations on palliative and end-of-life care by three national experts (Fine, Fishman, Slatkin). During the events, four study authors (Harris, Elliott, Davis, Chabal) monitored the presentations to confirm participation and to detect whether physicians changed meeting rooms.

The two pain educational programs were based on current best practices in managing chronic nonmalignant pain and had identical educational goals: 1) to enhance ability to diagnose common nonmalignant chronic pain syndromes; 2) to improve ability to assess and manage functional status in chronic pain patients; 3) to increase skill and confidence in managing long-term pain medications; and 4) to improve management of referral and ancillary care providers for chronic pain patients [10,11]. To enhance consistency, both programs illustrated important pain management concepts by discussing four common syndromes: 1) back pain; 2) headache; 3) fibromyalgia; and 4) neuropathic pain. One of the authors (Fine), who did not contribute to either program, reviewed the online program first and then worked with the live experts to ensure that their presentations covered the relevant concepts. The intent of this work was to evaluate two currently available “best practice” approaches to accomplishing the same educational goals, not to compare two technologies. Therefore, the content of the two programs, while similar, was not identical. Instead, the content was adjusted to the delivery medium, and the live speakers were encouraged to use their best slides, materials, and audience participation techniques to accomplish the desired educational goals.

The online CME program was developed as part of an NIH-funded study to create an objective, marketable, online CME program in pain management for primary care physicians. Before developing the online program, a group of national experts on pain management, pain policy, and primary care education (see Acknowledgments) met and reviewed existing online pain education programs. The group sought to create an educational program that would be engaging and clinically relevant to primary care physicians, while emphasizing a multidisciplinary, evidence-based, and functional approach to chronic pain. As part of this work, the expert group agreed upon the four broad educational goals noted above as well as a number of potential KAB assessment items to measure the effectiveness of a chronic pain education program.

The content for the online education program was prepared by Elliott, Davis, and Chabal. All content was reviewed by at least two of these authors. Wherever possible, the program used published guidelines and collaborative reviews to document recommendations. The authors developed six interactive, multimedia virtual patient (VP) cases based on common pain syndromes that were the primary teaching elements of the program. They supplemented these cases with text-based minitutorials that were available as a group and also via hyperlinks from within the cases. The minitutorials provided “on-demand” education on subjects such as analgesics, the pathophysiology of back pain, opioid contracts, and the like. The program also contained hyperlinks to useful Websites and 23 downloadable practice tools, such as patient handouts and pain assessment surveys. The final program was quite large. The six VP cases averaged 15,000 words each and the 70 minitutorials averaged 300 words each. There were 451 references with hyperlinks to either the original article or the PubMed abstract. The program was approved for up to 14.5 AMA Category 1 CME Credits™.

The online program was primarily text-based, but it did use video segments to illustrate key patient behaviors, such as those seen in neuropathic pain and opioid addiction. Each VP case included a number of embedded text-based questions and author comments. The questions primarily addressed user behaviors at a specific point in the case, that is, “what would you do in this situation?” The cases also included video game elements, such as an external “clock” to duplicate the time pressures of clinical practice, although users were not charged time for viewing any of the educational resources. In general, the program was based on current design principles for effective online education programs [12].

During the trial, participants assigned to the online program entered their names into their laptop computer station and were required to complete a short opening case dealing with acute back pain. After completing this case, they could view any of the other five VP cases and as much (or little) supporting material as they wished. Study participants were not able to access the online CME program after the 4-hour intervention.

The palliative care (control) presentation was led by one of the authors (Fine). It included two 1-hour lectures on palliative care and quality and two expert-led workshops dealing with advance care planning, delivery of bad news, and withholding or withdrawing treatment. This topic and format were chosen to provide an engaging active control in a related content area; however, there was no discussion of pain management techniques during the palliative care sessions.

Educational Outcome Measures

We used scores on a brief, self-administered 50-item KAB survey as the primary educational outcome measure. The original 142 items for this survey were suggested by the authors of the online CME program and other members of the expert consensus panel. As noted in a companion article, the survey refinement process proceeded independently of the development of the online CME program [13]. This survey, the KnowPain-50, has good internal consistency in different physician groups, correlates with physician treatment of chronic nonmalignant pain as measured by unannounced standardized patients, and discriminates well between physicians with apparent differences in expertise in treating chronic pain. Individual KnowPain-50 item scores range from 1 to 5, with an overall scoring range of 0–250 [13]. All study participants were requested to complete a paper version of the survey before randomization on the day of the study, immediately following the educational program they attended, and 3 months later by mail. We also asked physicians to evaluate each program's educational quality by answering eight additional opinion questions immediately following their program.

Analytic Techniques

We conducted a repeated measures analysis of variance (RMANOVA) with one between-subjects factor (conference group attended) and one within-subjects factor (pretest–posttest follow-up) to assess changes in the three groups over the three time periods. To further understand any observed differences, we performed post hoc analyses using the Bonferroni technique (a method of adjusting for multiple comparisons) to evaluate changes in KnowPain-50 scores over time within each group. To check for sampling bias, we compared the three final groups and the dropouts by performing crosstabulation analyses and chi-squared tests of association on variables with nominal data. For the satisfaction ratings, we performed one-way analyses of variance to examine how the groups rated their experience.
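For readers who wish to reproduce this analytic approach on their own data, the following minimal sketch illustrates the steps described above. It assumes a long-format table of KnowPain-50 scores and uses the Python packages pingouin, pandas, NumPy, and SciPy on simulated data; the package choices, column names, and simulated values are illustrative assumptions, not the authors' actual software or data.

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

rng = np.random.default_rng(0)

# Simulated stand-in for the study data: one KnowPain-50 score per
# physician ("subject") at each "time" point, with "group" indicating
# the conference session attended (assumed column names).
rows = []
for group, gain in [("live", 12), ("online", 7), ("control", 3)]:
    for i in range(30):
        baseline = rng.normal(140, 18)
        for time, offset in [("pre", 0), ("post1", gain), ("post2", gain)]:
            rows.append({"subject": f"{group}_{i}", "group": group,
                         "time": time,
                         "score": baseline + offset + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# Repeated measures ANOVA with one between-subjects factor (group) and one
# within-subjects factor (time); the group x time interaction is the
# primary test of an educational effect.
print(pg.mixed_anova(data=df, dv="score", within="time",
                     subject="subject", between="group"))

# Post hoc pairwise comparisons of time points within each group,
# Bonferroni-adjusted for multiple comparisons.
print(pg.pairwise_tests(data=df, dv="score", within="time",
                        subject="subject", between="group",
                        padjust="bonf"))

# Chi-squared test of association for a nominal characteristic (e.g.,
# specialty) across groups, as in the sampling-bias checks.
physicians = df.drop_duplicates("subject")
specialty = rng.choice(["family med", "internal med", "other"],
                       size=len(physicians))
chi2, p, dof, _ = stats.chi2_contingency(
    pd.crosstab(physicians["group"].to_numpy(), specialty))

# One-way ANOVA on satisfaction ratings across the three programs
# (simulated ratings on the 1-5 scale).
print(stats.f_oneway(*(rng.normal(4.5, 0.5, 30) for _ in range(3))))
```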

Results

Participant Characteristics and Flow through the Study

Overall, 154 physicians agreed to participate in the study and were randomized. Although all physicians stated they were “… in the active practice of medicine” at the time of enrollment, pretest data showed that 18 of these physicians did not meet our a priori standard of treating more than 10 patients per week. These individuals were evenly distributed within the three groups and were excluded from further analysis, leaving an initial study population of 136 physicians. All eligible physicians attended the educational program to which they were assigned. We received a complete set of three usable surveys (more than 90% of items completed) from 95 of these physicians (70%) and this group was the final study population. Flow through the study is shown in Figure 1. The number of dropouts was evenly balanced between the three groups (29–31%).

Figure 1

Flow of participants through study.


As shown in Table 1, there were no significant differences in self-reported demographic characteristics between any of the three study groups or the 41 dropouts. The overall percentage of physicians who listed their race as “Asian” was higher in this study population than in the U.S. physician population in general [14]. All participants except one practiced in California. Most physicians (60–72%) practiced family or internal medicine. Two physicians indicated a secondary specialty in pain medicine; one of these persons was assigned to the Internet CME program and one to the palliative care program. Study results were not affected by exclusion of data from these two individuals; thus their results are included in the analyses.

Table 1

Self-reported demographic characteristics of study participants and eligible nonparticipants

 Live Pain Lecture (N = 32) Online Pain Program (N = 30) Palliative Care Program (N = 33) Dropouts* (N = 41) 
Average age (SD) 48.8 (11.2) 51.9 (11.5) 49.3 (10.9) 51.9 (12.6) 
Male (%) 59.4 63.3 63.6 70.7 
White (%) 32.3 56.7 45.5 43.6 
Asian (%) 58.1 40.0 48.5 46.2 
Other race or none (%)  9.6  3.3  6.0 10.2 
Family medicine (%) 31.3 36.7 33.3 56.5 
Internal medicine (%) 40.6 23.3 36.4 17.4 
Other specialty (%) 12.5 23.3 6.1 13.1 
No specialty noted (%) 15.6 16.7 24.2 13.0 
Average number of patients per week (SD) 79.1 (36.9) 84.0 (35.2) 87.9 (35.8) 84.1 (37.1) 
* Dropouts were eligible physicians who provided this information but did not complete the three required surveys. Chi-square and ANOVA tests found no significant differences between groups in any demographic characteristic (P = 0.289–0.809).

Effects of Educational Interventions on Pain Survey Scores

Table 2 presents the mean KnowPain-50 scores for each group at each time. The pattern is one of rapid and sustained increases in scores for the two intervention groups and a mild linear increase in scores for the control group. RMANOVA analysis demonstrated a significant group (intervention) by time interaction (F = 2.704, P = 0.032, observed power = 0.742), indicating a change among the group KnowPain-50 scores over the study period that was unlikely to be due to chance. This finding is consistent with our hypothesis that participation in the educational programs would be associated with changes in KnowPain-50 scores. The between-group main effect, which reflects the overall mean scores for each group, showed a trend in the expected direction, but was not significant (F = 0.809, P = 0.448, observed power = 0.185, group means: Pain Lecture = 146.5, Online = 147.8, Palliative Care = 142.2). The within-subjects (time) main effect, which reflects the mean scores for each test/time, was significant (F = 23.542, P < 0.001, observed power = 1.000, means: pretest = 140.2, posttest 1 = 147.8, posttest 2 = 148.4).

Table 2

KnowPain-50 scores in study participants before and after educational programs

Study Group Pretest Mean Score (SD) Posttest Mean Score, Day of Program (SD) Posttest Mean Score, 3 Months after Program (SD)
Live pain lecture (N = 32) 138.0 (17.51) 150.6 (21.38), P < 0.001* 151.0 (19.43), P < 0.001*
Online pain program (N = 30) 143.6 (19.78) 150.4 (18.63), P = 0.007* 149.5 (21.44), P = 0.036*
Palliative care program (N = 33) 139.2 (18.75) 142.5 (20.50), P = 0.434* 144.8 (22.00), P = 0.053*
* P value for post hoc pairwise comparison of pretest vs posttest mean scores for the same program (row comparison), adjusted for multiple comparisons.

Table 2 also presents the results of post hoc pairwise analyses of the observed changes in KnowPain-50 scores, with adjustments for multiple comparisons. These analyses found that the increases in posttest scores for the two pain education programs were significantly greater than their corresponding pretest scores. In contrast, the small increases we observed in posttest KnowPain-50 scores for the control group over the pretest scores did not reach statistical significance at either time (although the improvement by posttest 2 was close, P = 0.053). We believe that these patterns are most consistent with a conclusion that physicians who participated in either of the two pain education programs experienced an immediate improvement in their pain management KAB and that this improvement persisted for at least 3 months. Based on data from the control group, if retesting or other environmental effects contributed to these changes, such contributions were unlikely to account for the full magnitude and pattern of the changes we observed.

Participant Satisfaction

The participant assessments of the three programs were informative. As shown in Table 3, six statements related to both programs and two were relevant only to the live lecture/workshops (faculty interaction) or to the online program (ease of use). All statements were positively framed, using a 5-point Likert-type scale, where 5 indicated “Strong Agreement” with the statement and 1 indicated “Strong Disagreement.” The mean satisfaction scores for all programs were ≥4.0, demonstrating that all programs were well received. There were no significant differences in any satisfaction measure across the three groups. When we recomputed these data including 23 additional survey results from the 41 dropouts, the conclusions did not change.

Table 3

Mean educational program satisfaction score

Item  Live Pain Lecture (N = 30) Mean Score* (SD)  Internet Pain Program (N = 30) Mean Score* (SD)  Palliative Care Program (N = 32) Mean Score* (SD)  P Value (ANOVA)
This program will enable me to improve my practice. 4.60 (0.563) 4.53 (0.507) 4.50 (0.508) 0.893 
The information validated things I already do in my practice. 4.20 (0.664) 4.00 (0.643) 4.13 (0.619) 0.561 
I had ample opportunity to interact with the faculty (live presentations only). 4.10 (0.673) NA 4.28 (0.772) 0.395 
The online program was easy to use (computer presentation only). NA 4.27 (0.640) NA NA 
The program held my interest. 4.53 (0.571) 4.07 (0.980) 4.22 (0.792) 0.072 
The program was free of commercial bias. 4.43 (0.728) 4.47 (0.571) 4.72 (0.457) 0.226 
The program directors and coordinators were actively involved. 4.60 (0.498) 4.40 (0.724) 4.63 (0.554) 0.447 
Overall, this program met the stated learning objectives. 4.67 (0.479) 4.47 (0.629) 4.47 (0.567) 0.453 
* Mean based on 1–5 scale where 5 = “Strongly Agree” and 1 = “Strongly Disagree.”
P values test for differences between programs (one-way ANOVA).

Use of the Online Program

As the online pain management education program contained more material than could be used in a 4-hour session, we investigated how physicians used it under these time constraints. All physicians completed at least four case studies and most (78%) completed all six. The average time spent on each case was 19.6 minutes. Physicians viewed very few of the 70 minitutorials during the session (average number viewed per user 13.2, range 0–28). During the session, we observed that all physicians were quite engrossed in the activity. When a refreshment break was announced, none of the physicians ceased their work. A repeated comment after the session was that the online education program was “intense.”

Discussion

This study found that lecture presentations by highly regarded national experts and a publicly available online CME program, dealing with the management of chronic nonmalignant pain and delivered under comparable conditions, improved physician KAB related to the management of chronic pain. These effects persisted for at least 3 months and appeared to be related to participation in the educational programs. This finding provides empirical evidence that an expert speaker and an online education approach can improve the pain management KAB of frontline health care clinicians. To our knowledge, this is the first study comparing any form of online CME with lectures delivered by expert speakers, with a concurrent control, tested under identical circumstances.

While we believe that the improvements in KnowPain-50 scores were durable, that is, lasting at least 3 months, we noted that the initial (pretest) scores were somewhat higher in the online CME group than in the other two groups, despite randomization. Thus, although the 3-month mean scores in the two pain education groups were quite similar, 151.0 for the live program vs 149.5 for the online program, and both exceeded the 3-month mean score of the control group by ≥4.7 points, the magnitude of improvement over baseline in the online CME group was only slightly greater than that seen in the control group (5.9 vs 5.6) at 3 months. A statistical technique for adjusting for the higher baseline scores would be an analysis of covariance, but we did not specify this analysis in advance. Accordingly, we present the data we actually obtained and note that another trial might be helpful to confirm the durability of the results, particularly for the online CME program.
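For illustration only, the covariance adjustment mentioned above might look roughly like the following sketch, which fits an ANCOVA on simulated data using statsmodels; the variable names and data are assumptions, and this is not an analysis the study actually performed.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated wide-format data: one row per physician with a pretest score,
# a 3-month score, and the assigned group (values loosely echo Table 2).
frames = []
for group, pre_mean, gain in [("live", 138, 13), ("online", 144, 6),
                              ("control", 139, 6)]:
    pre = rng.normal(pre_mean, 18, 30)
    frames.append(pd.DataFrame({"group": group, "pre": pre,
                                "post2": pre + gain + rng.normal(0, 10, 30)}))
wide = pd.concat(frames, ignore_index=True)

# ANCOVA: model the 3-month score on group with the pretest score as a
# covariate, so group comparisons are adjusted for baseline differences.
fit = smf.ols("post2 ~ pre + C(group, Treatment(reference='control'))",
              data=wide).fit()
print(fit.summary())
```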

Were the changes we observed “educationally relevant”? The educational literature often compares behavioral interventions using standardized mean difference as a measure of effect size and notes that educational programs do not usually report effects as large as those seen in other types of behavioral interventions [15]. The immediate changes we observed in physician KAB (KnowPain-50) scores, 12.6 points for the live lectures and 6.8 points for the online program, correspond to effect sizes of 0.65 and 0.35, respectively, which Cohen defines as “medium” effect sizes [16]. The effect sizes we observed were larger than those seen in 64% of the CME studies reported by Mansouri [17]. Comparing the specific KnowPain-50 scores to those of other physicians, the pre-intervention scores of the study participants were similar to those recorded by physician users of an online CME Website, while the post-intervention scores moved 20–32% closer to those of pain experts and approximated the scores of academic physicians [13]. We believe these types of improvements are educationally relevant, but note that there is little information in the medical education literature to provide a useful point of reference. For example, does a medium effect size improvement on a test or KAB survey correlate with a measurable improvement in clinical performance? This is an important research question that has not been answered. We acknowledge, therefore, that further studies will be helpful to define the educational relevance of specific KnowPain-50 scores and changes in scores.
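For concreteness, the effect sizes quoted above can be reproduced from the Table 2 values if one assumes a standardized mean difference with a pooled pretest/posttest standard deviation in the denominator (the exact denominator used is not stated):

```latex
d = \frac{\bar{x}_{\mathrm{post}} - \bar{x}_{\mathrm{pre}}}
         {\sqrt{\left(s_{\mathrm{pre}}^{2} + s_{\mathrm{post}}^{2}\right)/2}},
\qquad
d_{\mathrm{live}} \approx \frac{150.6 - 138.0}{\sqrt{(17.51^{2} + 21.38^{2})/2}}
  = \frac{12.6}{19.5} \approx 0.65,
\qquad
d_{\mathrm{online}} \approx \frac{150.4 - 143.6}{\sqrt{(19.78^{2} + 18.63^{2})/2}}
  = \frac{6.8}{19.2} \approx 0.35 .
```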

There are other limitations to our findings. This study found a positive answer to the research question, “do comparably designed pain management education programs using the current ‘best practice’ approaches of two available delivery media improve educational outcomes when delivered under comparable conditions?” This study was not, however, intended or powered to determine whether one program was superior to the other under such circumstances. Moreover, by constraining online CME use to a limited time in a group setting, the study did not tell us how well the online CME program might perform under more typical use circumstances.

There are concerns with the internal validity of the findings due to the loss of study outcome data. Although the participation rate was high for a long-term study of CME, there was still a 30% dropout rate. However, the dropouts were evenly distributed across all three study groups and had demographic characteristics similar to those of the study groups; thus, we have no a priori evidence that this loss of participants introduced bias into the study. The study participants were almost entirely Southern California community physicians, with a much higher percentage of Asian Americans than the population of U.S. physicians in general, which may affect the generalizability of the results. While there is no reason to believe that these physicians are more or less receptive to pain education than any other group of physicians, it would be useful to confirm the study findings in different physician populations.

A potential source of measurement bias was that the expert authors of the online program contributed to the development of the assessment instrument, the KnowPain-50. This could have favored the online program. As we have noted, we developed the KnowPain-50 because of a lack of available and relevant outcomes measurement tools [13]. We took steps to separate online course creation from the KnowPain-50 development process and to refer to external pain management guidelines and recommendations for both, but we cannot guarantee that the authors of the online program did not consider the items they had recommended for the assessment instrument as they developed their course materials. As we intended for both the assessment tool and the education program to be based on current standards for pain management, we expected overlap. However, the study was not designed to demonstrate the superiority of the online program to the live program, which would pose the gravest threat from measurement bias. Rather, it was planned to measure both of these programs against a control intervention using an objective outcomes tool. This design reduces, but does not eliminate, the threat of measurement bias. The fact that the KnowPain-50 demonstrated positive findings for both intervention programs, but not for the control program, suggests that it can provide such an objective measure. Nonetheless, others will have to evaluate the tool themselves to confirm its objectivity and validity for this use.

This study provides a rationale for the continued use and evaluation of live and online approaches to pain management education. It confirms that the speakers and the online program we studied, under the study conditions, appeared to provide durable educational benefits. The findings do not demonstrate that all live speakers and all online pain management programs will be effective. We hope our results encourage others to develop innovative approaches to pain management education and to seek better ways to measure and improve their effectiveness.

Acknowledgments

We thank three unnamed reviewers of Pain Medicine for their constructive comments on the manuscript.

In addition to the authors (except Dr. Fine), the additional members of the expert consultant team who helped develop the online curriculum were:

Paul Gordon, MD—Director, Preparation for Clinical Medicine, U of AZ College of Medicine, Tucson, AZ;

Randa Kutob, MD, MPH—Clerkship Director, Department of Family Medicine, Tucson, AZ;

Marc Hoffing, MD—COO Desert Medical Group, Palm Springs, CA;

Stuart Levine, MD—Chief Medical Officer, Prospect Medical Holdings, Culver City, CA;

Aaron Gilson, PhD—Pain & Policy Study Group, University of WI, Madison, WI.

We are grateful to the following pain and palliative care experts for their help in delivering the live presentations:

Scott Fishman, MD—Chief, Division of Pain Medicine, U of CA Davis, Davis, CA;

Bill McCarberg, MD—Founder, Chronic Pain Management Program, Kaiser Permanente, San Diego, CA;

Neal Slatkin, MD—Director, Pain and Palliative Care, City of Hope Medical Center, Duarte, CA.

We owe thanks to SCAN Healthplan for its support of the trial.

The development of the online CME program and the research study were supported by Small Business Innovation Research (SBIR) grants R43-NS045361 and R44-NS045361 from the National Institute of Neurological Disorders and Stroke. The sponsor had no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; or the preparation, review, or approval of the manuscript. The Principal Investigator (JMH) had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

References

1. Fishman SM. President's message: Pushing the pain medicine horizon. Pain Med 2005;6:280–1.

2. Elliott TE, Murray DM, Oken MM, et al. The Minnesota cancer pain project: Design, methods, and educational strategies. J Cancer Educ 1995;10:102–12.

3. Elliott TE, Murray DM, Oken MM, et al. Improving cancer pain management in communities: Main results from a randomized controlled trial. J Pain Symptom Manage 1997;13(4):191–203.

4. Harris JM, Salasche SJ, Harris RB. Can Internet-based CME improve physicians' skin cancer knowledge and skills? J Gen Intern Med 2001;16:50–6.

5. Short LM, Surprenant ZJ, Harris JM. A community-based trial of online intimate partner violence CME. Am J Prev Med 2006;30:181–5.

6. Moore DE Jr. A framework for outcomes evaluation in the continuing professional development of physicians. In: Davis D, Barnes BE, Fox R, eds. The Continuing Professional Development of Physicians—From Research to Practice. Chicago, IL: AMA Press; 2003:251.

7. Moore DE Jr. A framework for outcomes evaluation in the continuing professional development of physicians. In: Davis D, Barnes BE, Fox R, eds. The Continuing Professional Development of Physicians—From Research to Practice. Chicago, IL: AMA Press; 2003:253.

8. Kirkpatrick DL, Kirkpatrick JD. Evaluating Training Programs: The Four Levels, 3rd edition. San Francisco: Berrett-Koehler; 2006.

9. Chabal C, Davis BE, Elliott TE. Improving Outcomes in Chronic Pain—An online CME program. Available at: http://www.vlh.com (accessed December 4, 2007).

10. Jacobson L, Mariano AJ. General considerations of chronic pain. In: Loeser JD, Butler SH, Chapman CR, Turk DC, eds. Bonica's Management of Pain. Philadelphia, PA: Lippincott Williams and Wilkins; 2001:241–54.

11. Elliott TE. Chronic noncancer pain management. Minn Med 2001;84:28–34.

12. Casebeer LL, Strasser SM, Spettell CM, et al. Designing tailored Web-based instruction to improve practicing physicians' preventive practices. J Med Internet Res 2003;5(3):e20.

13. Harris JM Jr, Fulginiti JV, Gordon PR, et al. KnowPain-50: A tool for assessing physician pain management education. Pain Med 2007; doi: .

14. American Medical Association. Physician Characteristics and Distribution in the US, 2007 Edition. Chicago, IL: AMA Press; 2007. Available at: http://www.ama-assn.org/ama/pub/category/12930.html (accessed December 4, 2007).

15. Valentine JC, Cooper H. Effect Size Substantive Interpretation Guidelines: Issues in the Interpretation of Effect Sizes. Washington, DC: What Works Clearinghouse; 2003. Available at: http://ies.ed.gov/ncee/wwc/pdf/essig.pdf (accessed December 4, 2007).

16. Cohen J. Statistical Power Analysis for the Behavioral Sciences, 2nd edition. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.

17. Mansouri M, Lockyer J. A meta-analysis of continuing medical education effectiveness. J Contin Educ Health Prof 2007;27:6–15.
Dr. Harris is the President of and a shareholder in Medical Directions, Inc. (MDI), a company that develops and markets online CME, including the pain management program described in this report. Drs. Chabal, Davis, Elliott, Kutob, Gilson, Gordon, Hoffing, and Levine were paid consultant fees by MDI for their participation, pursuant to the terms of the SBIR research grants.