Abstract

Background

Wearable sensors that monitor physical behaviors are increasingly adopted in clinical research. Older adult research participants have expressed interest in tracking and receiving feedback on their physical behaviors. Simultaneously, researchers and clinical trial sponsors are interested in returning results to participants, but the question of how to return individual study results derived from research-grade wearable sensors remains unanswered. In this study, we (1) assessed the feasibility of returning individual physical behavior results to older adult research participants and (2) obtained participant feedback on the returned results.

Methods

Older adult participants (N = 20; ages 67–96) underwent 14 days of remote monitoring with 2 wearable sensors. We then used a semiautomated process to generate a 1-page report summarizing each participant’s physical behaviors across the 14 days. This report was delivered to each participant via email, and they were asked to evaluate the report.

Results

Participants found the reports easy to understand, health-relevant, interesting, and visually pleasing. They offered valuable suggestions for improving data interpretability and raised concerns, such as discrepancies between the reported results and measures derived from their consumer-grade sensors.

Conclusions

We have demonstrated the feasibility of returning individual physical behavior results from research-grade devices to older research participants, and our results indicate that this practice is well-received. Further research to develop more efficient and scalable systems to return results to participants, and to understand the preferences of participants in larger, more representative samples, is warranted.

There has been a shift in clinical research toward patient-centricity, which involves prioritizing the patient voice and improving patient engagement (1). In parallel, there has been an increase in the adoption of digital health technologies (DHTs) within clinical research (2–4). DHTs such as wearable sensors can facilitate patient-centricity; for example, tracking data from wearable sensors can lead patients to feel more engaged with their health and can minimize in-person visits, thus reducing patient burden (5,6). Moreover, because wearable sensors can measure physical behaviors (eg, physical activity, sedentary behaviors) passively, continuously, and remotely, they have the capacity to capture real-world data as a proxy for lived experiences of patients that may not be captured through established in-clinic assessments. In some cases, wearable sensors are paired with assessments such as ecological momentary assessments (EMAs), in which patients are asked to report on their behaviors and experiences in their real-world environments. Alongside the increased capacity to collect large amounts of real-world physical behavior data, participants in clinical research and care, particularly older adults, have expressed interest in tracking their physical behaviors using DHTs and receiving their wearable sensor data as feedback (5,7,8).

Receiving information on physical behaviors may be health-relevant, as past research has shown increases in physical activity and decreases in sedentary behaviors among participants who receive feedback on these behaviors (9). Thus, understanding how to return physical behavior results derived from DHTs in ways that are feasible and valuable to older adult participants is important. In fact, several organizations (10) have advocated for the return of individual study results in an effort to increase patient-centricity within clinical trials. In 2018, the National Academies of Sciences, Engineering, and Medicine released guidance for researchers to understand participant value and preferences for returning results by conducting a review of the current evidence-base, consulting with advisory boards, and/or engaging with research participants (10). Although this guidance provides recommendations on how to advance the practice of returning individual study results, its implementation remains rare and inconsistent.

Barriers to implementing the practice of returning individual study results include concerns of misinterpretation, ethical challenges, logistical issues, and lack of researcher knowledge (11,12). However, research participants have indicated they believe the benefits outweigh potential negative consequences (13), reporting that receiving results may allow for a greater sense of data ownership, perceived value of participation, likelihood of participating in future research, and health management (14,15). In fact, participants expect results to be returned and are less likely to participate in research if results are not returned (16). Moreover, participants have expressed that this practice may cultivate trust with researchers and can facilitate discussions about their health with providers and family members (15–18). The perceived benefits and desire to share results are also shared by clinical researchers (12). Of note, past studies in this domain inquired about the preferences and value of receiving individual study results under hypothetical scenarios, without participants having received actual results (14,15). Very few studies have actively returned individual study results and obtained feedback from participants (16,18). Furthermore, these studies have focused on returning health information in general, with an emphasis on genetic information and/or lab results (16,18) rather than results derived from DHTs. Although these studies can help inform the development of the process for returning DHT results, they limit our understanding of what, how, when, and why to return individual study results derived from wearable sensors to participants.

Here, we examined the feasibility of and obtained older adults’ feedback on returning individual study physical behavior and EMA results following participation in a study involving 14 days of remote monitoring with wearable sensors. Findings will serve as a foundation for better understanding how to return individual study results derived from wearable sensors to research participants.

Methods

This work was part of a larger study to develop novel methods to detect walking behavior with wearable sensors in older adults. As part of this study, participants underwent 14 days of remote monitoring with wearable sensors, after which a summary of their results was returned to them. Here, we describe the participants in the larger study, as well as aspects of the study related to returning results to participants.

Participants

Participants were recruited through newspaper ads, social media ads, and flyers in western Massachusetts. Potential participants were screened for eligibility via REDCap and over the phone; eligible individuals were at least 65 years old, comfortable using a smartphone or tablet, and English speakers. Individuals were excluded if they had a diagnosis resulting in disordered gait and/or mobility impairments requiring full-time use of a walking aid. Eligible individuals visited the lab for in-person assessments (see “Demographic and Mobility Assessments”). Only individuals scoring 4–12 on the Short Physical Performance Battery (SPPB) were eligible for subsequent remote monitoring. All participants who completed remote monitoring were eligible to receive a summary of their results. The study protocol was approved by the University of Massachusetts Institutional Review Board (#3566). All participants provided written, informed consent prior to participation and received monetary compensation for participation.

Demographic and Mobility Assessments

Participants completed a demographic questionnaire, as well as the SPPB (19) to assess physical function. The SPPB comprises balance, gait speed, and chair stand tests, with scores ranging from 0 to 12; higher scores indicate better function.

Remote Monitoring

Participants were given 2 actigraphy sensors to wear continuously for the subsequent 14 days: (1) the activPAL 4+ (PAL Technologies Ltd., Glasgow, UK), worn on the midline of the right thigh, and (2) the ActiGraph CentrePoint Insight Watch (CPIW; ActiGraph LLC, Pensacola, FL), worn on the nondominant wrist. After remote monitoring, participants returned both sensors to the lab.

Ecological Momentary Assessments

On each day of remote monitoring, participants were asked to complete a series of ecological momentary assessments (EMAs) delivered via REDCap to their email address at 5 pm. The EMAs included the question “Overall, would you define today as: better than typical, typical, or worse than typical?” If EMAs were not completed within 90 minutes, up to 2 reminders were delivered at 90-minute intervals.
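The reminder rule above can be sketched as follows. This is a Python illustration of the timing logic only (the study used REDCap's built-in scheduling, not custom code); it assumes reminder times are computed relative to the 5 pm prompt.

```python
from datetime import date, datetime, timedelta

def ema_schedule(day: date) -> list[datetime]:
    """Return the initial 5 pm EMA prompt time plus up to two reminder
    times at 90-minute intervals, per the protocol described above."""
    initial = datetime(day.year, day.month, day.day, 17, 0)  # 5:00 pm local time
    # Reminders are only sent if the EMA is still incomplete at each point.
    return [initial + timedelta(minutes=90 * i) for i in range(3)]
```

For example, `ema_schedule(date(2024, 5, 1))` yields prompts at 17:00, 18:30, and 20:00.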

Physical Behavior Reports

Once sensors were returned, we generated an individualized 1-page report summarizing each participant’s physical behaviors across the remote monitoring period. The process of report generation and delivery is described below (Figure 1, Supplementary Material). A physical behavior report for a randomly selected participant is shown in Figure 1.

Figure 1.

Physical behavior report returned to a randomly selected participant after 14 days of remote monitoring. MEADOW-AD = Measuring Ecological Assessments Derived from Wearables in Alzheimer’s Disease. Labels A, B, C, and D are for the purposes of describing report components and were not included on reports delivered to participants.

Data preparation

activPAL data were processed in PALbatch software (v8.11.1.63) to determine valid days and physical behavior classifications. Steps and time spent in different physical behaviors on valid days were exported. For CPIW data, a day was considered valid if it included ≥10 hours of wear (20). Step counts were exported in 60-second epochs from ActiGraph’s CentrePoint Platform. EMA responses were exported from REDCap. A custom R script was then used to calculate metrics, create plots, and generate reports according to the following steps.
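The study's pipeline was implemented in R; as an illustrative sketch (not the authors' code), the CPIW valid-day rule of ≥10 hours of wear can be expressed in Python from the 60-second epochs, assuming each epoch carries a wear/non-wear flag:

```python
from collections import defaultdict
from datetime import datetime, timedelta

MIN_WEAR_HOURS = 10  # CPIW valid-day threshold used in the study

def valid_days(epochs):
    """Given (timestamp, worn) flags for 60-second epochs, return the set
    of calendar dates with at least MIN_WEAR_HOURS of wear."""
    wear_minutes = defaultdict(int)
    for ts, worn in epochs:
        if worn:
            wear_minutes[ts.date()] += 1  # one 60-s epoch = one minute of wear
    return {d for d, m in wear_minutes.items() if m >= MIN_WEAR_HOURS * 60}

# Example: a day with 650 worn minutes (>600) is valid.
day1 = datetime(2024, 5, 1)
epochs = [(day1 + timedelta(minutes=i), i < 650) for i in range(1440)]
```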

Metric calculation and plot creation

Average minutes/day spent stepping, standing, and sitting (Figure 1A) were calculated from exported activPAL data on valid days. The activPAL was used for these calculations because of its ability to discriminate postural changes. For calculating daily and hourly steps (Figure 1B–D), CPIW data were used rather than activPAL because the activPAL’s battery life is shorter than 14 days. For 3 participants, ActiGraph’s step algorithm yielded implausibly low step count estimates on valid days (median daily steps ≤ 500); for these cases, activPAL data were used to calculate daily and hourly steps for all valid days.
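The fallback from CPIW to activPAL step counts can be sketched as follows. This is a Python illustration of the decision rule just described (the study itself used a custom R script); the function name and data shapes are illustrative.

```python
from statistics import median

IMPLAUSIBLE_MEDIAN_STEPS = 500  # threshold described in the text

def choose_step_source(cpiw_daily_steps, activpal_daily_steps):
    """Use CPIW daily step counts on valid days unless their median is
    implausibly low, in which case fall back to activPAL counts."""
    if median(cpiw_daily_steps) <= IMPLAUSIBLE_MEDIAN_STEPS:
        return "activPAL", activpal_daily_steps
    return "CPIW", cpiw_daily_steps
```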

The percent of total steps taken per hour, across all valid days, was calculated and used to create the bar plot in Figure 1C. Hourly steps were used to determine the time of day (morning, afternoon, evening, night) when participants accumulated the most steps. EMA responses were tallied to determine the number of days on which participants rated their day as “Better than typical,” “Typical,” and “Worse than typical.” Subsequently, average step counts from valid days were used to create the bar plot in Figure 1D depicting average steps on each day category. For invalid days (Figure 1B) or when a participant made 0 EMA responses for a particular category (Figure 1D), “Not enough data” was used instead of data visuals.
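The calculations above can be illustrated with a short Python sketch (not the study's R code). Note that the hour cutoffs defining morning, afternoon, evening, and night are assumptions for illustration; the report does not specify them.

```python
from collections import Counter

# Assumed time-of-day bins (hour ranges are illustrative, not from the study).
PERIODS = {"morning": range(6, 12), "afternoon": range(12, 18),
           "evening": range(18, 24), "night": range(0, 6)}

def hourly_step_percent(hourly_steps):
    """Percent of total steps accumulated in each hour across valid days."""
    total = sum(hourly_steps.values())
    return {h: 100 * s / total for h, s in hourly_steps.items()} if total else {}

def most_active_period(hourly_steps):
    """Time of day during which the most steps were accumulated."""
    return max(PERIODS, key=lambda p: sum(hourly_steps.get(h, 0) for h in PERIODS[p]))

def mean_steps_by_day_quality(daily):
    """Average valid-day steps per EMA day-quality category; categories with
    zero EMA responses are reported as 'Not enough data'."""
    sums, counts = Counter(), Counter()
    for quality, steps in daily:
        sums[quality] += steps
        counts[quality] += 1
    cats = ["Better than typical", "Typical", "Worse than typical"]
    return {c: (sums[c] / counts[c] if counts[c] else "Not enough data") for c in cats}
```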

Report generation

Relevant metrics and plots were saved as files. A parameterized R Markdown template was used to produce participant-specific physical behavior reports that were then rendered to PDF using the `rmarkdown` R package (Version 2.22 (21)).
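Parameterized R Markdown exposes a `params` list to the template, and `rmarkdown::render()` accepts participant-specific values at render time. As a hedged sketch of how such a render call might be scripted (the template name, output name, and parameter names here are hypothetical, not the study's):

```python
import subprocess

def render_report_cmd(participant_id: str, data_dir: str,
                      template: str = "report_template.Rmd") -> list[str]:
    """Build an Rscript command that renders a parameterized R Markdown
    template to a participant-specific PDF. File and parameter names are
    illustrative placeholders."""
    r_expr = (
        f"rmarkdown::render('{template}', "
        f"output_file = '{participant_id}_report.pdf', "
        f"params = list(id = '{participant_id}', data_dir = '{data_dir}'))"
    )
    return ["Rscript", "-e", r_expr]

# To actually render (requires R, the rmarkdown package, and a LaTeX engine):
# subprocess.run(render_report_cmd("P01", "data/prepared"), check=True)
```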

Distribution and Evaluation of Physical Behavior Reports

Each report was uploaded to REDCap and automatically emailed to the respective participant, along with a questionnaire to evaluate the report. On the evaluation, participants reported how much they agreed with these statements on a 5-point Likert scale: “The report was easy for me to understand,” “The report was relevant to my health,” “The report was interesting to me,” and “The report was visually pleasing to me.” In addition, participants used a Net Promoter Scale to indicate how likely they were to recommend the report to a friend from 0 = very unlikely to 10 = very likely. Finally, participants were asked for open-ended feedback (“Do you have any suggestions to improve the quality or content of the report you received?”). Participants who did not initially respond to the automated email were contacted again by email. Distribution of the reports, evaluation, and reminder occurred via email given that older adults have reported that they would prefer to receive individual study results via email rather than phone or text (14–16).
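Evaluation responses were summarized descriptively (see “Analysis” below). A minimal Python sketch of that summary, matching how the Likert and Net Promoter responses are reported (mean, SD, range); the study's analyses were done in R, so this is illustrative only:

```python
from statistics import mean, stdev

def summarize_scale(responses):
    """Descriptive summary (mean, SD, min, max) of a list of Likert or
    Net Promoter responses."""
    return {"mean": mean(responses),
            "sd": stdev(responses) if len(responses) > 1 else 0.0,
            "min": min(responses),
            "max": max(responses)}
```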

Analysis

Descriptive statistics were calculated to summarize participants’ evaluations. All analyses were performed with R (Version 4.2.1).

Results

Participants

A total of 95 individuals were screened for eligibility via REDCap, of whom 34 were further screened over the phone. Twenty participants (ages 67–96, 53% female) completed the study. One participant did not complete remote monitoring, leaving 19 who received physical behavior reports and were included for analysis. A summary of demographics, physical function, data available for reports, and metrics included on reports, is presented in Table 1. On average, participants received reports 4.2 days after returning sensors to the laboratory (SD = 5.3 days; range = 0.8–19.7 days).

Table 1.

Participant Characteristics (N = 19).

                                                 N (%)       Mean (SD)      Range
Demographics and cognition
  Age (years)                                                74.9 (7.3)     67–96
    65–74 years                                 11 (58%)
    75–84 years                                  6 (32%)
    ≥85 years                                    2 (10%)
  Sex
    Female                                      10 (53%)
    Male                                         9 (47%)
  Race
    White                                       17 (90%)
    Black                                        1 (5%)
    Multiracial                                  1 (5%)
  Ethnicity
    Hispanic or Latino                           1 (5%)
    Not Hispanic or Latino                      18 (95%)
  Education (years)                                          18.1 (2.9)     12–22
    ≤12 years                                    1 (5%)
    13–16 years                                  7 (37%)
    17–20 years                                  6 (32%)
    >20 years                                    5 (26%)
  Body mass index (kg/m²)                                    28.3 (4.6)     23.1–37.5
  Montreal Cognitive Assessment score                        27.3 (1.8)     22–30
    >25                                         18 (95%)
    ≤25                                          1 (5%)
Physical function
  Short Physical Performance Battery score                    9.5 (2.0)     5–12
Sensor wear time and valid days during remote monitoring
  Daily wear time (hours), activPAL                          22.6 (0.7)     21.3–23.7
  Number of valid days, activPAL                              6.5 (1.4)     4–10
  Daily wear time (hours), CPIW                              23.0 (2.1)     14.8–24.0
  Number of valid days, CPIW                                 14.0 (0.0)     14–14
Average daily physical behavior measures during remote monitoring
  Step count                                                 3894 (3027)    532–13930
  Time spent walking (minutes)                               78.6 (32.9)    23–160
  Time spent standing (minutes)                              190.5 (55.2)   85–292
  Time spent sitting (minutes)                               562.2 (89.7)   439–777
  Most active time
    Morning                                      4 (21%)
    Afternoon                                   15 (79%)
Ecological momentary assessments during remote monitoring
  Number of evening EMAs missed                               1.7 (1.9)     0–7
  Self-reported day quality
    Better than typical                                       2.4 (3.5)     0–14
    Typical                                                   9.2 (4.0)     0–14
    Worse than typical                                        0.7 (1.1)     0–4

Note: CPIW = ActiGraph CentrePoint Insight Watch; EMA = ecological momentary assessment; SD = standard deviation.


Summary of Physical Behavior Report Evaluations

On average, participants completed the evaluation 3.7 days after receiving their report (SD = 8.9 days; range = 0–32.5 days). Of the 19 participants who received a report, 15 completed evaluations of their report. Two of these participants completed evaluations after receiving a reminder. Figure 2 shows the distribution of participants’ responses. The mean response to the Net Promoter Scale was 7.5 (SD = 3.0; range = 0–10).

Figure 2.

Summary of participant responses to physical behavior report evaluation.

Eight participants provided open-ended feedback. One participant’s feedback was not relevant to the reports, leaving relevant feedback from 7 participants (Table 1, Supplementary Material). Participants noted additional measures of interest: calories burned, movement while lying, and sleep. One participant indicated that including percentages of time in each behavior would have improved digestibility. There were suggestions to include more contextual information, including normal variation in behavior within and across days, as well as information on how changes in behavior can improve health. One participant noted the bar plot was difficult to understand. Finally, two participants noted that steps and time spent active on the report were lower than values reported by their own consumer-grade sensors.

Discussion

In this analysis, we examined the feasibility of returning individual physical behavior reports to older adults and obtained their feedback on the reports following participation in a study involving 14 days of remote monitoring with research-grade DHTs. We found that the process of returning results was feasible and elicited positive and useful feedback from participants. These findings provide insights on how to return individual physical behavior results derived from wearable sensors to older adult research participants and set the stage for future research to better understand the value of this practice.

We implemented a semi-automated process for delivering results to participants, which took less than 5 days on average from when participants returned their sensors. We leveraged R Markdown, a reproducible reporting tool, as well as automated systems in REDCap to notify researchers when prepared data were available and to email reports to participants. We did not ask participants whether they would have liked to receive the reports more quickly; faster delivery might also have improved compliance with the report evaluations. Further work to understand whether participants would prefer faster return of results, and to develop approaches to expedite the time from sensor receipt to report generation, is warranted. Although complete automation could expedite the process, the semi-automated approach allowed for data inspection before reports were released.

Another feasibility question was whether we would gather sufficient sensor data to generate the reports. All 19 participants had at least 4 valid days of data from both sensors (22). It is important to note that the current study used data from two wearable sensors to generate physical behavior reports. This was due to the activPAL’s limited battery life (<7 days), as well as several cases in which the wrist-worn CPIW provided less plausible step count estimates than the thigh-worn activPAL. These cases occurred for individuals with low SPPB scores, which is in line with a previous study showing less accurate gait sequence detection for wrist versus lower back sensors in individuals with a range of clinical conditions (23). Future work should consider sensor battery life relative to study duration, as well as participant characteristics and algorithmic implications, when considering how to effectively return DHT-derived study results to participants. In particular, for studies involving older adults, researchers should ensure that measures are derived using algorithms that have been validated in older populations.

Overall, participants rated the reports positively and provided valuable suggestions on how to improve the reports. For instance, participants reported that they would have liked to have received information on other aspects of physical behavior, including calories burned and sleep. One limitation of this work is that we did not ask about the meaningfulness of individual behaviors (eg, health impact), and/or whether receiving feedback on specific behaviors would elicit healthy behavior change. Additional work is needed to identify whether other metrics of physical behavior (eg, sedentary bouts, minutes spent in different levels of physical activity intensity) may be more meaningful and/or actionable for participants. For example, a specific population may be focused on understanding how decreasing sedentary behaviors impacts their health, whereas another may be focused on understanding how increasing activity affects health. Similarly, some individuals may feel that reducing sedentary behaviors is more achievable than increasing physical activity. Future work to understand the measures that are meaningful and/or actionable for older participants is warranted.

Participant feedback also included that it would be helpful to understand the variation in their behaviors within and across days, and that it would be beneficial to indicate how this variation in behavior impacted their health. We included visualizations of steps per day as well as steps per hour, but participants may benefit from further explanations and/or information on variation in other behaviors. One possibility to explore in future reports is the feasibility and value of including information about how participants compare to individuals from their own and other age groups (ie, normative data), if that information is available.

Two participants also noted discrepancies between the metrics on the report and those from their own sensors. It is well established that consumer-grade sensors (eg, Fitbit) can overestimate activity relative to research-grade sensors (eg, activPAL) in older adults (24), but this is likely not widely known among the public. Among older adults, aesthetic appeal, cost, and the desire to try new technology, rather than performance expectations, drive the choice of consumer-grade sensors (25), suggesting users may be unaware of these sensors’ performance characteristics. When providing participants with individual results derived from research-grade sensors, it may be necessary to explain how those results differ from consumer-grade sensor outputs.

This study has several limitations to note. The sample was small and included mostly healthy, non-Hispanic White individuals. Future studies with larger, more heterogeneous samples are warranted to better understand how value can be delivered to participants in this process and to investigate how factors such as age, education level, and cognition impact the feasibility and value of the process. In addition, being comfortable using a smartphone or tablet, as well as having email access, were requirements for participating in the study, both of which likely eliminated individuals with lower technology literacy. Understanding the feasibility of this process as well as its value in individuals with a broader range of health and technology literacy levels, as well as varying levels of familiarity with activity monitoring technologies, will be important in future work. Moreover, further work should investigate whether participants would benefit from receiving training on how to interpret reports they receive, given that past participants have indicated it may be helpful to do so (16). Further, research to understand other aspects of the experience of receiving results, such as potential positive and negative consequences, is warranted. Finally, we did not implement an automated system for reminding participants to complete the report evaluations; additional research is warranted to examine whether deploying multiple automated reminders is feasible and improves compliance.

Conclusion

Regulators and drug developers have expressed interest in returning individual study results to clinical research participants, yet no practical frameworks to guide implementation exist. To our knowledge, this is the first study to examine the feasibility of and obtain feedback on returning individual physical behavior results to older adults. Our results suggest that a 1-page report can provide older adult research participants with an easily interpretable and visually pleasing way to receive feedback on their physical behaviors following study participation. However, more work is needed to better understand what, when, and why to return individual study results back to older adult research participants. We are currently identifying ways to improve the scalability of implementing this practice and to better understand the value of this process for older adult research participants.

Funding

This work was supported by the National Institute on Aging at the National Institutes of Health and the Massachusetts Artificial Intelligence and Technology Center for Connected Care in Aging & Alzheimer’s Disease (grant number 5P30AG073107-02 Pilot A2). Additionally, M.A.B. is supported by National Institute on Aging 5P30AG073107-02 Pilots A3, A8, A18, and B6; Army Research Laboratory Cooperative Agreement #W911NF2120208; and DEVCOM Cooperative Agreement #W911QX23D0009.

Conflict of Interest

S.L.B., K.S.L., J.M.B., R.M.T., I.C., and K.L. are employees of VivoSense, Inc. I.C. is on the Editorial Board of Karger Digital Biomarkers and the Scientific Advisory Board for IMI IDEA-FAST, and has received fees for lectures and consulting on digital health at ETH Zürich and FHNW Muttenz. The other authors declare no conflicts of interest.

Acknowledgments

The authors wish to thank Marissa Graham, Sarah Friedman, Ramzi Majaj, and Jackson Ciccarello for assistance with data collection and processing.

References

1. Ibeh CV, Elufioye OA, Olorunsogo T, et al. Data analytics in healthcare: a review of patient-centric approaches and healthcare delivery. World J Adv Res Rev. 2024;21(2):1750–1760. https://doi.org/

2. Masanneck L, Gieseler P, Gordon WJ, Meuth SG, Stern AD. Evidence from ClinicalTrials.gov on the growth of Digital Health Technologies in neurology trials. Npj Digit Med. 2023;6(1):1–5. https://doi.org/

3. Marra C, Chen JL, Coravos A, Stern AD. Quantifying the use of connected digital products in clinical research. Npj Digit Med. 2020;3(1):50. https://doi.org/

4. Marra C, Gordon WJ, Stern AD. Use of connected digital products in clinical research following the COVID-19 pandemic: a comprehensive analysis of clinical trials. BMJ Open. 2021;11(6):e047341. https://doi.org/

5. Bove LA. Increasing patient engagement through the use of wearable technology. J Nurse Pract. 2019;15(8):535–539. https://doi.org/

6. Jafleh EA, Alnaqbi FA, Almaeeni HA, Faqeeh S, Alzaabi MA, Al Zaman K. The role of wearable devices in chronic disease monitoring and patient care: a comprehensive review. Cureus. 2024;16(9):e68921. https://doi.org/

7. Azodo I, Williams R, Sheikh A, Cresswell K. Opportunities and challenges surrounding the use of data from wearable sensor devices in health care: qualitative interview study. J Med Internet Res. 2020;22(10):e19542. https://doi.org/

8. Davidson JL, Jensen C. What health topics older adults want to track: a participatory design study. Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility; Bellevue, Washington. Association for Computing Machinery; 2013:1–8. https://doi.org/

9. Braakhuis HEM, Berger MAM, Bussmann JBJ. Effectiveness of healthcare interventions using objective feedback on physical activity: a systematic review and meta-analysis. J Rehabil Med. 2019;51(3):151–159. https://doi.org/

10. National Academies of Sciences, Engineering, and Medicine, Health and Medicine Division, Board on Health Sciences Policy, Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories. Returning Individual Research Results to Participants: Guidance for a New Research Paradigm. Downey AS, Busta ER, Mancher M, Botkin JR, eds. National Academies Press (US); 2018. Accessed May 7, 2024. http://www.ncbi.nlm.nih.gov/books/NBK513173/

12. Long CR, Purvis RS, Flood-Grady E, et al. Health researchers’ experiences, perceptions and barriers related to sharing study results with participants. Health Res Policy Syst. 2019;17(1):25. https://doi.org/

13. Mello MM, Lieou V, Goodman SN. Clinical trial participants’ views of the risks and benefits of data sharing. N Engl J Med. 2018;378(23):2202–2211. https://doi.org/

14. Mangal S, Niño de Rivera S, Choi J, et al. Returning study results to research participants: data access, format, and sharing preferences. Int J Med Inf. 2023;170:104955. https://doi.org/

15. Purvis RS, Abraham TH, Long CR, Stewart MK, Warmack TS, McElfish PA. Qualitative study of participants’ perceptions and preferences regarding research dissemination. AJOB Empir Bioeth. 2017;8(2):69–74. https://doi.org/

16. Melvin CL, Harvey J, Pittman T, Gentilin S, Burshell D, Kelechi T. Communicating and disseminating research findings to study participants: formative assessment of participant and researcher expectations and preferences. J Clin Transl Sci. 2020;4(3):233–242. https://doi.org/

17. Long CR, Stewart MK, Cunningham TV, Warmack TS, McElfish PA. Health research participants’ preferences for receiving research results. Clin Trials. 2016;13(6):582–591. https://doi.org/

18. Chan PA, Lewis KL, Biesecker BB, et al. Preferences for and acceptability of receiving pharmacogenomic results by mail: a focus group study with a primarily African-American cohort. J Genet Couns. 2021;30(6):1582–1590. https://doi.org/

19. Guralnik JM, Simonsick EM, Ferrucci L, et al. A short physical performance battery assessing lower extremity function: association with self-reported disability and prediction of mortality and nursing home admission. J Gerontol. 1994;49(2):M85–M94. https://doi.org/

20. Tudor-Locke C, Camhi S, Troiano R. A catalog of rules, variables, and definitions applied to accelerometer data in the National Health and Nutrition Examination Survey, 2003–2006. Prev Chronic Dis. 2012;9:110332. https://doi.org/

21. Xie Y, Allaire JJ, Grolemund G. R Markdown: The Definitive Guide. Accessed July 19, 2024. https://bookdown.org/yihui/rmarkdown/

22. Migueles JH, Cadenas-Sanchez C, Ekelund U, et al. Accelerometer data collection and processing criteria to assess physical activity and other outcomes: a systematic review and practical considerations. Sports Med. 2017;47(9):1821–1845. https://doi.org/

23. Kluge F, Brand YE, Micó-Amigo ME, et al. Real-world gait detection using a wrist-worn inertial sensor: validation study. JMIR Form Res. 2024;8(1):e50035. https://doi.org/

24. Tedesco S, Sica M, Ancillao A, Timmons S, Barton J, O’Flynn B. Accuracy of consumer-level and research-grade activity trackers in ambulatory settings in older adults. PLoS One. 2019;14(5):e0216891. https://doi.org/

25. Chen J, Wang T, Fang Z, Wang H. Research on elderly users’ intentions to accept wearable devices based on the improved UTAUT model. Front Public Health. 2023;10:1035398. https://doi.org/

Author notes

S. L. Bachman and Krista S. Leonard-Corzo are co-first authors.

This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial reproduction and distribution of the work, in any medium, provided the original work is not altered or transformed in any way, and that the work is properly cited. For commercial re-use, please contact [email protected] for reprints and translation rights for reprints. All other permissions can be obtained through our RightsLink service via the Permissions link on the article page on our site—for further information please contact [email protected].
Decision Editor: Lewis A Lipsitz, MD, FGSA (Medical Sciences Section)