Shelby L Bachman, Krista S Leonard-Corzo, Jennifer M Blankenship, Michael A Busa, Corinna Serviente, Matthew W Limoges, Robert T Marcotte, Ieuan Clay, Kate Lyden, Returning Individualized Wearable Sensor Results to Older Adult Research Participants: A Pilot Study, The Journals of Gerontology: Series A, Volume 80, Issue 5, May 2025, glaf027, https://doi.org/10.1093/gerona/glaf027
Abstract
Wearable sensors that monitor physical behaviors are increasingly adopted in clinical research. Older adult research participants have expressed interest in tracking and receiving feedback on their physical behaviors. Simultaneously, researchers and clinical trial sponsors are interested in returning results to participants, but the question of how to return individual study results derived from research-grade wearable sensors remains unanswered. In this study, we (1) assessed the feasibility of returning individual physical behavior results to older adult research participants and (2) obtained participant feedback on the returned results.
Older adult participants (N = 20; ages 67–96) underwent 14 days of remote monitoring with 2 wearable sensors. We then used a semiautomated process to generate a 1-page report summarizing each participant’s physical behaviors across the 14 days. This report was delivered to each participant via email, and they were asked to evaluate the report.
Participants found the reports easy to understand, health-relevant, interesting, and visually pleasing. They offered valuable suggestions for improving data interpretability and raised concerns, such as discrepancies with measures derived from their own consumer-grade sensors.
We have demonstrated the feasibility of returning individual physical behavior results from research-grade devices to older research participants, and our results indicate that this practice is well-received. Further research to develop more efficient and scalable systems to return results to participants, and to understand the preferences of participants in larger, more representative samples, is warranted.
There has been a shift in clinical research toward patient-centricity, which involves prioritizing the patient voice and improving patient engagement (1). In parallel, there has been an increase in the adoption of digital health technologies (DHTs) within clinical research (2–4). DHTs such as wearable sensors can facilitate patient-centricity; for example, tracking data from wearable sensors can lead patients to feel more engaged with their health and can minimize in-person visits, thus reducing patient burden (5,6). Moreover, because wearable sensors can measure physical behaviors (eg, physical activity, sedentary behaviors) passively, continuously, and remotely, they have the capacity to capture real-world data as a proxy for lived experiences of patients that may not be captured through established in-clinic assessments. In some cases, wearable sensors are paired with assessments such as ecological momentary assessments (EMAs), in which patients are asked to report on their behaviors and experiences in their real-world environments. Alongside the increased capacity to collect large amounts of real-world physical behavior data, participants in clinical research and care, particularly older adults, have expressed interest in tracking their physical behaviors using DHTs and receiving their wearable sensor data as feedback (5,7,8).
Receiving information on physical behaviors may be health-relevant, as past research has shown increases in physical activity and decreases in sedentary behaviors among participants who receive feedback on these behaviors (9). Thus, understanding how to return physical behavior results derived from DHTs in ways that are feasible and valuable to older adult participants is important. In fact, several organizations (10) have advocated for the return of individual study results in an effort to increase patient-centricity within clinical trials. In 2018, the National Academies of Sciences, Engineering, and Medicine released guidance for researchers to understand participant value and preferences for returning results by conducting a review of the current evidence-base, consulting with advisory boards, and/or engaging with research participants (10). Although this guidance provides recommendations on how to advance the practice of returning individual study results, its implementation remains rare and inconsistent.
Barriers to implementing the practice of returning individual study results include concerns of misinterpretation, ethical challenges, logistical issues, and lack of researcher knowledge (11,12). However, research participants have indicated they believe the benefits outweigh potential negative consequences (13), reporting that receiving results may allow for a greater sense of data ownership, perceived value of participation, likelihood of participating in future research, and health management (14,15). In fact, participants expect results to be returned and are less likely to participate in research if results are not returned (16). Moreover, participants have expressed that this practice may cultivate trust with researchers and can facilitate discussions about their health with providers and family members (15–18). The perceived benefits and desire to share results are also shared by clinical researchers (12). Of note, past studies in this domain inquired about the preferences and value of receiving individual study results under hypothetical scenarios, without participants having received actual results (14,15). Very few studies have actively returned individual study results and obtained feedback from participants (16,18). Furthermore, these studies have focused on returning health information in general, with an emphasis on genetic information and/or lab results (16,18) rather than results derived from DHTs. Although these studies can help inform the development of the process for returning DHT results, they limit our understanding of what, how, when, and why to return individual study results derived from wearable sensors to participants.
Here, we examined the feasibility of and obtained older adults’ feedback on returning individual study physical behavior and EMA results following participation in a study involving 14 days of remote monitoring with wearable sensors. Findings will serve as a foundation for better understanding how to return individual study results derived from wearable sensors to research participants.
Method
This work was part of a larger study to develop novel methods to detect walking behavior with wearable sensors in older adults. As part of this study, participants underwent 14 days of remote monitoring with wearable sensors, after which a summary of their results was returned to them. Here, we describe the participants in the larger study, as well as aspects of the study related to returning results to participants.
Participants
Participants were recruited from western Massachusetts via newspaper ads, social media ads, and flyers. Potential participants were screened for eligibility via REDCap and over the phone; eligible individuals were at least 65 years old, comfortable using a smartphone or tablet, and English speakers. Individuals were excluded if they had a diagnosis resulting in disordered gait and/or mobility impairments requiring full-time use of a walking aid. Individuals identified as eligible visited the lab for in-person assessments (see “Demographic and Mobility Assessments”). Only individuals scoring 4–12 on the Short Physical Performance Battery (SPPB) were eligible for subsequent remote monitoring. All participants who completed remote monitoring were eligible to receive a summary of their results. The study protocol was approved by the University of Massachusetts Institutional Review Board (#3566). All participants provided written, informed consent prior to participation and received monetary compensation for their participation.
Demographic and Mobility Assessments
Participants completed a demographic questionnaire, as well as the SPPB (19) to assess physical function. The SPPB comprises balance, gait speed, and chair stand tests, with scores ranging from 0 to 12; higher scores indicate better function.
Remote Monitoring
Participants were given 2 actigraphy sensors to wear continuously for the subsequent 14 days: (1) the activPAL 4+ (PAL Technologies Ltd., Glasgow, UK), worn on the midline of the right thigh, and (2) the ActiGraph CentrePoint Insight Watch (CPIW; ActiGraph LLC, Pensacola, FL), worn on the nondominant wrist. After remote monitoring, participants returned both sensors to the lab.
Ecological Momentary Assessments
On each day of remote monitoring, participants were asked to complete a series of ecological momentary assessments (EMAs) delivered via REDCap to their email address at 5 pm. The EMAs included the question “Overall, would you define today as: better than typical, typical, or worse than typical?” If an EMA was not completed within 90 minutes, up to 2 reminders were delivered at 90-minute intervals.
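The delivery schedule above is simple to express programmatically. As an illustrative sketch only (the study used REDCap’s built-in scheduling; this helper is hypothetical), the initial 5 pm delivery and the 2 follow-up reminders at 90-minute intervals could be computed as:

```python
from datetime import datetime, timedelta

def ema_schedule(day: datetime) -> list[datetime]:
    """Return the initial EMA delivery time (5 pm) plus up to 2
    reminders at 90-minute intervals, mirroring the study protocol.
    Purely illustrative; the study relied on REDCap's scheduler."""
    initial = day.replace(hour=17, minute=0, second=0, microsecond=0)
    # i = 0 is the initial delivery; i = 1, 2 are the reminders
    return [initial + timedelta(minutes=90 * i) for i in range(3)]
```

For example, `ema_schedule(datetime(2024, 5, 1))` yields delivery at 17:00 with reminders at 18:30 and 20:00.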
Physical Behavior Reports
Once sensors were returned, we generated an individualized 1-page report summarizing each participant’s physical behaviors across the remote monitoring period. The process of report generation and delivery is described below (Figure 1, Supplementary Material). A physical behavior report for a randomly selected participant is shown in Figure 1.

Figure 1. Physical behavior report returned to a randomly selected participant after 14 days of remote monitoring. MEADOW-AD = Measuring Ecological Assessments Derived from Wearables in Alzheimer’s Disease. Labels A, B, C, and D are for the purposes of describing report components and were not included on reports delivered to participants.
Data preparation
activPAL data were processed in PALbatch software (v8.11.1.63) to determine valid days and physical behavior classifications. Steps and time spent in different physical behaviors on valid days were exported. For CPIW data, a day was considered valid if it included ≥10 hours of wear (20). Step counts were exported in 60-second epochs from ActiGraph’s CentrePoint platform. EMA responses were exported from REDCap. A custom R script was then used to calculate metrics, create plots, and generate reports according to the following steps.
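The CPIW valid-day rule (≥10 hours of wear per day) can be sketched from 60-second epochs. The snippet below is a minimal illustration in Python rather than the custom R script the study used, and the `(timestamp, worn)` input format is an assumption, not the actual export format:

```python
from collections import defaultdict

def valid_days(epochs, min_hours=10):
    """Return the set of dates with at least `min_hours` of wear.
    `epochs` is an iterable of (timestamp, worn) pairs, one per
    60-second epoch; each worn epoch contributes 1 minute of wear.
    Illustrative sketch of the study's valid-day criterion."""
    worn_minutes = defaultdict(int)
    for ts, worn in epochs:
        if worn:
            worn_minutes[ts.date()] += 1
    return {d for d, mins in worn_minutes.items() if mins >= min_hours * 60}
```

A day with exactly 600 worn minutes (10 hours) passes the threshold; a day with 599 does not.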
Metric calculation and plot creation
Average minutes/day spent stepping, standing, and sitting (Figure 1A) were calculated from exported activPAL data on valid days. The activPAL was used for these calculations because of its ability to discriminate postural changes. For daily and hourly step counts (Figures 1B–D), CPIW data were used rather than activPAL data because the activPAL’s battery life is shorter than 14 days. For 3 participants, ActiGraph’s step algorithm yielded implausibly low step count estimates on valid days (median daily steps ≤ 500); in these cases, activPAL data were used to calculate daily and hourly steps for all valid days.
The percent of total steps taken per hour, across all valid days, was calculated and used to create the bar plot in Figure 1C. Hourly steps were used to determine the time of day (morning, afternoon, evening, night) when participants accumulated the most steps. EMA responses were tallied to determine the number of days on which participants rated their day as “Better than typical,” “Typical,” and “Worse than typical.” Subsequently, average step counts from valid days were used to create the bar plot in Figure 1D depicting average steps on each day category. For invalid days (Figure 1B) or when a participant made 0 EMA responses for a particular category (Figure 1D), “Not enough data” was used instead of data visuals.
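The hourly-percentage and time-of-day metrics described above can be sketched as follows. This is an illustrative Python version (the study’s metrics were computed in R), and the exact period boundaries are assumptions not specified in the paper:

```python
# Assumed period boundaries (hours of day); not taken from the paper.
PERIODS = {
    "morning": range(6, 12),
    "afternoon": range(12, 18),
    "evening": range(18, 24),
    "night": range(0, 6),
}

def hourly_percentages(steps_by_hour: dict[int, int]) -> dict[int, float]:
    """Percent of total steps taken in each hour (Figure 1C sketch)."""
    total = sum(steps_by_hour.values())
    return {h: 100 * s / total for h, s in steps_by_hour.items()}

def most_active_period(steps_by_hour: dict[int, int]) -> str:
    """Period of day in which the most steps were accumulated."""
    totals = {p: sum(steps_by_hour.get(h, 0) for h in hours)
              for p, hours in PERIODS.items()}
    return max(totals, key=totals.get)
```

For instance, with 100 steps at 9:00, 300 at 14:00, and 100 at 20:00, the 14:00 hour accounts for 60% of steps and the most active period is the afternoon.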
Report generation
Relevant metrics and plots were saved as files. A parameterized R Markdown template was used to produce participant-specific physical behavior reports, which were then rendered to PDF using the `rmarkdown` R package (Version 2.22) (21).
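The parameterized-template pattern (one template, per-participant parameters) is the key idea here. A rough Python analogue of the R Markdown workflow, with illustrative field names that are not taken from the study’s actual template, might look like:

```python
from string import Template

# Hypothetical report template; field names are illustrative only.
REPORT = Template(
    "Physical Behavior Report: Participant $pid\n"
    "Average daily steps: $steps\n"
    "Most active time of day: $period\n"
)

def render_report(params: dict) -> str:
    """Fill the shared template with one participant's parameters,
    analogous to rendering a parameterized R Markdown document."""
    return REPORT.substitute(params)
```

Calling `render_report({"pid": "001", "steps": 3894, "period": "afternoon"})` produces the filled-in report text for that participant; in the study, the equivalent step rendered a PDF per participant.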
Distribution and Evaluation of Physical Behavior Reports
Each report was uploaded to REDCap and automatically emailed to the respective participant, along with a questionnaire to evaluate the report. On the evaluation, participants reported how much they agreed with these statements on a 5-point Likert scale: “The report was easy for me to understand,” “The report was relevant to my health,” “The report was interesting to me,” and “The report was visually pleasing to me.” In addition, participants used a Net Promoter Scale to indicate how likely they were to recommend the report to a friend from 0 = very unlikely to 10 = very likely. Finally, participants were asked for open-ended feedback (“Do you have any suggestions to improve the quality or content of the report you received?”). Participants who did not initially respond to the automated email were contacted again by email. Distribution of the reports, evaluation, and reminder occurred via email given that older adults have reported that they would prefer to receive individual study results via email rather than phone or text (14–16).
Analysis
Descriptive statistics were calculated to summarize participants’ evaluations. All analyses were performed with R (Version 4.2.1).
Results
Participants
A total of 95 individuals were screened for eligibility via REDCap, of whom 34 were further screened over the phone. Twenty participants (ages 67–96, 53% female) completed the study. One participant did not complete remote monitoring, leaving 19 who received physical behavior reports and were included for analysis. A summary of demographics, physical function, data available for reports, and metrics included on reports is presented in Table 1. On average, participants received reports 4.2 days after returning sensors to the laboratory (SD = 5.3 days; range = 0.8–19.7 days).
| | | N (%) | Mean (SD) | Range |
|---|---|---|---|---|
| **Demographics and cognition** | | | | |
| Age (years) | | | 74.9 (7.3) | 67–96 |
| | 65–74 years | 11 (58%) | | |
| | 75–84 years | 6 (32%) | | |
| | ≥85 years | 2 (10%) | | |
| Sex | Female | 10 (53%) | | |
| | Male | 9 (47%) | | |
| Race | White | 17 (90%) | | |
| | Black | 1 (5%) | | |
| | Multiracial | 1 (5%) | | |
| Ethnicity | Hispanic or Latino | 1 (5%) | | |
| | Not Hispanic or Latino | 18 (95%) | | |
| Education (years) | | | 18.1 (2.9) | 12–22 |
| | ≤12 years | 1 (5%) | | |
| | 13–16 years | 7 (37%) | | |
| | 17–20 years | 6 (32%) | | |
| | >20 years | 5 (26%) | | |
| Body mass index (kg/m²) | | | 28.3 (4.6) | 23.1–37.5 |
| Montreal Cognitive Assessment score | | | 27.3 (1.8) | 22–30 |
| | >25 | 18 (95%) | | |
| | ≤25 | 1 (5%) | | |
| **Physical function** | | | | |
| Short Physical Performance Battery score | | | 9.5 (2.0) | 5–12 |
| **Sensor wear time and valid days during remote monitoring** | | | | |
| Daily wear time (hours), activPAL | | | 22.6 (0.7) | 21.3–23.7 |
| Number of valid days, activPAL | | | 6.5 (1.4) | 4–10 |
| Daily wear time (hours), CPIW | | | 23.0 (2.1) | 14.8–24.0 |
| Number of valid days, CPIW | | | 14.0 (0.0) | 14–14 |
| **Average daily physical behavior measures during remote monitoring** | | | | |
| Step count | | | 3894 (3027) | 532–13930 |
| Time spent walking (minutes) | | | 78.6 (32.9) | 23–160 |
| Time spent standing (minutes) | | | 190.5 (55.2) | 85–292 |
| Time spent sitting (minutes) | | | 562.2 (89.7) | 439–777 |
| Most active time | Morning | 4 (21%) | | |
| | Afternoon | 15 (79%) | | |
| **Ecological momentary assessments during remote monitoring** | | | | |
| Number of evening EMAs missed | | | 1.7 (1.9) | 0–7 |
| Self-reported day quality | Better than typical | | 2.4 (3.5) | 0–14 |
| | Typical | | 9.2 (4.0) | 0–14 |
| | Worse than typical | | 0.7 (1.1) | 0–4 |
Note: CPIW = ActiGraph CentrePoint Insight Watch; EMA = ecological momentary assessment; SD = standard deviation.
Summary of Physical Behavior Report Evaluations
On average, participants completed the evaluation 3.7 days after receiving their report (SD = 8.9 days; range = 0–32.5 days). Of the 19 participants who received a report, 15 completed evaluations of their report. Two of these participants completed evaluations after receiving a reminder. Figure 2 shows the distribution of participants’ responses. The mean response to the Net Promoter Scale was 7.5 (SD = 3.0; range = 0–10).

Figure 2. Summary of participant responses to physical behavior report evaluation.
Eight participants provided open-ended feedback. One participant’s feedback was not relevant to the reports, leaving relevant feedback from 7 participants (Table 1, Supplementary Material). Participants noted additional measures of interest: calories burned, movement while lying, and sleep. One participant indicated that including percentages of time in each behavior would have improved digestibility. There were suggestions to include more contextual information, including normal variation in behavior within and across days, as well as information on how changes in behavior can improve health. One participant noted the bar plot was difficult to understand. Finally, two participants noted that steps and time spent active on the report were lower than values reported by their own consumer-grade sensors.
Discussion
In this analysis, we examined the feasibility of returning individual physical behavior reports to older adults and obtained their feedback on the reports following participation in a study involving 14 days of remote monitoring with research-grade DHTs. We found that the process of returning results was feasible and elicited positive and useful feedback from participants. These findings provide insights on how to return individual physical behavior results derived from wearable sensors to older adult research participants and set the stage for future research to better understand the value of this practice.
We implemented a semi-automated process for delivering results to participants, which took less than 5 days on average from when participants returned their sensors. We leveraged R Markdown, a reproducible reporting tool, as well as automated systems in REDCap to notify researchers when prepared data were available and to email reports to participants. We did not ask participants whether they would have liked to receive the reports more quickly; faster delivery might also have improved compliance with the report evaluations. Further work is warranted to understand whether participants would prefer a faster return of results and to develop approaches that expedite the time from sensor receipt to report generation. Although complete automation could expedite the process, we found that a semi-automated approach allowed for data inspection.
Another aspect of feasibility was whether we would gather sufficient sensor data to generate the reports. All 19 participants had at least 4 valid days of data from both sensors (22). It is important to note that the current study used data from two wearable sensors to generate physical behavior reports. This was due to the activPAL’s limited battery life (<7 days), as well as several cases in which the wrist-worn CPIW provided less plausible step count estimates than the thigh-worn activPAL. These cases involved individuals with low SPPB scores, in line with a previous study showing less accurate gait sequence detection for wrist-worn versus lower-back sensors in individuals with a range of clinical conditions (23). Future work should consider sensor battery life relative to study duration, as well as participant characteristics and their algorithmic implications, when determining how to effectively return DHT-derived study results to participants. In particular, for studies involving older adults, researchers should ensure that measures are derived using algorithms that have been validated in older populations.
Overall, participants rated the reports positively and provided valuable suggestions on how to improve the reports. For instance, participants reported that they would have liked to have received information on other aspects of physical behavior, including calories burned and sleep. One limitation of this work is that we did not ask about the meaningfulness of individual behaviors (eg, health impact), and/or whether receiving feedback on specific behaviors would elicit healthy behavior change. Additional work is needed to identify whether other metrics of physical behavior (eg, sedentary bouts, minutes spent in different levels of physical activity intensity) may be more meaningful and/or actionable for participants. For example, a specific population may be focused on understanding how decreasing sedentary behaviors impacts their health, whereas another may be focused on understanding how increasing activity affects health. Similarly, some individuals may feel that reducing sedentary behaviors is more achievable than increasing physical activity. Future work to understand the measures that are meaningful and/or actionable for older participants is warranted.
Participant feedback also included that it would be helpful to understand the variation in their behaviors within and across days, and that it would be beneficial to indicate how this variation in behavior impacted their health. We included visualizations of steps per day as well as steps per hour, but participants may benefit from further explanations and/or information on variation in other behaviors. One possibility to explore in future reports is the feasibility and value of including information about how participants compare to individuals from their own and other age groups (ie, normative data), if that information is available.
Two participants also noted discrepancies between the metrics on the report and those from their own sensors. It is well established that consumer-grade sensors (eg, Fitbit) can overestimate activity compared to research-grade sensors (eg, activPAL) in older adults (24); however, this is likely not widely known among the public. Among older adults, aesthetic appeal, cost, and the desire to try new technology, rather than performance expectations, drive the choice of consumer-grade sensors (25), suggesting that older adults may be unaware of these sensors’ performance capabilities. When providing participants with individual results derived from research-grade sensors, it may be necessary to explain how these differ from consumer-grade sensors.
This study has several limitations to note. The sample was small and included mostly healthy, non-Hispanic White individuals. Future studies with larger, more heterogeneous samples are warranted to better understand how value can be delivered to participants in this process and to investigate how factors such as age, education level, and cognition impact the feasibility and value of the process. In addition, being comfortable using a smartphone or tablet, as well as having email access, were requirements for participating in the study, both of which likely eliminated individuals with lower technology literacy. Understanding the feasibility of this process as well as its value in individuals with a broader range of health and technology literacy levels, as well as varying levels of familiarity with activity monitoring technologies, will be important in future work. Moreover, further work should investigate whether participants would benefit from receiving training on how to interpret reports they receive, given that past participants have indicated it may be helpful to do so (16). Further, research to understand other aspects of the experience of receiving results, such as potential positive and negative consequences, is warranted. Finally, we did not implement an automated system for reminding participants to complete the report evaluations; additional research is warranted to examine whether deploying multiple automated reminders is feasible and improves compliance.
Conclusion
Regulators and drug developers have expressed interest in returning individual study results to clinical research participants, yet no practical frameworks to guide implementation exist. To our knowledge, this is the first study to examine the feasibility of and obtain feedback on returning individual physical behavior results to older adults. Our results suggest that a 1-page report can provide older adult research participants with an easily interpretable and visually pleasing way to receive feedback on their physical behaviors following study participation. However, more work is needed to better understand what, when, and why to return individual study results back to older adult research participants. We are currently identifying ways to improve the scalability of implementing this practice and to better understand the value of this process for older adult research participants.
Funding
This work was supported by the National Institute on Aging at the National Institutes of Health and the Massachusetts Artificial Intelligence and Technology Center for Connected Care in Aging & Alzheimer’s Disease (grant number 5P30AG073107-02 Pilot A2). Additionally, M.A.B. is supported by National Institute on Aging grants 5P30AG073107-02 Pilots A3, A8, A18, and B6; Army Research Laboratory Cooperative Agreement #W911NF2120208; and DEVCOM Cooperative Agreement #W911QX23D0009.
Conflict of Interest
S.L.B., K.S.L., J.M.B., R.T.M., I.C., and K.L. are employees of VivoSense, Inc. I.C. is on the Editorial Board of Karger Digital Biomarkers and the Scientific Advisory Board for IMI IDEA-FAST, and has received fees for lectures and consulting on digital health at ETH Zürich and FHNW Muttenz. The other authors declare no conflicts of interest.
Acknowledgments
The authors wish to thank Marissa Graham, Sarah Friedman, Ramzi Majaj, and Jackson Ciccarello for assistance with data collection and processing.
References
Author notes
S.L. Bachman and K.S. Leonard-Corzo are co-first authors.