PLOS ONE
RESEARCH ARTICLE
Validation of the Mobile Application Rating
Scale (MARS)
Yannik Terhorst1,2*, Paula Philippi2, Lasse B. Sander3, Dana Schultchen4, Sarah Paganini5, Marco Bardus6, Karla Santo7,8,9, Johannes Knitza10, Gustavo C. Machado11,12, Stephanie Schoeppe13, Natalie Bauereiß2, Alexandra Portenhauser2, Matthias Domhardt2, Benjamin Walter14, Martin Krusche15, Harald Baumeister2, Eva-Maria Messner2
1 Department of Research Methods, Institute of Psychology and Education, University Ulm, Ulm, Germany,
2 Department of Clinical Psychology and Psychotherapy, Institute of Psychology and Education, University
Ulm, Ulm, Germany, 3 Department of Rehabilitation Psychology and Psychotherapy, Institute of Psychology,
Albert-Ludwigs-University Freiburg, Freiburg im Breisgau, Germany, 4 Department of Clinical and Health
Psychology, Institute of Psychology and Education, University Ulm, Ulm, Germany, 5 Department of Sport
Psychology, Institute of Sports and Sport Science, University of Freiburg, Freiburg, Germany, 6 Department
of Health Promotion and Community Health, Faculty of Health Sciences, American University of Beirut,
Beirut, Lebanon, 7 Academic Research Organization, Hospital Israelita Albert Einstein, São Paulo, Brazil,
8 Westmead Applied Research Centre, Westmead Clinical School, Faculty of Medicine and Health, The
University of Sydney, Sydney, Australia, 9 Cardiovascular Division, The George Institute for Global Health,
Sydney, Australia, 10 Department of Internal Medicine 3 – Rheumatology and Immunology, University
Hospital Erlangen, Friedrich-Alexander University Erlangen-Nuremberg, Erlangen, Germany, 11 Institute for
Musculoskeletal Health, Sydney, New South Wales, Australia, 12 Sydney School of Public Health, Faculty of
Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia, 13 School of Health,
Medical and Applied Sciences, Appleton Institute, Physical Activity Research Group, Central Queensland
University, Rockhampton, Queensland, Australia, 14 Department of Internal Medicine I, Gastroenterology,
University Hospital Ulm, Ulm, Germany, 15 Department of Rheumatology and Clinical Immunology, Charité –
Universitätsmedizin Berlin, Berlin, Germany
* [email protected]
OPEN ACCESS

Citation: Terhorst Y, Philippi P, Sander LB, Schultchen D, Paganini S, Bardus M, et al. (2020) Validation of the Mobile Application Rating Scale (MARS). PLoS ONE 15(11): e0241480. https://doi.org/10.1371/journal.pone.0241480

Editor: Ethan Moitra, Brown University, UNITED STATES

Received: May 19, 2020

Accepted: October 15, 2020

Published: November 2, 2020

Copyright: © 2020 Terhorst et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability Statement: All relevant data are within the paper and its Supporting Information file.

Funding: The author(s) received no specific funding for this work.

Competing interests: EMM, YT, LS, HB developed and run the German Mobile Health App Database project (MHAD). The MHAD is a self-funded project at Ulm University with no commercial interests. LS, HB and EMM received payments for talks and workshops in the context of e-mental-health. This does not alter our adherence to PLOS ONE policies on sharing data and materials. All other authors declare no conflicts of interest.

Abbreviations: AIC, Akaike information criterion; BIC, Bayesian information criterion; CFA, confirmatory factor analysis; CFI, comparative fit index; CI, confidence interval; ICC, intra-class correlation coefficient; JMIR, Journal of Medical Internet Research; M, mean; MARS, Mobile Application Rating Scale; MHA, mobile health app; r, correlation; RCT, randomized controlled trial; RMSEA, root mean square error of approximation; SD, standard deviation; SRMR, standardized root mean square residual; TLI, Tucker-Lewis index; UTAUT, unified theory of acceptance and use of technology; α, Cronbach's alpha.

Abstract
Background
Mobile health apps (MHA) have the potential to improve health care. The commercial MHA
market is rapidly growing, but the content and quality of available MHA are largely unknown. Instruments for the assessment of the quality and content of MHA are urgently needed. The Mobile
Application Rating Scale (MARS) is one of the most widely used tools to evaluate the quality
of MHA. Only a few validation studies have investigated its metric quality. No study has evaluated its construct validity and concurrent validity.
Objective
This study evaluates the construct validity, concurrent validity, reliability, and objectivity of the MARS.
Methods
Data was pooled from 15 international app quality reviews to evaluate the metric properties
of the MARS. The MARS measures app quality across four dimensions: engagement,
functionality, aesthetics and information quality. Construct validity was evaluated by testing competing confirmatory models using confirmatory factor analysis (CFA). Noncentrality (RMSEA), incremental (CFI, TLI) and residual (SRMR) fit indices were used to evaluate the goodness of fit. As a measure of concurrent validity, correlations with another quality assessment tool (ENLIGHT) were investigated. Reliability was determined using omega. Objectivity was assessed by intra-class correlation.
Results
In total, MARS ratings from 1,299 MHA covering 15 different health domains were included.
Confirmatory factor analysis confirmed a bifactor model with a general factor and a factor for
each dimension (RMSEA = 0.074, TLI = 0.922, CFI = 0.940, SRMR = 0.059). Reliability was
good to excellent (Omega 0.79 to 0.93). Objectivity was high (ICC = 0.82). MARS correlated
with ENLIGHT, indicating good concurrent validity.

Good model fit was indicated by CFI > 0.95 and TLI > 0.95 [36].
Akaike information criterion (AIC) and the Bayesian information criterion (BIC) were used
for model comparisons.
Fig 1. Hypothesized CFA model 1 of the MARS. Item-wise error variances are not represented in the models;
correlations between errors were not allowed.
https://doi.org/10.1371/journal.pone.0241480.g001
Fig 2. Hypothesized CFA model 2 of the MARS. Item-wise error variances are not represented in the models; correlations between errors were not
allowed.
https://doi.org/10.1371/journal.pone.0241480.g002
Fig 3. Hypothesized CFA model 3 of the MARS. Item-wise error variances are not represented in the models; correlations between errors were not
allowed.
https://doi.org/10.1371/journal.pone.0241480.g003
Fig 4. Hypothesized CFA model 4 of the MARS. Item-wise error variances are not represented in the models; correlations between errors were not
allowed.
https://doi.org/10.1371/journal.pone.0241480.g004
Full information maximum likelihood estimation was used given its capability to handle missing data [37, 38]. Huber-White robust standard errors were obtained [38]. Modification indices were used to further investigate the structure of the MARS and potential sources of ill fit [39].
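To make the estimation setup concrete, the following minimal R sketch (lavaan, the package named under Analysis software) fits a bifactor model of the kind described here with FIML and Huber-White robust standard errors; the data frame name and the item-to-dimension assignment are illustrative assumptions, not the authors' exact specification.

library(lavaan)

# Hypothetical bifactor specification: a general factor plus four orthogonal
# dimension factors; item names a01-a18 and their mapping are assumed.
model <- '
  g           =~ a01 + a02 + a03 + a04 + a05 + a06 + a07 + a08 + a09 +
                 a10 + a11 + a12 + a13 + a14 + a15 + a16 + a17 + a18
  engagement  =~ a01 + a02 + a03 + a04 + a05
  functional  =~ a06 + a07 + a08 + a09
  aesthetics  =~ a10 + a11 + a12
  information =~ a13 + a14 + a15 + a16 + a17 + a18
'

fit <- cfa(model,
           data       = mars_data,            # one row per app (hypothetical object)
           orthogonal = TRUE,                  # uncorrelated latent factors
           missing    = "fiml",                # full information maximum likelihood
           se         = "robust.huber.white")  # Huber-White robust standard errors

fitMeasures(fit, c("rmsea", "srmr", "cfi", "tli", "aic", "bic"))
modindices(fit)  # inspect potential sources of ill fit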
Concurrent validity. Since the MARS was designed to measure app quality, it should be closely related to other app quality metrics. Some of the included data sets provided ratings with both the ENLIGHT instrument and the MARS. Similar to the MARS, ENLIGHT is a quality assessment tool for MHA [28] that assesses app quality across seven dimensions:
a. usability (3 items), b. visual design (3 items), c. user engagement (5 items), d. content (4
items), e. therapeutic persuasiveness (7 items), f. therapeutic alliance (3 items), and g. general
subjective evaluation (3 items). Items are rated from 1 (= very poor) to 5 (= very good). The
inter-rater reliability of the ENLIGHT (ICC = 0.77 to 0.98) and the internal consistency (α =
0.83 to 0.90) are excellent [28].
Correlations were used to determine the concurrent validity between the MARS and
ENLIGHT. All correlations reported in this study were calculated using the correlation coefficient r, which ranges from -1 (perfect negative relationship) through 0 (no relationship) to 1 (perfect positive relationship). For all correlation analyses, the alpha level was 5%. P-values were
adjusted for multiple testing using the procedure proposed by Holm [40].
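As a minimal sketch of this step (object names assumed), the Holm-adjusted correlations can be obtained with the psych package listed under Analysis software:

library(psych)

# enlight_scores and mars_scores: data frames with one row per app and one
# column per (sub)scale score; both names are hypothetical.
ct <- corr.test(x = enlight_scores, y = mars_scores,
                method = "pearson", adjust = "holm")  # Holm-adjusted p-values
print(ct, short = FALSE)

# The same adjustment applied to a plain vector of p-values:
p.adjust(c(0.001, 0.020, 0.048), method = "holm")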
Reliability: Internal consistency. Internal consistency was determined as a facet of reliability. Omega was used as the reliability coefficient [41]. Compared to the widely used Cronbach's alpha, omega provides a less biased estimate of reliability [29–31]. The procedures introduced by Zhang and Yuan [42] were used to obtain robust coefficients and
bootstrapped bias-corrected confidence intervals. A reliability coefficient of < 0.50 was considered to be unacceptable, 0.51–0.59 to be poor, 0.60–0.69 to be questionable, 0.70–0.79 to be
acceptable, 0.80–0.89 to be good, and > 0.90 to be excellent [43].
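The robust omega and bootstrapped bias-corrected intervals follow Zhang and Yuan's procedures [42]; as a rough, non-robust stand-in for one MARS section, omega can be sketched with the psych package (data frame name assumed):

library(psych)

# engagement_items: data frame holding the engagement items (hypothetical name).
# psych::omega() serves only as an illustrative stand-in for the robust
# procedure of Zhang and Yuan [42] used in the study.
om <- omega(engagement_items, nfactors = 1, plot = FALSE)
om$omega.tot  # coefficient omega (total) for the section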
Objectivity: Intra-class correlation. The MARS comes with a standardized online
training for reviewers [16]. Following the training, the MARS assessment is suggested to be
either conducted by a single rater or by two raters (pooling their ratings) [16]. Consistency
between raters was examined by calculating intra-class correlation based on a two-way
mixed-effects model [44]. A cut-off of ICC above 0.75 (Fleiss, 1999) was used to define satisfactory inter-rater agreement. All data sets based on ratings from two reviewers were included
in this analysis.
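The ICC itself was computed in SPSS (see Analysis software below); an equivalent two-way, single-rater consistency ICC can be sketched in R with the irr package (object name assumed):

library(irr)

# ratings: data frame or matrix with one row per rated item and one column per
# reviewer (two columns here); the object name is hypothetical.
icc(ratings,
    model = "twoway",       # two-way model, as described above
    type  = "consistency",  # consistency between raters
    unit  = "single")       # reliability of a single rater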
Analysis software. The software R was used for all analyses [45], except for the intra-class
correlation, which was calculated using SPSS 24 [46]. For the CFA, the R package “lavaan”
(version: 0.5–23.1097) was deployed [47]. Omega was computed using the "coefficientalpha" package [42].
Correlations were calculated using “psych” (version: 1.7.8.) [48].
Results
Sample characteristics
The literature searches identified a total of 18 international reviews that assessed the quality of
MHA using the MARS. All research groups that had published an eligible review were contacted. In total, 15 of the 18 contacted research groups responded and agreed to share their
data [3, 10, 12, 14, 15, 18, 19, 22, 24, 49–54]. The present sample consists of N = 1299 MHA.
MHA targeting physical, mental and behavioral health, as well as specific target groups were
included: anxiety (n = 104), low back pain (n = 58), cancer (n = 78), depression (n = 38), diet
(n = 25), elderly (n = 84), gastrointestinal diseases (n = 140), medication adherence (n = 9),
mindfulness (n = 103), pain (n = 147), physical activity (n = 312), post-traumatic stress disorder (n = 87), rheumatism (n = 32), weight management (n = 66), and internalizing disorder
MHA for children and youth (n = 16). For all included data sets, the MARS rating was conducted by researchers holding at least a B.Sc. degree.
The overall quality of these MHA based on the quality assessment using MARS was moderate (mean MARS score [M] = 3.74, standard deviation [SD] = 0.59). The quality of MHAs was
highest in relation to the functionality dimension (M = 4.03, SD = 0.67), followed by aesthetics
(M = 3.40, SD = 0.87), information quality (M = 3.06, SD = 0.72) and engagement (M = 2.96,
SD = 0.90) (see Fig 5).
The MARS assesses the evidence base of an app using the question “Has the app been
trialled/tested; must be verified by evidence (in published scientific literature)?”. Overall, 1230
(94.8%) of all included MHAs were rated as not evidence-based.
Fig 5. Quality of included MHA.
https://doi.org/10.1371/journal.pone.0241480.g005
Construct validity: Confirmatory factor analysis
None of the a-priori defined confirmatory models were confirmed by CFA. The best-fitting
model was model 3. Model 3 was further investigated using modification indices. Introducing
a correlation between items 3 and 4 (= Model 3a) yielded an acceptable model fit. Fit indices of
all models are presented in Table 1. Model 3a is presented in Fig 6.
Concurrent validity
A total of 120 MHA were rated using both the ENLIGHT instrument and the MARS. Correlations between MARS and ENLIGHT were calculated based on the respective subsample. Correlations are presented in Table 2.
Reliability: Internal consistency
The internal consistency of all sections was good to excellent (see Table 3).
Objectivity: Intra-class correlation
To calculate the agreement of raters only data sets providing ratings of both reviewers were
used. A total of 793 apps (= 15067 rated items per reviewer) were included in the intra-class
correlation analysis. Overall, intra-class correlation was good: ICC = 0.816 (95% CI: 0.810 to
0.822). Section-wise ICC is summarized in Table 4.
Table 1. Model fit.

Model      AIC     BIC     RMSEA (CI)               SRMR    TLI     CFI
Model 1    49110   49437   0.110 (0.106 to 0.113)   0.095   0.814   0.841
Model 2    49182   49497   0.115 (0.111 to 0.119)   0.098   0.811   0.837
Model 3    48132   48525   0.093 (0.088 to 0.097)   0.095   0.878   0.905
Model 3a   47589   47987   0.074 (0.070 to 0.078)   0.059   0.922   0.940
Model 4    52102   52397   0.166 (0.162 to 0.170)   0.099   0.605   0.649

Note: AIC: Akaike information criterion; BIC: Bayesian information criterion; RMSEA: root mean square error of approximation; SRMR: standardized root mean square residual; CFI: comparative fit index; TLI: Tucker-Lewis index.

https://doi.org/10.1371/journal.pone.0241480.t001
Fig 6. Model 3a. Loadings are standardized; correlations between all latent variables were set to zero; item-wise error
variances have been excluded; Model 3a differs from the a-priori defined model 3 in the correlation between item 3
(a03) and item 4 (a04).
https://doi.org/10.1371/journal.pone.0241480.g006
Discussion
To our knowledge, the present study is the first to evaluate the construct validity of the
MARS. Furthermore, this study builds on previous metric evaluations of the MARS [16, 25–
27] by investigating its validity, reliability, and objectivity using a large sample of MHAs covering multiple health conditions. The CFA confirmed a bi-factor model consisting of a general
g-factor and uncorrelated factors for each dimension of the MARS.
Table 2. Correlations between the MARS and ENLIGHT using a subsample of apps.

ENLIGHT (n = 120)            MARS: Engagement   MARS: Functionality   MARS: Aesthetics   MARS: Information   MARS: Overall
Usability                    0.51***            0.80***               0.68***            0.39***             0.71***
Design                       0.63***            0.66***               0.87***            0.57***             0.84***
Therapeutic alliance         0.56***            0.37***               0.44***            0.48***             0.58***
General subjective quality   0.69***            0.53***               0.68***            0.50***             0.74***

Note: Values are correlation coefficients r, ranging from -1 (perfect negative relationship) through 0 (no relationship) to 1 (perfect positive relationship). * P ≤ 0.05, ** P ≤ 0.01, *** P ≤ 0.001.

https://doi.org/10.1371/journal.pone.0241480.t002
Table 3. Internal consistency of the MARS.

Section                   Reliability: Omega (CI)
A: Engagement             0.867 (0.853 to 0.880)
B: Functionality          0.871 (0.856 to 0.886)
C: Aesthetics             0.904 (0.895 to 0.913)
D: Information quality1   0.793 (0.773 to 0.813)
Overall1                  0.929 (0.923 to 0.934)

Note: 1) Item 19 was excluded due to high amount of missingness (95%), as it is rated NA (not applicable) if no evaluation is present.

https://doi.org/10.1371/journal.pone.0241480.t003
Given the theoretical background of the MARS, the latent g-factor could represent a general quality factor or a factor
accounting for shared variance introduced by the assessment methodology. Either way, the
four uncorrelated factors confirm the proposed dimensions of the MARS [16]. Thus, interpreting the sum score for each dimension seems legitimate. However, the present analysis highlights that not all items are equally good indicators of their dimensions. Hence, a weighted
average of the respective items of each of the four dimensions a) engagement, b) functionality,
c) aesthetics and d) information quality would be more adequate.
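One hedged way to obtain such weighted dimension scores, given a fitted lavaan model like the sketch in the Methods section, is to use model-based factor scores rather than unweighted section means; this is an illustration, not part of the original analysis.

library(lavaan)

# 'fit' is the fitted bifactor model from the earlier sketch (assumed object).
dim_scores <- lavPredict(fit, type = "lv")  # factor scores for g and the four dimensions
head(dim_scores)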
Besides the construct validity, the concurrent validity was evaluated. High correlations with the ENLIGHT indicated good concurrent validity. Furthermore, previous metric evaluations in terms of reliability and objectivity [16, 25–27] were replicated in the present MHA sample. Our findings showed that both reliability and objectivity of the MARS were good to excellent. Overall, considering the validity, reliability and objectivity results, the MARS seems to be an app quality assessment tool of high metric quality.
The correlation between the MARS and the ENLIGHT instrument was high, at least in a
sub-sample of the analyzed apps. This indicates good concurrent validity between both expert
assessments. However, ENLIGHT contains a section assessing therapeutic alliance [28] which
was only moderately covered by the MARS. The integration of therapeutic alliance in the
MARS could further strengthen the quality of the MHA assessment. Especially in the context
of conventional and digital health care, therapeutic alliance, guidance, and therapeutic persuasiveness are important aspects along with persuasive design [25, 28, 55, 56].
Pooling data from multiple international reviews of the quality of MHA using MARS also
provided insight into the quality of many commercial MHA. While most MHA show high
quality in terms of functionality and aesthetics, the engagement and information quality of
MHA show high heterogeneity and an overall moderate quality. However, most striking is the
lack of evidence-based MHA.
Table 4. Objectivity of the MARS.

Section                  Objectivity: ICC (95% CI)a
A: Engagement            0.790 (0.776 to 0.803)
B: Functionality         0.758 (0.740 to 0.774)
C: Aesthetics            0.769 (0.750 to 0.787)
D: Information quality   0.848 (0.839 to 0.857)
Overall                  0.816 (0.810 to 0.822)

Note: a) Two-way mixed intra-class correlation coefficient (ICC) with 95% confidence intervals (CI).

https://doi.org/10.1371/journal.pone.0241480.t004
Only 5% of the MHA were evaluated in studies (e.g., feasibility, uncontrolled longitudinal designs, RCT). This lack of evidence is in line with previous research
and a major constraint in the secondary health market [3, 4, 9]. Creating an evidence-based
MHA market and addressing central issues, like 1) data safety and privacy, 2) user adherence
and 3) data integration, are core challenges that have to be solved to utilize the potential benefits of MHA in health care [57–59]. Using the MARS to make those issues transparent to health
care stakeholders and patients, as well as establishing guidelines for the development of MHA
are both necessary and promising steps to achieve this goal [16, 57].
Limitations
Some limitations of this study need to be noted. First, the main aim of this study was to evaluate the construct validity of the MARS. By including ratings of multiple reviewers across the
world and multiple health conditions, we regard the external validity of the results as high.
Nonetheless, the results might be only valid in the present sample and not transferable to
other conditions, target groups or rating teams. Thus, the confirmed bifactor model should be
challenged in other health conditions and also non-health apps. Notably, the necessary modification to the a-priori defined bifactor model should be closely investigated, since it was introduced based on modification indices and is of an exploratory nature. Second, the evaluation
of the construct validity of the MARS might be biased due to the format of the MARS, as
throughout the MARS all items are assessed on a 5-point scale. Since there is no variation in
the item format, item-class specific variance cannot be controlled in the present evaluation. As
a result, item-class variance might be attributed to the quality factor. These issues could be
addressed in future studies by using a different item format. A multi-method approach, for example integrating alternative assessments such as the user version of the MARS [60] or the ENLIGHT [28], could also lead to a more comprehensive assessment of the quality of MHA. Third, although reliability of the MARS was also a focus of this study (i.e., internal
consistency), there are facets of reliability which are still unexplored. For instance, re-test reliability of the MARS has never been evaluated. To investigate re-test reliability, an adequate
study design with time-shifted assessments of the same version of apps by the same reviewers
is needed. This remains to be investigated in future studies. Finally, throughout the study,
quality is discussed as a fundamental requirement for apps. However, whether quality is predictive of, for example, engagement, adherence, or effectiveness was not evaluated in this study. No study has yet investigated this using the MARS. Baumel
and Yom-Tov [61] examined which design aspects are essential using the ENLIGHT instrument. For instance, engagement and therapeutic persuasiveness were identified as crucial quality aspects associated with user adherence [61]. Based on the high correlation between MARS
and ENLIGHT, one could assume that their findings could also be applied to the MARS. However, this has to be confirmed in future studies. The role of quality should also be investigated
in a more holistic model containing MHA-specific features (e.g., persuasive design) [62, 63], user features (e.g., personality), and existing models such as the unified theory of
acceptance and use of technology (UTAUT) [64].
Conclusion
The MARS is a metrically well-suited instrument to assess MHA quality. Given the rapidly
growing app market, scalable solutions to make content and quality of MHA more transparent
to users and health care stakeholders are highly needed. The MARS may become a crucial part
of such solutions. Future studies could extend the present findings by investigating the re-test
reliability and predictive validity of the MARS.
Supporting information
S1 Dataset.
(XLSX)
Acknowledgments
The present study was only possible thanks to the previous work of the contributing research groups. The
authors would like to thank all researchers involved in these projects: Abraham, C., Ahmed, O.
H., Alley, S., Bachert, P., Balci, S., van Beurden, S.B., Bosch, P., Bray, N.A., Catic, S., Chalmers,
J., Chow, C.K., Direito, A., Eder, A.-S., Gnam, J.-P., Haase, I., Hayman, M., Hendrick, P., Holderied, T., Kamper, S.J., Kittler, J., Kleyer, A., Küchler, A.-M. Lee, H., Lin, J., van Lippevelde,
W., Meyer, M., Mucke, J., Pinheiro, M.B., Plaumann, K., Pryss, R., Pulla, A., Rebar, A.L., Redfern, J., Richtering, S.S., Schrondanner, J., Sewerin, P., Simon, D., Smith, J.R., Sophie, E., Spanhel, K., Sturmbauer, S., Tascilar, K., Thiagalingam, A., Vandelanotte, C., Vossen, D., Williams,
C., Wurst, R.
Author Contributions
Conceptualization: Yannik Terhorst, Eva-Maria Messner.
Data curation: Yannik Terhorst, Paula Philippi, Lasse B. Sander, Dana Schultchen, Sarah
Paganini, Marco Bardus, Karla Santo, Johannes Knitza, Gustavo C. Machado, Stephanie
Schoeppe, Natalie Bauereiß, Alexandra Portenhauser, Matthias Domhardt, Benjamin Walter, Martin Krusche, Harald Baumeister, Eva-Maria Messner.
Formal analysis: Yannik Terhorst, Paula Philippi.
Methodology: Yannik Terhorst, Paula Philippi.
Supervision: Harald Baumeister, Eva-Maria Messner.
Writing – original draft: Yannik Terhorst.
Writing – review & editing: Yannik Terhorst, Paula Philippi, Lasse B. Sander, Dana
Schultchen, Sarah Paganini, Marco Bardus, Karla Santo, Johannes Knitza, Gustavo C.
Machado, Stephanie Schoeppe, Natalie Bauereiß, Alexandra Portenhauser, Matthias Domhardt, Benjamin Walter, Martin Krusche, Harald Baumeister, Eva-Maria Messner.
References
1.
James SL, Abate D, Abate KH, Abay SM, Abbafati C, Abbasi N, et al. Global, regional, and national incidence, prevalence, and years lived with disability for 354 diseases and injuries for 195 countries and territories, 1990–2017: a systematic analysis for the Global Burden of Disease Study 2017. Lancet. 2018;
392: 1789–1858. https://doi.org/10.1016/S0140-6736(18)32279-7 PMID: 30496104
2.
Albrecht U. Chancen und Risiken von Gesundheits-Apps (CHARISMHA) [chances and risks of mobile
health applications]. Albrecht U, editor. Medizinische Hochschule Hannover; 2016.
3.
Terhorst Y, Rathner E-M, Baumeister H, Sander L. “Help from the app store?”: A systematic review of
depression apps in the German app stores. Verhaltenstherapie. 2018; 28.
4.
Donker T, Petrie K, Proudfoot J, Clarke J, Birch M-RR, Christensen H. Smartphones for smarter delivery of mental health programs: A systematic review. Journal of Medical Internet Research Journal of
Medical Internet Research; Nov 15, 2013 p. e247. https://doi.org/10.2196/jmir.2791 PMID: 24240579
5.
Ebert DD, Van Daele T, Nordgreen T, Karekla M, Compare A, Zarbo C, et al. Internet- and MobileBased Psychological Interventions: Applications, Efficacy, and Potential for Improving Mental Health: A
Report of the EFPA E-Health Taskforce. Eur Psychol. 2018; 23: 167–187. https://doi.org/10.1027/10169040/a000318
6.
Linardon J, Cuijpers P, Carlbring P, Messer M, Fuller-Tyszkiewicz M. The efficacy of app-supported
smartphone interventions for mental health problems: a meta-analysis of randomized controlled trials.
World Psychiatry. 2019; 18: 325–336. https://doi.org/10.1002/wps.20673 PMID: 31496095
7.
IQVIA. IQVIA Institute for Human Data Science Study: Impact of Digital Health Grows as Innovation,
Evidence and Adoption of Mobile Health Apps Accelerate—IQVIA. 2017 [cited 17 Oct 2019]. https://
www.iqvia.com/newsroom/2017/11/impact-of-digital-health-grows-as-innovation-evidence-andadoption-of-mobile-health-apps-accelerate/
8.
Weisel KK, Fuhrmann LM, Berking M, Baumeister H, Cuijpers P, Ebert DD. Standalone smartphone
apps for mental health—a systematic review and meta-analysis. npj Digit Med. 2019; 2: 118. https://doi.
org/10.1038/s41746-019-0188-8 PMID: 31815193
9.
Sucala M, Cuijpers P, Muench F, Cardoș R, Soflau R, Dobrean A, et al. Anxiety: There is an app for
that. A systematic review of anxiety apps. Depress Anxiety. 2017; 34: 518–525. https://doi.org/10.1002/
da.22654 PMID: 28504859
10.
Sander L, Schrondanner J, Terhorst Y, Spanhel K, Pryss R, Baumeister H, et al. Help for trauma from
the app stores?’ A systematic review and standardised rating of apps for Post-Traumatic Stress Disorder (PTSD). Eur J Psychotraumatol. 2019; accepted.
11.
Mathews SC, McShea MJ, Hanley CL, Ravitz A, Labrique AB, Cohen AB. Digital health: a path to validation. npj Digit Med. 2019; 2: 38. https://doi.org/10.1038/s41746-019-0111-3 PMID: 31304384
12.
Knitza J, Tascilar K, Messner E-M, Meyer M, Vossen D, Pulla A, et al. German Mobile Apps in Rheumatology: Review and Analysis Using the Mobile Application Rating Scale (MARS). JMIR mHealth
uHealth. 2019; 7: e14991. https://doi.org/10.2196/14991 PMID: 31381501
13.
Salazar A, de Sola H, Failde I, Moral-Munoz JA. Measuring the Quality of Mobile Apps for the Management of Pain: Systematic Search and Evaluation Using the Mobile App Rating Scale. JMIR mHealth
uHealth. 2018; 6: e10718. https://doi.org/10.2196/10718 PMID: 30361196
14.
Bardus M, van Beurden SB, Smith JR, Abraham C. A review and content analysis of engagement, functionality, aesthetics, information quality, and change techniques in the most popular commercial apps
for weight management. Int J Behav Nutr Phys Act. 2016; 13: 35. https://doi.org/10.1186/s12966-0160359-9 PMID: 26964880
15.
Meßner E, Terhorst Y, Catic S, Balci S, Küchler A-M, Schultchen D, et al. “Move it!” Standardised expert
quality ratings (MARS) of apps that foster physical activity for Android and iOS. 2019.
16.
Stoyanov SR, Hides L, Kavanagh DJ, Zelenko O, Tjondronegoro D, Mani M. Mobile App Rating Scale:
A New Tool for Assessing the Quality of Health Mobile Apps. JMIR mHealth uHealth. 2015; 3: e27.
https://doi.org/10.2196/mhealth.3422 PMID: 25760773
17.
Masterson Creber RM, Maurer MS, Reading M, Hiraldo G, Hickey KT, Iribarren S. Review and Analysis
of Existing Mobile Phone Apps to Support Heart Failure Symptom Monitoring and Self-Care Management Using the Mobile Application Rating Scale (MARS). JMIR mHealth uHealth. 2016; 4: e74. https://
doi.org/10.2196/mhealth.5882 PMID: 27302310
18.
Schoeppe S, Alley S, Rebar AL, Hayman M, Bray NA, Van Lippevelde W, et al. Apps to improve diet,
physical activity and sedentary behaviour in children and adolescents: a review of quality, features and
behaviour change techniques. Int J Behav Nutr Phys Act. 2017; 14: 83. https://doi.org/10.1186/s12966017-0538-3 PMID: 28646889
19.
Santo K, Richtering SS, Chalmers J, Thiagalingam A, Chow CK, Redfern J. Mobile Phone Apps to
Improve Medication Adherence: A Systematic Stepwise Process to Identify High-Quality Apps. JMIR
mHealth uHealth. 2016; 4: e132. https://doi.org/10.2196/mhealth.6742 PMID: 27913373
20.
Grainger R, Townsley H, White B, Langlotz T, Taylor WJ. Apps for People With Rheumatoid Arthritis to
Monitor Their Disease Activity: A Review of Apps for Best Practice and Quality. JMIR mHealth uHealth.
2017; 5: e7. https://doi.org/10.2196/mhealth.6956 PMID: 28223263
21.
Mani M, Kavanagh DJ, Hides L, Stoyanov SR. Review and Evaluation of Mindfulness-Based
iPhone Apps. JMIR mHealth uHealth. 2015; 3: e82. https://doi.org/10.2196/mhealth.4328 PMID:
26290327
22.
Machado GC, Pinheiro MB, Lee H, Ahmed OH, Hendrick P, Williams C, et al. Smartphone apps for the
self-management of low back pain: A systematic review. Best Pract Res Clin Rheumatol. 2016; 30:
1098–1109. https://doi.org/10.1016/j.berh.2017.04.002 PMID: 29103552
23.
Thornton L, Quinn C, Birrell L, Guillaumier A, Shaw B, Forbes E, et al. Free smoking cessation mobile
apps available in Australia: a quality review and content analysis. Aust N Z J Public Health. 2017; 41:
625–630. https://doi.org/10.1111/1753-6405.12688 PMID: 28749591
24.
Meßner E, Terhorst Y, Sander L, Schultchen D, Plaumann K, Sturmbauer S, et al. “When the fear kicks
in”- Standardized expert quality ratings of apps that aim to reduce anxiety. 2019.
25.
Messner E-M, Terhorst Y, Barke A, Baumeister H, Stoyanov S, Hides L, et al. Development and Validation of the German Version of the Mobile Application Rating Scale (MARS-G). JMIR m u Heal. 2019;
accepted.
26.
Domnich A, Arata L, Amicizia D, Signori A, Patrick B, Stoyanov S, et al. Development and validation of
the Italian version of the Mobile Application Rating Scale and its generalisability to apps targeting primary prevention. BMC Med Inform Decis Mak. 2016; 16: 83. https://doi.org/10.1186/s12911-016-03232 PMID: 27387434
27.
Payo RM, Álvarez MMF, Dı́az MB, Izquierdo MC, Stoyanov SR, Suárez EL. Spanish adaptation and
validation of the Mobile Application Rating Scale questionnaire. Int J Med Inform. 2019; 129: 95–99.
https://doi.org/10.1016/j.ijmedinf.2019.06.005 PMID: 31445295
28.
Baumel A, Faber K, Mathur N, Kane JM, Muench F. Enlight: A Comprehensive Quality and Therapeutic
Potential Evaluation Tool for Mobile and Web-Based eHealth Interventions. J Med Internet Res. 2017;
19: e82. https://doi.org/10.2196/jmir.7270 PMID: 28325712
29.
Dunn TJ, Baguley T, Brunsden V. From alpha to omega: A practical solution to the pervasive problem of
internal consistency estimation. Br J Psychol. 2014; 105: 399–412. https://doi.org/10.1111/bjop.12046
PMID: 24844115
30.
Revelle WW, Zinbarg R. R. Coefficients Alpha, Beta, Omega and GLB: Comments on Sijtsma. Psychometrika. 2009; 74: 145–154. Available: http://personality-project.org/revelle/publications/revelle.
zinbarg.08.pdf
31.
McNeish D. Thanks coefficient alpha, we’ll take it from here. Psychol Methods. 2018; 23: 412–433.
https://doi.org/10.1037/met0000144 PMID: 28557467
32.
Stewart LA, Tierney JF. To IPD or not to IPD? Advantages and disadvantages of systematic reviews
using individual patient data. Eval Health Prof. 2002; 25: 76–97. https://doi.org/10.1177/
0163278702025001006 PMID: 11868447
33.
Browne MW, Cudeck R. Alternative Ways of Assessing Model Fit. Sociol Methods Res. 1992; 21: 230–
258. https://doi.org/10.1177/0049124192021002005
34.
Moshagen M, Erdfelder E. A New Strategy for Testing Structural Equation Models. Struct Equ Model A
Multidiscip J. 2016; 23: 54–60. https://doi.org/10.1080/10705511.2014.950896
35.
Moshagen M. The Model Size Effect in SEM: Inflated Goodness-of-Fit Statistics Are Due to the Size of
the Covariance Matrix. Struct Equ Model A Multidiscip J. 2012; 19: 86–98. https://doi.org/10.1080/
10705511.2012.634724
36.
Hu LT, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria
versus new alternatives. Struct Equ Model. 1999; 6: 1–55. https://doi.org/10.1080/
10705519909540118
37.
Enders CK. Applied Missing Data Analysis. Library. 2010.
38.
Rosseel Y. The lavaan tutorial. 2019. http://cran.r-project.org/.
39.
MacCallum RC, Roznowski M, Necowitz LB. Model modifications in covariance structure analysis: The
problem of capitalization on chance. Psychol Bull. 1992; 111: 490–504. https://doi.org/10.1037/00332909.111.3.490 PMID: 16250105
40.
Holm S. A Simple Sequentially Rejective Multiple Test Procedure. Scand J Stat. 1979; 6: 65–70.
41.
McDonald RP. Test theory: A unified treatment. Test theory A unified treatment. 1999. p. 485.
42.
Zhang Z, Yuan K. Robust Coefficients Alpha and Omega and Confidence Intervals With Outlying
Observations and Missing Data: Methods and Software. 2016.
43.
George D, Mallery P. SPSS for Windows step by step: A simple guide and reference. 4th T4-. Boston:
Allyn & Bacon; 2003.
44.
Koo TK, Li MY. A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability
Research. J Chiropr Med. 2016; 15: 155–63. https://doi.org/10.1016/j.jcm.2016.02.012 PMID:
27330520
45.
R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical
Computing, Vienna, Austria. 2018. p. {ISBN} 3-900051-07-0. http://www.R-project.org/
46.
IBM. IBM SPSS Advanced Statistics 24. IBM. 2016; 184.
47.
Rosseel Y. lavaan: An R package for structural equation modeling. J Stat Softw. 2009; 30: 1–3. https://
doi.org/10.18637/jss.v030.i03 PMID: 21666874
48.
Revelle W. psych: Procedures for Psychological, Psychometric, and Personality Research. 2018.
49.
Schultchen D, Terhorst Y, Holderied T, Sander L, Baumeister H, Messner E-M. Using apps to calm
down: A systematic review of mindfulness apps in German App Stores. Prep. 2019.
50.
Terhorst Y, Messner E-M, Paganini S, Portenhauser A, Eder A-S, Bauer M, et al. Mobile Health Apps
for Pain? A systematic review of content and quality of pain apps in European App Stores. Prep. 2019.
51.
Bauereiß N, Bodschwinna D, Wölflick S, Sander L, Baumeister H, Messner E-M, et al. mHealth in Cancer Care—Standardised Expert Quality Ratings (MARS) of Mobile Health Applications in German App
Stores Supporting People Living with Cancer and their Caregivers. Prep. 2019.
52.
Portenhauser A, Terhorst Y, Schultchen D, Sander L, Denkinger M, Waldherr N, et al. A systematic
review and evaluation of mobile applications for the elderly. Prep. 2019.
53.
Walter B, Terhorst Y, Sander L, Schultchen D, Schmidbaur S, Messner E-M. A systematic review and
evaluation of apps for gastrointestinal diseases for iOS and android. Prep. 2019.
54.
Domhardt M, Messner E-M, Eder A-S, Sophie E, Sander L, Baumeister H, et al. Mobile-based Interventions for Depression, Anxiety and PTSD in Youth: A systematic review and evaluation of current pediatric health apps. Prep. 2019.
55.
Baumeister H, Reichler L, Munzinger M, Lin J. The impact of guidance on Internet-based mental health
interventions—A systematic review. Internet Interv. 2014; 1: 205–215. https://doi.org/10.1016/j.invent.
2014.08.003
56.
Domhardt M, Geßlein H, von Rezori RE, Baumeister H. Internet- and mobile-based interventions for
anxiety disorders: A meta-analytic review of intervention components. Depress Anxiety. 2019; 36: 213–
224. https://doi.org/10.1002/da.22860 PMID: 30450811
57.
Torous J, Andersson G, Bertagnoli A, Christensen H, Cuijpers P, Firth J, et al. Towards a consensus
around standards for smartphone apps and digital mental health. World Psychiatry. 2019; 18: 97–98.
https://doi.org/10.1002/wps.20592 PMID: 30600619
58.
Huckvale K, Torous J, Larsen ME. Assessment of the Data Sharing and Privacy Practices of Smartphone Apps for Depression and Smoking Cessation. JAMA Netw Open. 2019; 2: e192542. https://doi.
org/10.1001/jamanetworkopen.2019.2542 PMID: 31002321
59.
Grundy Q, Chiu K, Held F, Continella A, Bero L, Holz R. Data sharing practices of medicines related
apps and the mobile ecosystem: traffic, content, and network analysis. BMJ. 2019; 364: l920. https://
doi.org/10.1136/bmj.l920 PMID: 30894349
60.
Stoyanov SR, Hides L, Kavanagh DJ, Wilson H. Development and Validation of the User Version of the
Mobile Application Rating Scale (uMARS). JMIR mHealth uHealth. 2016; 4: e72. https://doi.org/10.
2196/mhealth.5849 PMID: 27287964
61.
Baumel A, Yom-Tov E. Predicting user adherence to behavioral eHealth interventions in the real world:
Examining which aspects of intervention design matter most. Transl Behav Med. 2018; 8: 793–798.
https://doi.org/10.1093/tbm/ibx037 PMID: 29471424
62.
Baumeister H, Kraft R, Baumel A, Pryss R, Messner E-M. Persuasive e-health design for behavior
change. In: Baumeister H, Montag C, editors. Mobile sensing and digital phenotyping: new developments in psychoinformatics. Berlin: Springer; 2019.
63.
Baumel A, Birnbaum ML, Sucala M. A Systematic Review and Taxonomy of Published Quality Criteria
Related to the Evaluation of User-Facing eHealth Programs. J Med Syst. 2017; 41. https://doi.org/10.
1007/s10916-017-0776-6 PMID: 28735372
64.
Venkatesh V, Morris MG, Davis GB, Davis FD. User Acceptance of Information Technology: Toward a
Unified View. MIS Q. 2003; 27: 425–478. Available: http://www.jstor.org/stable/30036540
Week 7: The Mobile Application Rating Scale (MARS)
Please browse this article as it discusses how the Mobile Application Rating Scale (MARS) was
developed.
At the end of this article, after the acknowledgements and abbreviations, you will see a link under "Multimedia Appendix 2: Mobile App Rating Scale." Click on this to download the PDF of the MARS.
Please print this mobile app rating scale and use it to evaluate the mobile app you desire to use with
your targeted population and health topic.
For each of the scored sections (sections A-E) you will identify the MEAN or average score. From this
you will calculate a mean score for sections A-D and then also a subjective mean score for section E.
This is the information you will share on your discussion board. Based on your findings, you will also be able to determine whether this is a health app that you would recommend and could post on your Facebook page.
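As an illustration of the arithmetic only, the sketch below computes the section means and the sections A-D mean in R; the item counts follow the MARS sections, and the ratings themselves are made up.

# Hypothetical MARS ratings (1-5) for one app; NA marks a not-applicable item.
ratings <- list(
  A_engagement    = c(4, 3, 4, 3, 4),
  B_functionality = c(5, 4, 4, 4),
  C_aesthetics    = c(4, 3, 4),
  D_information   = c(3, 4, 3, 3, 4, 3, NA),
  E_subjective    = c(3, 2, 3, 3)
)
section_means <- sapply(ratings, mean, na.rm = TRUE)
quality_score <- mean(section_means[c("A_engagement", "B_functionality",
                                      "C_aesthetics", "D_information")])
round(section_means, 2)
round(quality_score, 2)  # mean of sections A-D; section E is reported separately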
©Stoyan R Stoyanov, Leanne Hides, David J Kavanagh, Oksana Zelenko, Dian Tjondronegoro, Madhavan
Mani. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 11.03.2015. This is an
open-access article distributed under the terms of the Creative Commons Attribution License
(http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work, first published in JMIR mhealth and uhealth, is
properly cited. The complete bibliographic information, a link to the original publication on
http://mhealth.jmir.org/, as well as this copyright and license information must be included.
Stoyanov, S.R., Hides, L., Kavanagh, D.J., Zelenko, O., Tjondronegoro, D., Mani, M. (2015). Mobile app
rating scale: A new tool for assessing the quality of health mobile apps. JMIR mHealth uHealth,3(1), e27.
doi:10.2196/mhealth.3422
The original article can be found at: https://mhealth.jmir.org/2015/1/e27/PDF
Week 7 Assignment Part 2: Evaluating Health Programs
This week you will be continuing to work with your target population (elderly) and needed
health behavior change. The focus this week will be planning for and developing an evaluation plan for
your part in a community health fair. This week’s assignment will incorporate some of your
work from weeks 2, 4, 5, and 6.
This week you will also find and evaluate a mobile app that can be made available at your health
fair to help motivate, track, monitor, and educate your target population. You will use the
Mobile Application Rating Scale (MARS) which can be found within the article posted in the
week 7 folder. You will likely also be using the MARS again in future weeks so keep it handy.
To stay on track, please use the table below to guide your writing for this assignment. It will keep your writing precise and efficient, help you meet the grading requirements, and keep you focused on the task at hand. You can just use bullet
points for the table. In the left column, the number of sentences is what you should provide in
the actual paper. Keep it focused and brief. Please copy and paste this table into your
assignment as Appendix A, and be sure to mention it in your paper in appropriate areas.
Once you have completed the table below, you can then use that information to write your paper
in a complete and concise manner. There is no need for citations in the actual chart, but please
cite as needed in the paper. Please be sure to include the concepts from chapter 10 of your
textbook (Murdaugh, Parsons & Pender, 2019) to maximize your points.
Week 7 Assignment Part 2 Template
Identify your target population for the
health fair (one sentence)
Identify the health behavior change (one
sentence)
Where would be a couple of ideal locations to participate in a health fair for your chosen population? Why? (Ex: school, college, public health center, county or state fair, recreation center, etc.) (2-4 sentences)
1.
2.
Name of the individual health promotion model appropriate for your target population and health behavior from week 2/chapter 2 (also see pages 241-242 discussing theories/models as part of program design). Why is it appropriate for your population and health behavior? (2-3 sentences)
List 2-3 priority concepts from this HP model. How will you incorporate these concepts into your strategies for the health fair? (3-6 sentences)
1.
2.
3.
State your identified gap from week 5. Identify two ideas to bridge that gap through the health fair. (2-4 sentences)
Gap:
Idea 1:
Idea 2:
Health fair objective: what is the overarching goal of presenting at a health fair? (Keep it broad. Ex: educate on benefits of weight loss; increase motivation to exercise, etc.) (2-3 sentences)
Goal:
Identify 2-3 measurable outcomes. (2-6 sentences)
1. ____% of ____________ will _______________ by ____________ (time frame)
2. ____% of ____________ will _______________ by ____________ (time frame)
3. ____% of ____________ will _______________ by ____________ (time frame)
Are your measurable outcomes considered short, intermediate or long-term outcomes? (pages 234-238, 243). This will also help determine any follow-up you decide to do. (1-3 sentences)
Outcome 1:
Outcome 2:
Outcome 3:
From table 10-2 (page 235), what nursing sensitive outcome categories would each of your measurable outcomes above meet? (3-6 sentences)
Outcome 1:
Outcome 2:
Outcome 3:
What target population strategies will you
incorporate at the health fair to help meet
measured outcomes and overall program
objective? Please include the following:
power point (which is week 8 assignment),
mobile app info (week 7 assignment),
videos, and five social media tools (listed in
week 6 folder). In addition, what other
strategies might you incorporate? See
video in blackboard for some ideas but
remember they need to help meet your
objectives. Ex: posters, infographics,
brochures, interactive web programs/apps,
hands on demonstrations, sample products,
screenings, take home reminders (magnets,
pencils, surveys, etc.). (3-6 sentences)
What types of evaluation will you use: efficacy or effectiveness; process or outcome; quantitative, qualitative, or mixed methods. Give 2-3 sentences to justify (see pages 231-234).
1. Efficacy or effectiveness
2. Process or outcome
3. Quantitative, qualitative, or mixed methods
How will you gather your evidence to show if your outcomes were met? Can include more than one method. Ex: pretest/post-test; immediate survey; follow-up survey; interview; questionnaire. What form will your evidence take? (paper, online, or phone survey, mobile survey, mobile app downloaded data, etc.). For surveys or questionnaires, will they be self-report, objective measures, perspectives, or a combination? (see page 240, focus the evaluation design, and page 242, selecting outcomes). For intermediate or long-term measurable outcomes, how will you follow up? (Reminders: email or text reminder, phone call, post card mailer. Methods: online survey, mail-in survey, phone interview, mobile app, etc.). (6-8 sentences)
Evidence:
Form:
Intermediate or long-term evidence reminder and method:
How will you show delivered dose (satisfaction) and received dose (participation)? See page 242. (1-2 sentences)
Delivered dose:
Received dose:
Choose a mobile app that can help with the needed behavior change of your target population. Consider the accessibility, cultural, and socioeconomic needs when choosing an app (pages 243-245). You will need to actually obtain the app, use its various functions and then evaluate it using the Mobile Application Rating Scale (MARS) found in the week 7 folder. Discuss your evaluation as outlined in the fields below. Address Section F to give your overall evaluation of the app for your target population and behavior and whether you feel it addresses awareness, knowledge, attitudes, intentions to change, help seeking, and behavior change.
For your paper, you do not need to include all of this information. Just include: the name of the app, brief description, its strategies, and why you think it is appropriate or not appropriate for your target population based on the MARS evaluation. (5-8 sentences)
This can be the mobile app you include in your week 8 assignment if you evaluate it to meet the needs of your population and behavior.
Name of app:
Overall rating:
Developer:
Last update:
Cost for basic version:
Cost for upgraded version:
Platform:
Brief description:
Focus (SATA):
Theoretical background/strategies (SATA):
Affiliations:
Groups:
Technical aspects (SATA):
Section A mean score:
Section B mean score:
Section C mean score:
Section D mean score:
Sections A-D mean score:
Section E mean score:
Address Section F:
1. Awareness
2. Knowledge
3. Attitudes
4. Intention to change
5. Help seeking
6. Behavior change
Conclusion: put it all together – your
thoughts about presenting at a health fair,
anything else you might consider, how you
might advertise to draw your target
population. (4-8 sentences)
Assignment Part 2: Evaluating a Health Promotion Plan Rubric
Abstract (5%)
• Proper formatting (1.5 points)
• Concise summary of paper kept to 150-250 words (1.5 points)
• Clearly state hypothesis (2 points)

Introduction (5%)
• Brief synopsis including target population, health behavior, identified gap and ideas to bridge (2 points)
• Health fair and use to meet needs of population, ideal location for target population and behavior (3 points)

Individual Health Promotion Model (10%)
• Which individual HP model will be used (2 points)
• 2-3 priority concepts from HP model to incorporate into strategies and how (8 points)

Health Fair Goal/Objectives and Outcomes (15%)
• Overarching goal of health fair program (3 points)
• 2-3 measurable outcomes (6 points)
• Are outcomes short, intermediate or long-term (3 points)
• Which category of nursing sensitive outcomes do measurable outcomes align with (3 points)

Health Fair Strategies (15%)
• Includes strategies of PowerPoint, Facebook, mobile app, websites and why (5 points)
• Includes other strategies and why (10 points)

Evaluation of Health Fair Program (15%)
• Types of evaluation used: efficacy/effectiveness, process/outcomes, quantitative/qualitative/mixed methods (3 points)
• How evidence will be gathered, form of evidence and why (5 points)
• Follow-up evaluation for intermediate and long-term outcomes and why (5 points)
• Measuring delivered and received dose (2 points)

Mobile Application Evaluation (15%)
• Name, rating, developer, cost, description, theoretical background of app (5 points)
• Appropriateness of app based on scores and evaluation to targeted population and behavior as directed (10 points)

Conclusion as directed (10%)
• Must include a summarization of the paper components

APA (5%)
• Correct use of in-text citations including direct quotes (1 point)
• All references cited and all citations in references (1 point)
• Proper APA formatting for title page, introduction, body, conclusion and reference page (3 points)

Writing, Grammar, and Spelling (5%)
• Correct spelling, punctuation and sentence structure (2 points)
• Writing is organized, clear, follows logical progression, and uses scholarly tone (1 point)
• Writing is within defined sentence parameters as directed in table (2 points)
TOTAL GRADE
General Comments:
A Health Promotion Plan for the Elderly
Abstract
Older adults require adequate nutrients to lead healthy lives, yet this population seldom
observes healthy eating habits due to individual and external factors associated with ageing.
Even though they might be willing to consume nutrient-rich meals, these individuals encounter
obstacles that dissuade them from adopting healthy eating patterns, including financial and
immobility issues. This underscores the need to establish a health promotion plan focused on helping
the elderly follow appropriate dietary requirements when shopping for food. This report explores
the factors that predispose the target population to the risks of consuming foods with inadequate
nutrients. Additionally, it covers the incidence and prevalence of health care conditions
associated with malnutrition. Then, it suggests a health promotion model that considers the
unique needs of the elderly and presents steps to enhance their experience during the change
process. The health promotion plan describes the project’s outcomes and goals, interventions,
barriers to change, and timeframe to achieving the set objectives.
A Health Promotion Plan for the Elderly
While dietary needs change as people age, the elderly often fail to meet their bodies'
health requirements due to health decline, mobility issues, and income constraints. Older people
might develop appetite loss and oral health decline due to ageing, limiting their food choices and
consumption (Shlisky et al., 2017). Additionally, they might encounter shopping problems or
financial constraints that further confine what they can eat. Amidst these issues, older people
have a greater need for nutrient-rich foods because their bodies become less efficient in
absorbing the required nutrition. In this regard, the population can benefit immensely from a diet
change program that guides their food decisions and choices. This paper contains a discussion of
the unique needs of older people that dictate their dietary requirements to maintain healthy
living. Additionally, it offers a health promotion model that will facilitate successful
communication to the target audience.
Etiology
Brennan (2021) claims that as people age, they lose muscle mass and become less
physically active, leading to decreased metabolism because they burn calories at a lower rate.
These changes typically begin around age 40, when the body starts forming fat instead of muscle. Hence,
the population needs a balanced diet consisting of lean protein and filling foods to boost muscle
mass and increase metabolism.
Ageing predisposes people to the risk of developing chronic diseases with high
diagnosis and treatment costs (Shlisky et al., 2017). Additionally, older people receive less
support due to changing family dynamics, yet they require nutrient-rich diets because of oral
health issues. For instance, they might develop loss of appetite, dental and chewing difficulties,
and decreased mobility that limits access to high-quality foods. These socioeconomic constraints limit the population's dietary choices and expose them to further health problems.
While older adults need to adhere to a healthy lifestyle, including nutrition choice, due to
their body demands, the population has low health literacy that inhibits their ability to enjoy
optimal health care. Geboers et al. (2018) argue that cognitive decline or poor cognitive function
that develops with age negatively affects the degree of health literacy in this population. Domains that influence older people's cognitive decline include memory, mental flexibility, and information processing speed.
Incidence and Prevalence
Older Americans are at risk of malnutrition because of the problems that they encounter
when attempting to consume a healthy diet. In 2018, those aged 65 and above numbered 52.4 million, comprising 15.6% of the country's population (Fulmer et al., 2021). Experts forecast this
figure to increase to 20% of the American population by 2030. Ethnic and racial minorities will
account for the rapid rise of the older population, with experts estimating a 135% increase
between 2017 and 2024. The increased population of older adults means that more Americans
will face nutrition issues in the future. Crogan (2017) estimates the prevalence of malnutrition
among the elderly at 15%. Among older adults who cannot leave their house, malnutrition prevalence ranges from 5% to 44%. Malnutrition prevalence is highest among those living in nursing homes and hospitals, ranging from 30% to 85% and from 20% to 60%,
respectively.
A Health Promotion Model
I would use the Health Belief Model (HBM) to communicate the change initiative to the
target audience at a community health fair because the approach is effective in influencing
individual behavioural transformations. The model's core principle is that an individual's willingness to
improve their health behaviours depends on their health perceptions (Green et al., 2021). Hence,
a person’s individual beliefs toward health inform their behaviour and decisions in their lives.
The approach bases its interventions on four aspects that will determine the steps of
communicating to the audience, including perceived susceptibility to becoming sick, perceived
ill-health severity, perceived advantages of behavioural modifications, and obstacles to
behavioural change.
Based on the HBM, I would focus on creating awareness of the benefits of lifestyle change among older adults to inspire their readiness to change their behaviours. I would place health promotion messages on the news channels most preferred by the elderly to encourage them to embrace a healthy lifestyle. Following this model, I would assess the perceptions that older adults hold about their risk of becoming ill from failing to consume nutrient-rich diets. The assessment would encompass their beliefs about the severity of their dietary habits, the obstacles to consuming healthy foods, and the advantages of changing their habits. I would then centre the change approach on sharing messages that seek to transform the target population's beliefs about their health and the need to make a change. Additionally, the initiative would explore ways to overcome the obstacles that might prevent them from consuming nutrient-rich fresh foods.
Current Interventions
Existing interventions focus on promoting high-quality health care for older adults to address nutrition problems. Incentives for providing quality health care include revised payment approaches that promote innovation and encourage efficient care (Fulmer et al., 2021). For example, Community Aging in Place – Advancing Better Living for Elders (CAPABLE) is a flexible approach that offers waiver dollars for in-home care to improve care outcomes for this population. However, such approaches adopt a general strategy for addressing older adults' health needs without targeting the unique obstacles and challenges the population faces.
Plan for the Target Population
Outcome and Goals
The food that most older adults consume does not provide them with adequate nutrients to achieve optimum health. A priority health behaviour outcome will be to maintain a healthy nutrient supply to the body. To achieve this outcome, the population will consume daily servings of fruits, vegetables, whole grains, fat-free dairy products, and seafood.
Evidence-Based Interventions
According to the National Institute on Aging (NIH, 2021), older adults can consume a healthy diet that meets their nutrition needs by adopting a healthy U.S.-style eating pattern, a Mediterranean-style eating pattern, or a vegetarian eating pattern. In this program, I would implement the Mediterranean-style eating pattern, which emphasizes greater consumption of seafood and fruits and fewer dairy products. The steps that will help the target population meet the goal of eating nutritious meals entail planning in advance, finding recipes, and identifying budget-friendly foods. Meal planning will allow the elderly to eat meals with a variety of nutrients throughout the day. Likewise, creating a list of affordable and nutritious foods will help the population adhere to the lifestyle change intervention.
Barriers to The Stages of Change
Personal and external barriers might impede the intervention and contribute to relapse in the target population. Petroke et al. (2017) suggest that personal barriers, including food costs, physical limitations, and low motivation to change, affect older adults' nutrition intake. External barriers include the distance to markets and a lack of social cohesion. The nutrition plan must address these barriers to effectively persuade the target population to embrace the healthy eating initiative.
Timeframe
The timeframe for implementing the nutrition plan for older adults will be three months, which allows them to observe the benefits of the change. While the target population might notice improvements in their skin, energy levels, and digestion shortly after starting the program, enough time is necessary for the benefits to show in their blood tests and other health indicators. In addition, they will need education on how to calculate the nutrition content of foods and their bodies' nutrient needs to attain the behavioural change goals.
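As a minimal illustration of what such a calculation could look like, the Python sketch below compares one day's logged intake with assumed daily targets. The target figures (for example, 1.0 g of protein per kilogram of body weight) are placeholder teaching values, not clinical recommendations drawn from this plan.

def daily_targets(weight_kg):
    """Return assumed daily nutrient targets for an older adult (illustrative)."""
    return {
        "protein_g": round(1.0 * weight_kg, 1),  # assumed 1.0 g per kg body weight
        "fibre_g": 25.0,                          # assumed fixed target
        "water_ml": 2000.0,                       # assumed fixed target
    }

def gaps(intake, targets):
    """Return the shortfall (positive) or surplus (negative) for each nutrient."""
    return {k: round(targets[k] - intake.get(k, 0.0), 1) for k in targets}

targets = daily_targets(weight_kg=70)
logged = {"protein_g": 45.0, "fibre_g": 12.0, "water_ml": 900.0}
print(gaps(logged, targets))
# {'protein_g': 25.0, 'fibre_g': 13.0, 'water_ml': 1100.0}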
In conclusion, an effective health promotion plan for older adults should cover their unique nutrient needs and address the challenges the population encounters to help them live healthy lives. With advancing age, these individuals become less physically active, lose muscle mass, develop chronic diseases, and experience cognitive declines that negatively affect their health. These changes predispose them to malnutrition and other health conditions that undermine their wellbeing. Given its emphasis on individual behavioural change, the HBM will provide the nutrition change program with a guideline not just for changing older adults' attitudes toward food but also for encouraging them to embrace the lifestyle change initiative.
References
Brennan, D. (2021). How much does your metabolism slow down as you age? WebMD (webmd.com).
Crogan, N. L. (2017). Nutritional problems affecting older adults. Nursing Clinics of North America, 52(3), 433-445. https://doi.org/10.1016/j.cnur.2017.04.005
Fulmer, T., Reuben, D. B., Auerbach, J., Fick, D. M., Galambos, C., & Johnson, K. S. (2021). Actualizing better health and health care for older adults. Health Affairs, 40(2), 219-225. https://doi.org/10.1377/hlthaff.2020.01470
Geboers, B., Uiters, E., Reijneveld, S. A., Jansen, C. J., Almansa, J., Nooyens, A. C., Verschuren, W. M., De Winter, A. F., & Picavet, H. S. (2018). Health literacy among older adults is associated with their 10-years' cognitive functioning and decline – the Doetinchem cohort study. BMC Geriatrics, 18(1). https://doi.org/10.1186/s12877-018-0766-7
Green, E. C., Murphy, E. M., & Gryboski, K. (2021). The health belief model. The Wiley Encyclopedia of Health Psychology, 211-214. https://doi.org/10.1002/9781119057840.ch68
NIH. (2021). Healthy meal planning: Tips for older adults. National Institute on Aging (nih.gov).
Shlisky, J., Bloom, D. E., Beaudreault, A. R., Tucker, K. L., Keller, H. H., Freund-Levi, Y., Fielding, R. A., Cheng, F. W., Jensen, G. L., Wu, D., & Meydani, S. N. (2017). Nutritional considerations for healthy aging and reduction in age-related chronic disease. Advances in Nutrition: An International Review Journal, 8(1), 17-26. https://doi.org/10.3945/an.116.013474
The Health Belief Model
Human behaviour serves as a crucial element in disease prevention and health maintenance. Scientific evidence shows that adopting healthy behaviours reduces mortality and morbidity. Accordingly, healthcare professionals have proposed different behavioural change models to guide behavioural transformation. Predictors of behavioural change allow people to target and design health-enhancing programs by identifying intervening variables, moderators, and mediators. In this scenario, I would rely on the Health Belief Model to explain and guide my diet change process.
The diet change process would revolve around replacing my current unhealthy diet with a healthy diet option. My unhealthy diet consists of highly processed food, sugary beverages, red meat, large amounts of salt, and junk food. I have realized that consuming unhealthy food and drinks makes me highly susceptible to conditions such as stroke, cancer, hypertension, obesity, osteoporosis, and the cardiovascular diseases linked to saturated fat. I understand that adopting healthy eating habits can protect me from these conditions. A healthy diet comprises products such as whole grains, unsaturated fats, vegetables, fruits, and controlled amounts of animal protein. For instance, I understand that consuming unsaturated fats protects people from cholesterol buildup and heart disease. The health improvement program would also involve avoiding sugary drinks, drinking adequate water, and reducing sodium intake.
I would evaluate the diet change plan using the Health Belief Model (HBM) to ensure that it produces health improvement and disease prevention outcomes. The HBM explains why people accept or reject healthy behaviours or disease prevention strategies. The model assumes that people choose actions based on their understanding of the outcomes associated with those actions. The strengths of the HBM have increased its popularity in healthcare intervention programs (Pender et al., 2019). Its major strength is its reliance on simple health-related concepts, which make the model easy to test and implement in health intervention programs. The model also provides an important theoretical framework for investigating the cognitive factors that determine diverse human behaviours. Still, the model has weaknesses, such as the difficulty of delineating its variables and its neglect of the social, environmental, and economic factors that influence human behaviour.
The HBM serves as a foundation for numerous practical interventions across the different behaviours that affect human health. Scientists have broadly categorized its fields of application into clinical use, sick-role behaviour, and preventive health behaviour. The model is applied to preventive health behaviours such as dental behaviours, genetic screening, diet, contraceptive use, alcohol use, breast self-examination, vaccination, health screening, and smoking. Sick-role behaviours revolve around the medical interventions needed to control specific diseases. Healthcare professionals rely on the model to create effective programs for behavioural transformation in public health. Using the model allows individuals to acquire information about the severity and prevalence of specific health conditions. Additionally, the model helps people uncover the social, financial, and medical consequences of certain illnesses.
Szabó and Pikó (2019) observe that the Health Belief Model can be used to guide healthy eating behaviour. The authors note that healthy eating is an important contributor to good health. Studying an adolescent population enables them to test the HBM's capacity to transform individuals' behaviours. In the article, the authors observe that adolescence represents a sensitive developmental stage with respect to behavioural change (Szabó & Pikó, 2019), which makes diet changes difficult to implement among the teenage population. In this context, the authors confirm that the HBM is effective in promoting healthy eating among adolescents. The model overcomes barriers to change by enlightening teenagers about the benefits associated with healthy nutrition. For instance, it shows that healthy nutrition protects people from conditions such as diabetes, obesity, stroke, and cardiovascular illnesses.
The HBM's role in the diet change process can be represented graphically. The graph integrates the factors that serve as foundations for people's health decisions: perceived severity, health value, cues to action, perceived susceptibility, and perceived benefits. Perceived susceptibility reflects how prone individuals feel to lifestyle diseases given their unhealthy eating behaviours (Pender et al., 2019). Perceived severity captures the consequences of unhealthy eating habits, including reduced productivity, complications, and premature death. The benefits associated with healthy eating include general welfare and reduced treatment costs, while the barriers to adopting a healthy diet include meal preparation challenges, time, and cost issues. Health value represents the emotional and physical costs associated with adopting the health behaviour. Finally, cues to action are the signals that prompt people to eat healthily, such as mass media messages and health reports.
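As a purely illustrative aid, the short Python sketch below shows one way these factors could be recorded and combined into a rough "likelihood of action" score. The 0-5 ratings, the additive formula, and the example values are invented for illustration; the HBM itself does not prescribe any numeric scoring.

from dataclasses import dataclass

@dataclass
class HBMFactors:
    perceived_susceptibility: float  # how likely illness feels (0-5, assumed scale)
    perceived_severity: float        # how serious the consequences feel (0-5)
    perceived_benefits: float        # expected gains from changing (0-5)
    perceived_barriers: float        # expected costs and obstacles (0-5)
    health_value: float              # importance placed on health (0-5)
    cues_to_action: float            # strength of prompts, e.g. media messages (0-5)

    def likelihood_of_action(self):
        """Toy score: benefits and motivators add, barriers subtract."""
        return (self.perceived_susceptibility + self.perceived_severity
                + self.perceived_benefits + self.health_value
                + self.cues_to_action - self.perceived_barriers)

me = HBMFactors(4, 4, 5, 3, 4, 3)
print(me.likelihood_of_action())  # -> 17 on this arbitrary scale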
Figure 1: The Representation of Diet Change Process
Dietary Component          Unhealthy Level (g/day)    Healthy Level (g/day)
Carbohydrates              450                        250
Proteins                   200                        60
Fats                       100                        50
Fiber                      5                          25
Water                      800                        3000
Fruits and Vegetables      100                        450
Salt                       10                         4
Figure 2: The Table Representing the Diet Change Plan
Figure 3: The Graphical Representation of Diet Change Process
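A minimal Python sketch, assuming only the values in Figure 2 above, shows how the plan could be stored as data and summarized; the structure is illustrative rather than part of the plan itself.

diet_plan = {
    # item: (unhealthy g/day, healthy g/day), taken from the table above
    "Carbohydrates": (450, 250),
    "Proteins": (200, 60),
    "Fats": (100, 50),
    "Fiber": (5, 25),
    "Water": (800, 3000),
    "Fruits and Vegetables": (100, 450),
    "Salt": (10, 4),
}

for item, (before, after) in diet_plan.items():
    direction = "decrease" if after < before else "increase"
    print(f"{item}: {direction} from {before} to {after} g/day")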
The behavioural change process occurs in six stages, namely pre-contemplation, contemplation, preparation, action, maintenance, and relapse, and these stages are relevant to my diet change process. Currently, I am at the action stage (Cherry, 2021). At this stage, I have started replacing snacks and soft drinks with fruits, whole grains, and vegetables. However, I have encountered financial barriers in my attempt to adopt a healthy diet. For instance, acquiring the ingredients for a healthy meal comes at a considerable financial cost. Eating a dish with healthy ingredients in restaurants has been an expensive endeavour for me, whereas junk food and soft drinks are available at affordable prices in local fast-food restaurants. I have relied on the HBM to justify the expenditure on healthy meals.
The HBM is highly reliable for justifying diet change. People use the model to evaluate the benefits associated with adopting healthy behaviours. Specifically, the model enables people to comprehend, describe, and predict healthy behavioural patterns such as regular exercise and healthy diet consumption. Therefore, the HBM would help me understand and accept the financial cost associated with adopting healthy meals.
References
Cherry, K. (2021). The 6 stages of behavior change. Verywell Mind.
https://www.verywellmind.com/the-stages-of-change-2794868
Pender, N. J., Murdaugh, C. L., & Parsons, M. A. (2019). Health promotion in nursing practice.
New York City: Pearson.
Szabó, K., & Pikó, B. (2019). Likelihood of healthy eating among adolescents based on the
health belief model. Developments in Health Sciences, 2(1), 22-27.
https://doi.org/10.1556/2066.2.2019.004