Context-Aware Human Activity Recognition in Healthcare Recommender Systems

Explore a systematic review of deep learning models applied to context-aware human activity recognition (HAR) in healthcare recommender systems using wearable and smartphone sensor data, covering studies from 2021 to 2024.

Over the past few years, deep learning has substantially advanced Human Activity Recognition (HAR), particularly within healthcare recommender systems. With smartwatches, fitness trackers, and smartphones now widespread, sensors such as accelerometers, gyroscopes, and heart rate monitors can monitor users continuously. The data gathered this way makes it possible to build healthcare systems that understand users’ current needs and provide intelligent recommendations.

Introduction

The first reviewed study examines deep learning for video-based HAR, concentrating on CNN and RNN architectures for video data. It reports strong performance in activity recognition and temporal sequence modeling, but remains focused on video-based HAR rather than wearable sensors or healthcare.

Another paper examines deep learning models used for video-based HAR, evaluating their effectiveness and the evolution of their architectures. It emphasizes strong feature extraction and model scalability, but it does not discuss healthcare technology or healthcare recommendations.

A further survey examines HAR systems deployed on smartphones, smartwatches, and smart glasses. It reviews preprocessing methods and the types of activities recognized, and explains the role of each sensor in deep learning-based recognition. The paper covers gesture recognition alongside sensor information, yet stops at 2023 and does not address recommender systems.

 

In [21], the authors study HAR by combining machine learning and deep learning with data collected from devices and open datasets. The research highlights the need for energy-efficient models in real-time health applications, yet does not cover them comprehensively.

 

In another review, deep learning approaches are used to study HAR, with emphasis on CNNs, RNNs, and LSTMs as key architectures for sensor data. It covers both normalization and evaluation, but only includes studies published before 2021 and gives little attention to healthcare recommender systems.

HAR in healthcare using smartphones

Researchers in another reviewed study examine HAR in healthcare using smartphones, looking at how deep learning automates feature extraction and improves results in both healthcare and fitness monitoring. This survey ends in 2021 and does not consider context-aware or recommender-based HAR systems.


In general, the current literature provides a solid background for deep learning-based HAR, but many studies concentrate on visual HAR or general-purpose applications. Context-aware HAR for healthcare recommender systems has not been widely studied with recent datasets and wearable sensor data. This review therefore systematically explores the latest developments, challenges, and future directions at this intersection.

 

Paper | Deep Learning | Context-Aware | Healthcare | Open Dataset | Papers Till 2024
[18]  | Yes           | No            | No         | Yes          | No
[19]  | Yes           | No            | Yes        | Yes          | No
[20]  | Yes           | Partial       | No         | Yes          | Yes
[21]  | Yes           | No            | Partial    | Yes          | No
[22]  | Yes           | No            | No         | Yes          | No
[23]  | Yes           | No            | Yes        | Yes          | Yes

 

 Systematic Literature Review Protocol

A PRISMA-based protocol was used to ensure that this systematic review is transparent, reproducible, and methodologically rigorous. The protocol sets out the steps followed, from searching the literature and screening for eligibility to selecting studies and extracting the final data.

Inclusion Criteria

Only studies that met all of the following conditions were included in the review:

  • Peer-reviewed journal articles or conference papers.
  • Published between 2021 and 2024.
  • Focused on deep learning, HAR, and wearable or smartphone technologies in healthcare services.
  • Written in English.
  • Studies that apply or evaluate deep learning models for HAR.

 

Exclusion Criteria

The following kinds of studies were excluded:

  • Papers and articles written in languages other than English.
  • Studies that do not address mobile, wearable, or healthcare applications of HAR.
  • Works that did not use any type of deep learning algorithm.
  • Abstract-only or presentation-only documents with no full text (e.g., workshop summaries).

 

Databases Searched

To find the most relevant studies on the topic, we consulted the following academic databases:

  • IEEE Xplore
  • ACM Digital Library
  • ScienceDirect
  • SpringerLink
  • Google Scholar

Search Strategy

To filter the results, we searched all the chosen databases using keywords combined with Boolean operators. Representative search queries included:

 

  • “context-aware” AND “human activity recognition” AND “deep learning”
  • “context-aware computing” AND “healthcare recommender systems”
  • “wearable sensors” AND “deep learning” AND “human activity recognition”

1 Research Methodology – context-aware human activity recognition in healthcare

Context-Aware Human Activity Recognition (HAR)

1.1 Introduction and Objective of Research

This section explains the purpose of the study and how the review is organized.

In this SLR, we examine the effectiveness of deep learning (DL) methods for context-aware Human Activity Recognition (HAR) as used in healthcare recommender systems. The study focuses on wearable devices and smartphones because they carry a rich set of sensors. We want to see how various DL architectures perform in healthcare-related tasks such as fall detection, rehabilitation tracking, and remote patient monitoring. The guidelines of the Cochrane Handbook, PRISMA 2020, and established SLR protocols were followed in preparing this review.

 

1.2 Information Sources and Search Process

Literature relevant to this review was drawn from five leading scientific databases: IEEE Xplore, Springer, ScienceDirect, Taylor & Francis, and Oxford Academic. To identify relevant studies, we defined the scope of the topic, formulated a Boolean search query, and selected only studies that fit our criteria.

 

1.3 Research Questions

This SLR examines new developments in DL for HAR in healthcare recommender systems from 2021 to 2024. The study aims to address the following questions:

  • RQ1: Which deep learning architectures were applied to HAR in healthcare between 2021 and 2024?
  • RQ2: What are the strengths and weaknesses of CNN, RNN, LSTM, and other DL models for HAR in healthcare?
  • RQ3: How accurate, robust to noisy data, quickly generalizable, and effective are DL models in healthcare HAR?
  • RQ4: What are the common characteristics of open datasets used in healthcare HAR research?
  • RQ5: Can these datasets be used to train DL-based HAR systems in healthcare?
  • RQ6: What are the main open challenges in applying DL to healthcare HAR with wearable and smartphone data?
  • RQ7: What future research directions could advance DL for healthcare-related human activity recognition?

 

1.4 Search String and Refinement

A refined search string was used to guarantee thorough coverage:

(“HAR” OR “Human Activity Recognition”) AND (“CNN” OR “LSTM” OR “RNN” OR “Transformer” OR “Deep Learning”) AND (“Healthcare” OR “Patient Monitoring” OR “Fall Detection”) AND (“Smartphones” OR “Smartwatches” OR “Wearable Devices”)

This design was intended to select studies that examine healthcare applications using DL and sensor-equipped smart devices. Because papers using the terms “future directions” and “performance metrics” were inconsistently included, those terms were removed from the analysis.
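As a rough illustration of how such a query can be kept consistent across databases, the following Python sketch (our own, not part of the review protocol; the group names are assumptions) assembles the refined string from its four term groups:

```python
# Minimal sketch (not from the reviewed papers): assembling the refined
# Boolean search string from its term groups, so the same query can be
# reused or adjusted per database. Group names are illustrative only.

TERM_GROUPS = {
    "activity": ["HAR", "Human Activity Recognition"],
    "model":    ["CNN", "LSTM", "RNN", "Transformer", "Deep Learning"],
    "domain":   ["Healthcare", "Patient Monitoring", "Fall Detection"],
    "device":   ["Smartphones", "Smartwatches", "Wearable Devices"],
}

def build_query(groups: dict[str, list[str]]) -> str:
    """Join each group with OR, then combine the groups with AND."""
    clauses = []
    for terms in groups.values():
        clauses.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(clauses)

if __name__ == "__main__":
    print(build_query(TERM_GROUPS))
```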

 

1.5 Filtration Criteria

Only the most relevant studies were retained for the research, according to the following filtration criteria (a small filtering sketch follows this list):

  1. Only primary research studies were considered; reviews were excluded.
  2. Publications were restricted to January 2021 through March 2024 to keep the data relevant and up to date.
  3. Only studies based on deep learning techniques were included; studies that applied classical machine learning, federated learning, or a mix of methods were excluded.
  4. The focus was on smartphone and wearable sensor HAR; studies based on video, radar, or ambient sensors were excluded.
  5. Only models built on open-source datasets were considered, so that the work can be reproduced and applied more widely.
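The sketch below is our own illustration of how these five filters could be applied to a spreadsheet of candidate records; the file name and column names are hypothetical, not artifacts of the actual review.

```python
# A minimal sketch (our own illustration, not tooling from the review) of how
# the filtration criteria above could be applied to a table of candidate
# studies. The CSV file and column names are hypothetical.
import pandas as pd

studies = pd.read_csv("candidate_studies.csv")  # hypothetical export from the databases

mask = (
    (studies["study_type"] == "primary")                       # 1. primary studies only
    & studies["year"].between(2021, 2024)                      # 2. 2021 to March 2024
    & (studies["method_family"] == "deep_learning")            # 3. deep learning only
    & studies["sensor_type"].isin(["smartphone", "wearable"])  # 4. smartphone/wearable sensors
    & studies["dataset_open"]                                  # 5. open-source datasets
)

filtered = studies[mask]
print(f"{len(filtered)} of {len(studies)} candidate studies passed the filters")
```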

1.6 Selection Criteria

The research articles that passed the first filtration were then assessed using a well-defined scoring scheme to ensure their relevance to, and alignment with, the goals of this systematic review. To keep personal bias out of the marks, the rating process used five main factors: recency, significance and applicability of the study, reliance on deep learning, availability of public datasets, and quality of result presentation. To capture the newest progress, papers from the two most recent years received the highest recency scores. Priority also went to studies of healthcare applications such as patient monitoring, fall detection, and chronic illness tracking. Particular value was placed on deep learning approaches that account for time, external context, and the individual user in real-life healthcare recommender systems.

When comparing papers, priority was given to those that used accessible and well-documented data sources. Finally, the completeness of each study's performance reporting was examined, including accuracy, precision, recall, F1-score, and computational cost. Only papers scoring at least 16 out of 20 were carried into the final review. Scores were computed with a fixed formula so that individual evaluators could not skew the process unfairly (a simple sketch of such a weighted score follows). Figure 2 shows the total number of studies selected with this scoring approach.
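To make the idea of a formula-based score concrete, here is a minimal sketch under our own assumptions: the five criteria mirror those listed above, each rated 0 to 4 for a total out of 20, but the exact weights and sub-scores used in the actual review are not reproduced.

```python
# Illustrative only: a formula-based quality score over the five criteria named
# above, each rated 0-4 so the total is out of 20. The ratings shown here are
# hypothetical, not the ones used in the actual review.
from dataclasses import dataclass

@dataclass
class StudyScore:
    recency: int               # 0-4, higher for the most recent papers
    healthcare_relevance: int  # 0-4, patient monitoring, fall detection, etc.
    deep_learning: int         # 0-4, depth of the DL methodology
    open_dataset: int          # 0-4, public and well-documented data
    reporting: int             # 0-4, accuracy/precision/recall/F1 and cost reported

    def total(self) -> int:
        return (self.recency + self.healthcare_relevance + self.deep_learning
                + self.open_dataset + self.reporting)

CUTOFF = 16  # minimum total (out of 20) required for inclusion

example = StudyScore(recency=4, healthcare_relevance=3, deep_learning=4,
                     open_dataset=3, reporting=3)
print(example.total(), example.total() >= CUTOFF)  # 17 True
```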

 

1.7 Study Selection

Our search of the main digital libraries returned 1231 articles in total: 104 from IEEE Xplore, 799 from ScienceDirect, 324 from Springer, and two each from Taylor & Francis and Oxford Academic. These databases are authoritative and scientifically reliable, and they regularly publish recent work on HAR, deep learning, and healthcare recommender systems.

 

Applying the filtration criteria from Section 1.5 (publication year, use of deep learning models such as CNN, RNN, LSTM, and Transformer, deployment on smart devices, and open datasets) reduced the list to 93 studies. These studies were then assessed with the grading scheme, which examined publication quality, dataset content, and the diversity of recognized activities.


A total of 67 studies scored 17 or higher on the checklist (out of 20) and were used for the final systematic literature review. This process favors research that offers practical outcomes for today’s smartphones and wearable devices. Most of the selected studies come from 2023 and 2024, with a handful from 2021 and 2022.

 

Figure 3 shows the distribution of the selected studies over the years, illustrating the growing attention to this field. For each study, we recorded the dataset used (open or closed), the number of activities, the deep learning models applied, the average accuracy achieved, and the journal ranking assigned by the Higher Education Commission (HEC) [27]. This broad data mapping keeps the review transparent and reusable.
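To illustrate what such a per-study mapping record might look like, here is a minimal sketch under our own assumptions; the field names and example values are ours, not the exact schema or data used in the review.

```python
# Hypothetical per-study extraction record mirroring the fields described above;
# the field names and the example values are illustrative, not real review data.
from dataclasses import dataclass, field

@dataclass
class StudyRecord:
    study_id: str                 # citation key of the paper
    dataset_open: bool            # open (True) or closed (False) dataset
    n_activities: int             # number of activity classes recognized
    models: list[str] = field(default_factory=list)  # e.g. ["CNN", "LSTM"]
    avg_accuracy: float = 0.0     # average reported accuracy (%)
    hec_rank: str = ""            # journal ranking per the HEC classification

record = StudyRecord(study_id="example-2023", dataset_open=True,
                     n_activities=6, models=["CNN", "LSTM"],
                     avg_accuracy=95.2, hec_rank="W")
print(record)
```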

Conclusion – context-aware human activity recognition in healthcare

This systematic review has revealed a growing interest in deep learning applications for human activity recognition (HAR) in healthcare, particularly when integrated with wearable and smartphone technologies. While many studies have demonstrated the effectiveness of CNNs, RNNs, LSTMs, and Transformers in HAR, there remains a lack of focus on context-aware recommender systems specifically tailored to healthcare. Additionally, recent developments are still underexplored in terms of real-time implementation, energy efficiency, and personalization. Moving forward, there is significant opportunity to address these gaps by leveraging more recent datasets, enhancing contextual awareness, and improving scalability for real-world healthcare applications. This review provides a foundation for future research and system development in this promising intersection of healthcare, deep learning, and ubiquitous computing.

FAQs – context-aware human activity recognition in healthcare

1. What is Human Activity Recognition (HAR) in healthcare?
HAR in healthcare involves using sensors to monitor and identify physical activities like walking, falling, or sleeping, which can support patient monitoring, rehabilitation, and chronic disease management.

2. How does deep learning enhance HAR in healthcare recommender systems?
Deep learning models, such as CNNs and LSTMs, can learn complex patterns from sensor data, allowing healthcare systems to provide accurate, personalized recommendations based on real-time user activity.
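As a rough illustration only (not a model drawn from any specific reviewed study), the sketch below pairs a 1D CNN with an LSTM over fixed-length sensor windows; the window length, channel count, and number of activity classes are assumptions.

```python
# Illustrative CNN + LSTM for sensor-window HAR (PyTorch). The shapes are
# assumptions: 128-sample windows, 6 channels (3-axis accelerometer + gyroscope),
# and 6 activity classes; they are not taken from a specific reviewed paper.
import torch
import torch.nn as nn

class CnnLstmHAR(nn.Module):
    def __init__(self, n_channels: int = 6, n_classes: int = 6):
        super().__init__()
        # 1D convolutions extract local motion patterns from each window
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The LSTM models the temporal ordering of the extracted features
        self.lstm = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, window_length)
        feats = self.conv(x)              # (batch, 64, window_length // 2)
        feats = feats.permute(0, 2, 1)    # (batch, time, features) for the LSTM
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])         # class logits

# Example: a batch of 8 windows, 6 sensor channels, 128 samples each
logits = CnnLstmHAR()(torch.randn(8, 6, 128))
print(logits.shape)  # torch.Size([8, 6])
```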

3. Which deep learning models are most common in HAR?
The most frequently used models in recent studies include Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), and increasingly, Transformers.

4. Are open datasets available for developing HAR in healthcare applications?
Yes, many studies utilize open datasets collected from smartphones and wearables, although context-aware and healthcare-specific datasets remain limited, indicating a need for more targeted data collection efforts.
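To show what working with such a dataset typically involves, here is a minimal preprocessing sketch under our own assumptions (the file name, column names, and window size are hypothetical): raw accelerometer readings are cut into fixed-length, partially overlapping windows before being fed to a model.

```python
# Hypothetical preprocessing sketch: segment a raw sensor recording into
# fixed-length, 50%-overlapping windows. The CSV file, column names, and
# window size are assumptions, not taken from a specific dataset.
import numpy as np
import pandas as pd

def make_windows(signal: np.ndarray, window: int = 128, step: int = 64) -> np.ndarray:
    """Slice a (samples, channels) array into (n_windows, window, channels)."""
    starts = range(0, len(signal) - window + 1, step)
    return np.stack([signal[s:s + window] for s in starts])

df = pd.read_csv("wearable_recording.csv")           # hypothetical recording
signal = df[["acc_x", "acc_y", "acc_z"]].to_numpy()  # 3-axis accelerometer
windows = make_windows(signal)                       # e.g. 2.56 s windows at 50 Hz
print(windows.shape)                                 # (n_windows, 128, 3)
```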
