Results
POLARIS: Probabilistic and Ontological Activity Recognition in Smart-homes.
Recognition of activities of daily living (ADLs) is an enabling technology for several ubiquitous computing applications. In this field, most activity recognition systems rely on supervised learning to extract activity models from labeled datasets. A problem with that approach is the acquisition of comprehensive activity datasets, which ...
Info
This page provides the results of our experiments. Each result file contains the test results for each individual patient/day, i.e., the individual and aggregated results represented by precision, recall, and F-measure. These results belong to the publication POLARIS: Probabilistic and Ontological Activity Recognition in Smart-homes.
Hint
This work is an extension. Please consider the following page for detailed results concerning Offline POLARIS: Unsupervised Recognition of Interleaved Activities of Daily Living through Ontological and Probabilistic Reasoning.
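For reference, the reported per-patient/day and aggregated metrics can be computed along the following lines; this is a minimal sketch in Python, assuming a hypothetical column layout (the actual result files may differ):

```python
import pandas as pd
from sklearn.metrics import precision_recall_fscore_support

# Toy stand-in for a result file; the column layout is an assumption.
results = pd.DataFrame({
    "patient":   ["p1", "p1", "p1", "p2", "p2", "p2"],
    "day":       [1, 1, 1, 1, 1, 1],
    "true":      ["cooking", "eating", "cooking", "sleeping", "eating", "eating"],
    "predicted": ["cooking", "cooking", "cooking", "sleeping", "eating", "cooking"],
})

def day_metrics(df):
    """Macro-averaged precision, recall, and F-measure for one patient/day."""
    p, r, f, _ = precision_recall_fscore_support(
        df["true"], df["predicted"], average="macro", zero_division=0)
    return pd.Series({"precision": p, "recall": r, "f_measure": f})

per_day = results.groupby(["patient", "day"]).apply(day_metrics)
print(per_day)          # individual results per patient/day
print(per_day.mean())   # aggregated over all patients/days
```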
Setup
MLNnc solver: show
Ontology: show (unchanged compared to D. Riboni et al. 2016)
WSU CASAS Dataset
SmartFaber Dataset
Modeling and Reasoning with ProbLog: An Application in Recognizing Complex Activities.
Smart-home activity recognition is an enabling tool for a wide range of ambient assisted living applications. The recognition of ADLs usually relies on supervised learning or knowledge-based reasoning techniques. In order to overcome the well-known limitations of those two approaches and, at the same time, to combine their strengths to ...
Info
This page provides additional material belonging to the publication Modeling and Reasoning with ProbLog: An Application in Recognizing Complex Activities.
Setup
ProbLog Online Editor: show
ProbLog Models
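To give an impression of the modeling style, here is a toy ProbLog program evaluated through the problog Python package; the sensors, activities, and probabilities are invented for illustration and are not taken from our actual models:

```python
# Toy example in the spirit of probabilistic activity models; the facts,
# rules, and probabilities below are illustrative only.
from problog.program import PrologString
from problog import get_evaluatable

model = PrologString("""
0.8::sensor(stove_on).
0.6::sensor(fridge_open).
0.9::activity(cooking) :- sensor(stove_on), sensor(fridge_open).
query(activity(cooking)).
""")

# Compile the program and compute the marginal probability of the query:
# 0.9 * 0.8 * 0.6 = 0.432.
print(get_evaluatable().create_from(model).evaluate())
```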
This work was a cooperation between the University of Milano and the University of Mannheim.
Hips Do Lie! A Position-Aware Mobile Fall Detection System (2018)
Ambient Assisted Living using mobile device sensors is an active area of research in pervasive computing. In our work, we aim to implement a self-adaptive pervasive fall detection approach that is suitable for real-life situations. The paper focuses on the problem that not only the device's on-body position but also the falling pattern of a person ...
Info
This page provides the results of our experiments. Each result file contains the individual (non-aggregated) results. The results were analyzed with pivot tables in Excel. These results belong to the publication Hips Do Lie! A Position-Aware Mobile Fall Detection System.
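The pivot-table analysis can equally be reproduced with pandas; a small sketch, assuming hypothetical column names for the result files:

```python
import pandas as pd

# Toy stand-in for the non-aggregated result files; columns are assumptions.
results = pd.DataFrame({
    "subject":   [1, 1, 2, 2],
    "position":  ["waist", "thigh", "waist", "thigh"],
    "f_measure": [0.91, 0.87, 0.88, 0.84],
})

# Mean F-measure per on-body position and subject, as in the Excel analysis.
print(results.pivot_table(values="f_measure", index="position",
                          columns="subject", aggfunc="mean"))
```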
Datasets
Below, the datasets considered in our experiments are listed. Source links to the original dataset, Publication links to the related publication, and Download provides our preprocessed version of the original dataset. Further information can be found here (DataSets) and here (Traces).
Results
Additional Results (F-measure)
This work was a cooperation between the Chair of Information Systems II and the …
Online Personalization of Cross-Subjects based Activity Recognition Models on Wearable Devices (2017)
Human activity recognition using wearable devices is an active area of research in pervasive computing. In our work, we target patients and elders who are unable to collect and label the required data for a subject-specific approach. For that purpose, we focus on the problem of cross-subjects based recognition models and introduce an ...
Info
This page provides the results of our experiments. Each result file contains the test results (Precision, Recall, F-measure) for each individual subject. However, the results are not aggregated, i.e., they do not provide average values over all subjects, positions, or combinations. These results belong to the publication Online Personalization of Cross-Subjects based Activity Recognition Models on Wearable Devices.
Hint
All experiments were performed using random forest as a classifier. We considered two different versions of this classifier: online and offline learning. The former was re-implemented by us (source code) based on the work of A. Saffari et al., while the latter was developed by M. Hall et al.
Preprocessed Acceleration Data (Windows)
| Data Set | Short Description | Download | Size [MB] |
|---|---|---|---|
| #1 | Single sensor setup, separated by position and subject | Download | ~120 |
| #2 | Two-part setup, separated by combination and subject | Download | ~690 |
| #3 | Three-part setup, separated by combination and subject | Download | ~1600 |
| #4 | Four-part setup, separated by combination and subject | Download | ~2100 |
| #5 | Five-part setup, separated by combination and subject | Download | ~1600 |
| #6 | Six-part setup, separated by combination and subject | Download | ~640 |
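The data sets above contain windowed acceleration data. As a rough illustration of how such windows are obtained, here is a sliding-window segmentation sketch; the window size and 50% overlap are illustrative values, not the parameters used in the paper:

```python
# Simplified sliding-window segmentation of a tri-axial acceleration stream.
import numpy as np

def sliding_windows(signal: np.ndarray, size: int, step: int) -> np.ndarray:
    """Split a (samples x 3) acceleration array into overlapping windows."""
    starts = range(0, len(signal) - size + 1, step)
    return np.stack([signal[s:s + size] for s in starts])

acc = np.random.randn(1000, 3)            # stand-in for recorded x/y/z data
windows = sliding_windows(acc, size=128, step=64)
print(windows.shape)                      # (14, 128, 3)
```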
Cross-Subjects Activity Recognition (Section VI, Part A)
| Test | Short Description | Related To | Result | Size [MB] |
|---|---|---|---|---|
| #1 | Randomly, One Accelerometer, Offline learning | Table II | show | ~1 |
| #2 | Randomly, Two Accelerometers, Offline learning | Table II/III/IV | show | ~2 |
| #3 | Randomly, Three Accelerometers, Offline learning | Table II | show | ~3 |
| #4 | Randomly, Four Accelerometers, Offline learning | Table II | show | ~3 |
| #5 | Randomly, Five Accelerometers, Offline learning | Table II | show | ~2 |
| #6 | Randomly, Six Accelerometers, Offline learning | Table II | show | ~1 |
| #7 | L1O, One Accelerometer, Offline learning | Table II | show | <1 |
| #8 | L1O, Two Accelerometers, Offline learning | Table II/III/IV | show | <1 |
| #9 | L1O, Three Accelerometers, Offline learning | Table II | show | <1 |
| #10 | L1O, Four Accelerometers, Offline learning | Table II | show | <1 |
| #11 | L1O, Five Accelerometers, Offline learning | Table II | show | <1 |
| #12 | L1O, Six Accelerometers, Offline learning | Table II | show | <1 |
| #13 | Our approach, One Accelerometer, Offline learning | Table II | show | <1 |
| #14 | Our approach, Two Accelerometers, Offline learning | Table II/III/IV | show | <1 |
| #15 | Our approach, Three Accelerometers, Offline learning | Table II | show | <1 |
| #16 | Our approach, Four Accelerometers, Offline learning | Table II | show | <1 |
| #17 | Our approach, Five Accelerometers, Offline learning | Table II | show | <1 |
| #18 | Our approach, Six Accelerometers, Offline learning | Table II | show | <1 |
| #19 | Subject-Specific, One Accelerometer, Offline learning | - | show | <1 |
| #20 | Subject-Specific, Two Accelerometers, Offline learning | - | show | <1 |
Personalization: Online and Active Learning (Section VI, Part B)
| Test | Short Description | Related To | Result | Size [MB] |
|---|---|---|---|---|
| #1 | Our approach, One Accelerometer, Online learning | - | show | <1 |
| #2 | Our approach + User-Feedback, One Accelerometer, Online learning | - | show | ~1 |
| #3 | Our approach + User-Feedback + Smoothing, One Accelerometer, Online learning | - | show | ~1 |
| #4 | Our approach, Two Accelerometers, Online learning | Table V/VI | show | <1 |
| #5 | Our approach + Smoothing, Two Accelerometers, Online learning | Table V/VI | show | ~2 |
| #6 | Our approach + User-Feedback, Two Accelerometers, Online learning | Table V/VI | show | ~2 |
| #7 | Our approach + User-Feedback + Smoothing, Two Accelerometers, Online learning | Table V/VI/VII/VIII, Figure 4 | show | ~2 |
| #8 | Our approach + User-Feedback + Smoothing (varying confidence threshold), Two Accelerometers, Online learning | Figure 5 | show | ~23 |
| #9 | Our approach + User-Feedback + Smoothing (varying number of trees), Two Accelerometers, Online learning | Figure 6 | show | ~27 |
| #10 | Subject-Specific, One Accelerometer, Online learning | - | show | ~1 |
| #11 | Subject-Specific, Two Accelerometers, Online learning | - | show | ~2 |
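Tests #2–#9 above combine, in varying configurations, three building blocks: online model updates, user feedback requested whenever the classifier's confidence drops below a threshold, and temporal smoothing of the predicted label stream. A condensed, hypothetical sketch of that loop follows; the model interface stands in for the online random forest mentioned above, and the threshold and window values are illustrative:

```python
from collections import Counter, deque

def personalize(model, windows, ask_user, threshold=0.7, smooth=5):
    """Confidence-gated user feedback plus majority smoothing.

    `model` is assumed to expose predict_proba() and an incremental
    update() method (a stand-in for the online random forest);
    `ask_user` supplies the true label when confidence is low.
    """
    recent, output = deque(maxlen=smooth), []
    for features in windows:
        probs = model.predict_proba([features])[0]
        label = int(probs.argmax())
        if probs[label] < threshold:           # low confidence: ask the user
            label = ask_user(features)
            model.update([features], [label])  # hypothetical online update
        recent.append(label)
        # Smooth the label stream with a majority vote over recent outputs.
        output.append(Counter(recent).most_common(1)[0][0])
    return output
```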
Position-Aware Activity Recognition with Wearable Devices (2017)
Reliable human activity recognition with wearable devices enables the development of human-centric pervasive applications. We aim to develop a robust wearable-based activity recognition system for real life situations. Consequently, in this work we focus on the problem of recognizing the on-body position of the wearable device ensued by ...
Info
This page provides the results of our experiments. Each result file contains the test results (F-Measure, Confusion Matrix, ...) for each individual subject. These results belong to the publication Position-Aware Activity Recognition with Wearable Devices.
Hint
This work is an extension. Please consider the following page for detailed results corresponding to Sections 5.1–5.3: On-body Localization of Wearable Devices: An Investigation of Position-Aware Activity Recognition (note that table numbers have changed).
Subject-Specific Activity Recognition (Section 5.2)
Cross-Subjects Activity Recognition (Section 5.4)
| Test | Short Description | Related To | Result | Size [MB] |
|---|---|---|---|---|
| #1 | Dynamic Activity Recognition (Randomly, One accelerometer) | Table 10, 11 | show | <1 |
| #2 | Dynamic Activity Recognition (L1O, One accelerometer) | Table 10, 11 | show | <1 |
| #3 | Dynamic Activity Recognition (Top-Pairs, One accelerometer) | Table 10, 11 | show | <1 |
| #4 | Dynamic Activity Recognition (Physical, One accelerometer) | Table 10, 11 | show | <1 |
| #5 | Activity Recognition (Physical, One accelerometer) | Table 12 | show | <1 |
| #6 | Activity Recognition (Physical, One accelerometer, including gravity feature for static activities) | Table 12 | show | <1 |
| #7 | Activity Recognition (Physical, Two accelerometers, only waist combinations) | Table 12, 13 | show | <1 |
| #8 | Activity Recognition (Physical, Two accelerometers, only waist combinations, including gravity feature for static activities) | Table 12, 13 | show | <1 |
| #9 | Position Recognition (Randomly, One accelerometer) | Table 12, 13 | show | <1 |
| #10 | Position Recognition (L1O, One accelerometer) | Table 12, 13 | show | <1 |
| #11 | Position Recognition (Top-Pairs, One accelerometer) | Table 12, 13 | show | <1 |
| #12 | Position Recognition (Physical, One accelerometer) | Table 12, 13 | show | <1 |
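Tests #6 and #8 above include a gravity feature for static activities. A common way to obtain such a feature is to low-pass filter the raw acceleration so that only the slowly changing gravity component remains; below is a sketch assuming a simple exponential filter (the paper's actual feature definition may differ):

```python
# Sketch of a gravity feature via exponential low-pass filtering; the
# smoothing factor alpha is illustrative, not a value from the paper.
import numpy as np

def gravity_component(acc: np.ndarray, alpha: float = 0.9) -> np.ndarray:
    """Isolate the slowly changing gravity part of (samples x 3) data."""
    gravity = np.zeros_like(acc)
    gravity[0] = acc[0]
    for t in range(1, len(acc)):
        gravity[t] = alpha * gravity[t - 1] + (1 - alpha) * acc[t]
    return gravity

acc = np.random.randn(256, 3) + np.array([0.0, 0.0, 9.81])
print(gravity_component(acc).mean(axis=0))   # close to (0, 0, 9.81)
```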
Self-Tracking Reloaded: Applying Process Mining to Personalized Health Care from Labeled Sensor Data (2016)
Currently, there is a trend to promote personalized health care in order to prevent diseases or to have a healthier life. Using current devices such as smart-phones and smart-watches, an individual can easily record detailed data from her daily life. Yet, this data has been mainly used for self-tracking in order to enable personalized ...
Info
This page provides additional material for the publication Self-Tracking Reloaded: Applying Process Mining to Personalized Health Care from Labeled Sensor Data. In the following, we provide personal process maps and trace-alignment clustering results of our experiments.
Hint
As mentioned in the paper, the XES files provided on this page were created from data sets collected by other researchers. The original data set “Activity Log UCI” was created by Ordóñez et al., while hh102, hh104, and hh110 originate from Cook et al.
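For readers who want to build similar logs, a labeled activity data set can be converted to XES, for example with the pm4py library; a minimal sketch with invented example events (the column names are assumptions, not the layout of the original data):

```python
import pandas as pd
import pm4py

# Toy labeled activity log; each case could correspond to, e.g., one day.
df = pd.DataFrame({
    "case_id":   ["day1", "day1", "day1"],
    "activity":  ["Sleeping", "Toileting", "Breakfast"],
    "timestamp": pd.to_datetime(
        ["2016-01-01 06:00", "2016-01-01 07:00", "2016-01-01 07:15"]),
})
df = pm4py.format_dataframe(df, case_id="case_id",
                            activity_key="activity",
                            timestamp_key="timestamp")
log = pm4py.convert_to_event_log(df)
pm4py.write_xes(log, "activity_log.xes")
```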
Personal Processes (Fuzzy Model)
| Number | Short Description | Related To | Result |
|---|---|---|---|
| #1 | Main personal activity for all users during the working days (frequency) | 6.2 | show |
| #2 | Main personal activity for all users during the weekend days (frequency) | 6.2 | show |
| #3 | Main personal activity for all users during the working days (duration) | 6.2 | show |
| #4 | Main personal activity for all users during the weekend days (duration) | 6.2 | show |
Personal Process Models (XES)
| Number | Short Description | Related To | Result |
|---|---|---|---|
| #1 | Activity Log UCI Detailed (during the week) | 6.2 | show |
| #2 | Activity Log UCI Detailed (weekend) | 6.2 | show |
| #3 | hh102 (during the week) | 6.2 | show |
| #4 | hh102 (weekend) | 6.2 | show |
| #5 | hh104 (during the week) | 6.2 | show |
| #6 | hh104 (weekend) | 6.2 | show |
| #7 | hh110 (during the week) | 6.2 | show |
| #8 | hh110 (weekend) | 6.2 | show |
Trace Alignment
| Number | Short Description | Related To | Result |
|---|---|---|---|
| #1 | Clustered Traces, Subject 1 (based on our data set) | 7 | show |
Unsupervised Recognition of Interleaved Activities of Daily Living through Ontological and Probabilistic Reasoning
Recognition of activities of daily living (ADLs) is an enabling technology for several ubiquitous computing applications. Most activity recognition systems rely on supervised learning methods to extract activity models from labeled datasets. An inherent problem of that approach consists in the acquisition of comprehensive activity datasets, ...
Info
This page provides the results of our experiments. Each result file contains the test results for each individual patient/
day, i.e., the individual and aggregated results represented by precision, recall, and F-measure. Further, we provide the sensor and instance-based results as well as the precomputet instance candidates. These results belong to the publication Unsupervised Recognition of Interleaved Activities of Daily Living through Ontological and Probabilistic Reasoning. WSU CASAS Dataset
SmartFaber Dataset
On-body Localization of Wearable Devices: An Investigation of Position-Aware Activity Recognition (2016)
Human activity recognition using mobile device sensors is an active area of research in pervasive computing. In our work, we aim at implementing activity recognition approaches that are suitable for real life situations. This paper focuses on the problem of recognizing the on-body position of the mobile device which in a real world setting ...
Info
This page provides the results of our experiments. Each result file contains the test results (F-Measure, Confusion Matrix, ...) for each individual subject. However, the results are not aggregated, i.e., they do not provide average values over all subjects. We applied 10-fold cross validation and performed 10 runs, where each time the data set was randomized and the 10 folds were recreated. These results belong to the publication On-body Localization of Wearable Devices: An Investigation of Position-Aware Activity Recognition.
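This evaluation protocol (10 runs of 10-fold cross validation with the data reshuffled before each run) can be expressed compactly, e.g. with scikit-learn; the data and classifier settings below are placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Stand-in data: the real experiments use windowed acceleration features.
X = np.random.randn(500, 20)
y = np.random.randint(0, 6, size=500)   # e.g., six activity classes

# 10 repetitions of 10-fold CV; folds are re-randomized in every repetition.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(RandomForestClassifier(), X, y,
                         scoring="f1_macro", cv=cv)
print(scores.mean(), scores.std())
```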
Random Forest
| Test | Short Description | Related To | Result |
|---|---|---|---|
| #1 | Position Detection (all activities) | Table II | show |
| #2 | Position Detection (only static activities) | Table III | show |
| #3 | Position Detection (only static activities) incl. gravity feature | Table IV | show |
| #4 | Position Detection (only dynamic activities) | Table III | show |
| #5 | Position Detection (activity-level dependent) | Table VI | see #3, #4 |
| #6 | Activity Recognition (single classifier, all activities) | Table VII, VIII | show |
| #7 | Activity Recognition (assumption: position is known for sure, all activities) | - | show |
| #8 | Activity Recognition (based on the position detection result of RF, incl. all mistakes) | Table IX, X | show |
| #9 | Distinction between static and dynamic activity (all activities) | Table V | show |
Naive Bayes
| Test | Short Description | Related To | Result |
|---|---|---|---|
| #1 | Position Detection (all activities) | Table II | show |
| #2 | Position Detection (only static activities) | Table III | show |
| #3 | Position Detection (only static activities) incl. gravity feature | Table IV | show |
| #4 | Position Detection (only dynamic activities) | Table III | show |
| #5 | Position Detection (activity-level dependent) | Table VI | see #3, #4 |
| #6 | Activity Recognition (single classifier, all activities) | Table VII, VIII | show |
| #7 | Activity Recognition (assumption: position is known for sure, all activities) | - | show |
| #8 | Activity Recognition (based on the position detection result of RF, incl. all mistakes) | Table IX, X | show |
| #9 | Distinction between static and dynamic activity (all activities) | Table V | show |
Artificial Neural Network
| Test | Short Description | Related To | Result |
|---|---|---|---|
| #1 | Position Detection (all activities) | Table II | show |
| #2 | Position Detection (only static activities) | Table III | show |
| #3 | Position Detection (only static activities) incl. gravity feature | Table IV | show |
| #4 | Position Detection (only dynamic activities) | Table III | show |
| #5 | Position Detection (activity-level dependent) | Table VI | see #3, #4 |
| #6 | Activity Recognition (single classifier, all activities) | Table VII, VIII | show |
| #7 | Activity Recognition (assumption: position is known for sure, all activities) | - | show |
| #8 | Activity Recognition (based on the position detection result of RF, incl. all mistakes) | Table IX, X | show |
| #9 | Distinction between static and dynamic activity (all activities) | Table V | show |
Decision Tree
| Test | Short Description | Related To | Result |
|---|---|---|---|
| #1 | Position Detection (all activities) | Table II | show |
| #2 | Position Detection (only static activities) | Table III | show |
| #3 | Position Detection (only static activities) incl. gravity feature | Table IV | show |
| #4 | Position Detection (only dynamic activities) | Table III | show |
| #5 | Position Detection (activity-level dependent) | Table VI | see #3, #4 |
| #6 | Activity Recognition (single classifier, all activities) | Table VII, VIII | show |
| #7 | Activity Recognition (assumption: position is known for sure, all activities) | - | show |
| #8 | Activity Recognition (based on the position detection result of RF, incl. all mistakes) | Table IX, X | show |
| #9 | Distinction between static and dynamic activity (all activities) | Table V | show |
k-Nearest-Neighbor
| Test | Short Description | Related To | Result |
|---|---|---|---|
| #1 | Position Detection (all activities) | Table II | show |
| #2 | Position Detection (only static activities) | Table III | show |
| #3 | Position Detection (only static activities) incl. gravity feature | Table IV | show |
| #4 | Position Detection (only dynamic activities) | Table III | show |
| #5 | Position Detection (activity-level dependent) | Table VI | see #3, #4 |
| #6 | Activity Recognition (single classifier, all activities) | Table VII, VIII | show |
| #7 | Activity Recognition (assumption: position is known for sure, all activities) | - | show |
| #8 | Activity Recognition (based on the position detection result of RF, incl. all mistakes) | Table IX, X | show |
| #9 | Distinction between static and dynamic activity (all activities) | Table V | show |
Support Vector Machine
| Test | Short Description | Related To | Result |
|---|---|---|---|
| #1 | Position Detection (all activities) | Table II | show |
| #2 | Position Detection (only static activities) | Table III | show |
| #3 | Position Detection (only static activities) incl. gravity feature | Table IV | show |
| #4 | Position Detection (only dynamic activities) | Table III | show |
| #5 | Position Detection (activity-level dependent) | Table VI | see #3, #4 |
| #6 | Activity Recognition (single classifier, all activities) | Table VII, VIII | show |
| #7 | Activity Recognition (assumption: position is known for sure, all activities) | - | show |
| #8 | Activity Recognition (based on the position detection result of RF, incl. all mistakes) | Table IX, X | show |
| #9 | Distinction between static and dynamic activity (all activities) | Table V | show |