Results

  • POLARIS: Probabilistic and Ontological Activity Recognition in Smart-homes.

    Recognition of activities of daily living (ADLs) is an enabling technology for several ubiquitous computing applications. In this field, most activity recognition systems rely on supervised learning to extract activity models from labeled datasets. A problem with that approach is the acquisition of comprehensive activity datasets, which ...

    Info

    This page provides the results of our experiments. Each result file contains the test results for each individual patient/day, i.e., both the individual and the aggregated results, expressed as precision, recall, and F-measure. These results belong to the publication POLARIS: Probabilistic and Ontological Activity Recognition in Smart-homes.

    Hint

    This work is an extension. For detailed results concerning Offline POLARIS, please consider the following page: Unsupervised Recognition of Interleaved Activities of Daily Living through Ontological and Probabilistic Reasoning.

    Setup

    MLNnc Solver: show
    Ontology: show (unchanged compared to D. Riboni et al., 2016)
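
    For background, assuming that MLNnc refers to a Markov logic network (MLN) variant, an MLN defines a joint distribution over possible worlds x through weighted first-order formulas:

    \[ P(X = x) = \frac{1}{Z} \exp\Big( \sum_i w_i \, n_i(x) \Big) \]

    where n_i(x) is the number of true groundings of formula i in world x, w_i is the weight of that formula, and Z is the normalization constant. This is the standard MLN semantics, not a statement about the specifics of the MLNnc solver linked above.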

    WSU CASAS Dataset

    (Publication)

    MLNnc Model: show (also unchanged)

    Test | Short Description | Related To | Result
    #1 | - | - | coming soon

    SmartFaber Dataset

    (Publication)

    MLNnc Model: show (also unchanged)

    Test | Short Description | Related To | Result
    #1 | - | - | coming soon
  • Modeling and Reasoning with ProbLog: An Application in Recognizing Complex Activities.

    Smart-home activity recognition is an enabling tool for a wide range of ambient assisted living applications. The recognition of ADLs usually relies on supervised learning or knowledge-based reasoning techniques. In order to overcome the well-known limitations of those two approaches and, at the same time, to combine their strengths to ...

    Info

    This page provides additional material belonging to the publication Modeling and Reasoning with ProbLog: An Application in Recognizing Complex Activities.

    Setup

    ProbLog Online Editor: show

    ProbLog Models

    Model | Short Description | Download | Size [MB]
    #1 | Minimal ProbLog Model (Figure 1) | Download | <1
    #2 | Final ProbLog Model (Running Example) | Download | <1
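
    For readers who want to experiment with models like the ones above, the following is a minimal sketch of how a ProbLog program can be queried from Python using the problog package. The tiny model (sensor events implying an activity) and all its probabilities are hypothetical illustrations, not excerpts from the downloadable models.

    ```python
    # Minimal sketch: querying a ProbLog model from Python via the `problog`
    # package. The facts, probabilities, and the activity rule below are
    # hypothetical illustrations, not taken from the paper's models.
    from problog.program import PrologString
    from problog import get_evaluatable

    model = PrologString("""
    0.8::detected(kettle_on).
    0.6::detected(cup_taken).
    0.9::activity(make_tea) :- detected(kettle_on), detected(cup_taken).
    query(activity(make_tea)).
    """)

    # Compile and evaluate; yields {activity(make_tea): 0.432}.
    result = get_evaluatable().create_from(model).evaluate()
    for query, probability in result.items():
        print(query, probability)
    ```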


    This work was a cooperation between the University of Milano and the University of Mannheim.

  • Hips Do Lie! A Position-Aware Mobile Fall Detection System (2018)

    Ambient Assisted Living using mobile device sensors is an active area of research in pervasive computing. In our work, we aim to implement a self-adaptive pervasive fall detection approach that is suitable for real life situations. The paper focuses on the problem that not only the device’s on-body position but also the falling pattern of a person ...

    Info

    This page provides the results of our experiments. Each result file contains the individual results, i.e., they are not aggregated. The results were analyzed with pivot tables in Excel. These results belong to the publication Hips Do Lie! A Position-Aware Mobile Fall Detection System.

    Datasets

    In the following, the datasets that we considered in our experiments are listed. Source provides a link to the original dataset, Publication provides a link to the related publication, and Download provides our preprocessed version of the original dataset. Further information can be found here (DataSets) and here (Traces).

    Results

    Test | Short Description | Detailed Description | Related To | Result
    #1 | Fall Detection | ReadMe | Table III-VI | show
    #2 | Positioning | ReadMe | Table VII | show
    #3 | Fullstack (Fall Detection) | ReadMe | Table VIII | show
    #4 | Fullstack (Positioning) | ReadMe | Table IX | show
    #5 | Clustering | ReadMe | - | show

    Additional Results (F-measure)

    Test | Short Description | Related To | Result
    #1 | F0.5, F1, and F2 | Table IV | show
    #2 | F0.5, F1, and F2 | Table V | show
    #3 | F0.5, F1, and F2 | Table VII | show
    #4 | F0.5, F1, and F2 | Table VIII | show
    #5 | F0.5, F1, and F2 | Table IX | show
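
    For reference, F0.5, F1, and F2 are instances of the general F-beta measure, which weights recall beta times as much as precision:

    \[ F_\beta = (1 + \beta^2) \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\beta^2 \cdot \mathrm{precision} + \mathrm{recall}} \]

    Hence, F0.5 emphasizes precision, F1 weights both equally, and F2 emphasizes recall.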

    This work was a cooperation between the Chair of Information Systems II and the Chair of Artificial Intelligence.

  • Online Personalization of Cross-Subjects based Activity Recognition Models on Wearable Devices (2017)

    Human activity recognition using wearable devices is an active area of research in pervasive computing. In our work, we target patients and elders who are unable to collect and label the required data for a subject-specific approach. For that purpose, we focus on the problem of cross-subjects based recognition models and introduce an ...

    Info

    This page provides the results of our experiments. Each result file contains the test results (precision, recall, F-measure) for each individual subject. However, the results are not aggregated, i.e., the files do not provide average values over all subjects, positions, or combinations. These results belong to the publication Online Personalization of Cross-Subjects based Activity Recognition Models on Wearable Devices.

    Hint

    All experiments were performed using random forest as the classifier. We considered two different versions of this classifier: online and offline learning. The former was re-implemented by us (source code) based on the work of A. Saffari et al., while the latter was developed by M. Hall et al.
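
    To make the evaluation setups below concrete, here is a minimal sketch of a leave-one-subject-out (L1O) run with an offline random forest, using scikit-learn as a stand-in. It is not the authors' implementation; the feature matrix, labels, and subject ids are random placeholders.

    ```python
    # Hedged sketch: leave-one-subject-out (L1O) evaluation with an offline
    # random forest. X, y, and subjects are placeholders for windowed
    # acceleration features, activity labels, and subject ids.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import precision_recall_fscore_support
    from sklearn.model_selection import LeaveOneGroupOut

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 12))            # feature windows (placeholder)
    y = rng.integers(0, 4, size=300)          # activity labels (placeholder)
    subjects = rng.integers(0, 10, size=300)  # subject id per window (placeholder)

    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        p, r, f1, _ = precision_recall_fscore_support(
            y[test_idx], clf.predict(X[test_idx]), average="macro", zero_division=0)
        print(f"subject {subjects[test_idx][0]}: P={p:.2f} R={r:.2f} F1={f1:.2f}")
    ```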

    Preprocessed Acceleration Data (Windows)

    Data Set | Short Description | Download | Size [MB]
    #1 | Single sensor setup, separated by position and subject | Download | ~120
    #2 | Two-part setup, separated by combination and subject | Download | ~690
    #3 | Three-part setup, separated by combination and subject | Download | ~1600
    #4 | Four-part setup, separated by combination and subject | Download | ~2100
    #5 | Five-part setup, separated by combination and subject | Download | ~1600
    #6 | Six-part setup, separated by combination and subject | Download | ~640

    Cross-Subjects Activity Recognition (Section VI, Part A)

    Test | Short Description | Related To | Result | Size [MB]
    #1 | Randomly, One Accelerometer, Offline learning | Table II | show | ~1
    #2 | Randomly, Two Accelerometers, Offline learning | Table II/III/IV | show | ~2
    #3 | Randomly, Three Accelerometers, Offline learning | Table II | show | ~3
    #4 | Randomly, Four Accelerometers, Offline learning | Table II | show | ~3
    #5 | Randomly, Five Accelerometers, Offline learning | Table II | show | ~2
    #6 | Randomly, Six Accelerometers, Offline learning | Table II | show | ~1
    #7 | L1O, One Accelerometer, Offline learning | Table II | show | <1
    #8 | L1O, Two Accelerometers, Offline learning | Table II/III/IV | show | <1
    #9 | L1O, Three Accelerometers, Offline learning | Table II | show | <1
    #10 | L1O, Four Accelerometers, Offline learning | Table II | show | <1
    #11 | L1O, Five Accelerometers, Offline learning | Table II | show | <1
    #12 | L1O, Six Accelerometers, Offline learning | Table II | show | <1
    #13 | Our approach, One Accelerometer, Offline learning | Table II | show | <1
    #14 | Our approach, Two Accelerometers, Offline learning | Table II/III/IV | show | <1
    #15 | Our approach, Three Accelerometers, Offline learning | Table II | show | <1
    #16 | Our approach, Four Accelerometers, Offline learning | Table II | show | <1
    #17 | Our approach, Five Accelerometers, Offline learning | Table II | show | <1
    #18 | Our approach, Six Accelerometers, Offline learning | Table II | show | <1
    #19 | Subject-Specific, One Accelerometer, Offline learning | - | show | <1
    #20 | Subject-Specific, Two Accelerometers, Offline learning | - | show | <1

    Personalization: Online and Active Learning (Section VI, Part B)

    Test | Short Description | Related To | Result | Size [MB]
    #1 | Our approach, One Accelerometer, Online learning | - | show | <1
    #2 | Our approach + User-Feedback, One Accelerometer, Online learning | - | show | ~1
    #3 | Our approach + User-Feedback + Smoothing, One Accelerometer, Online learning | - | show | ~1
    #4 | Our approach, Two Accelerometers, Online learning | Table V/VI | show | <1
    #5 | Our approach + Smoothing, Two Accelerometers, Online learning | Table V/VI | show | ~2
    #6 | Our approach + User-Feedback, Two Accelerometers, Online learning | Table V/VI | show | ~2
    #7 | Our approach + User-Feedback + Smoothing, Two Accelerometers, Online learning | Table V/VI/VII/VIII, Figure 4 | show | ~2
    #8 | Our approach + User-Feedback + Smoothing (varying confidence threshold), Two Accelerometers, Online learning | Figure 5 | show | ~23
    #9 | Our approach + User-Feedback + Smoothing (varying number of trees), Two Accelerometers, Online learning | Figure 6 | show | ~27
    #10 | Subject-Specific, One Accelerometer, Online learning | - | show | ~1
    #11 | Subject-Specific, Two Accelerometers, Online learning | - | show | ~2
  • Position-Aware Activity Recognition with Wearable Devices (2017)

    Reliable human activity recognition with wearable devices enables the development of human-centric pervasive applications. We aim to develop a robust wearable-based activity recognition system for real life situations. Consequently, in this work we focus on the problem of recognizing the on-body position of the wearable device ensued by ...

    Info

    This page provides the results of our experiments. Each result file contains the test results (F-Measure, Confusion Matrix, ...) for each individual subject. These results belong to the publication Position-Aware Activity Recognition with Wearable Devices.

    Hint

    This work is an extension. For detailed results that correspond to Sections 5.1–5.3, please consider the following page: On-body Localization of Wearable Devices: An Investigation of Position-Aware Activity Recognition (note that the table numbers have changed).

    Subject-Specific Activity Recognition (Section 5.2)

    Test | Short Description | Related To | Result | Size [MB]
    #1 | Single Sensor | Figure V | show | ~20
    #2 | Two-part setup (all combinations) | - | show | ~60
    #3 | Three-part setup (all combinations) | - | show | ~100

    Cross-Subjects Activity Recognition (Section 5.4)

    Test | Short Description | Related To | Result | Size [MB]
    #1 | Dynamic Activity Recognition (Randomly, One accelerometer) | Table 10, 11 | show | <1
    #2 | Dynamic Activity Recognition (L1O, One accelerometer) | Table 10, 11 | show | <1
    #3 | Dynamic Activity Recognition (Top-Pairs, One accelerometer) | Table 10, 11 | show | <1
    #4 | Dynamic Activity Recognition (Physical, One accelerometer) | Table 10, 11 | show | <1
    #5 | Activity Recognition (Physical, One accelerometer) | Table 12 | show | <1
    #6 | Activity Recognition (Physical, One accelerometer, including gravity feature for static activities) | Table 12 | show | <1
    #7 | Activity Recognition (Physical, Two accelerometers, only waist combinations) | Table 12, 13 | show | <1
    #8 | Activity Recognition (Physical, Two accelerometers, only waist combinations, including gravity feature for static activities) | Table 12, 13 | show | <1
    #9 | Position Recognition (Randomly, One accelerometer) | Table 12, 13 | show | <1
    #10 | Position Recognition (L1O, One accelerometer) | Table 12, 13 | show | <1
    #11 | Position Recognition (Top-Pairs, One accelerometer) | Table 12, 13 | show | <1
    #12 | Position Recognition (Physical, One accelerometer) | Table 12, 13 | show | <1
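
    Tests #6 and #8 above include a gravity feature for static activities. A common way to obtain such a feature, sketched below, is to low-pass filter the 3-axis acceleration signal so that only the slowly varying gravity component (and hence the device orientation) remains; this construction is an assumption for illustration, not necessarily the exact feature definition used in the paper.

    ```python
    # Hedged sketch: estimating the gravity component of a 3-axis acceleration
    # signal with a low-pass Butterworth filter. Cutoff and filter order are
    # illustrative choices, not values from the paper.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def gravity_component(acc, fs=50.0, cutoff=0.3):
        """acc: (n, 3) acceleration in m/s^2 sampled at fs Hz; returns the
        low-pass filtered signal, which approximates gravity."""
        b, a = butter(3, cutoff / (fs / 2.0), btype="low")
        return filtfilt(b, a, acc, axis=0)

    # A device lying flat measures roughly (0, 0, 9.81) plus noise.
    acc = np.tile([0.0, 0.0, 9.81], (500, 1)) + np.random.normal(0.0, 0.2, (500, 3))
    print(gravity_component(acc).mean(axis=0))  # close to [0, 0, 9.81]
    ```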
  • Self-Tracking Reloaded: Applying Process Mining to Personalized Health Care from Labeled Sensor Data (2016)

    Currently, there is a trend to promote personalized health care in order to prevent diseases or to have a healthier life. Using current devices such as smart-phones and smart-watches, an individual can easily record detailed data from her daily life. Yet, this data has been mainly used for self-tracking in order to enable personalized ...

    Info

    This page provides additional material for the publication Self-Tracking Reloaded: Applying Process Mining to Personalized Health Care from Labeled Sensor Data. In the following, we provide personal process maps and trace alignment clustering results of our experiments.

    Hint

    As mentioned in the paper, the XES files provided on this page were derived from data sets created by other researchers. The original data set “Activity Log UCI” was created by Ordóñez et al., while hh102, hh104, and hh110 originate from Cook et al.
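
    As a starting point, the XES logs below can also be inspected programmatically, for instance with the pm4py library as sketched here. The paper itself uses fuzzy models and trace alignment, so this directly-follows view is only a simpler stand-in, and "hh102.xes" is a placeholder file name.

    ```python
    # Hedged sketch: loading an XES event log with pm4py and computing a
    # directly-follows graph. A simpler stand-in for the fuzzy models used
    # in the paper; the file name is a placeholder.
    import pm4py

    log = pm4py.read_xes("hh102.xes")
    dfg, start_activities, end_activities = pm4py.discover_dfg(log)

    # Print the ten most frequent activity transitions.
    for (src, dst), count in sorted(dfg.items(), key=lambda kv: -kv[1])[:10]:
        print(f"{src} -> {dst}: {count}")
    ```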

    Personal Processes (Fuzzy Model)

    Number | Short Description | Related To | Result
    #1 | Main personal activity for all users during the working days (frequency) | 6.2 | show
    #2 | Main personal activity for all users during the weekend days (frequency) | 6.2 | show
    #3 | Main personal activity for all users during the working days (duration) | 6.2 | show
    #4 | Main personal activity for all users during the weekend days (duration) | 6.2 | show

    Personal Process Models (XES)

    Number | Short Description | Related To | Result
    #1 | Activity Log UCI Detailed (during the week) | 6.2 | show
    #2 | Activity Log UCI Detailed (weekend) | 6.2 | show
    #3 | hh102 (during the week) | 6.2 | show
    #4 | hh102 (weekend) | 6.2 | show
    #5 | hh104 (during the week) | 6.2 | show
    #6 | hh104 (weekend) | 6.2 | show
    #7 | hh110 (during the week) | 6.2 | show
    #8 | hh110 (weekend) | 6.2 | show

    Trace Alignment

    Number | Short Description | Related To | Result
    #1 | Clustered Traces, Subject 1 (based on our data set) | 7 | show
  • Unsupervised Recognition of Interleaved Activities of Daily Living through Ontological and Probabilistic Reasoning

    Recognition of activities of daily living (ADLs) is an enabling technology for several ubiquitous computing applications. Most activity recognition systems rely on supervised learning methods to extract activity models from labeled datasets. An inherent problem of that approach consists in the acquisition of comprehensive activity datasets, ...

    Info

    This page provides the results of our experiments. Each result file contains the test results for each individual patient/day, i.e., both the individual and the aggregated results, expressed as precision, recall, and F-measure. Further, we provide the sensor- and instance-based results as well as the precomputed instance candidates. These results belong to the publication Unsupervised Recognition of Interleaved Activities of Daily Living through Ontological and Probabilistic Reasoning.

    Setup

    MLNnc Solver: show

    Ontology: show

    WSU CASAS Dataset

    MLNnc Model: show
    Probabilities (Ontology-based): show
    Dataset (External Link): show

    Test | Short Description | Related To | Result
    #1 | Probabilities derived from our ontology | Table II, III | show
    #2 | Probabilities derived from the data set | Table II, III | show

    SmartFaber Dataset

    MLNnc Model: show
    Probabilities (Ontology-based): Coming soon
    Dataset: Not publicly available due to data privacy

    Test | Short Description | Related To | Result
    #1 | Probabilities derived from our ontology | Table IV, V | show
    #2 | Probabilities derived from the data set | Table IV, V | show
  • On-body Localization of Wearable Devices: An Investigation of Position-Aware Activity Recognition (2016)

    Human activity recognition using mobile device sensors is an active area of research in pervasive computing. In our work, we aim at implementing activity recognition approaches that are suitable for real life situations. This paper focuses on the problem of recognizing the on-body position of the mobile device which in a real world setting ...

    Info

    This page provides the results of our experiments. Each result file contains the test results (F-Measure, Confusion Matrix, ...) for each individual subject. However, the results are not aggregated, i.e., the files do not provide average values over all subjects. We applied 10-fold cross-validation and performed 10 runs, where each time the data set was randomized and the 10 folds were recreated. These results belong to the publication On-body Localization of Wearable Devices: An Investigation of Position-Aware Activity Recognition.
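
    The protocol above (10 runs of 10-fold cross-validation with re-shuffled folds) corresponds to what scikit-learn calls repeated k-fold; a minimal sketch with placeholder data and classifier follows. It illustrates the protocol only and is not the authors' code.

    ```python
    # Hedged sketch of the evaluation protocol: 10 repetitions of 10-fold
    # cross-validation, with the data re-shuffled before each repetition.
    # Data and classifier are placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import RepeatedKFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))     # feature windows (placeholder)
    y = rng.integers(0, 3, size=200)  # labels (placeholder)

    cv = RepeatedKFold(n_splits=10, n_repeats=10, random_state=0)
    scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                             cv=cv, scoring="f1_macro")
    print(f"mean macro F1 over {len(scores)} folds: {scores.mean():.2f}")
    ```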

    Random Forest

    Test | Short Description | Related To | Result
    #1 | Position Detection (all activities) | Table II | show
    #2 | Position Detection (only static activities) | Table III | show
    #3 | Position Detection (only static activities) incl. gravity feature | Table IV | show
    #4 | Position Detection (only dynamic activities) | Table III | show
    #5 | Position Detection (activity-level dependent) | Table VI | see #3, #4
    #6 | Activity Recognition (single classifier, all activities) | Table VII, VIII | show
    #7 | Activity Recognition (assumption: position is known for sure, all activities) | - | show
    #8 | Activity Recognition (based on the position detection result of RF, incl. all mistakes) | Table IX, X | show
    #9 | Distinction between static and dynamic activity (all activities) | Table V | show

    Naive Bayes

    Test | Short Description | Related To | Result
    #1 | Position Detection (all activities) | Table II | show
    #2 | Position Detection (only static activities) | Table III | show
    #3 | Position Detection (only static activities) incl. gravity feature | Table IV | show
    #4 | Position Detection (only dynamic activities) | Table III | show
    #5 | Position Detection (activity-level dependent) | Table VI | see #3, #4
    #6 | Activity Recognition (single classifier, all activities) | Table VII, VIII | show
    #7 | Activity Recognition (assumption: position is known for sure, all activities) | - | show
    #8 | Activity Recognition (based on the position detection result of RF, incl. all mistakes) | Table IX, X | show
    #9 | Distinction between static and dynamic activity (all activities) | Table V | show

    Artificial Neural Network

    Test | Short Description | Related To | Result
    #1 | Position Detection (all activities) | Table II | show
    #2 | Position Detection (only static activities) | Table III | show
    #3 | Position Detection (only static activities) incl. gravity feature | Table IV | show
    #4 | Position Detection (only dynamic activities) | Table III | show
    #5 | Position Detection (activity-level dependent) | Table VI | see #3, #4
    #6 | Activity Recognition (single classifier, all activities) | Table VII, VIII | show
    #7 | Activity Recognition (assumption: position is known for sure, all activities) | - | show
    #8 | Activity Recognition (based on the position detection result of RF, incl. all mistakes) | Table IX, X | show
    #9 | Distinction between static and dynamic activity (all activities) | Table V | show

    Decision Tree

    Test | Short Description | Related To | Result
    #1 | Position Detection (all activities) | Table II | show
    #2 | Position Detection (only static activities) | Table III | show
    #3 | Position Detection (only static activities) incl. gravity feature | Table IV | show
    #4 | Position Detection (only dynamic activities) | Table III | show
    #5 | Position Detection (activity-level dependent) | Table VI | see #3, #4
    #6 | Activity Recognition (single classifier, all activities) | Table VII, VIII | show
    #7 | Activity Recognition (assumption: position is known for sure, all activities) | - | show
    #8 | Activity Recognition (based on the position detection result of RF, incl. all mistakes) | Table IX, X | show
    #9 | Distinction between static and dynamic activity (all activities) | Table V | show

    k-Nearest-Neighbor

    Test | Short Description | Related To | Result
    #1 | Position Detection (all activities) | Table II | show
    #2 | Position Detection (only static activities) | Table III | show
    #3 | Position Detection (only static activities) incl. gravity feature | Table IV | show
    #4 | Position Detection (only dynamic activities) | Table III | show
    #5 | Position Detection (activity-level dependent) | Table VI | see #3, #4
    #6 | Activity Recognition (single classifier, all activities) | Table VII, VIII | show
    #7 | Activity Recognition (assumption: position is known for sure, all activities) | - | show
    #8 | Activity Recognition (based on the position detection result of RF, incl. all mistakes) | Table IX, X | show
    #9 | Distinction between static and dynamic activity (all activities) | Table V | show

    Support Vector Machine

    Test | Short Description | Related To | Result
    #1 | Position Detection (all activities) | Table II | show
    #2 | Position Detection (only static activities) | Table III | show
    #3 | Position Detection (only static activities) incl. gravity feature | Table IV | show
    #4 | Position Detection (only dynamic activities) | Table III | show
    #5 | Position Detection (activity-level dependent) | Table VI | see #3, #4
    #6 | Activity Recognition (single classifier, all activities) | Table VII, VIII | show
    #7 | Activity Recognition (assumption: position is known for sure, all activities) | - | show
    #8 | Activity Recognition (based on the position detection result of RF, incl. all mistakes) | Table IX, X | show
    #9 | Distinction between static and dynamic activity (all activities) | Table V | show