~~NOTOC~~
====== Activity recognition datasets ======
===== The data =====
^ Dataset ^ Description ^ Purpose ^ Sensors ^ Users ^ Recording (C: continuous) ^ Locomotion/posture classes ^ Gesture classes ^ Publications ^ Contact ^ Download ^
| Daphnet Freezing of Gait Dataset in users with Parkinson's disease {{ :wiki:dataset:daphnetfog:logo.jpg?direct&100 |}} | Gait recording of PD users with occasional freeze | Detection of gait freeze | 3 3D acceleration sensors (9 attributes) | 10 | C | walk, freeze | - | - | [[daniel.roggen@ieee.org|Daniel Roggen]], Marc Baechlin, Meir Plotnik, Jeffrey M. Hausdorff, Nir Giladi | {{:wiki:dataset:daphnetfog:dataset_fog_release.zip|}}\\ [[https://archive.ics.uci.edu/ml/datasets/Daphnet+Freezing+of+Gait|Also on the UCI ML repository]]\\ (a loading sketch is given below the table) |
| Opportunity Dataset \\ {{ :wiki:dataset:opportunity:logo.jpg?direct&100 |}} | Dataset of wearable, object, and ambient sensors recorded in a room simulating a studio flat where users performed early morning cleanup and breakfast activities. The dataset comprises freely executed "activities of daily living" (ADL) runs and a more constrained "drill" run. | Reference benchmark dataset for human activity recognition algorithms (classification, automatic data segmentation, sensor fusion, feature extraction, etc.). | Body-worn sensors: 7 inertial measurement units, 12 3D acceleration sensors, 4 3D localization information\\ Object sensors: 12 objects with 3D acceleration and 2D rate of turn\\ Ambient sensors: 13 switches and 8 3D acceleration sensors | 4 | C | Modes of locomotion and postures | 17 gestures in the Drill runs, larger number in the ADL runs | [[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5573462&tag=1|Dataset publication]]\\ [[http://www.sciencedirect.com/science/article/pii/S0167865512004205|Challenge publication]] | [[daniel.roggen@ieee.org|Daniel Roggen]] and colleagues (see publications) | [[https://archive.ics.uci.edu/ml/datasets/OPPORTUNITY+Activity+Recognition|Available on the UCI ML repository]] |
| Opportunity++ \\ {{ :wiki:dataset:opportunitypp:logos-opportunity-final_50p_pp_v2_xp_33p.png?direct&100 |}} | Opportunity++ is a precisely annotated dataset designed to support AI and machine learning research focused on the multimodal perception and learning of human activities. It is a significant multimodal extension of the original OPPORTUNITY Activity Recognition Dataset and includes the original video recordings as well as video-derived skeleton tracking data. | Opportunity++ enables a wide range of novel multimodal activity recognition research based on video data, ambient- and object-integrated sensors, and wearable sensors (classification, automatic data segmentation, sensor fusion, feature extraction, etc.). | Body-worn sensors: 7 inertial measurement units, 12 3D acceleration sensors, 4 3D localization information\\ Object sensors: 12 objects with 3D acceleration and 2D rate of turn\\ Ambient sensors: 13 switches and 8 3D acceleration sensors\\ Side-view video\\ Motion capture from video using OpenPose | 4 | C | Modes of locomotion and postures | 17 gestures in the Drill runs, larger number in the ADL runs | [[https://www.frontiersin.org/articles/10.3389/fcomp.2021.792065/full|Dataset publication]] | [[daniel.roggen@ieee.org|Daniel Roggen]] and colleagues (see publication) | [[https://ieee-dataport.org/open-access/opportunity-multimodal-dataset-video-and-wearable-object-and-ambient-sensors-based-human|Available on IEEE DataPort]] |
| HCI Tabletop Gestures {{ :wiki:dataset:hcitable:hcitable-logo.png?direct&100 |}} | 39 writing gestures using the Palm alphabet, performed in 3 sizes and on several touch surfaces: using a mouse sitting and standing, using a tablet standing, and using a touch table sitting and standing. | Gesture recognition | Three 9-DoF IMUs at the finger, hand and wrist; one AHRS at the wrist (9-DoF IMU + orientation as quaternion); screen coordinates (48 attributes) | 10 | Continuous recording in dataset | - | 39 Palm alphabet gestures (numbers, letters and symbols).\\ 5 instances of each gesture per size and per touch surface. | None | [[daniel.roggen@ieee.org|Daniel Roggen]] | {{:wiki:dataset:hcitable:hcitable_release_2022_02_13.zip|}} |
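
For orientation, below is a minimal Python sketch of how one recording of the Daphnet Freezing of Gait dataset might be loaded. It assumes the space-separated column layout described in the dataset documentation (time in milliseconds, nine acceleration channels from the ankle, upper leg and trunk sensors, and a final annotation column where 0 marks samples outside the experiment, 1 no freeze and 2 freeze); the file name in the example is illustrative, so check the README in the release archive before relying on it.

<code python>
import numpy as np

# Assumed Daphnet FoG file layout (see the dataset README / UCI page):
#   column 0    : time [ms]
#   columns 1-9 : 3D acceleration of ankle, upper leg and trunk sensors [mg]
#   column 10   : annotation (0 = outside experiment, 1 = no freeze, 2 = freeze)
COL_TIME, COL_LABEL = 0, 10

def load_daphnet_file(path):
    """Load one Daphnet recording and drop samples outside the experiment."""
    data = np.loadtxt(path)                    # space-separated plain text
    data = data[data[:, COL_LABEL] != 0]       # keep only annotated samples
    t = data[:, COL_TIME] / 1000.0             # timestamps in seconds
    x = data[:, 1:COL_LABEL]                   # 9 acceleration channels
    y = (data[:, COL_LABEL] == 2).astype(int)  # 1 = freeze, 0 = normal gait
    return t, x, y

# Example (file name illustrative):
# t, x, y = load_daphnet_file("dataset_fog_release/dataset/S01R01.txt")
</code>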
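Since several of these datasets are continuous recordings (the "C" entries in the table), a common first processing step for activity recognition is to cut the signal into fixed-length sliding windows before feature extraction or classification. The following generic sketch is not tied to any particular dataset; window length and step are free parameters.

<code python>
import numpy as np

def sliding_windows(x, y, length, step):
    """Segment a continuous recording into fixed-length windows.

    x: (n_samples, n_channels) sensor matrix
    y: (n_samples,) integer per-sample class labels
    Each window is labelled with the majority label of its samples.
    """
    windows, labels = [], []
    for start in range(0, len(x) - length + 1, step):
        seg = x[start:start + length]
        lab = y[start:start + length].astype(int)          # bincount needs ints
        windows.append(seg)
        labels.append(int(np.bincount(lab).argmax()))      # majority vote
    return np.stack(windows), np.array(labels)

# Example: for a recording sampled at e.g. 64 Hz, length=256 and step=128
# give 4 s windows with 50% overlap:
# X, Y = sliding_windows(x, y, length=256, step=128)
</code>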