Name | Scenario | Original purpose | Sensors | Subjects | Segmented / Continuous | Static / periodic activities | Sporadic activities | Comments | Authors | Link |
---|---|---|---|---|---|---|---|---|---|---|
Skoda mini checkpoint | 10 manipulative gestures performed in a car maintenance scenario | Gesture recognition | 20 3D acceleration sensors (60 attributes) | 1 | Segmented and continuous recordings in dataset | - | 10: write notes, open engine hood, close engine hood, check door gaps, open door, close door, open/close two doors, check trunk gap, open/close trunk, check steering wheel. 70 instances of each gesture. | ACM TECS 2012, EWSN 2008, ISSNIP 2007 | Daniel Roggen, Piero Zappi | skodaminicp_2015_08.zip |
BodyAttack fitness | 6 fitness activity classes, done mostly with the legs. | Analyse effect of sensor displacement | 10 3D accelerometers on the leg | 1 | C | 6 activities: flick kicks; knee lifts; jumping jacks; superman jumps; high knee runs; feet back runs | - | ISWC 2009 | Daniel Roggen, Kilian Foerster | bafitness.zip |
HCI gestures | 5 gestures performed freehand or guided against a blackboard | Analyse effect of sensor displacement | 8 3D acceleration sensors (24 attributes) | 1 | Segmented and continuous recordings in dataset | - | 5 gestures: triangle up, square, circle, infinity, triangle down. 10 instances of freehand gestures, 60 instances of guided gestures | ISWC 2009 | Daniel Roggen, Kilian Foerster | hci.zip |
Daphnet Freezing of Gait Dataset in users with Parkinson's disease | Gait recordings of PD users who experience occasional freezing of gait | Detection of gait freeze | 3 3D acceleration sensors (9 attributes) | 10 | C | walk, freeze | - | - | Daniel Roggen, Marc Baechlin, Meir Plotnik, Jeffrey M. Hausdorff, Nir Giladi | dataset_fog_release.zip; also on the UCI ML repository |
Opportunity Dataset | Dataset of wearable, object, and ambient sensors recorded in a room simulating a studio flat where users performed early-morning cleanup and breakfast activities. The dataset comprises freely executed “activities of daily living” (ADL) and a more constrained “drill” run. | Reference benchmark dataset for human activity recognition algorithms (classification, automatic data segmentation, sensor fusion, feature extraction, etc.). | Body-worn sensors: 7 inertial measurement units, 12 3D acceleration sensors, 4 sources of 3D localization information; object sensors: 12 objects with 3D acceleration and 2D rate of turn; ambient sensors: 13 switches and 8 3D acceleration sensors | 4 | C | Modes of locomotion and postures | 17 gestures in the Drill runs, a larger number in the ADL runs | Dataset publication, challenge publication | Daniel Roggen and colleagues (see publications) | Available on the UCI ML repository |
Opportunity++ | A precisely annotated dataset designed to support AI and machine learning research on the multimodal perception and learning of human activities. Opportunity++ is a significant multimodal extension of the original OPPORTUNITY Activity Recognition Dataset, adding the original video recordings as well as video-derived skeleton tracking data. | Enables a wide range of novel multimodal activity recognition research based on video data, ambient- and object-integrated sensors, and wearable sensors (classification, automatic data segmentation, sensor fusion, feature extraction, etc.). | Body-worn sensors: 7 inertial measurement units, 12 3D acceleration sensors, 4 sources of 3D localization information; object sensors: 12 objects with 3D acceleration and 2D rate of turn; ambient sensors: 13 switches and 8 3D acceleration sensors; side-view video; motion capture from video using OpenPose | 4 | C | Modes of locomotion and postures | 17 gestures in the Drill runs, a larger number in the ADL runs | Dataset publication | Daniel Roggen and colleagues (see publication) | Available on IEEE DataPort |
HCI Tabletop Gestures | 39 writing gestures from the Palm alphabet performed in 3 sizes and on several touch surfaces: with a mouse while sitting and standing, with a tablet while standing, and with a touchtable while sitting and standing. | Gesture recognition | Three 9-DoF IMUs at the finger, hand and wrist; one AHRS at the wrist (9-DoF IMU + orientation as quaternion); screen coordinates (48 attributes) | 10 | Continuous recording in dataset | - | 39 Palm alphabet gestures (numbers, letters and symbols). 5 instances of each gesture per size and per touch surface. | - | Daniel Roggen | hcitable_release_2022_02_13.zip |
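
Several of the datasets above provide continuous (unsegmented) recordings, which are typically cut into fixed-length sliding windows before classification. The sketch below is a minimal, hedged illustration of that generic step only: the sampling rate, window length, channel count, and file layout are assumptions for demonstration, not taken from any specific dataset in the table (each archive's own README documents the actual format).

```python
import numpy as np

def sliding_windows(data, window_size, step):
    """Cut a continuous recording (samples x channels) into overlapping
    fixed-length windows, returned as (n_windows, window_size, channels)."""
    n_samples = data.shape[0]
    starts = range(0, n_samples - window_size + 1, step)
    return np.stack([data[s:s + window_size] for s in starts])

# Synthetic stand-in for a real recording: 10 s of a hypothetical
# 30-channel signal sampled at an assumed 100 Hz.
fs = 100
recording = np.random.randn(10 * fs, 30)

# 1 s windows with 50% overlap -- a common, but not dataset-specific, choice.
windows = sliding_windows(recording, window_size=fs, step=fs // 2)
print(windows.shape)  # (19, 100, 30)
```

Window length and overlap are design choices that depend on the activities of interest (short sporadic gestures usually call for shorter windows than periodic locomotion), so they should be tuned per dataset rather than copied from this sketch.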