README.md
Context
Behavioral Context refers to a wide range of attributes describing what is going on with you: where you are (home, school, work, at the beach, at a restaurant), what you are doing (sleeping, eating, in a meeting, doing computer work, exercising, showering), who you are with (family, friends, co-workers), your body posture and movement state (sitting, standing, walking, running), and so on.
The ability to automatically (effortlessly, frequently, objectively) recognize behavioral context can serve many domains. Medical applications can monitor physical activity or eating habits; aging-at-home programs can log older adults' physical, social, and mental behavior; and personal-assistant systems can better serve the user if they are aware of the context.
In-the-wild (in real life), natural behavior is complex, composed of different aspects, and has high variability. You can run outside at the beach, with friends, with your phone in your pocket; you can also run indoors, at the gym, on a treadmill, with your phone lying motionless next to you. This high variability makes context recognition a hard task to perform **in-the-wild**.
Content
The ExtraSensory Dataset was collected from 60 participants, each of whom participated for approximately 7 days. We installed our data-collection mobile app on their *personal phone*, and it was used to collect both sensor measurements and context labels. The sensor measurements were recorded automatically for a window of 20 seconds every minute. This included accelerometer, gyroscope, magnetometer, audio, location, and phone-state from the person's phone, as well as accelerometer and compass from an additional smartwatch that we provided. In addition, the app's interface had many mechanisms for self-reporting the relevant context labels, including reporting past context, near-future context, responding to notifications, and more. The flexible interface made it possible to collect many labels with minimal effort and interaction time, to avoid interfering with natural behavior. The data was collected in-the-wild: participants used their phone in any way that was convenient to them, engaged in their regular behavior, and reported any combination of labels that fit their context.
For every participant (or "user"), the dataset has a CSV file with labels and with pre-computed features that we extracted from the sensors. Each row is a separate example (representing 1 minute) and is indexed by the timestamp (seconds since the epoch). There are columns for the sensor features, with the prefix of the column name indicating the sensor it came from (e.g. the prefix "raw_acc:" indicates a feature computed from the raw phone-accelerometer measurements). There are columns for 51 diverse context labels, and the value for an example-label pair is either 1 (the label is relevant for the example), 0 (the label is not relevant), or 'NaN' (missing information).
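As a minimal sketch of how such a per-user file can be loaded and sliced in Python with pandas: the file name below is a placeholder, the index is assumed to be the first (timestamp) column, and label columns are assumed to be identifiable by a "label:" prefix, so check the actual header of your copy.

```python
import pandas as pd

# Placeholder file name: in the released archive each user has their own
# features-and-labels CSV file.
df = pd.read_csv("example_user.features_labels.csv", index_col=0)
# index_col=0: the first column holds the timestamp (seconds since the epoch).

# Sensor-feature columns carry a sensor prefix, e.g. "raw_acc:" for features
# computed from the raw phone-accelerometer measurements.
acc_features = [c for c in df.columns if c.startswith("raw_acc:")]

# Assumption: context-label columns start with "label:" (verify in your file).
label_cols = [c for c in df.columns if c.startswith("label:")]

X = df[acc_features]   # sensor features, one row per 1-minute example
Y = df[label_cols]     # 1 = relevant, 0 = not relevant, NaN = missing
print(X.shape, Y.shape, float(Y.isna().mean().mean()))
```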
Here, we provide data for 2 of the 60 participants. You can use this partial data to get familiar with the dataset's structure and to practice algorithms. The full dataset is publicly available at http://extrasensory.ucsd.edu. The website has additional parts of the data (such as a wider range of the originally reported labels, location coordinates, and mood labels from some of the participants). If you use the data for your publications, you are required to cite our original paper:
Vaizman, Y., Ellis, K., and Lanckriet, G. "Recognizing Detailed Human Context In-the-Wild from Smartphones and Smartwatches". IEEE Pervasive Computing, vol. 16, no. 4, October-December 2017, pp. 62-74.
Read the information at http://extrasensory.ucsd.edu and the original paper for more details.
Acknowledgements
The dataset was collected by Yonatan Vaizman and Katherine Ellis, under the supervision of Prof. Gert Lanckriet, all from the Department of Electrical and Computer Engineering, University of California, San Diego.
Inspiration
The ExtraSensory Dataset can serve as a benchmark to compare methods for context recognition (also called context awareness, activity recognition, or daily-activity detection). You can focus on specific sensors or on specific context labels. You can propose new models and classifiers, train them on the data, and evaluate their performance.
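As a rough illustration only, the sketch below trains an off-the-shelf classifier for a single context label from one user's file; the file name, feature prefix, and label-column name are placeholder assumptions, examples with a missing (NaN) target label are dropped, and a random split within one user is used for simplicity (a cross-user split would make a more meaningful benchmark).

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

df = pd.read_csv("example_user.features_labels.csv", index_col=0)

# Placeholder choices: accelerometer features only, one assumed label column.
feature_cols = [c for c in df.columns if c.startswith("raw_acc:")]
target = "label:SITTING"  # check the actual column names in your file

# Keep only the minutes where this label was reported (drop NaN targets),
# and fill missing feature values with zeros so the model can be fit.
reported = df[target].notna()
X = df.loc[reported, feature_cols].fillna(0.0)
y = df.loc[reported, target].astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("balanced accuracy:", balanced_accuracy_score(y_test, clf.predict(X_test)))
```

Balanced accuracy is chosen here because context labels tend to be heavily imbalanced; any other metric, feature set, or model can be substituted.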