Context
This dataset was recorded as part of an investigation into machine learning algorithms for iOS. 20,136 glyphs were drawn by 257 subjects on the touch screen of an iPhone 6.
An iOS app was developed to record the dataset. First, subjects entered their age, sex, nationality and handedness. Each subject was then instructed to draw the digits 0 to 9 on the touchscreen using their index finger and thumb. This was repeated four times per subject, resulting in 80 glyphs per subject: 40 drawn with the index finger and 40 with the thumb. The sequence of glyph entry was random. Instructions were provided by voice synthesis to avoid suggesting a specific glyph rendering.
Both the index finger and the thumb were used to account for situations in which a subject may have only one hand free. The aim was to train a model that could accurately classify glyphs in as many real-life scenarios as possible.
Cubic interpolation of touches during gesture input was rendered on the screen to provide visual feedback to the subject and to compute arclengths. The screen was initially blank (white) and the gestures were displayed in black.
The subject could use most of the screen to draw, with small areas at the top and bottom reserved for instructions, interaction and guidance.
The subject was permitted to erase and repeat the entry, if desired.
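The paper does not specify the exact cubic scheme used for the on-screen rendering and arclength computation mentioned above; a minimal sketch of one plausible approach, chordal-parameterised cubic-spline interpolation through the recorded touch points (using SciPy's `CubicSpline`, an assumption rather than the authors' implementation), could look like:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def stroke_arclength(points, samples_per_segment=20):
    """Approximate a stroke's arclength by cubic interpolation of its touches.

    points: sequence of (x, y) touch coordinates in stroke order; consecutive
    touches are assumed distinct (duplicates would break the parameterisation).
    """
    pts = np.asarray(points, dtype=float)
    if len(pts) < 2:
        return 0.0
    # Chordal parameterisation: cumulative straight-line distance between touches.
    t = np.concatenate([[0.0], np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))])
    if len(pts) < 4:
        # Too few touches for a stable cubic fit; fall back to the polyline length.
        return float(t[-1])
    spline = CubicSpline(t, pts)  # fits x(t) and y(t) jointly
    dense = spline(np.linspace(t[0], t[-1], samples_per_segment * (len(pts) - 1)))
    # Arclength estimate: sum of segment lengths along the densely sampled curve.
    return float(np.sum(np.linalg.norm(np.diff(dense, axis=0), axis=1)))
```

For collinear touches the spline degenerates to a straight line, so the estimate matches the polyline length; for curved strokes it will be slightly longer than the raw polyline.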
Content
![Database Schema][1]
The database consists of four tables, as seen in the schema: Subject, Glyph, Stroke and Touch. This structure is logical: each subject draws 80 glyphs, each glyph consists of a number of strokes, and each stroke consists of a number of touches. The four tables are provided in both CSV and SQLite formats.
Note that, in the files below, all column names start with a capital Z. This prefix is automatically prepended by Core Data, Apple's database framework. Columns whose names start with Z_ were created automatically by Core Data and hence do not appear in the schema above.
The tables are connected through the first column in each table (Z_PK). This primary key is referenced by the corresponding foreign-key column in the related table. For example, the subject who entered any given glyph can be found by taking the value in the ZSUBJECT column of the glyph table and finding the matching Z_PK value in the subject table.
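The Z_PK/ZSUBJECT link described above can be followed with an ordinary SQL join. The sketch below builds a tiny in-memory database that mimics the schema; the table names (ZSUBJECT, ZGLYPH) and the extra columns (ZAGE, ZSEX, ZCHARACTER) are assumptions following the Core Data naming convention, so check them against the shipped SQLite file before reusing the query:

```python
import sqlite3

# Minimal in-memory sketch of the Z_PK/ZSUBJECT relationship.
# Table and column names beyond Z_PK and ZSUBJECT are hypothetical.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ZSUBJECT (Z_PK INTEGER PRIMARY KEY, ZAGE INTEGER, ZSEX TEXT);
CREATE TABLE ZGLYPH   (Z_PK INTEGER PRIMARY KEY, ZCHARACTER INTEGER, ZSUBJECT INTEGER);
INSERT INTO ZSUBJECT VALUES (1, 27, 'F');
INSERT INTO ZGLYPH   VALUES (1, 7, 1), (2, 3, 1);
""")

# Find the subject who drew each glyph by matching ZGLYPH.ZSUBJECT to ZSUBJECT.Z_PK.
rows = con.execute("""
    SELECT g.Z_PK, g.ZCHARACTER, s.ZAGE, s.ZSEX
    FROM ZGLYPH AS g
    JOIN ZSUBJECT AS s ON g.ZSUBJECT = s.Z_PK
""").fetchall()
print(rows)
```

The same join pattern chains down the hierarchy (Glyph to Stroke to Touch) to reconstruct the full touch sequence of any glyph.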
Some questions to get you started...
* What is the best model for classifying glyphs?
* What is the best model for classifying sequences of these glyphs?
* What is the best model to predict what number a glyph is before completion?
* How much of the glyph needs to be completed before a prediction can be made?
* What is the best method for interpolating between the touches in the dataset?
* How can a trained model be integrated into iOS apps?
CITATION REQUEST
Please cite the following paper in any publications reporting on use of this dataset:
Philip J. Corr, Guenole C. Silvestre, Chris J. Bleakley
Open Source Dataset and Deep Learning Models for Online Digit Gesture Recognition on Touchscreens
Irish Machine Vision and Image Processing Conference (IMVIP) 2017
Maynooth, Ireland, 30 August-1 September 2017
http://arxiv.org/abs/1709.06871
[1]: https://raw.githubusercontent.com/PhilipCorr/numeral-gesture-dataset/master/database.png