Context
Snake Eyes is a dataset of tiny images simulating dice.
![Snake Eyes example pictures][1]
Invariance to translation and rotation is an important attribute we would like image classifiers to have in many applications. For many problems, even if there doesn't seem to be a lot of translation in the data, augmenting it with these transformations is often beneficial. There are not many datasets where these transformations are clearly relevant, though. The "Snake Eyes" dataset seeks to provide a problem where rotation and translation are clearly fundamental, and not just something intuitively believed to be involved.
Image classifiers are frequently used in a pipeline where a bounding box is first extracted from the complete image, and this step may hand the classifier centered data. Some translation might still be present in what the classifier sees, though, keeping the phenomenon relevant to classification. A Snake Eyes classifier could clearly benefit from such pre-processing, but the point here is to learn how much a classifier can do by itself. In particular, we would like to demonstrate the "built-in" invariance to translations of CNNs.
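One way to quantify how much of that invariance a trained model has actually picked up is to compare its predictions on shifted copies of the same image. Here is a minimal sketch of such a check; the `predict` callable is a hypothetical stand-in for any trained classifier, and the dummy at the bottom exists only so the sketch runs end to end:

```python
import numpy as np

def translation_consistency(predict, img, shifts=((1, 0), (0, 1), (2, 2))):
    """Fraction of small shifts that leave the predicted class unchanged.

    `predict` maps a 20x20 array to a class label (1-12). np.roll wraps
    around the border, which is harmless for these dark-background images
    as long as the dice sit away from the edges."""
    base = predict(img)
    same = sum(predict(np.roll(img, s, axis=(0, 1))) == base for s in shifts)
    return same / len(shifts)

# Dummy stand-in classifier (replace with a real trained model).
predict = lambda im: int(im.sum()) % 12 + 1
print(translation_consistency(predict, np.zeros((20, 20), dtype=np.uint8)))
```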
Content
Snake Eyes contains artificial images simulating the roll of one or two dice. The face patterns were modified to contain at most 3 black spots, making it impossible to solve the problem by merely counting them. The data was synthesized using a Python program, each image produced from a set of floating-point parameters modeling the position and angle of each die.
![Snake Eyes face patterns, with distinctive missing pips][2]
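The generator itself is not included here, but the parameterization is easy to picture: each die is described by a position (x, y) and an angle theta, and pip offsets are rotated and translated before rasterization. A toy sketch of the idea follows; this is not the author's code, and the pip pattern, radius, and soft-edge rendering are simplified assumptions:

```python
import numpy as np

def render_die(x, y, theta, pips, size=20, pip_r=1.3):
    """Toy renderer: place one die's pips given its centre (x, y) and
    rotation theta. `pips` lists pip offsets (u, v) in die-local
    coordinates; the clipped ramp gives a crude anti-aliased edge."""
    c, s = np.cos(theta), np.sin(theta)
    yy, xx = np.mgrid[0:size, 0:size].astype(float)
    img = np.zeros((size, size))
    for u, v in pips:
        px = x + c * u - s * v   # rotate the pip offset, then translate
        py = y + s * u + c * v
        d = np.hypot(xx - px, yy - py)
        img = np.maximum(img, np.clip(pip_r + 0.5 - d, 0.0, 1.0))
    return img

# A hypothetical 3-pip face, centred at (10, 10), rotated 30 degrees.
img = render_die(10.0, 10.0, np.radians(30), [(-3, -3), (0, 0), (3, 3)])
```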
The data format is binary, with records of 401 bytes. The first byte contains the class (1 to 12; note it does not start at 0), and the other 400 bytes are the image rows. We offer 1 million images, split into 10 files with 100k records each, plus an extra test set with 10,000 images.
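Since the records are fixed-width, a whole file can be read in a single NumPy call. A minimal sketch, assuming the format above (the file name is illustrative; use the actual names from the download):

```python
import numpy as np

def load_snake_eyes(path):
    """Read one data file: 401-byte records, first byte is the label
    (1-12), the remaining 400 bytes are the 20x20 image, row by row."""
    raw = np.fromfile(path, dtype=np.uint8).reshape(-1, 401)
    return raw[:, 1:].reshape(-1, 20, 20), raw[:, 0].astype(np.int64)

images, labels = load_snake_eyes("snakeeyes_00.dat")
assert labels.min() >= 1 and labels.max() <= 12
```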
Inspiration
We were inspired by the popular "tiny image" datasets often studied in ML research: MNIST, CIFAR-10 and Fashion-MNIST. Our images are smaller, though, only 20x20, with 12 classes. The reduced size should help approximate the actual 3D and 6D manifolds of each class (position and angle per die, for one or two dice) with the available number of data points (1 million images).
The data is artificial, with limited and very well-defined patterns, noise-free and properly anti-aliased. This is not about improving from 95% to 97% accuracy and wondering if 99% is possible with a deeper network: we expect essentially any method to eventually reach 100% accuracy. What we are interested in seeing is how different methods compare in efficiency, how hard different models are to train, and how translation and rotation invariance is enforced or achieved.
We are also interested in studying the concept of manifold learning. The data has some intra-class variability due to the different face combinations possible with two dice, but most of the variation comes from translation and rotation. We hope to have sampled enough data to really allow these manifolds to be extracted in 400 dimensions, and to investigate topics such as the role of pre-training, and the relation between modeling the manifold of the whole dataset and the manifolds of the separate classes.
Translations alone already create quite non-convex manifolds, but our classes also have the property that some linear combinations of images belong to a different class (e.g. two images from the "2" face combine into an image from the "4" class). We are curious to see how this property makes the problem more challenging for different techniques.
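Reusing the toy `render_die` from the sketch above, this composition property can be illustrated directly; the positions, angles, and pip offsets here are again assumptions, not values from the real generator:

```python
import numpy as np

# Two single-die "2" images, composited pixel-wise, form a plausible
# two-dice "4"-class image (assuming bright pips on a dark background,
# as in the toy renderer above).
two_pips = [(-2.5, -2.5), (2.5, 2.5)]
a = render_die(5.0, 5.0, 0.0, two_pips)
b = render_die(14.0, 14.0, np.radians(45.0), two_pips)
four = np.maximum(a, b)   # pixel-wise max keeps both dice's pips
```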
We are also secretly hoping to have created the image-classification version of the infamous "spiral" problem for neural networks. We are offering the prize of one ham sandwich, collected at my local café, to the first person who manages to train a neural network to solve this problem, convolutional or not, using just traditional techniques such as logistic or ReLU activation functions and SGD training. 99% accuracy is enough. The resulting network may be susceptible to adversarial instances; that is fine, but we will be constantly complaining about it in your ear while you eat the sandwich.
[1]: https://i.imgur.com/gaD5UtQ.png
[2]: https://imgur.com/gIcZVLN.png