Public Datasets
Data Structure: 1.9 GB
* The analysis above is automatically extracted by the system; the actual data shall prevail.
README.md
The ModelNet dataset contains 662 object categories, 127,915 CAD models, and 10 categories annotated with orientation. It aims to provide researchers in computer vision, computer graphics, robotics, and cognitive science with a comprehensive collection of 3D object models.
The dataset contains three subsets:
ModelNet10: a 10-category subset whose models have labeled orientations;
ModelNet40: 3D models from 40 categories;
Aligned40: orientation-aligned 3D models for the same 40 categories.
The ModelNet dataset was released in 2015 by the Princeton Vision & Robotics Labs. The accompanying paper is "3D ShapeNets: A Deep Representation for Volumetric Shapes" (Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao, CVPR 2015).
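The CAD models in these subsets are distributed as OFF meshes. The snippet below is a minimal loading sketch, not part of any official ModelNet tooling: it assumes the standard OFF layout (header line, vertex/face counts, vertex rows, triangular face rows), and the file path is purely illustrative.

```python
import numpy as np

def load_off(path):
    """Read an OFF mesh into (vertices, faces) NumPy arrays."""
    with open(path) as f:
        header = f.readline().strip()
        if header == "OFF":
            counts = f.readline().split()
        else:
            # Some ModelNet files fuse the counts onto the header line, e.g. "OFF490 518 0".
            counts = header[3:].split()
        n_verts, n_faces = int(counts[0]), int(counts[1])
        # Vertex rows: "x y z"
        verts = np.array([list(map(float, f.readline().split())) for _ in range(n_verts)])
        # Face rows: "k i0 i1 ... i(k-1)"; ModelNet faces are triangles (k == 3).
        faces = np.array([list(map(int, f.readline().split()))[1:] for _ in range(n_faces)])
    return verts, faces

# Hypothetical path, for illustration only.
verts, faces = load_off("ModelNet10/chair/train/chair_0001.off")
print(verts.shape, faces.shape)
```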
ModelNet Benchmark Leaderboard
Please email Shuran Song to add or update your results.
In your email, please provide the following information in this format:
Algorithm Name, ModelNet40 Classification, ModelNet40 Retrieval, ModelNet10 Classification, ModelNet10 Retrieval
Author list, Paper title, Conference, link to paper.
Example:
3D-DescriptorNet, -, -, 92.4%, -
Jianwen Xie, Zilong Zheng, Ruiqi Gao, Wenguan Wang, Song-Chun Zhu, and
Ying Nian Wu, Learning Descriptor Networks for 3D Shape Synthesis and
Analysis. CVPR 2018, http://...
Algorithm | ModelNet40 Classification (Accuracy) | ModelNet40 Retrieval (mAP) | ModelNet10 Classification (Accuracy) | ModelNet10 Retrieval (mAP) |
---|---|---|---|---|
RS-CNN[63] | 93.6% | - | - | - |
LP-3DCNN[62] | 92.1% | - | 94.4% | - |
LDGCNN[61] | 92.9% | - | - | - |
Primitive-GAN[60] | 86.4% | - | 92.2% | - |
3DCapsule [59] | 92.7% | - | 94.7% | - |
3D2SeqViews [58] | 93.40% | 90.76% | 94.71% | 92.12% |
OrthographicNet [57] | - | - | 88.56% | 86.85% |
Ma et al. [56] | 91.05% | 84.34% | 95.29% | 93.19% |
MLVCNN [55] | 94.16% | 92.84% | - | - |
iMHL [54] | 97.16% | - | - | - |
HGNN [53] | 96.6% | - | - | - |
SPNet [52] | 92.63% | 85.21% | 97.25% | 94.20% |
MHBN [51] | 94.7% | - | 95.0% | - |
VIPGAN [50] | 91.98% | 89.23% | 94.05% | 90.69% |
Point2Sequence [49] | 92.60% | - | 95.30% | - |
Triplet-Center Loss [48] | - | 88.0% | - | - |
PVNet[47] | 93.2% | 89.5% | - | - |
GVCNN[46] | 93.1% | 85.7% | - | - |
MLH-MV[45] | 93.11% | - | 94.80% | - |
MVCNN-New[44] | 95.0% | - | - | - |
SeqViews2SeqLabels[43] | 93.40% | 89.09% | 94.82% | 91.43% |
G3DNet[42] | 91.13% | - | 93.1% | - |
VSL [41] | 84.5% | - | 91.0% | - |
3D-CapsNets[40] | 82.73% | 70.1% | 93.08% | 88.44% |
KCNet[39] | 91.0% | - | 94.4% | - |
FoldingNet[38] | 88.4% | - | 94.4% | - |
binVoxNetPlus[37] | 85.47% | - | 92.32% | - |
DeepSets[36] | 90.3% | - | - | - |
3D-DescriptorNet[35] | - | - | 92.4% | - |
SO-Net[34] | 93.4% | - | 95.7% | - |
Minto et al.[33] | 89.3% | - | 93.6% | - |
RotationNet[32] | 97.37% | - | 98.46% | - |
LonchaNet[31] | - | - | 94.37% | - |
Achlioptas et al. [30] | 84.5% | - | 95.4% | - |
PANORAMA-ENN [29] | 95.56% | 86.34% | 96.85% | 93.28% |
3D-A-Nets [28] | 90.5% | 80.1% | - | - |
Soltani et al. [27] | 82.10% | - | - | - |
Arvind et al. [26] | 86.50% | - | - | - |
LonchaNet [25] | - | - | 94.37% | - |
3DmFV-Net [24] | 91.6% | - | 95.2% | - |
Zanuttigh and Minto [23] | 87.8% | - | 91.5% | - |
Wang et al. [22] | 93.8% | - | - | - |
ECC [21] | 83.2% | - | 90.0% | - |
PANORAMA-NN [20] | 90.7% | 83.5% | 91.1% | 87.4% |
MVCNN-MultiRes [19] | 91.4% | - | - | - |
FPNN [18] | 88.4% | - | - | - |
PointNet[17] | 89.2% | - | - | - |
Klokov and Lempitsky[16] | 91.8% | - | 94.0% | - |
LightNet[15] | 88.93% | - | 93.94% | - |
Xu and Todorovic[14] | 81.26% | - | 88.00% | - |
Geometry Image [13] | 83.9% | 51.3% | 88.4% | 74.9% |
Set-convolution [11] | 90% | - | - | - |
PointNet [12] | - | - | 77.6% | - |
3D-GAN [10] | 83.3% | - | 91.0% | - |
VRN Ensemble [9] | 95.54% | - | 97.14% | - |
ORION [8] | - | - | 93.8% | - |
FusionNet [7] | 90.8% | - | 93.11% | - |
Pairwise [6] | 90.7% | - | 92.8% | - |
MVCNN [3] | 90.1% | 79.5% | - | - |
GIFT [5] | 83.10% | 81.94% | 92.35% | 91.12% |
VoxNet [2] | 83% | - | 92% | - |
DeepPano [4] | 77.63% | 76.81% | 85.45% | 84.18% |
3DShapeNets [1] | 77% | 49.2% | 83.5% | 68.3% |
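For reference, the "Retrieval (mAP)" columns report mean average precision over per-query ranked lists. The sketch below shows one common way to compute it, assuming every other shape of the query's class counts as relevant and ranking by Euclidean distance between shape descriptors; the function names are illustrative, and details such as the distance metric and query handling may differ from the official evaluation.

```python
import numpy as np

def average_precision(ranked_labels, query_label):
    """AP for one query, given the class labels of retrieved shapes in ranked order."""
    relevant = (np.asarray(ranked_labels) == query_label)
    if relevant.sum() == 0:
        return 0.0
    hits = np.cumsum(relevant)                       # number of relevant items up to each rank
    ranks = np.arange(1, len(ranked_labels) + 1)
    return float((hits / ranks)[relevant].mean())    # precision averaged over relevant positions

def mean_average_precision(features, labels):
    """mAP over all queries, ranking the gallery by Euclidean distance in feature space."""
    features, labels = np.asarray(features), np.asarray(labels)
    aps = []
    for i in range(len(features)):
        dists = np.linalg.norm(features - features[i], axis=1)
        order = np.argsort(dists)
        order = order[order != i]                    # exclude the query itself
        aps.append(average_precision(labels[order], labels[i]))
    return float(np.mean(aps))
```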
A friendly note from Payititi
This dataset is still being organized; alternative channels have been prepared for you, please use them.
Data usage statement:
- 1. The data comes from Internet collection or from service providers; this platform only displays the datasets for browsing.
- 2. This platform only presents basic information about the datasets, including but not limited to image, text, video, and audio file types.
- 3. The basic dataset information comes from the original data source or from the data provider; if the dataset description differs, the original source or the provider's original page shall prevail.
- 4. The copyright of all datasets on this site belongs to the original publishers or data providers.
- 5. If you repost data from this site, please retain the original data address and the related copyright notice.
- 6. If any data displayed on this site involves infringement, please contact us promptly and we will take the data offline.