Data size: 57.45 GB
README.md
The tasks are based on BDD100K, the largest driving video dataset to date supporting heterogeneous multi-task learning. It contains 100,000 videos representing more than 1,000 hours of driving experience with more than 100 million frames. The videos come with GPS/IMU data for trajectory information. The BDD100K dataset now provides annotations for 10 tasks: image tagging, lane detection, drivable area segmentation, object detection, semantic segmentation, instance segmentation, multi-object detection tracking, multi-object segmentation tracking, domain adaptation, and imitation learning. These diverse tasks make the study of heterogeneous multi-task learning possible.
For the CVPR 2020 Workshop on Autonomous Driving, we host the multi-object detection tracking challenge on CodaLab detailed below. Challenges on the other tasks will be announced on our dataset website.
Video Data
Explore 100,000 HD video sequences covering over 1,100 hours of driving experience across many different
times of the day, weather conditions, and driving scenarios. Our video sequences also include
GPS locations, IMU data, and timestamps.
Road Object Detection
2D Bounding Boxes annotated on 100,000 images for
bus, traffic light, traffic sign, person, bike, truck, motor, car, train, and rider.
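As a rough illustration of how such 2D box annotations are typically consumed, the sketch below parses a single per-image JSON record. The field names (`name`, `labels`, `category`, `box2d`) follow a simplified, assumed layout; consult the official BDD100K label documentation for the authoritative schema.

```python
import json

# Hypothetical, simplified BDD100K-style detection record; the real
# label files should be checked for the authoritative schema.
record = json.loads("""
{
  "name": "example.jpg",
  "labels": [
    {"category": "car",
     "box2d": {"x1": 100.0, "y1": 200.0, "x2": 180.0, "y2": 260.0}}
  ]
}
""")

# Collect (category, box) pairs for every annotated object in the frame,
# skipping labels that carry no 2D box (e.g. lane or drivable-area labels).
boxes = [(lab["category"],
          (lab["box2d"]["x1"], lab["box2d"]["y1"],
           lab["box2d"]["x2"], lab["box2d"]["y2"]))
         for lab in record["labels"] if "box2d" in lab]

print(boxes)
```

The same loop scales to a directory of label files by reading each JSON file in turn and accumulating the per-image box lists.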
Instance Segmentation
Explore over 10,000 diverse images with pixel-level and rich instance-level annotations.
Drivable Area
Learn complicated drivable-area decisions from 100,000 images.
Lane Markings
Multiple types of lane marking annotations on 100,000 images for driving guidance.
Data use statement:
- 1. This data comes from internet data collection or is supplied by service providers; this platform provides users with display and browsing of the dataset.
- 2. This platform only displays basic dataset information, including but not limited to image, text, video, and audio file types.
- 3. The basic dataset information comes from the original data source or from information supplied by the data provider; if the dataset description differs, the original data source or the service provider's original address prevails.
- 4. The copyright of all datasets on this site belongs to the original data publisher or the data provider.
- 5. If you need to repost data from this site, please retain the original data address and the related copyright notices.
- 6. If any data displayed on this site involves infringement, please contact this site promptly and we will arrange to take the data offline.