Overview: In 2005, the Weizmann Institute in Israel released the Weizmann database. The database contains 10 actions (bend, jack, jump, pjump, run, side, skip, walk, wave1, wave2), with 9 different samples per action. The camera viewpoint is fixed, the background is relatively simple, and each frame contains only one person performing an action.
In addition to the class labels, the annotations include the foreground silhouettes of the actors and background sequences for background subtraction.
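The 10 actions performed by 9 people described above imply 90 sequences in total. A minimal sketch of enumerating them (the performer ids and the `<person>_<action>.avi` naming pattern are assumptions for illustration, not the archive's actual layout):

```python
# Enumerate the 90 Weizmann sequences (10 actions x 9 performers).
# Performer ids and the file-name pattern are hypothetical; check the
# downloaded archive for the real naming scheme.
ACTIONS = ["bend", "jack", "jump", "pjump", "run",
           "side", "skip", "walk", "wave1", "wave2"]
PERSONS = [f"person{i}" for i in range(1, 10)]  # 9 performers (assumed ids)

def sequence_names():
    return [f"{p}_{a}.avi" for a in ACTIONS for p in PERSONS]

names = sequence_names()
print(len(names))  # 90 sequences in total
```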
Abstract
Human action in video sequences can be seen as silhouettes of a moving torso and protruding limbs undergoing articulated
motion. We regard human actions as three-dimensional shapes induced by the silhouettes in the space-time volume. We adopt a
recent approach by Gorelick et al. for analyzing 2D shapes and generalize it to deal with volumetric space-time action
shapes. Our method utilizes properties of the solution to the Poisson equation to extract space-time features such as local
space-time saliency, action dynamics, shape structure and orientation. We show that these features are useful for action
recognition, detection and clustering. The method is fast, does not require video alignment and is applicable in
(but not limited to) many scenarios where the background is known. Moreover, we demonstrate the robustness of our method
to partial occlusions, non-rigid deformations, significant changes in scale and viewpoint, high irregularities in the
performance of an action and low quality video.
NEW! The PAMI paper (full version, updated results) in PDF (2MB) format (BibTeX).
Updated database - including original silhouette sequences and
their aligned version, as well as the robustness sequences, can be found below.
The ICCV paper (shorter version) in PDF (2MB) format (BibTeX).
Poisson features
We use the solution of the Poisson equation to extract several space-time features. In the table below we demonstrate these features for three sequences of different actions. The first two columns show the original video sequence and the extracted foreground mask. The third column shows the solutions of the Poisson equation, color-coded from blue (low values) to red (high values). The last three columns show the space-time ''saliency'', ''plateness'' and ''stickness'' features that we use. See the paper for details. Click the images below to play the full video sequences.
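The computation underlying these features solves the Poisson equation inside the silhouette (Laplacian of U equals -1 inside the shape, U = 0 outside). A hedged sketch using plain Jacobi iteration on a tiny 2D mask (the paper works on 3D space-time volumes, where the update simply gains two temporal neighbours; this is not the authors' implementation):

```python
# Solve  Laplacian(U) = -1  inside a binary mask with U = 0 outside,
# via Jacobi iteration: each interior cell becomes the average of its
# 4 neighbours plus a constant source term.
def poisson_solve(mask, iters=500):
    h, w = len(mask), len(mask[0])
    u = [[0.0] * w for _ in range(h)]
    for _ in range(iters):
        nxt = [[0.0] * w for _ in range(h)]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if mask[y][x]:
                    nxt[y][x] = 0.25 * (u[y-1][x] + u[y+1][x] +
                                        u[y][x-1] + u[y][x+1] + 1.0)
        u = nxt
    return u

# A 5x5 square "silhouette" inside a 7x7 frame.
mask = [[1 if 1 <= y <= 5 and 1 <= x <= 5 else 0 for x in range(7)]
        for y in range(7)]
u = poisson_solve(mask)
```

U peaks at the centre of the shape: high values mark the torso-like interior, low values the limb-like margins, which is the intuition behind the saliency-style features.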
Experimental Results
In the paper we report results for four experiments: action clustering, action recognition, robustness experiments and action detection. Here we show results of the last three.
Action Recognition:
We collected a database of 90 low-resolution (180 x 144, deinterlaced, 50 fps) video sequences showing nine different people, each performing 10 natural actions: run, walk, skip, jumping-jack (or jack), jump-forward-on-two-legs (jump), jump-in-place-on-two-legs (pjump), gallop-sideways (side), wave-two-hands (wave2), wave-one-hand (wave1), and bend.
In order to treat both periodic and non-periodic actions in the same framework, and to compensate for the different period lengths, we used a sliding window in time to extract space-time cubes, each having eight frames with an overlap of four frames between consecutive cubes.
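The windowing above (8-frame cubes, 4-frame overlap) amounts to a stride of four frames. A small sketch of the start indices it produces:

```python
# Sliding-window extraction of space-time cubes: 8-frame windows with a
# 4-frame overlap, i.e. a stride of length - overlap = 4 frames.
def cube_starts(n_frames, length=8, overlap=4):
    stride = length - overlap
    return list(range(0, n_frames - length + 1, stride))

print(cube_starts(20))  # [0, 4, 8, 12]
```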
Below we summarize our recognition rates in "leave-one-sequence-out" classification experiments for both complete sequences and sub-sequences.
Robustness Experiments:
In this experiment we demonstrate the robustness of our method to high irregularities in the performance of an action.
We collected ten test video sequences of people walking in various difficult scenarios in front of different non-uniform
backgrounds (see the sequences and their foreground masks below). We show that our approach has relatively low sensitivity
to partial occlusions, non-rigid deformations and other defects in the extracted space-time shape.
Click the images below to play the full video sequences.
Experiment results: The table below shows for each of the test sequences the first and second best choices and
their distances as well as the median distance to all the actions in our database. The test sequences are sorted
by the distance to their first best chosen action. All the sequences were classified as "walk".
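The statistics in that table, for one test sequence, can be sketched as the first and second closest database actions by distance plus the median distance over all actions. The distances below are made-up illustrative numbers, not the paper's results:

```python
import statistics

# For one test sequence: rank database actions by distance and report
# the two best matches and the median distance (as in the table above).
def rank_actions(dists):  # dists: {action_name: distance}
    ranked = sorted(dists.items(), key=lambda kv: kv[1])
    first, second = ranked[0], ranked[1]
    return first, second, statistics.median(dists.values())

dists = {"walk": 1.2, "run": 3.4, "skip": 3.9, "side": 4.4, "jump": 5.0}
first, second, med = rank_actions(dists)
print(first[0], med)  # walk 3.9
```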
Moreover, we demonstrate the robustness of our method to substantial changes in viewpoint. For this purpose we collected ten additional sequences, each showing the "walk" action captured from a different viewpoint (varying between 0° and 81° relative to the image plane, in steps of 9°). Note that sequences with angles approaching 90° contain significant changes in scale within the sequence.
All sequences with viewpoints between 0° and 54° were classified
correctly with a large relative gap between the first (true) and
the second closest actions (see table below). For larger viewpoints a
gradual deterioration occurs. This demonstrates the robustness
of our method to relatively large variations in viewpoint.
Action Detection in a Ballet Movie
This experiment shows action detection on a movie sequence of a ballet
dance, performed by the "Birmingham Royal Ballet" from the
"London Dance" website.
The original full video can also be found here (WMV format, 400KB). The task was to detect all instances of
the ''cabriole'' pas (the query) in the input video.
Click the images below to play the full video sequences.
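Detection by matching a query can be sketched as sliding over the video's space-time cubes and flagging those whose distance to the query descriptor falls below a threshold. The scalar descriptors, distance and threshold below are placeholders, not the paper's method:

```python
# Flag the indices of space-time cubes that match the query descriptor.
def detect(query, cubes, dist, thresh):
    return [i for i, c in enumerate(cubes) if dist(query, c) < thresh]

cubes = [0.9, 0.2, 0.8, 0.1, 0.95]   # toy per-cube descriptors
hits = detect(1.0, cubes, lambda a, b: abs(a - b), 0.15)
print(hits)  # [0, 4]
```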
BibTeX
The PAMI paper: @article{ActionsAsSpaceTimeShapes_pami07,
author = {Lena Gorelick and Moshe Blank and Eli Shechtman and Michal Irani and Ronen Basri},
title = {Actions as Space-Time Shapes},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
volume = {29},
number = {12},
pages = {2247--2253},
month = {December},
year = {2007},
ee = {www.wisdom.weizmann.ac.il/~vision/SpaceTimeActions.html}
}
The ICCV paper: @inproceedings{ActionsAsSpaceTimeShapes_iccv05,
author = {Moshe Blank and Lena Gorelick and Eli Shechtman and Michal Irani and Ronen Basri},
title = {Actions as Space-Time Shapes},
booktitle = {The Tenth IEEE International Conference on Computer Vision (ICCV'05)},
pages = {1395--1402},
location = {Beijing},
year = {2005},
ee = {www.wisdom.weizmann.ac.il/~vision/SpaceTimeActions.html},
}

Contact Details
For further details please contact the authors:
Lena Gorelick
Moshe Blank
Eli Shechtman