
Computer Vision: Image Datasets Commonly Used in Experiments

1. Sogou Labs dataset:

The web image collection comes from part of the data indexed by Sogou image search. It covers categories such as people, animals, buildings, machinery, landscapes and sports, with 2,836,535 images in total. For each image, the dataset provides the original image, a thumbnail, the page it came from, and the relevant text on that page. Over 200 GB in total.

2. ImageCLEF:

ImageCLEF, run under the Cross Language Evaluation Forum (CLEF), aims to provide benchmarks for image-related tasks (retrieval, classification, annotation, etc.). The competition has been held annually since 2003.

3. Xiaorong Li's datasets:

Maintained by Xiaorong Li (PhD, Intelligent Systems Lab Amsterdam), whose research is on video and image retrieval:

  • Flickr-3.5M: A collection of 3.5 million social-tagged images.
  • Social20: A ground-truth set for tag-based social image retrieval.
  • Biconcepts2012test: A ground-truth set for retrieving bi-concepts (concept pairs) in unlabeled images.
  • neg4free: A set of negative examples automatically harvested from social-tagged images for 20 PASCAL VOC concepts.
4. Wikipedia featured articles: images (and features) together with the corresponding wiki text. See the paper "A New Approach to Cross-Modal Multimedia Retrieval"; there is also "On the Role of Correlation and Abstraction in Cross-Modal Multimedia Retrieval", but that one has no download link yet.
http://www.svcl.ucsd.edu/projects/crossmodal/

5. NUS-WIDE:

To our knowledge, this is the largest real-world web image dataset comprising over 269,000 images with over 5,000 user-provided tags, and ground-truth of 81 concepts for the entire dataset. The dataset is much larger than the popularly available Corel and Caltech 101 datasets. Though some datasets comprise over 3 million images, they only have ground-truth for a small fraction of images. Our proposed NUS-WIDE dataset has the ground-truth for the entire dataset.


6.

7.

Jegou's datasets. Since Jegou works specifically on CBIR, the images come with retrieval ground truth but no category annotations.

8.

VGG's Oxford Buildings dataset, also a dedicated CBIR dataset.

9.

The dataset for the Microsoft Image Grand Challenge on Image Retrieval 
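
Items 7-9 are retrieval (CBIR) datasets with query-level ground truth, which are typically scored with (mean) average precision over ranked result lists. Below is a minimal sketch, assuming you already have, for each query, a ranked list of returned image IDs and the set of relevant IDs (both hypothetical inputs, not any dataset's official evaluation protocol):

```python
# Minimal average-precision sketch for CBIR-style evaluation.
# Assumes you already have, for each query, a ranked list of image IDs and
# the set of relevant (ground-truth) IDs; both are hypothetical inputs here.

def average_precision(ranked_ids, relevant_ids):
    """AP of one ranked list against a set of relevant items."""
    relevant_ids = set(relevant_ids)
    hits, precision_sum = 0, 0.0
    for rank, image_id in enumerate(ranked_ids, start=1):
        if image_id in relevant_ids:
            hits += 1
            precision_sum += hits / rank  # precision at this recall point
    return precision_sum / len(relevant_ids) if relevant_ids else 0.0

def mean_average_precision(results):
    """results: iterable of (ranked_ids, relevant_ids) pairs, one per query."""
    aps = [average_precision(r, g) for r, g in results]
    return sum(aps) / len(aps) if aps else 0.0

# Toy example:
print(average_precision(["a", "b", "c", "d"], {"a", "c"}))  # 0.8333...
```

Note that individual benchmarks often add their own conventions on top of this basic measure, so always follow the dataset's published protocol when reporting numbers.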

In addition, here is the collection of datasets compiled on cvpaper.


Detection

PASCAL VOC: Classification/Detection Competitions, Segmentation Competition, Person Layout Taster Competition datasets (a minimal annotation-parsing sketch follows at the end of this Detection list)
LabelMe is a web-based image annotation tool that allows researchers to label images and share the annotations with the rest of the
 community. If you use the database, we only ask that you contribute to it, from time to time, by using the labeling tool.
1521 images with human faces, recorded under natural conditions, i.e. varying illumination and complex background. The eye positions 
have been set manually.
Cars, Motorcycles, Airplanes, Faces, Leaves, Backgrounds
Pictures of objects belonging to 101 categories
Pictures of objects belonging to 256 categories
15,560 pedestrian and non-pedestrian samples (image cut-outs) and 6744 additional full images not containing pedestrians for
 bootstrapping. The test set contains more than 21,790 images with 56,492 pedestrian labels (fully visible or partially occluded), 
captured from a vehicle in urban traffic.
CVC Pedestrian Datasets
CBCL Pedestrian Database
CBCL Face Database
CBCL Car Database
CBCL Street Database
A large set of marked up images of standing or walking people
A set of car and non-car images taken in a parking lot nearby INRIA
A set of horse and non-horse images
3D skeletons and segmented regions for 1000 people in images
A large-scale vehicle detection dataset
10000 images of natural scenes, with 37 different logos, and 2695 logos instances, annotated with a bounding box.
10000 images of natural scenes grabbed on Flickr, with 2695 logos instances cut and pasted from the BelgaLogos dataset.
The dataset FlickrLogos-32 contains photos depicting logos and is meant for the evaluation of multi-class logo detection/recognition 
as well as logo retrieval methods on real-world images. It consists of 8240 images downloaded from Flickr.
30000+ frames with vehicle rear annotation and classification (car and trucks) on motorway/highway sequences. Annotation 
semi-automatically generated using laser-scanner data. Distance estimation and consistent target ID over time available.
Phos is a color image database of 15 scenes captured under different illumination conditions. More particularly, every scene
 of the database contains 15 different images: 9 images captured under various strengths of uniform illumination, and 6 images
 under different degrees of non-uniform illumination. The images contain objects of different shape, color and texture and can
 be used for illumination invariant feature detection and selection.
California-ND contains 701 photos taken directly from a real user's personal photo collection, including many challenging 
non-identical near-duplicate cases, without the use of artificial image transformations. The dataset is annotated by 10 different 
subjects, including the photographer, regarding near duplicates.
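
As a concrete example for the PASCAL VOC entry above: VOC annotations are per-image XML files listing each object's class and bounding box. Below is a minimal parsing sketch, assuming the standard VOC XML layout; the path in the usage comment is just a placeholder:

```python
# Minimal reader for one PASCAL VOC-style annotation file.
# Assumes the standard VOC XML layout; the example path is a placeholder.
import xml.etree.ElementTree as ET

def load_voc_boxes(xml_path):
    """Return a list of (class_name, xmin, ymin, xmax, ymax) tuples."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        name = obj.findtext("name")
        bbox = obj.find("bndbox")
        coords = tuple(int(float(bbox.findtext(k)))
                       for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name,) + coords)
    return boxes

# Hypothetical usage:
# print(load_voc_boxes("VOCdevkit/VOC2007/Annotations/000001.xml"))
```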

Classification

Classification/Detection Competitions, Segmentation Competition, Person Layout Taster Competition datasets
Cars, Motorcycles, Airplanes, Faces, Leaves, Backgrounds
Pictures of objects belonging to 101 categories (Caltech-101)
Pictures of objects belonging to 256 categories (Caltech-256; a folder-indexing sketch follows at the end of this Classification list)
A dataset for testing object class detection algorithms. It contains 255 test images and features five diverse shape-based 
classes (apple logos, bottles, giraffes, mugs, and swans).
17 Flower Category Dataset
A dataset for Attribute Based Classification. It consists of 30475 images of 50 animals classes with six pre-extracted 
feature representations for each image.
Dataset of 20,580 images of 120 dog breeds with bounding-box annotation, for fine-grained image categorization.
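
Several of the classification sets above, e.g. Caltech-101/256, are distributed as one directory per category. Below is a minimal indexing sketch, assuming that folder-per-class layout; the root path in the usage comment is a placeholder:

```python
# Minimal indexer for a folder-per-class dataset such as Caltech-101/256.
# Assumes one sub-directory per category; the root path below is a placeholder.
from pathlib import Path

def index_image_folder(root, extensions=(".jpg", ".jpeg", ".png")):
    """Return (image_path, class_name) pairs from a class-per-folder layout."""
    samples = []
    for class_dir in sorted(Path(root).iterdir()):
        if class_dir.is_dir():
            for img in sorted(class_dir.rglob("*")):
                if img.suffix.lower() in extensions:
                    samples.append((img, class_dir.name))
    return samples

# Hypothetical usage:
# samples = index_image_folder("101_ObjectCategories")
# print(len(samples), samples[0])
```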

Recognition

Face and Gesture Recognition Working Group FGnet
Feret
Face and Gesture Recognition Working Group FGnet
9971 images of 100 people
A database of face photographs designed for studying the problem of unconstrained face recognition (Labeled Faces in the Wild; a loading sketch follows at the end of this Recognition list)
Traffic Lights Recognition, Lara's public benchmarks.
The PubFig database is a large, real-world face dataset consisting of 58,797 images of 200 people collected from the internet. 
Unlike most other existing face datasets, these images are taken in completely uncontrolled situations with non-cooperative subjects.
The data set contains 3,425 videos of 1,595 different people. The shortest clip duration is 48 frames, the longest clip is 6,070 
frames, and the average length of a video clip is 181.3 frames.
The Microsoft Research Cambridge-12 Kinect gesture data set consists of sequences of human movements, represented as 
body-part locations, and the associated gesture to be recognized by the system.
This dataset contains 250 pedestrian image pairs + 775 additional images captured in a busy underground station for the research 
on person re-identification.
Face tracks, features and shot boundaries from our latest CVPR 2013 paper. It is obtained from 6 episodes of Buffy the Vampire
 Slayer and 6 episodes of Big Bang Theory.
ChokePoint is a video dataset designed for experiments in person identification/verification under real-world surveillance 
conditions. The dataset consists of 25 subjects (19 male and 6 female) in portal 1 and 29 subjects (23 male and 6 female) in portal 2.
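
For the Labeled Faces in the Wild entry above, one convenient way to get started is the scikit-learn loader, which downloads and caches the images on first use. A minimal sketch, assuming scikit-learn is installed (the filtering parameters shown are just illustrative choices):

```python
# Minimal LFW loading sketch via scikit-learn.
# Assumes scikit-learn is installed; the data is downloaded on first use.
from sklearn.datasets import fetch_lfw_people

lfw = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
print(lfw.images.shape)      # (n_samples, height, width) grayscale face crops
print(lfw.target_names[:5])  # names of the people kept after filtering
```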

Tracking

Walking pedestrians in busy scenarios from a bird's-eye view
Three pedestrian crossing sequences
The set was recorded in Zurich, using a pair of cameras mounted on a mobile platform. It contains 12'298 annotated pedestrians
 in roughly 2'000 frames.
BMP image sequences.
Data sets for tracking vehicles and people in aerial image sequences.
MIT traffic data set is for research on activity analysis and crowded scenes. It includes a traffic video sequence 90 minutes long, recorded by a stationary camera.

Segmentation

Ground truth database of 50 images with: Data, Segmentation, Labelling - Lasso, Labelling - Rectangle
Classification/Detection Competitions, Segmentation Competition, Person Layout Taster Competition datasets
Cows for object segmentation, Five video sequences for motion segmentation
Geometric Context Dataset: pixel labels for seven geometric classes for 300 images
This dataset contains videos of crowds and other high density moving objects. The videos are collected mainly from the BBC 
Motion Gallery and Getty Images website. The videos are shared only for the research purposes. Please consult the terms and
 conditions of use of these videos from the respective websites.
Contains hand-labelled pixel annotations for 38 groups of images, each group containing a common foreground. Approximately
 17 images per group, 643 images total.
200 gray level images along with ground truth segmentations
Image segmentation and boundary detection. Grayscale and color segmentations for 300 images, the images are divided into 
a training set of 200 images, and a test set of 100 images.
328 side-view color images of horses that were manually segmented. The images were randomly collected from the WWW.
10 videos as inputs, and segmented image sequences as ground-truth
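
Segmentation datasets like the ones above ship ground-truth masks, and predictions are commonly scored against them with intersection-over-union. A minimal sketch, assuming predicted and ground-truth masks are boolean NumPy arrays of the same shape (toy data only):

```python
# Minimal intersection-over-union sketch for binary segmentation masks.
# Both inputs are hypothetical boolean NumPy arrays of the same shape.
import numpy as np

def mask_iou(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:          # both masks empty: define IoU as perfect agreement
        return 1.0
    return np.logical_and(pred, gt).sum() / union

# Toy example:
pred = np.zeros((4, 4), bool); pred[1:3, 1:3] = True
gt = np.zeros((4, 4), bool);   gt[1:4, 1:4] = True
print(mask_iou(pred, gt))       # 4 / 9 ≈ 0.444
```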

Foreground/Background

For evaluating background modelling algorithms
Foreground/Background segmentation and Stereo dataset from Microsoft Cambridge
The SABS (Stuttgart Artificial Background Subtraction) dataset is an artificial dataset for pixel-wise evaluation of 
background models.
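
For background-modelling experiments on such data, OpenCV's built-in MOG2 subtractor is a common baseline. A minimal sketch, assuming OpenCV (cv2) is installed; the video filename is a placeholder:

```python
# Minimal background-subtraction sketch (not tied to any specific dataset).
# Assumes OpenCV is installed; "video.avi" is a placeholder path.
import cv2

cap = cv2.VideoCapture("video.avi")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)  # 0 = background, 255 = foreground, 127 = shadow
    cv2.imshow("foreground mask", fg_mask)
    if cv2.waitKey(30) == 27:          # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```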

Saliency Detection

AIM
120 Images / 20 Observers (Neil D. B. Bruce and John K. Tsotsos 2005).
27 Images / 40 Observers (O. Le Meur, P. Le Callet, D. Barba and D. Thoreau 2006).
100 Images / 31 Observers (Kootstra, G., Nederveen, A. and de Boer, B. 2008).
DOVES
101 Images / 29 Observers (van der Linde, I., Rajashekar, U., Bovik, A.C., Cormack, L.K. 2009).
912 Images / 14 Observers (Krista A. Ehinger, Barbara Hidalgo-Sotelo, Antonio Torralba and Aude Oliva 2009).
NUSEF
758 Images / 75 Observers (R. Subramanian, H. Katti, N. Sebe, M. Kankanhalli and T-S. Chua 2010).
235 Images / 19 Observers (Jian Li, Martin D. Levine, Xiangjing An and Hangen He 2011).
ECSSD contains 1000 natural images with complex foreground or background. For each image, the ground truth mask of 
salient object(s) is provided.

Video Surveillance

For the CAVIAR project a number of video clips were recorded acting out the different scenarios of interest. These include
people walking alone, meeting with others, window shopping, entering and exiting shops, fighting and passing out and last,
but not least, leaving a package in a public place.
ViSOR
ViSOR contains a large set of multimedia data and the corresponding annotations.

Multiview

Multiview stereo data sets: a set of images
Dinosaur, Model House, Corridor, Aerial views, Valbonne Church, Raglan Castle, Kapel sequence
Oxford colleges
Temple, Dino
Venus de Milo, Duomo in Pisa, Notre Dame de Paris
Dataset provided by Center for Machine Perception
CVLab dense multi-view stereo image database
Objects viewed from 144 calibrated viewpoints under 3 different lighting conditions
Images from 19 sites collected from a helicopter flying around Providence, RI. USA. The imagery contains approximately 
a full circle around each site.
24 scenarios recorded with 8 IP video cameras. The first 22 scenarios contain a fall and confounding events, the last 2
ones contain only confounding events.

Action

This dataset consists of a set of actions collected from various sports which are typically featured on broadcast television 
channels such as the BBC and ESPN. The video sequences were obtained from a wide range of stock footage websites 
including BBC Motion gallery, and GettyImages.
This dataset features video sequences that were obtained using a R/C-controlled blimp equipped with an HD camera mounted 
on a gimbal. The collection represents a diverse pool of actions featured at different heights and aerial viewpoints. Multiple
instances of each action were recorded at different flying altitudes which ranged from 400-450 feet and were performed by 
different actors.
It contains 11 action categories collected from YouTube.
Walk, Run, Jump, Gallop sideways, Bend, One-hand wave, Two-hands wave, Jump in place, Jumping Jack, Skip.
UCF50
UCF50 is an action recognition dataset with 50 action categories, consisting of realistic videos taken from YouTube.
ASLAN
The Action Similarity Labeling (ASLAN) Challenge.
The dataset was captured by a Kinect device. There are 12 dynamic American Sign Language (ASL) gestures, and 10 people. 
Each person performs each gesture 2-3 times.
Contains six types of human actions (walking, jogging, running, boxing, hand waving and hand clapping) performed several 
times by 25 subjects in four different scenarios: outdoors, outdoors with scale variation, outdoors with different clothes and
 indoors.
Hollywood-2 dataset contains 12 classes of human actions and 10 classes of scenes distributed over 3669 video clips and
approximately 20.1 hours of video in total.
This dataset contains 5 different collective activities: crossing, walking, waiting, talking, and queueing, and 44 short video
sequences some of which were recorded by consumer hand-held digital camera with varying view point.
The Olympic Sports Dataset contains YouTube videos of athletes practicing different sports.
Surveillance-type videos
The dataset is designed to be more realistic, natural and challenging for video surveillance domains, in terms of its resolution,
background clutter, diversity in scenes, and human activity/event categories, than existing action recognition datasets.
Collected from various sources, mostly from movies, and a small proportion from public databases, YouTube and Google 
videos. The dataset contains 6849 clips divided into 51 action categories, each containing a minimum of 101 clips.
Dataset of 9,532 images of humans performing 40 different actions, annotated with bounding-boxes.
Fully annotated dataset of RGB-D video data and data from accelerometers attached to kitchen objects capturing 25 people 
preparing two mixed salads each (4.5h of annotated data). Annotated activities correspond to steps in the recipe and include
 phase (pre-/ core-/ post) and the ingredient acted upon.

Human pose/Expression

Image stitching

Medical

Collection of endoscopic and laparoscopic (mono/stereo) videos and images

Misc

ZuBuD Image Database contains over 1005 images of Zurich city buildings.
The mall dataset was collected from a publicly accessible webcam for crowd counting and activity profiling research.
A busy traffic dataset for research on activity analysis and behaviour understanding.

Datasets from CVOnline

Index by Topic

Action Databases

Biological/Medical

Face Databases

Fingerprints

General Images

Gesture Databases

Image, Video and Shape Database Retrieval

Object Databases

People, Pedestrian, Eye/Iris, Template Detection/Tracking Databases

Segmentation

Surveillance

Textures

General Videos

  1. Large scale YouTube video dataset - 156,823 videos (2,907,447 keyframes) crawled from YouTube videos (Yi Yang)

Other Collections

Miscellaneous