Yet Another Computer Vision Index To Datasets (YACVID) - Details

As of: 2017-08-19 09:16:06 - Overview

Name: Annotated Web Ears Dataset (AWE Dataset)
Description: The dataset contains 1,000 images of 100 persons (10 images per person) and is freely available. All images were obtained by cropping ears from internet images of known persons. No special regard was given to pose, lighting, or occlusions, so the images are as in-the-wild as possible. This is important because, to the best of our knowledge, all other freely available ear datasets contain images taken under supervised laboratory conditions with little pose or lighting variation.

All images are stored as PNG files; sizes range from 15x29 pixels to 473x1022 pixels, with an average size of 83x160. Annotation data is stored in JSON format with the following properties for each image: gender, ethnicity, accessories, occlusions, head pitch, head roll, head yaw, and head side (a loading sketch is given at the end of this entry).

Please cite the following paper: Ž. Emeršič, V. Štruc, and P. Peer: "Ear Recognition: More than a Survey", Neurocomputing, 2016.
URL: http://awe.fri.uni-lj.si
Files (#): 1000
References: (SKIPPED)
Category: (SKIPPED)
Tags: ear biometry person pedestrian recognition human lighting
Last Changed: 2017-08-19
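Loading sketch: the JSON annotations lend themselves to a simple loading loop. The Python sketch below is purely illustrative; the directory layout, the "annotations.json" file name, and the exact JSON key spellings are assumptions and should be checked against the dataset download at http://awe.fri.uni-lj.si. Only the list of annotated properties comes from the description above.

    import json
    from pathlib import Path

    from PIL import Image  # third-party: pip install pillow

    DATASET_ROOT = Path("awe")            # assumed name of the extracted dataset directory
    ANNOTATION_FILE = "annotations.json"  # assumed name of a per-person annotation file

    for person_dir in sorted(p for p in DATASET_ROOT.iterdir() if p.is_dir()):
        ann_path = person_dir / ANNOTATION_FILE
        if not ann_path.exists():
            continue
        with ann_path.open(encoding="utf-8") as f:
            annotations = json.load(f)    # assumed: a dict keyed by image file name

        for image_name, props in annotations.items():
            with Image.open(person_dir / image_name) as img:
                width, height = img.size  # PNG images, 15x29 up to 473x1022 pixels
            # Property names follow the description above; exact key spellings are assumed.
            gender = props.get("gender")
            pitch = props.get("pitch")
            roll = props.get("roll")
            yaw = props.get("yaw")
            side = props.get("side")
            print(f"{person_dir.name}/{image_name}: {width}x{height}, "
                  f"gender={gender}, side={side}, pitch={pitch}, roll={roll}, yaw={yaw}")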