AI Reference Datasets
UFRC maintains a repository of reference AI datasets that can be accessed by all HiPerGator users. The primary purposes of this repository are researcher convenience, efficient use of filesystem space, and cost savings. Research groups do not have to use their Blue or Orange quota to host their own copies of these reference datasets.
Use https://support.rc.ufl.edu to request the addition of a reference dataset. All reference datasets hosted on HiPerGator must comply with Research Computing's AI reference dataset hosting policy.
Some software applications may need the full path /blue/data/ai (rather than the shorter /data/ai) to locate these files.
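As a minimal sketch of how a job script or analysis session can reference the shared copies, the helper below joins path components under the repository root. The `dataset_path` helper is illustrative (not a Research Computing utility), and it assumes the catalog paths on this page resolve under the full /blue/data/ai prefix:

```python
import os

# Shared AI reference dataset root on HiPerGator. Some applications
# require this full /blue path rather than the shorter /data/ai alias.
REF_DATA_ROOT = "/blue/data/ai/ref-data"

def dataset_path(*parts):
    """Join path components under the shared reference-data root."""
    return os.path.join(REF_DATA_ROOT, *parts)

# CIFAR-10 location taken from the catalog on this page.
cifar10_dir = dataset_path("image", "cifar-10-batches-py")
print(cifar10_dir)  # /blue/data/ai/ref-data/image/cifar-10-batches-py
```

Pointing a framework's data-root argument at a path built this way avoids downloading a private copy into your group's Blue or Orange quota.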
Catalog of available datasets
Path (size) | Name | Version [date] | License | Category | Description |
---|---|---|---|---|---|
/data/ai/ref-data/audio/free-spoken-digit-dataset-1.0.10 (20.4 MiB) | Free Spoken Digit Dataset (FSDD) | v1.0.10 [March 11, 2021] | Creative Commons Attribution-ShareAlike 4.0 International | Audio | A simple audio/speech dataset consisting of recordings of spoken digits in wav files at 8kHz. The recordings are trimmed so that they have near minimal silence at the beginnings and ends. |
/data/ai/ref-data/audio/FSD50K (32.2 GiB) | Freesound Dataset 50k (FSD50K) | 1.0 (10.5281/zenodo.4060432) [March 12, 2021] | Mixed Creative Commons licenses | Audio | FSD50K is an open dataset of human-labeled sound events containing 51,197 Freesound clips unequally distributed in 200 classes drawn from the AudioSet Ontology. |
/data/ai/ref-data/audio/LibriSpeech (59.4 GiB) | LibriSpeech ASR corpus | SLR12 [March 11, 2021] | Creative Commons Attribution 4.0 International | Audio | LibriSpeech is a corpus of approximately 1,000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned. |
/data/ai/ref-data/video/ADE20K (7.2 GiB) | ADE20K | [Aug 23, 2022] | Not reported | Computer vision | The ADE20K semantic segmentation dataset contains more than 20K scene-centric images exhaustively annotated with pixel-level object and object-part labels. There are 150 semantic categories in total, including stuff classes such as sky, road, and grass, and discrete objects such as person, car, and bed. |
/data/ai/ref-data/image/Celebfaces (8.5 GiB) | CelebA | [Aug 23, 2022] | Not reported | Computer vision | The CelebFaces Attributes dataset contains 202,599 face images of size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes such as hair color, gender, and age. |
/data/ai/ref-data/image/cifar-10-batches-py (177.6 MiB) | CIFAR-10 | [March 12, 2021] | Not reported | Computer vision | The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images. |
/data/ai/ref-data/image/cifar-100 (177.7 MiB) | CIFAR-100 | [Aug 23, 2022] | Not reported | Computer vision | The CIFAR-100 dataset consists of 60,000 32x32 colour images in 100 classes, with 600 images per class. There are 50,000 training images and 10,000 test images. The 100 classes are grouped into 20 superclasses. Each image comes with a "fine" label (the class it belongs to) and a "coarse" label (its superclass). |
/data/ai/ref-data/image/COIL (821.3 MiB) | COIL | [Aug 23, 2022] | Not reported | Computer vision | COIL-100 was collected by the Center for Research on Intelligent Systems at the Department of Computer Science, Columbia University. The dataset contains 7,200 color images of 100 objects (72 images per object). The objects were placed on a motorized turntable against a black background and images were taken at pose intervals of 5 degrees. This dataset was used in a real-time 100-object recognition system in which a system sensor could identify the object and display its angular pose. |
/data/ai/ref-data/image/FIRE (478.1 MiB) | FIRE: Fundus Image Registration Dataset | [Aug 23, 2022] | Not reported | Computer vision | The dataset consists of 129 retinal images forming 134 image pairs, split into 3 categories depending on their characteristics. The images were acquired with a Nidek AFC-210 fundus camera, which produces images with a resolution of 2912x2912 pixels and a FOV of 45° in both the x and y dimensions. Images were acquired from 39 patients at the Papageorgiou Hospital, Aristotle University of Thessaloniki, Thessaloniki. |
/data/ai/ref-data/image/google-landmark (588.0 GiB) | Google LandMarks Dataset | v2 [Aug 23, 2022] | Not reported | Computer vision | This is the second version of the Google Landmarks dataset (GLDv2), which contains images annotated with labels representing human-made and natural landmarks. The dataset can be used for landmark recognition and retrieval experiments. This version of the dataset contains approximately 5 million images, split into 3 sets of images: train, index and test. |
/data/ai/ref-data/video/Hollywood2 (40.4 GiB) | HOLLYWOOD-2 | [Aug 23, 2022] | Not reported | Computer vision | Hollywood-2 is a dataset with 12 classes of human actions and 10 classes of scenes distributed over 3669 video clips and approximately 20.1 hours of video in total. The dataset intends to provide a comprehensive benchmark for human action recognition in realistic and challenging settings. The dataset is composed of video clips from 69 movies. |
/data/ai/ref-data/image/ImageNet/imagenet1k (156.2 GiB) | ImageNet-1K | (ILSVRC2012-2017) [Aug 23, 2022] | Custom (research, non-commercial) | Computer vision | The most highly-used subset of ImageNet is the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012-2017 image classification and localization dataset. This dataset spans 1000 object classes and contains 1,281,167 training images, 50,000 validation images and 100,000 test images. |
/data/ai/ref-data/image/ImageNet/imagenet21k_resized (269.4 GiB) | ImageNet-21K | [Aug 23, 2022] | Custom (research, non-commercial) | Computer vision | ImageNet is an image database organized according to the WordNet hierarchy (currently only the nouns), in which each node of the hierarchy is depicted by hundreds and thousands of images. This dataset is a processed (resized) version of ImageNet-21K. |
/data/ai/ref-data/video/Kinetics-400/kinetics-dataset (438.0 GiB) | Kinetics-400 | [Aug 23, 2022] | Creative Commons Attribution 4.0 International License | Computer vision | The Kinetics dataset is a large-scale, high-quality dataset for human action recognition in videos. Kinetics-400 consists of around 300,000 video clips covering 400 human action classes, with at least 400 video clips for each action class. Each video clip lasts around 10 seconds and is labeled with a single action class. The videos are collected from YouTube. |
/data/ai/ref-data/video/MPI-Sintel (5.7 GiB) | MPI-Sintel | [Aug 23, 2022] | Not reported | Computer vision | MPI (Max Planck Institute) Sintel is a dataset for optical flow evaluation that has 1,064 synthesized stereo images and ground-truth disparity data. Sintel is derived from the open-source 3D animated short film Sintel. The dataset has 23 different scenes. The stereo images are RGB while the disparity maps are grayscale; both have a resolution of 1024×436 pixels with 8 bits per channel. |
/data/ai/ref-data/image/OpenImagesDataset (1.1 TiB) | Open Images Dataset | V6 and Extended [Aug 23, 2022] | Creative Commons Attribution 4.0 International | Computer vision | Open Images is a dataset of ~9M images annotated with image-level labels, object bounding boxes, object segmentation masks, visual relationships, and localized narratives. The hosted subset includes bounding boxes (600 boxable classes), object segmentations, visual relationships, and localized narratives. These annotations span the 1,743,042 training images for which bounding boxes were annotated, as well as the full validation (41,620 images) and test (125,436 images) sets. |
/data/ai/ref-data/image/VisualGenome/1.2 (23.9 GiB) | VisualGenome | 1.2 [Aug 23, 2022] | Creative Commons Attribution 4.0 International License | Computer vision | Visual Genome is a dataset, a knowledge base, and an ongoing effort to connect structured image concepts to language. Total images: 108,077; Total region descriptions: 4,297,502; Total image object instances: 1,366,673; Unique image objects: 75,729; Total object-object relationship instances: 1,531,448. |
/data/ai/ref-data/image/VisualGenome/1.4 (1.0 GiB) | VisualGenome | 1.4 [Aug 23, 2022] | Creative Commons Attribution 4.0 International License | Computer vision | Visual Genome is a dataset, a knowledge base, and an ongoing effort to connect structured image concepts to language. Total images: 108,077; Total region descriptions: 4,297,502; Total image object instances: 1,366,673; Unique image objects: 75,729; Total object-object relationship instances: 1,531,448. |
/data/ai/ref-data/video/Youtube-8M (27.4 GiB) | YouTube-8M | [Aug 23, 2022] | Apache License 2.0 | Computer vision | The YouTube-8M dataset is a large-scale video dataset that includes more than 7 million videos with 4,716 classes labeled by the annotation system. The dataset consists of three parts: training set, validation set, and test set. In the training set, each class contains at least 100 training videos. Features of these videos are extracted by state-of-the-art pre-trained models and released for public use. Each video contains audio and visual modalities. Based on the visual information, videos are divided into 24 topics, such as sports, games, and arts & entertainment. |
/data/ai/ref-data/proteinbinding (3.3 TiB) | DeepAtomDB_v2018-MD | 0.1 [March, 2022] | Creative Commons Attribution-ShareAlike 4.0 International | Molecular Dynamics Trajectories | MD trajectories for drug-protein complexes extracted from PDBBind, BindingMOAB and Astex databases. |
/data/ai/ref-data/image/PASCAL (4.4 GiB) | PASCAL | VOC2012 [April 19, 2021] | Not reported | Object Recognition | Object datasets from the VOC challenges. The main goal of this challenge is to recognize objects from a number of visual object classes in realistic scenes (i.e., not pre-segmented objects). It is fundamentally a supervised learning problem in that a training set of labelled images is provided. |
/data/ai/ref-data/image/COCO (47.7 GiB) | COCO | 2017 [April 19, 2021] | CC-BY 4.0 | Object segmentation | COCO is a large-scale object detection, segmentation, and captioning dataset. COCO has several features: 330K images (>200K labeled), 1.5 million object instances, 80 object categories, 91 stuff categories, 5 captions per image, and 250,000 people with keypoints. |
/data/ai/ref-data/nlp/babelnet (10.3 KiB) | BabelNet-v5 | May, 2022 [May, 2022] | Custom (https://babelnet.org/license); researchers who agree to the license are granted access to the BabelNet databases. | Text | BabelNet is a multilingual dictionary and semantic network covering words and their meanings. |
/data/ai/ref-data/nlp/wikipedia (30.9 GiB) | Wikipedia | January, 2021 [January, 2021] | Creative Commons Attribution-ShareAlike 4.0 International | Text | English Wikipedia articles downloaded in January 2021 from https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2 and cleaned using the wikiextractor Python library. |
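Some of the catalog entries, such as cifar-10-batches-py, are stored in CIFAR's standard pickled-Python batch format rather than as loose image files. A minimal stdlib loader might look like the sketch below, which assumes the conventional b'data'/b'labels' keys used by the CIFAR-10 Python batches:

```python
import pickle

def load_cifar_batch(path):
    """Load one CIFAR-10 batch file into (data, labels).

    Each batch is a pickled dict whose b'data' entry holds row-major
    uint8 image rows (3072 values per 32x32 RGB image) and whose
    b'labels' entry holds the matching class indices.
    """
    with open(path, "rb") as f:
        batch = pickle.load(f, encoding="bytes")
    return batch[b"data"], batch[b"labels"]

# Example path on HiPerGator (batch file name is illustrative):
# data, labels = load_cifar_batch(
#     "/data/ai/ref-data/image/cifar-10-batches-py/data_batch_1")
```

Reading directly from the shared path keeps the dataset out of your group's quota; note that CIFAR-100 batches use different label keys (b'fine_labels'/b'coarse_labels'), so adjust accordingly.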