GraphLab: Distributed Graph-Parallel API  2.1
Computer Vision

The GraphLab Computer Vision Toolkit aims to provide fully distributed wrappers for algorithms in OpenCV, an open-source library for real-time computer vision. Eventually, the GraphLab Computer Vision Toolkit will become its own spin-off project called CloudCV, a system that will provide access to state-of-the-art computer vision algorithms on the cloud.

Currently, the only implemented algorithm is Image-Stitching, where the goal is to create a composite panoramic image from a collection of images.

Panoramic Image Stitching

[Figure: panorama_small.png, an example panoramic mosaic produced by the stitcher]

The goal in image stitching is to create a composite panoramic image from a collection of images. The standard pipeline consists of four main steps:

  • Feature Extraction: distinctive points (keypoints) are identified in each image, and a feature descriptor (SIFT, SURF, etc.) is computed for each keypoint.
  • Image/Feature Matching: features are matched between pairs of images to estimate relative camera transformations.
  • Global Refinement: camera transformation parameters are jointly refined across all images.
  • Seam Blending: seams are estimated between pairs of images and blending is performed.

See the following for details about the pipeline:

M. Brown and D. Lowe. 
Automatic Panoramic Image Stitching using Invariant Features. 
International Journal of Computer Vision, 74(1), pages 59-73, 2007.

The stitching code in this toolkit is based on the OpenCV Stitching Module.
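
For reference, the same four-step pipeline can be run on a single machine through OpenCV's high-level stitching interface. The sketch below assumes OpenCV 2.4's cv::Stitcher API (header locations differ between OpenCV versions); it is only an illustration of the pipeline, not the distributed GraphLab implementation provided by this toolkit.

// Single-machine illustration of the stitching pipeline using cv::Stitcher.
// Assumes OpenCV 2.4; this is NOT the distributed GraphLab implementation.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/stitching/stitcher.hpp>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
  std::vector<cv::Mat> images;
  // Load every image given on the command line.
  for (int i = 1; i < argc; ++i) {
    cv::Mat img = cv::imread(argv[i]);
    if (img.empty()) { std::cerr << "cannot read " << argv[i] << std::endl; return 1; }
    images.push_back(img);
  }
  // cv::Stitcher internally performs feature extraction, pairwise matching,
  // global (bundle-adjustment) refinement, and seam estimation/blending.
  cv::Mat pano;
  cv::Stitcher stitcher = cv::Stitcher::createDefault(/*try_use_gpu=*/false);
  if (stitcher.stitch(images, pano) != cv::Stitcher::OK) {
    std::cerr << "stitching failed" << std::endl;
    return 1;
  }
  cv::imwrite("pano.jpg", pano);
  return 0;
}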

Implemented by Dhruv Batra and Prakriti Banik.

Running Stitch

The program requires a directory that contains all images in the panorama. Currently (and only temporarily), the program also requires an adjacency list indicating the overlap between images. We are working on incorporating code that will estimate this adjacency list directly from the images.

> ./stitch --img /path/to/image/dir --graph /path/to/adjacency/list.txt 

Each line of the adjacency list file stores a vertex (image ID), followed by the number of its neighbours and the IDs of all vertices (images) whose visual content overlaps with it. Each line has the following format:

[image ID] [number of neighbouring vertices/images] [neighbour-image ID 1] [neighbour-image ID 2] [neighbour-image ID 3] ...

Here's an example adjacency list file with 3 images (numbered 0,1,2) in a chain graph (0-1-2):

0 1 1
1 2 0 2
2 1 1
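
A minimal sketch of reading this format is shown below; the parser and its variable names are illustrative only and are not part of the toolkit.

// Illustrative reader for the adjacency-list format described above.
#include <fstream>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
  if (argc < 2) { std::cerr << "usage: " << argv[0] << " list.txt" << std::endl; return 1; }
  std::ifstream fin(argv[1]);
  size_t image_id, num_neighbours;
  // One vertex per line: [image ID] [neighbour count] [neighbour IDs ...]
  while (fin >> image_id >> num_neighbours) {
    std::vector<size_t> neighbours(num_neighbours);
    for (size_t i = 0; i < num_neighbours; ++i) fin >> neighbours[i];
    std::cout << "image " << image_id << " overlaps "
              << num_neighbours << " image(s)" << std::endl;
  }
  return 0;
}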

Options

Other arguments are:

  • --help Display the help message describing the list of options.
  • --output (Optional, default "./") The output directory in which to save the final mosaic.
  • --verbose (Optional, default 0) How much information to print out.
  • --work_megapix (Optional, default 0.6 Mpx) The resolution used for the image matching step. See stitch_opts.hpp for further details.
  • --engine (Optional, default asynchronous) The engine type to use when executing the vertex-programs.
    • synchronous: all vertex-program updates are run at the same time. This engine exposes greater parallelism but is less computationally efficient.
    • asynchronous: vertex-program updates are run asynchronously with priorities. This engine has greater overhead and exposes less parallelism but can substantially improve the rate of convergence.
  • --ncpus (Optional, default 2) The number of local computation threads to use on each machine. This should typically match the number of physical cores.
  • --scheduler (Optional, default sweep) The scheduler to use when running with the asynchronous engine. The default is typically sufficient.
  • --engine_opts (Optional, default empty) Any additional engine options. See --engine_help for a list of options.
  • --graph_opts (Optional, default empty) Any additional graph options. See --graph_help for a list of options.
  • --scheduler_opts (Optional, default empty) Any additional scheduler options. See --scheduler_help for a list of options.
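
For example, a run that writes the mosaic to a custom directory, prints extra progress information, and uses four computation threads per machine (all paths below are placeholders) might look like:

> ./stitch --img /path/to/image/dir --graph /path/to/adjacency/list.txt --output /path/to/output/dir --verbose 1 --ncpus 4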