Caffe2 - Python API
A deep learning, cross platform ML framework
pipeline Namespace Reference

Module caffe2.python.pipeline.

Classes

class  NetProcessor
 
class  Output
 
class  ProcessingReader
 

Functions

def make_processor (processor)
 
def normalize_processor_output (output)
 
def pipe (input, output=None, num_threads=1, processor=None, name=None, capacity=None, group=None)
 
def pipe_and_output (input, output=None, num_threads=1, processor=None, name=None, capacity=None, group=None, final_outputs=None)
 
def processor_name (processor)
 

Variables

int DEFAULT_QUEUE_CAPACITY = 100
 

Detailed Description

Module caffe2.python.pipeline.

Function Documentation

◆ normalize_processor_output()

def pipeline.normalize_processor_output(output)
Allow for processors to return results in several formats.
TODO(azzolini): simplify once all processors use NetBuilder API.

Definition at line 67 of file pipeline.py.

◆ pipe()

def pipeline.pipe(input, output=None, num_threads=1, processor=None, name=None, capacity=None, group=None)
Given a Reader, Queue or DataStream in `input`, and optionally, a Writer,
Queue or DataStream in `output`, creates a Task that, when run, will
pipe the input into the output, using multiple parallel threads.
Additionally, if a processor is given, it will be called between reading
and writing steps, allowing it to transform the record.

Args:
    input:       either a Reader, Queue or DataStream that will be read
                 until a stop is signaled either by the reader or the
                 writer.
    output:      either a Writer, a Queue or a DataStream that will be
                 written to as long as neither the reader nor the writer
                 signals a stop condition. If output is not provided or is
                 None, a Queue is created with the given `capacity` and
                 written to.
    num_threads: number of concurrent threads used for processing and
                 piping. If set to 0, no Task is created, and a
                 reader is returned instead -- the reader returned will
                 read from the reader passed in and process it.
    processor:   (optional) function that takes an input record and
                 optionally returns a record; this will be called
                 between read and write steps. If the processor does
                 not return a record, a writer will not be instantiated.
                 Processor can also be a core.Net with input and output
                 records properly set. In that case, a NetProcessor is
                 instantiated, cloning the net for each of the threads.
    name:        (optional) name of the task to be created.
    capacity:    when output is not passed, a queue of given `capacity`
                 is created and written to.
    group:       (optional) explicitly add the created Task to this
                 TaskGroup, instead of using the currently active one.

Returns:
    Output Queue, DataStream, Reader, or None, depending on the parameters
    passed.

Definition at line 96 of file pipeline.py.
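
As a hedged illustration of the parameters above, the sketch below pipes a reader into an implicitly created output queue using a processing function. `my_reader` is a placeholder for a Reader, Queue or DataStream built elsewhere (for example a Dataset reader), and `log_record` is a hypothetical processor; neither is provided by this module.

    from caffe2.python import pipeline
    from caffe2.python.task import TaskGroup

    def log_record(record):
        # Hypothetical processor: called between the read and write steps
        # with the input record; returning a record means a writer will be
        # instantiated and the record written to the output.
        return record

    with TaskGroup() as tg:
        # `my_reader` is a placeholder for a Reader/Queue/DataStream created
        # elsewhere; it is an assumption of this sketch.
        out_queue = pipeline.pipe(
            my_reader,
            num_threads=2,      # two concurrent read-process-write threads
            processor=log_record,
            name='example_pipe',
            capacity=64,        # size of the implicitly created output queue
        )
        # Since no `output` was passed, pipe() returns the queue it created;
        # other tasks in `tg` can read from `out_queue`.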

◆ pipe_and_output()

def pipeline.pipe_and_output(input, output=None, num_threads=1, processor=None, name=None, capacity=None, group=None, final_outputs=None)
Similar to `pipe`, with the additional ability for the pipe Task to
return output values to the `Session` once done.

Returns:
    Tuple (out_queue, *task_outputs)
        out_queue:    same as return value of `pipe`.
        task_outputs: TaskOutput object, fetchable from the client after
                      session.run() returns.

Definition at line 140 of file pipeline.py.
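
For contrast with `pipe`, a hedged sketch of `pipe_and_output` follows. It reuses the placeholder `my_reader` from the previous example and assumes a blob `rows_read` produced by the piping step; both names are illustrative only.

    from caffe2.python import pipeline
    from caffe2.python.task import TaskGroup

    with TaskGroup() as tg:
        # `my_reader` (a Reader built elsewhere) and `rows_read` (a blob
        # written by the piping step) are assumptions of this sketch.
        out_queue, rows_out = pipeline.pipe_and_output(
            my_reader,
            num_threads=2,
            final_outputs=[rows_read],
        )

    # After a Session has run `tg`, the returned TaskOutput can be fetched
    # from the client, e.g.:
    #     session.run(tg)
    #     total_rows = rows_out.fetch()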