Houdini Development Toolkit - Version 9.0

Side Effects Software Inc. 2007

Compositing Operators

Anatomy of a Cook

Compositing Operators cook very differently from other operators. The COP engine is designed as a multithreaded, tile-based, on-demand algorithm.
Because of this, the traditional cook(OP_Context&) paradigm could not be used for COPs. Instead, the cooking algorithm is broken down into several discrete steps:
  1. Determine the sequence information for the node (frame range, data format, plane composition)
  2. Evaluate all parameters required for the cook of that node and store them in a new context data structure.
  3. Determine the output bounds of the image for that frame and plane.
  4. Create the data dependencies for the node. Given the image area of the plane(s) the output node needs to cook, determine the image area of the input nodes' plane(s) that need to cook.
  5. Process the image, tile by tile, creating the image data.
It is rare that you will need to implement all of these steps in a single custom node, as there are subclasses of COP2_Node which handle some of these tasks for you (COP2_Generator, COP2_MaskOp, etc).

Each step above produces information that subsequent steps need to cook. You cannot use information from a later step in an earlier step (like using image data to determine the output bounds).
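The fixed ordering of these steps can be sketched in plain, self-contained C++. `MockCop` and its method bodies are illustrative stand-ins, not HDK code, though the method names mirror the real COP2_Node virtuals:

```cpp
#include <string>
#include <vector>

// Hypothetical stand-in for a COP2_Node; the real virtuals are
// cookSequenceInfo(), newContextData(), computeImageBounds(),
// getInputDependenciesForOutputArea() and cookMyTile().
struct MockCop {
    std::vector<std::string> trace;  // records the order of the steps

    void cookSequenceInfo()                  { trace.push_back("sequence"); }
    void newContextData()                    { trace.push_back("context"); }
    void computeImageBounds()                { trace.push_back("bounds"); }
    void getInputDependenciesForOutputArea() { trace.push_back("deps"); }
    void cookMyTile()                        { trace.push_back("tile"); }

    // The engine runs the steps in this order; information flows strictly
    // forward (a later step may use an earlier step's results, never the
    // reverse).
    void cook(int ntiles)
    {
        cookSequenceInfo();
        newContextData();
        computeImageBounds();
        getInputDependenciesForOutputArea();
        for (int i = 0; i < ntiles; i++)
            cookMyTile();  // once per tile, possibly from several threads
    }
};
```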

Sequence Information

This is the first and most commonly called step in a COP cook. It is called whenever a cop is accessed and its parameters are out of date. Its purpose is to define all the high-level information about a COP: the frame range, data format, resolution and plane composition.
All of this information is derived from the parameters on the node and its inputs. It cannot be based on image data, or downstream nodes. By default, all of this information is copied from the first input node. You can change this behaviour by overriding:
   virtual TIL_Sequence *  cookSequenceInfo(OP_ERROR &error);
Note: This will not be called if the COP is bypassed.

This function is responsible for setting up the TIL_Sequence mySequence member data structure of the node.
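The default "copy from the first input" behaviour, with selected fields overridden from parameters, can be modelled in self-contained C++. `SeqInfo` and its fields are simplified illustrations of what TIL_Sequence carries, not the real API:

```cpp
// Simplified model of the sequence information a COP publishes (the real
// structure is TIL_Sequence; these fields and names are illustrative only).
struct SeqInfo {
    double start = 1, end = 1;   // frame range
    int    xres = 0, yres = 0;   // resolution
    int    nplanes = 0;          // plane composition (e.g. C, A)
};

// Default behaviour: inherit everything from the first input. A
// generator-style node would instead fill all fields from its own
// parameters.
SeqInfo cookSequenceInfoModel(const SeqInfo *first_input,
                              int param_xres, int param_yres)
{
    SeqInfo seq;
    if (first_input)
        seq = *first_input;      // inherit range, format, planes

    // Override the resolution from this node's parameters.
    seq.xres = param_xres;
    seq.yres = param_yres;
    return seq;
}
```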

Context Data Creation

The next major step in a cook is to build a context data for your node. A context data structure is a custom class that derives from COP2_ContextData. It is used to store all the data you will need in the cook method. Because parameter evaluation is not threadsafe, any parameters you use in the cook must be evaluated and stored in the context data. The context data can also be used to precompute and store other objects as well (like kernel matrices), as the context data is created once for a cook, instead of recreating the objects per-tile in the cook method.

The method in COP2_Node that is used to create your custom context data is:
virtual COP2_ContextData *newContextData(const TIL_Plane *plane,
                                         int   array_index,
                                         float t,
                                         int   xres,
                                         int   yres,
                                         int   thread,
                                         int   max_threads);
A context data class is created for each frame and each resolution cooked, by default. You can also tell it to create a context data per plane, and/or per thread. Context data classes are automatically cleaned up by the compositing engine (they are cached, so do not delete them).

Here is a quick example from the Border COP:
class COP2_API cop2_BorderData : public COP2_ContextData
{
public:
	     cop2_BorderData() {}
    virtual ~cop2_BorderData() {}

    float myColor[4];
    float myLeft, myRight, myTop, myBottom;
};

COP2_ContextData *
COP2_Border::newContextData(const TIL_Plane *, int, float t,
			    int xres, int yres, int, int)
{
    cop2_BorderData *bdata = new cop2_BorderData();
    float	     scx, scy;

    COLOR(bdata->myColor, t);

    // Get the scale factors for reduced resolution images.
    getScaleFactors(xres, yres, scx, scy);

    bdata->myLeft   = SYSrint(LEFT(t)   * scx);
    bdata->myRight  = SYSrint(RIGHT(t)  * scx);
    bdata->myTop    = SYSrint(TOP(t)    * scy);
    bdata->myBottom = SYSrint(BOTTOM(t) * scy);

    return bdata;
}
Note: Parameters that are time, plane, thread and resolution invariant can be stashed in the node as member variables, but it's often a good idea to put all your parameters in one spot (the context data). If all of your parameters are invariant, you can avoid using a context data and stash all of them in the node.
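The scale-factor arithmetic in the Border example can be sketched in self-contained C++. The assumption here is that getScaleFactors() yields the ratio of the resolution being cooked to the full sequence resolution; `getScaleFactorsModel` and `scaledBorderWidth` are illustrative names, not HDK functions:

```cpp
#include <cmath>

// Assumed behaviour of getScaleFactors(): the ratio of the resolution
// being cooked to the full sequence resolution (e.g. 0.5 when cooking a
// 1280x720 sequence at a 640x360 preview resolution).
void getScaleFactorsModel(int xres, int yres,
                          int full_xres, int full_yres,
                          float &scx, float &scy)
{
    scx = (float)xres / (float)full_xres;
    scy = (float)yres / (float)full_yres;
}

// A width authored in full-resolution pixels, scaled and rounded to the
// nearest pixel of the resolution actually being cooked (SYSrint behaves
// like std::rint here).
int scaledBorderWidth(float full_res_width, float scale)
{
    return (int)std::rint(full_res_width * scale);
}
```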

Compute Image Bounds

Next, you need to tell the compositor the image bounds that your node will be outputting. The area visible to the user, (0,0) to (xres-1, yres-1), is known as the 'frame area'. Many generators produce image bounds that match the frame area; however, the image bounds can be smaller than, larger than, or even disjoint from the frame area. For example, the RotoShape COP fits its image bounds to the shape the user draws, which usually means its image bounds are smaller than the frame area.

If the image bounds do not completely contain the frame area, then some of the frame area must be filled in with data. This is done by taking an edge of the image bounds and streaking it either horizontally or vertically to the frame area edge (like edge clamping in texture mapping). This allows the image data to have a background color other than black and still produce a correct frame image.
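The edge-streaking lookup can be modelled as a clamped pixel fetch. This is a self-contained illustration of the idea, not the engine's actual sampling code:

```cpp
#include <algorithm>
#include <vector>

// Minimal model of streaking image-bound edges out to the frame area,
// like edge clamping in texture mapping. 'data' covers only the image
// bounds (x1,y1)-(x2,y2), inclusive, in row-major order; lookups outside
// that region clamp to the nearest bound edge, so the edge color streaks
// outward instead of dropping to black.
float fetchPixel(const std::vector<float> &data,
                 int x1, int y1, int x2, int y2,
                 int x, int y)
{
    int cx = std::clamp(x, x1, x2);
    int cy = std::clamp(y, y1, y2);
    int w  = x2 - x1 + 1;
    return data[(cy - y1) * w + (cx - x1)];
}
```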

Filter COPs derive their bounds from the input(s)' bounds. Often this is a straight copy of the bounds of input #1, but algorithms that use neighbouring pixel data (like blur or expand) will need to enlarge the bounds to accommodate the image expansion. Other operations, like Transform, will need to shift or transform the input bounds. Finally, operations that merge several inputs will likely need to take the union of the input bounds. The good news is that if you are not moving pixels or dealing with neighbouring pixels, you can skip this step altogether.
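The three common cases (enlarge for a filter kernel, shift for a transform, union for a merge) reduce to simple rectangle arithmetic. A self-contained sketch, with `Bounds` and the function names as illustrative stand-ins:

```cpp
#include <algorithm>
#include <cmath>

struct Bounds { int x1, y1, x2, y2; };  // inclusive pixel rectangle

// Blur/expand: grow the input bounds by the filter radius on every side.
// Fractional radii round up so no affected pixel falls outside the bounds.
Bounds enlargeForFilter(Bounds in, float radius)
{
    int r = (int)std::ceil(radius);
    return { in.x1 - r, in.y1 - r, in.x2 + r, in.y2 + r };
}

// Translate-style transform: shift the bounds by the same offset the
// pixels move.
Bounds shiftBounds(Bounds in, int dx, int dy)
{
    return { in.x1 + dx, in.y1 + dy, in.x2 + dx, in.y2 + dy };
}

// Merge-style operation: the union of the two input bounds.
Bounds unionBounds(Bounds a, Bounds b)
{
    return { std::min(a.x1, b.x1), std::min(a.y1, b.y1),
             std::max(a.x2, b.x2), std::max(a.y2, b.y2) };
}
```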

The image bounds are stored in the COP2_Context class, which is created per-plane, per-frame and per-resolution. You can access them with the getImageBounds() method. Image bounds are expressed in pixels, with (0,0) being the lower left corner of the frame area.

The virtual function COP2_Node::computeImageBounds(COP2_Context &) is where you define the image bounds. You can use the stashed parameters in the context data (accessible from COP2_Context::data()) and/or input bounds to determine the new image bounds. You cannot use any image data in the computation.

COP2_Node and COP2_Context provide a variety of methods to access and manipulate the image bounds:

COP2_Node:
// Copy the input bounds into the context. If the x1,y1,x2,y2 pointers are
// non-null, the bounds are copied to those variables as well. Returns false
// if the plane doesn't exist in the input, or the input is not connected.
bool copyInputBounds(int input, COP2_Context &context,
		     int *x1 = 0, int *y1 = 0,
		     int *x2 = 0, int *y2 = 0);

// Get the bounds from an input. Returns false if not connected or the
// plane doesn't exist.
bool getInputBounds(int input, COP2_Context &context,
		    int &x1, int &y1, int &x2, int &y2);

// Get the bounds from a specific plane in an input.
bool getInputBounds(int input,
		    const TIL_Plane *plane, int array,
		    float t, int xres, int yres,
		    int thread,
		    int &x1, int &y1, int &x2, int &y2);

COP2_Context:
// Replace the existing image bounds with these bounds.
void setImageBounds(int x1, int y1, int x2, int y2);

// Enlarge the image bounds by x pixels horizontally and y pixels
// vertically, on each side.
void enlargeImageBounds(int x, int y);

// Same as above, but the next integer larger than x and y is used if
// fractional.
void enlargeImageBounds(float x, float y);

bool getImageBounds(int &x1, int &y1, int &x2, int &y2);
bool areBoundsSet() const;
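The two enlargeImageBounds() overloads differ only in how fractional amounts are handled. A self-contained model of that semantics (the `Rect` struct and `enlarge` name are illustrative, not HDK code):

```cpp
#include <cmath>

struct Rect { int x1, y1, x2, y2; };  // inclusive pixel rectangle

// Model of enlargeImageBounds(int, int): grow by x pixels horizontally
// and y pixels vertically, on each side.
Rect enlarge(Rect r, int x, int y)
{
    return { r.x1 - x, r.y1 - y, r.x2 + x, r.y2 + y };
}

// Model of enlargeImageBounds(float, float): as above, but fractional
// amounts round up to the next integer, so the bounds never clip a
// partially covered pixel.
Rect enlarge(Rect r, float x, float y)
{
    return enlarge(r, (int)std::ceil(x), (int)std::ceil(y));
}
```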

Determine Data Dependencies

The final step before actually cooking image data is to tell the compositor what image data you need from the input(s). This is the 'on-demand' part of COP cooking which highly optimizes the cook. The virtual method in COP2_Node to override to set up data dependencies is:
virtual void getInputDependenciesForOutputArea(
			COP2_CookAreaInfo &output_area,
			const COP2_CookAreaList &input_areas,
			COP2_CookAreaList &needed_areas);
You are responsible for determining from the passed in 'output_area' which input areas are needed, and placing them in the needed_areas list. The class structure for representing a data dependency is COP2_CookAreaInfo. It represents the dependency on a given image area in a plane, in a certain input, at a certain frame. By default, the only needed_area included is the same area from input #1, at the same frame and plane as the output_area.

If you needed to override the computeImageBounds() method above, you will likely need to modify the needed input areas to match. Also, if you use more than just the input plane corresponding to the plane you are cooking, or access planes from inputs other than input #1, you will need to add dependencies. However, there are a lot of utility functions in COP2_Node to make this as painless as possible:

// Selects a plane from the input to be dependent on. Returns the plane
// added to the needed_areas list.
COP2_CookAreaInfo *makeOutputAreaDependOnInputPlane(
			int input, const char *planename,
			int array_index, float t,
			const COP2_CookAreaList &input_areas,
			COP2_CookAreaList &needed_areas);

// Selects the corresponding plane from an input to be dependent on.
// Returns the plane added to the needed_areas list.
COP2_CookAreaInfo *makeOutputAreaDependOnMyPlane(
			int input,
			COP2_CookAreaInfo &output_area,
			const COP2_CookAreaList &input_areas,
			COP2_CookAreaList &needed_areas);
In both cases, pass the output_area, input_areas and needed_areas that were passed to getInputDependenciesForOutputArea(). The methods return a pointer to a newly created COP2_CookAreaInfo and add it to needed_areas. Note that the returned pointer may be null (for example, if the input is not connected, or the plane does not exist in the input).
If you don't use one of these methods, you can find the COP2_CookAreaInfo you need in the input_areas list (all valid dependencies are in this list). Once you've found it, you can clone it and append it to the needed_areas yourself:
COP2_CookAreaInfo *area = input_areas(found_index)->clone(output_area.getArrayIndex());
area->expandNeededArea(1,1,1,1);
needed_areas.append(area);
When appending the area to the needed_areas array, the duplicate dependency check is performed, which means that your area pointer may be set to null and deleted if it is a duplicate. So always do a null check on area if you need to use it after append().
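The duplicate check can be modelled in self-contained C++ to show why the null check matters. `Area`, `sameDependency` and `appendArea` are illustrative stand-ins for COP2_CookAreaInfo and COP2_CookAreaList::append(), not the real API:

```cpp
#include <vector>

// Minimal model of the duplicate check performed when appending to
// needed_areas: if an equivalent dependency is already in the list, the
// new one is deleted and the caller's pointer is nulled -- hence the
// advice to null-check 'area' after append() before using it further.
struct Area { int input, x1, y1, x2, y2; };

bool sameDependency(const Area *a, const Area *b)
{
    return a->input == b->input &&
           a->x1 == b->x1 && a->y1 == b->y1 &&
           a->x2 == b->x2 && a->y2 == b->y2;
}

void appendArea(std::vector<Area *> &needed_areas, Area *&area)
{
    for (Area *existing : needed_areas)
        if (sameDependency(existing, area))
        {
            delete area;     // duplicate: discarded by the list
            area = nullptr;  // caller must check for this
            return;
        }
    needed_areas.push_back(area);
}
```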

Now, once you have the COP2_CookAreaInfo object, you can call a variety of COP2_CookAreaInfo methods to modify the area needed:
// Enlarge the area so it contains the area (xstart,ystart) - (xend,yend)
// (or the area specified by the COP2_CookAreaInfo object).
// True is returned if the area needed to be grown. Note that pixels_left,
// pixels_down, etc. may be negative in enlargeNeededArea().
bool enlargeNeededArea(int xstart, int ystart, int xend, int yend);
bool enlargeNeededArea(COP2_CookAreaInfo &area,
		       int pixels_left  = 0,
		       int pixels_down  = 0,
		       int pixels_right = 0,
		       int pixels_up    = 0);
// Depend on the entire image area.
bool enlargeNeededAreaToBounds();

// This method expands the needed area outward by the specified number
// of pixels. The needed area must already be defined before calling
// this method. This method does not shrink the needed area, so
// pixels_left, pixels_down, etc. must not be negative.
bool expandNeededArea(int pixels_left, int pixels_down,
		      int pixels_right, int pixels_up);
Any output plane can depend on as many (or as few) input planes and frames as desired. It is a good idea to depend only on the planes you actually need; otherwise you will slow the entire composite network down with additional, unnecessary cooking.
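The difference between enlargeNeededArea() (grow to contain a rectangle, possibly with negative coordinates) and expandNeededArea() (push every edge outward by a non-negative amount) can be sketched in self-contained C++; the `Needed` struct is an illustrative stand-in for the area held by a COP2_CookAreaInfo:

```cpp
#include <algorithm>

struct Needed { int x1, y1, x2, y2; };  // inclusive pixel rectangle

// Model of enlargeNeededArea(xstart, ystart, xend, yend): grow the needed
// area so it also contains the given rectangle (coordinates may be
// negative). Returns true if the area actually grew.
bool enlargeNeededArea(Needed &n, int xstart, int ystart, int xend, int yend)
{
    Needed old = n;
    n.x1 = std::min(n.x1, xstart);
    n.y1 = std::min(n.y1, ystart);
    n.x2 = std::max(n.x2, xend);
    n.y2 = std::max(n.y2, yend);
    return n.x1 != old.x1 || n.y1 != old.y1 ||
           n.x2 != old.x2 || n.y2 != old.y2;
}

// Model of expandNeededArea(left, down, right, up): push each edge
// outward by a non-negative pixel count (e.g. a blur radius).
void expandNeededArea(Needed &n, int left, int down, int right, int up)
{
    n.x1 -= left;
    n.y1 -= down;
    n.x2 += right;
    n.y2 += up;
}
```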

Processing Image Data

Now that you've done all the setup for the cook, you can process the data. Override the virtual function COP2_Node::cookMyTile() to do your node's processing (or, if you've derived from COP2_Generator or COP2_MaskOp, override generateTile() or doCookMyTile() - see COP Families).

cookMyTile() is called for each tilelist that needs to be cooked. There is no predictable order in which you will receive these tiles, and more than one thread may be calling cookMyTile() at once (so don't use statics to store information). Your responsibility is to fill out the data in each of the tiles in the tilelist (they are not zeroed for you). These tiles expect data to be written to them in the data format that their parent plane is in.
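The per-tile responsibility can be modelled with plain structs. This is a toy illustration of the pattern (fill every pixel of the tile, using only precomputed context data, no shared mutable state), not the TIL tile API:

```cpp
#include <cstddef>
#include <vector>

// Toy model of a tile: a small rectangle of one plane's channel. Real
// tiles expect data in the parent plane's data format; float is used
// here for simplicity.
struct Tile {
    int xstart, ystart;        // tile origin within the plane
    int xsize, ysize;          // tile dimensions
    std::vector<float> data;   // xsize * ysize samples
};

// cookMyTile()-style code must write every pixel of each tile it is
// handed (tiles are not zeroed beforehand), using only the context data
// and input data -- no statics, since several threads may be cooking
// different tiles at once.
void cookTile(Tile &tile, float brightness /* from the context data */)
{
    tile.data.assign((size_t)tile.xsize * tile.ysize, 0.0f);
    for (int y = 0; y < tile.ysize; y++)
        for (int x = 0; x < tile.xsize; x++)
        {
            // A trivial "generator": a horizontal ramp scaled by a
            // parameter evaluated once, up front, into the context data.
            float u = (float)(tile.xstart + x);
            tile.data[(size_t)y * tile.xsize + x] = u * brightness;
        }
}
```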

Since this is such a large topic, it is covered in its own section on Cooking Image Data.




Copyright © 2007 Side Effects Software Inc.
477 Richmond Street West, Toronto, Ontario, Canada M5V 3E7