Image Effects are a way of post-processing the rendered image in Unity.
Any script that has an OnRenderImage function can act as a post-processing effect; just add it to a camera object. That function drives the whole image effect logic.
This function receives two arguments: the source image as a RenderTexture, and the destination it should render into, also a RenderTexture. Typically a post-processing effect uses shaders that read the source image, do some calculations on it, and render the result into the provided destination (e.g. using Graphics.Blit). The image effect is expected to fully replace all pixels of the destination texture.
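For instance, a minimal effect can be a single Blit through a material. This is a sketch; GrayscaleEffect and the material's shader are hypothetical names, and any image-processing shader would do:

```csharp
using UnityEngine;

// Minimal image effect: copies the source through a material every frame.
[RequireComponent(typeof(Camera))]
public class GrayscaleEffect : MonoBehaviour
{
    public Material material; // material using a hypothetical image-processing shader

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Graphics.Blit reads from source, runs the material's shader,
        // and writes every pixel of destination.
        Graphics.Blit(source, destination, material);
    }
}
```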
When multiple post-processing effects are added to the camera, they are executed in the order they appear in the Inspector, with the topmost effect rendered first. The result of one effect is passed as the “source image” to the next one; internally, Unity creates one or more temporary render textures to hold these intermediate results.
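A single effect can manage intermediate results the same way. The sketch below, assuming a hypothetical material with two shader passes, stores the first pass in a temporary render texture and feeds it to the second pass, mirroring what Unity does between chained effects:

```csharp
using UnityEngine;

// Sketch of a two-pass effect that holds its own intermediate result.
[RequireComponent(typeof(Camera))]
public class TwoPassEffect : MonoBehaviour
{
    public Material material; // hypothetical material with two shader passes

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Temporary texture holds the first pass's output...
        RenderTexture temp = RenderTexture.GetTemporary(source.width, source.height);
        Graphics.Blit(source, temp, material, 0);      // pass 0
        // ...which then acts as the "source image" of the second pass.
        Graphics.Blit(temp, destination, material, 1); // pass 1
        RenderTexture.ReleaseTemporary(temp);
    }
}
```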
Things to keep in mind:

- Image effect shaders often need to turn off backface culling, depth writes and depth testing; this is typically done with Cull Off ZWrite Off ZTest Always render states in the shader.
- By default, an image effect is executed after the whole scene is rendered. In some cases, however, it is desirable to render an effect after all opaque objects are done, but before the skybox or transparencies are rendered; depth-based effects like Depth of Field often use this. Adding an ImageEffectOpaque attribute to the OnRenderImage function achieves this, as shown in the sketch after this list.
- If an image effect samples several screen-related textures at once, you might need to be aware of platform differences in how their texture coordinates are used. A common scenario is that the effect's “source” texture and the camera's depth texture need different vertical coordinates, depending on anti-aliasing settings. See the rendering platform differences page for details.
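As referenced in the list above, a sketch of an effect that runs after opaques only; DepthFogEffect and its material are hypothetical names:

```csharp
using UnityEngine;

// The [ImageEffectOpaque] attribute makes Unity run this effect after
// opaque geometry, but before the skybox and transparent objects.
[RequireComponent(typeof(Camera))]
public class DepthFogEffect : MonoBehaviour
{
    public Material material; // assumed to sample the camera's depth texture

    [ImageEffectOpaque]
    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        Graphics.Blit(source, destination, material);
    }
}
```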
The Effects package contains some base and helper classes to build your own image effects on; all of that code lives in the UnityStandardAssets.ImageEffects namespace.
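For example, a custom effect can derive from the package's PostEffectsBase helper class. This is a sketch: MyEffect is a hypothetical name, and the helper members used (CheckResources, CheckSupport, CheckShaderAndCreateMaterial, ReportAutoDisable, isSupported) follow the Standard Assets source, so verify them against the package version you have imported:

```csharp
using UnityEngine;
using UnityStandardAssets.ImageEffects;

// Sketch of an effect built on the package's PostEffectsBase helper.
public class MyEffect : PostEffectsBase
{
    public Shader shader;     // assign an image-processing shader here
    private Material material;

    public override bool CheckResources()
    {
        CheckSupport(false);  // false: this effect needs no depth texture
        material = CheckShaderAndCreateMaterial(shader, material);
        if (!isSupported)
            ReportAutoDisable();
        return isSupported;
    }

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Fall back to a plain copy if the platform can't run the effect.
        if (!CheckResources())
        {
            Graphics.Blit(source, destination);
            return;
        }
        Graphics.Blit(source, destination, material);
    }
}
```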