Fusion's Deep Pixel nomenclature predates Weta's Deep tools; they're not the same thing. In Fusion terms, "deep" means tools that can access various technical AOVs, such as depth and world position, not the sliced volumes that most people think of as deep.
Each of the deep pixel tools works slightly differently, and they all use different AOVs to do their thing. Some also require a camera from 3d-land. Let's take them in order, and along the way, I'll give you a cool Redshift tool and tip that'll help.
Ambient Occlusion
This creates a screen-space, post-processed AO image. It requires two utility buffers in addition to RGB: Normals and Z. These AOVs must be shuffled into the stream with the RGB channels, and all of it goes to the main input. If you've got all of your AOVs packed into one EXR, you can just assign the needed channels in the Loader, but if you split the AOVs into discrete files, you'll need a ChannelBooleans node to insert them into the image.
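To get a feel for what a screen-space AO pass does with those buffers, here's a deliberately crude NumPy sketch that uses only the Z buffer: darken a pixel when its neighbors are closer to the camera. The function name and parameters are mine, and the real tool also factors in the Normals and the camera, so treat this as the core idea only.

```python
import numpy as np

def crude_ssao(z, radius=1, bias=0.05):
    """Very rough screen-space occlusion estimate from a Z buffer alone.

    For each pixel, count how many neighbors within `radius` are closer
    to the camera by more than `bias`; more close-by neighbors = darker.
    (Real SSAO also uses normals and a camera; np.roll wraps at edges,
    which a production tool would handle properly.)
    """
    occ = np.zeros_like(z, dtype=float)
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            count += 1
            neighbor = np.roll(np.roll(z, dy, axis=0), dx, axis=1)
            occ += (neighbor < z - bias).astype(float)
    return 1.0 - occ / count  # 1 = fully open, 0 = fully occluded
```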
The node also requires a 3d camera in order to make its calculations. Traditionally, you'd need to export a camera to FBX or Alembic and import it into Fusion, but since Redshift embeds information about its camera in the images it renders, you can instead extract that information directly from your EXRs and apply it to a Fusion camera.
If you have Reactor, download the Redshift Utilities atom. Create a Camera3D and switch to Version 6. This is a preset that has expressions that will pull information from the metadata of an input image to configure the camera's transforms and Angle of View. However, the Redshift metadata is not in a format the expressions can parse, so you also need the RSCameraExtractor node, which interprets the camera transform matrix into Euler angles that Fusion understands.
Plug any Redshift render image into the input of the RSCameraExtractor. Plug the output of the Extractor into the ImageInput of the Camera3D. Then plug the output of the Camera3D into the Camera input of the Ambient Occlusion.
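To see what the Extractor is doing conceptually, here's the standard math for pulling Euler angles out of a rotation matrix. This is a generic NumPy sketch assuming a Z-Y-X composition order; the function names are mine, and RSCameraExtractor's actual rotation-order handling may differ.

```python
import numpy as np

def euler_from_matrix(R):
    """Extract Euler angles (radians) from a 3x3 rotation matrix,
    assuming R = Rz @ Ry @ Rx. This is the generic decomposition,
    not necessarily the exact convention RSCameraExtractor uses."""
    ry = np.arcsin(np.clip(-R[2, 0], -1.0, 1.0))
    rx = np.arctan2(R[2, 1], R[2, 2])
    rz = np.arctan2(R[1, 0], R[0, 0])
    return rx, ry, rz

def rot(rx, ry, rz):
    """Build a rotation matrix as Rz @ Ry @ Rx (for round-trip testing)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx
```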
You may also need the RsVectorFlipper node to convert the Normals AOV to Fusion's standard; Redshift's z axis is inverted relative to Fusion's.
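The flip itself comes down to negating the z component of every normal. A minimal NumPy sketch (the function name is mine; the node may do more than this inside Fusion):

```python
import numpy as np

def flip_normals_z(normals):
    """Negate the Z component of a normals AOV (H x W x 3 array),
    converting between Redshift's and Fusion's handedness conventions.
    Returns a copy so the original buffer is untouched."""
    out = normals.copy()
    out[..., 2] *= -1.0
    return out
```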
DepthBlur
DepthBlur uses the Z AOV to control the strength of the blur or defocus effect. Unlike Ambient Occlusion, it doesn't require the Z buffer to be packaged in the main input; you can connect Z to the green input instead. It also works if Z is in the main input, but being able to supply any image, and to use channels other than Z as the blur strength source, makes it more versatile.
It often takes some fiddling with the Z Scale control to get reasonable results.
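Conceptually, the Z Scale control maps depth differences from the focal point to blur radii, something like this hypothetical sketch (the names are mine, and DepthBlur's actual response curve isn't documented here):

```python
import numpy as np

def blur_radius(z, focal_point, z_scale, max_radius=50.0):
    """Per-pixel blur radius from a Z buffer: pixels at the focal
    depth stay sharp, and blur grows with distance from it,
    scaled by z_scale. Illustrative only."""
    return np.clip(np.abs(z - focal_point) * z_scale, 0.0, max_radius)
```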
One cool feature of DepthBlur, which we'll see again in other tools, is its ability to sample values from channels other than RGB. If you drag the Sample button over an image that contains a Z channel, it will grab its Focal Point value directly from Z. The image you're sampling from need not even be the one connected to the DepthBlur's input. In fact, you can sample from a 3D scene, but the value you get will come from whatever camera you're looking through, so make sure you're looking through the shot camera instead of the Perspective camera!
Fog
There are two Fog tools in Fusion. This is the simpler of the two. The main (yellow) input gets your RGBA+Z image, and the other gets an image that creates the "texture" of the fog. A FastNoise is quick and easy. I find this tool difficult to use because the sliders are very twitchy.
The Z channel needs to be bundled with RGB for this one to work.
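If you're wondering what the tool does with that Z channel, the classic depth-fog model blends a fog color in with an exponential falloff on distance. A sketch with names of my own invention; the Fog tool's sliders map onto this idea in their own way:

```python
import numpy as np

def apply_depth_fog(rgb, z, fog_rgb, density=0.1):
    """Blend a fog color over an image based on depth: transmittance
    falls off exponentially with distance from the camera (the
    standard exponential fog model, not Fusion's exact formula)."""
    t = np.exp(-density * z)[..., None]   # 1 near camera, 0 far away
    return rgb * t + fog_rgb * (1.0 - t)
```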
Shader
Shader uses the Normals to do limited relighting. Again, the Normals need to be bundled with the RGBA image—there's no input to add them separately. The second input can take a reflection map, which I believe should be an equirectangular (otherwise described as latlong) image. It might be necessary to run it through the SphereMap node first, but I don't think so.
Texture
Texture is designed to replace a texture map on a rendered object if you have a UV AOV. More commonly, it's used to process STMaps for lens distortion. I've written quite a lot on the Texture tool elsewhere, so I'll just drop a link to my blog:
http://www.bryanray.name/wordpress/blac ... ture-node/
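For reference, the basic STMap operation is a per-pixel coordinate lookup: each output pixel reads the source image at the UV stored in the map. Here's a nearest-neighbor sketch with names of my own; the Texture tool filters properly, and be aware that V-orientation conventions (bottom-left vs. top-left origin) vary between packages.

```python
import numpy as np

def apply_stmap(src, stmap):
    """Warp `src` (H x W x C) through an STMap (H x W x 2, values 0-1):
    each output pixel looks up the source at the coordinate stored in
    the map. Nearest-neighbor only, and it assumes the map's V axis
    matches the array's row order."""
    h, w = src.shape[:2]
    xs = np.clip((stmap[..., 0] * (w - 1)).round().astype(int), 0, w - 1)
    ys = np.clip((stmap[..., 1] * (h - 1)).round().astype(int), 0, h - 1)
    return src[ys, xs]
```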
VolumeFog
There are three more tools found in the Position category that should also be mentioned. All three make use of the World Position buffer.
VolumeFog is a more advanced version of the Fog tool, but it uses the World Position buffer instead of Z. This guy has some seriously powerful applications, but I'll be honest: I don't fully understand it. Instead, I'll give you a link to another forum, with a very good tutorial that should help:
https://www.steakunderwater.com/wesuckl ... =16&t=3200
VolumeMask
VolumeMask can be used to create masks that will "stick" to their location in 3d space, no matter what the camera's doing. Use it for things like adding a pool of light under a streetlight that you forgot to turn on in the render. We used to use it to isolate bits of the head of the talking dog in Dog With a Blog, as described in this article:
http://www.bryanray.name/wordpress/face ... nd-fusion/
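The trick behind a mask that sticks is simple once you have world positions: measure each pixel's world-space distance to a 3D point, and the result is camera-independent by construction. A spherical-mask sketch with assumed names (VolumeMask offers more shapes and controls than this):

```python
import numpy as np

def volume_mask(world_pos, center, radius, softness=0.0):
    """Spherical mask from a World Position AOV (H x W x 3): pixels
    whose world-space point lies within `radius` of `center` get matte
    value 1, with an optional soft linear falloff over `softness`."""
    d = np.linalg.norm(world_pos - np.asarray(center, dtype=float), axis=-1)
    if softness <= 0.0:
        return (d <= radius).astype(float)
    return np.clip((radius + softness - d) / softness, 0.0, 1.0)
```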
ZtoWorldPosition
This one is used to convert a Z depth AOV into a World Position. Like Ambient Occlusion, it requires a camera. I can't think of many occasions these days when you'd need this thing, unless the lighter just forgot to turn on the position buffer and you don't have time for a rerender.
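The conversion itself is straightforward: walk from the camera along each pixel's view ray by the stored depth. A sketch with names of my own, assuming Z holds distance along the ray; if your renderer writes perpendicular camera-space depth instead, you'd divide by the ray's forward component first.

```python
import numpy as np

def z_to_world(z, cam_pos, ray_dirs):
    """Rebuild a World Position AOV from a Z buffer: march from the
    camera position along each pixel's normalized view ray (H x W x 3)
    by the stored distance. Assumes Z is distance along the ray."""
    return np.asarray(cam_pos, dtype=float) + ray_dirs * z[..., None]
```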