- Posts: 10
- Joined: Wed Dec 11, 2013 2:56 pm
There are no tutorials or even basic workflow docs on a VR workflow, so I appreciate any help. I don’t know if there is an issue or if the VR assets in Fusion just currently don’t do what I would think they should.
I have been compositing for many years but am new to Fusion.
Windows 10
Dell Precision, 36 cores hyper-threaded to 72 virtual cores
96GB memory
GTX 1080 Ti
Oculus Rift CV1
I am beginning a VR project and am deciding on AE or Fusion, or both. My workflow uses Maya output, which provides an over-under stereo LatLong (equirectangular) sequence. Currently I will be doing that with 4k x 4k 16-bit TIF, but I may increase the resolution and/or use 32-bit float EXR in the future. I will also be outputting images, geometry, etc. from Maya and other sources to composite with or without the 4k OU stereo LatLong as part of the composite. In other words, a typical advanced stereoscopic compositing workflow, except in VR.
I followed the basic tutorial for a 3D stereo composite and have logically tried to apply it to a VR 3D stereo workflow as follows:
1. Starting with a Loader node, the 4k x 4k stereo OU LatLong is imported into the comp. This works fine for importing the VR stereo image sequence. All the monitor selections at the bottom of the node work fine; I am able to view it in the viewers, and it is also available in the Rift in full 3D 360 as expected.
2. I tried a number of ways to go on from #1, and what worked was a stereo Splitter node, which also appears to work fine and still shows the image in full 360 3D.
3. Then to an ImagePlane3D node, where the 360 view is no longer available.
4. Then to a Merge3D, where the image plane and the VR spherical camera are connected. The spherical camera is set to LatLong.
5. Out of the Merge3D into a Renderer3D node, which provides the 360 view, but now it is no longer viewable as 3D 360 in the viewer or the Rift.
I just learned from support that at node position #3 it should be a Shape3D node set to sphere. That did work to keep the VR 360 going into the render node. However, now the image quality has degraded a good bit and the whole scene is reversed. I can usually progress intuitively with these things, having been doing this type of work for so long, but I must say this is not at all intuitive and I need help. A good tutorial would be nice, but in the meantime, is there anyone out on the forum with this knowledge to pass on to a fellow traveler?
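For clarity, here is a sketch of the node chain I'm describing, with support's suggested Shape3D swap at position #3 (the arrows are just my shorthand for node connections, not actual Fusion syntax):

```
Loader (4k x 4k stereo OU LatLong sequence)
  -> Splitter (stereo, splits over-under into left/right eyes)
  -> Shape3D (set to sphere; was ImagePlane3D, which lost the 360 view)
  -> Merge3D <- spherical Camera3D (set to LatLong)
  -> Renderer3D (360 output; where quality degrades and the scene reverses)
```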
Please..
Last edited by Joel Peck on Tue Aug 29, 2017 1:36 pm, edited 1 time in total.