3D object to two trackers?

• Author
• Message

ykarmin88

• Posts: 9
• Joined: Wed Sep 21, 2022 2:00 pm
• Real Name: yizzy karmin
What if, let's say, I have footage where I move my hands as if I'm shaping/molding something big between them, and that something is a cube made from a Shape3D. Can I set a tracker for each hand, attach my cube to one tracker, and set its target to the other tracker so it looks like I'm rotating the cube with my hands? Sorry if that's hard to visualize.

Kel Philm

• Posts: 554
• Joined: Sat Nov 19, 2016 6:21 am
You should be able to track the two hands with a 2D tracker and then take the resulting rotation and apply it to the 3D object. The issue is you will only have rotation information for the front-on axis (toward the camera). If you were able to shoot the hands with a second camera simultaneously from the side, you might be able to extract Z depth as well and then hand-track each hand in three dimensions, from which you could calculate all of the rotations, but it would require some tinkering.
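A minimal sketch of that first idea (not Fusion code; the tracker positions here are hypothetical per-frame pixel coordinates): the front-on roll angle is just the angle of the line between the two hand trackers, which you could then keyframe onto the object's Z rotation.

```python
import math

def roll_from_trackers(left, right):
    """Angle in degrees of the line from the left-hand tracker
    to the right-hand tracker, measured from horizontal."""
    dx = right[0] - left[0]
    dy = right[1] - left[1]
    return math.degrees(math.atan2(dy, dx))

# Hypothetical per-frame tracker positions (x, y) in pixels.
left_hand  = [(100, 200), (102, 210), (105, 225)]
right_hand = [(500, 200), (498, 230), (495, 260)]

# One Z-rotation value per frame for the 3D object.
z_rotation = [roll_from_trackers(l, r)
              for l, r in zip(left_hand, right_hand)]
```

As noted above, this only gives you the one axis facing the camera; the other two rotations would need the side-camera depth data.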

ykarmin88

• Posts: 9
• Joined: Wed Sep 21, 2022 2:00 pm
• Real Name: yizzy karmin
Kel Philm wrote:You should be able to track the two hands with a 2D tracker and then take the resulting rotation and apply it to the 3D object. The issue is you will only have rotation information for the front-on axis (toward the camera). If you were able to shoot the hands with a second camera simultaneously from the side, you might be able to extract Z depth as well and then hand-track each hand in three dimensions, from which you could calculate all of the rotations, but it would require some tinkering.

So take a shot from multiple angles to get the necessary data. I figured that's probably how some Hollywood films are made, with all their beefy cameras.

I want something like the regular Tracker node but usable in 3D space, so I can connect other nodes like Merge3D and Shape3D. I take it what I'm probably looking for is something from 3D software such as Blender, or specialized tracking software such as SynthEyes and PFTrack. I find this weird because this seems like an easier task than what the 3D camera tracker does.

Bryan Ray

• Posts: 2287
• Joined: Mon Nov 28, 2016 5:32 am
• Location: Los Angeles, CA, USA
Fusion has a 3D camera tracker, but it doesn't do any object or geometry tracking. And in comparison to dedicated 3D matchmove software, even the camera tracker it has is very primitive.

Even SynthEyes or PFTrack might have some trouble tracking hands, though. The typical workflow for that is to first solve the camera, then get approximate locations and orientations for the hands with a rudimentary object solve, and then fully solve the motion by manually animating rigged hands in the 3D software of choice (usually Maya; Blender is also a possibility, but it lacks some quality-of-life features).

I believe that Kalvin Kingdon is planning a tutorial on rotoanimating in Blender after a Syntheyes track. Not sure how long it will be before he does it, though. He's been rethinking a lot of his workflows lately, and the list of tuts he wants to do just keeps getting longer and longer.

Here's his channel, though:
Bryan Ray
http://www.bryanray.name
http://www.sidefx.com

ykarmin88

• Posts: 9
• Joined: Wed Sep 21, 2022 2:00 pm
• Real Name: yizzy karmin
Bryan Ray wrote:Fusion has a 3D camera tracker, but it doesn't do any object or geometry tracking. And in comparison to dedicated 3D matchmove software, even the camera tracker it has is very primitive.

Even SynthEyes or PFTrack might have some trouble tracking hands, though. The typical workflow for that is to first solve the camera, then get approximate locations and orientations for the hands with a rudimentary object solve, and then fully solve the motion by manually animating rigged hands in the 3D software of choice (usually Maya; Blender is also a possibility, but it lacks some quality-of-life features).

I believe that Kalvin Kingdon is planning a tutorial on rotoanimating in Blender after a Syntheyes track. Not sure how long it will be before he does it, though. He's been rethinking a lot of his workflows lately, and the list of tuts he wants to do just keeps getting longer and longer.

Here's his channel, though:

Yup, I was talking about Fusion's camera tracker. I thought object tracking was simpler than what the camera tracker does, so I don't understand why we don't have it yet (I actually don't know; I just feel it's simpler lol). I'm trying to learn other software so I have other options.

Sander de Regt

• Posts: 2780
• Joined: Thu Nov 13, 2014 10:09 pm
Object tracking is even more difficult, as far as I can tell. Objects are smaller in the frame, usually have fewer features to track, can move in depth so their pivot point is all over the place, and objects can be handled so they're occluded a lot of the time. But that's my $0.02.
Sander de Regt

The Netherlands

Hendrik Proosa

• Posts: 2482
• Joined: Wed Aug 22, 2012 6:53 am
• Location: Estonia
Practical considerations aside (the ones Sander laid out), it makes no difference whether motion is solved for the camera or for the object; it's just a matter of reference coordinate system. In one case the world is static and the camera moves; in the other the camera is static and the object "world" moves. Usually both move, in which case the camera is solved first so that the world coordinate system can be established.
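That equivalence can be sketched with plain rotation matrices (a hand-rolled toy example, nothing to do with any particular tracker): the camera-solve and the object-solve describe the same relative motion, one being the inverse of the other.

```python
import math

def rot_z(deg):
    """3x3 rotation matrix about the Z axis."""
    a = math.radians(deg)
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def transpose(m):
    return [list(row) for row in zip(*m)]

# "Camera orbits 30 degrees around a static object"...
camera_motion = rot_z(30)

# ...is the same relative motion as "object rotates -30 degrees
# in front of a static camera": the inverse transform.
# (For a pure rotation, the inverse is the transpose.)
object_motion = transpose(camera_motion)

assert all(abs(object_motion[i][j] - rot_z(-30)[i][j]) < 1e-9
           for i in range(3) for j in range(3))
```

The solver math is the same either way; only the choice of which coordinate system is held fixed changes.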
I do stuff.

Bryan Ray

• Posts: 2287
• Joined: Mon Nov 28, 2016 5:32 am
• Location: Los Angeles, CA, USA
You can't solve a pair of hands as a camera, though, as they move independently. You can solve a single object that way, but not two.
Bryan Ray
http://www.bryanray.name
http://www.sidefx.com

ykarmin88

• Posts: 9
• Joined: Wed Sep 21, 2022 2:00 pm
• Real Name: yizzy karmin
Sander de Regt wrote:Object tracking is even more difficult, as far as I can tell. Objects are smaller in the frame, usually have fewer features to track, can move in depth so their pivot point is all over the place, and objects can be handled so they're occluded a lot of the time. But that's my $0.02.

That makes sense. I was thinking that since the camera tracker already analyzes a lot of points and the depth they move along, you'd just have to specify certain areas of the footage and object tracking would be simpler. But I guess I'm off the mark oversimplifying it that way.

Hendrik Proosa wrote:Practical considerations aside (the ones Sander laid out), it makes no difference whether motion is solved for the camera or for the object; it's just a matter of reference coordinate system. In one case the world is static and the camera moves; in the other the camera is static and the object "world" moves. Usually both move, in which case the camera is solved first so that the world coordinate system can be established.

That sounds like we just haven't requested this feature enough. Though I doubt nobody has.

Bryan Ray wrote:You can't solve a pair of hands as a camera, though, as they move independently. You can solve a single object that way, but not two.

Guess that explains why it's more difficult as well. It's like a three-camera system: one main camera looks at the whole scene, and a bunch of others relate the objects within it.

Hendrik Proosa

• Posts: 2482
• Joined: Wed Aug 22, 2012 6:53 am
• Location: Estonia
Bryan Ray wrote:You can't solve a pair of hands as a camera, though, as they move independently. You can solve a single object that way, but not two.

Well, of course; you can't solve a bus ride for the inside and the outside with a single camera either, you still need an additional object/camera for the inside, but that doesn't change the logic. The point is, there is no technical difference between an object and a camera.
I do stuff.