Tutorials / Using Fusion’s 3D Camera Tracker for Patching and Object Placement

In previous Insights, we’ve looked at Fusion’s Point Tracker and Fusion’s Planar Tracker, so now we’re going to delve into the 3D world and learn about the Camera Tracker. In this Insight we cover:

- Reconstructing a camera’s motion
- Refining the solve by deleting less accurate tracking points
- Exporting from the Camera Tracker to the 3D scene

Once the 3D scene has been created with a virtual camera, we can add things (such as 3D text) into the scene by aligning and positioning them in front of the camera. We also look at using the camera as a projector to paint on a patch in the scene and have it apparently move with the scene, for quick and easy paint fixes.

What’s the point of 3D Tracking (and how do you get good results)?

The purpose of the Camera Tracker tool is to calculate (solve) the motion of a real-world camera by analyzing a piece of video. Once it has figured out how the camera was moving in your shot, it creates a 3D setup in Fusion consisting of a Camera3D and a Point Cloud (along with a Merge3D and a Renderer3D). The Camera3D has all the characteristics and motion of the real-world camera, and it also shows the video ‘projected’ onto an image plane as an optional reference or background.

Like almost all tools in Fusion, the Camera Tracker has a Mask input. It’s a good idea to mask out any moving objects, such as people or vehicles, as these will make the tracking and the subsequent solve more difficult. In my example, you can see the man’s arm move to raise the soda can. The points that are automatically tracked on his arm only confuse the solver, so it’s better to manually animate a rough mask to prevent the tracker from looking at this area. There are other challenges in this example shot in addition to the man moving his arm: there’s a lot of detail in the very far distance, which isn’t very helpful to the solver.
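To make the masking idea concrete, here is a minimal sketch in plain Python/NumPy (not Fusion’s API) of what a garbage mask accomplishes: any automatically tracked points that fall inside the masked region are simply discarded before the camera solve runs. The coordinates and the rectangular mask below are invented for illustration; in Fusion you would draw and keyframe a polygon mask instead.

```python
import numpy as np

# Hypothetical tracked features as (x, y) pixel coordinates.
points = np.array([
    [100, 200],   # static background feature
    [640, 360],   # feature on the moving arm
    [900, 500],   # static background feature
])

# A rough rectangular mask over the moving arm: (x_min, y_min, x_max, y_max).
mask = (600, 300, 700, 420)

def outside_mask(pts, box):
    """Keep only the points that fall outside the masked region."""
    x_min, y_min, x_max, y_max = box
    inside = (
        (pts[:, 0] >= x_min) & (pts[:, 0] <= x_max) &
        (pts[:, 1] >= y_min) & (pts[:, 1] <= y_max)
    )
    return pts[~inside]

usable = outside_mask(points, mask)
print(usable)  # only the two static background features remain
```

Only the surviving points would be fed to the solver, which is why masking out people and vehicles makes the solve more reliable: the solver assumes the scene is rigid, and points on moving objects violate that assumption.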