Photogrammetry / NeRF / Gaussian Splatting comparison
TLDR
This video explores three cutting-edge technologies: photogrammetry, neural radiance fields (NeRFs), and Gaussian splatting. The presenter, with 15 years of experience in photogrammetry, delves into each method's capabilities and ideal applications. Using a dataset of 345 frames from a video of a sandstone column named Church Rock, the video demonstrates how each technology processes and visualizes the same data. Agisoft Metashape is used for photogrammetry, resulting in an accurate, textured 3D model. For neural radiance fields, the presenter employs Nerf Studio, showcasing its ability to generate novel viewpoints and create videos with customized camera paths. Lastly, Gaussian splatting in Unity provides a real-time visualization of the point cloud, offering a comprehensive scene context. The video concludes with a discussion of sharing photogrammetric models online and enhancing them with panoramic backgrounds to mimic the broader scene context provided by neural radiance fields and Gaussian splatting.
Takeaways
- **Photogrammetry, neural radiance fields (NeRFs), and Gaussian splatting** are three distinct technologies that have gained popularity with the recent wave of AI advancements.
- **Photogrammetry** is a 3D-modeling technique the presenter has used for 15 years, focused on producing accurate, measurable 3D models.
- **Neural radiance fields (NeRFs)** use a neural network to generate novel viewpoints of a scene, capturing broader context such as the sky and horizon that photogrammetry typically leaves out.
- **Gaussian splatting** renders the scene in real time in a game engine such as Unity, combining the interactivity of a 3D model with the wider scene context of a radiance field.
- **Agisoft Metashape** is used for the photogrammetry processing, while **Nerf Studio** trains the neural radiance field on the same set of video frames.
- **Photogrammetry** produces a detailed, textured 3D model with the lighting conditions baked into the texture, suitable for applications such as 3D printing or game-engine integration.
- **Accuracy and metric scale** are key benefits of photogrammetry, enabling measurements such as area and volume, which are not currently possible with NeRFs or Gaussian splatting.
- **Neural radiance fields** can estimate the appearance of the scene from viewpoints that were not part of the original data, offering creative freedom in designing new camera paths.
- **Virtual production** is a significant application for neural radiance fields, where data can be collected quickly on location and the camera path perfected later.
- **Gaussian splatting** allows additional elements, such as a panoramic sky, to be added to the scene for a more complete visualization.
- **Photogrammetric models** can be easily shared and viewed online, offering a lightweight and accessible way to experience 3D models, which is not yet straightforward for NeRFs or Gaussian splats.
Q & A
What are the three technologies discussed in the video?
-The three technologies discussed in the video are photogrammetry, neural radiance fields (often called NeRFs), and Gaussian splatting.
What is the primary use of photogrammetry?
-Photogrammetry is primarily used for creating accurate, measurable 3D models of objects. It is often used in applications where precise measurements are required, such as in architecture or archaeology.
How does neural radiance field (NeRF) differ from photogrammetry?
-Neural radiance fields (NeRFs) differ from photogrammetry in that they can generate novel viewpoints that were not part of the original data. They create a volumetric representation of the scene which, unlike a photogrammetric model, is not metrically valid for precise measurements.
What is the significance of Gaussian splatting in the context of radiance fields?
-Gaussian splatting is significant as it provides a real-time visualization of radiance fields in a game engine like Unity. It allows for the representation of a point cloud with visible 'splats' derived from the input data, offering a more complete scene visualization.
How can one enhance a photogrammetric model to include broader context like the sky and distant mountains?
-One can enhance a photogrammetric model by adding a panoramic sphere around it: a panorama image is mapped onto a large sphere geometry surrounding the model to simulate the broader scene context.
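A minimal sketch of this idea in Python, assuming the trimesh and Pillow libraries; the file names "church_rock.obj" and "panorama.jpg" are placeholders, not the presenter's actual files, and the equirectangular mapping is simplified.

```python
# Wrap a photogrammetric model in a panoramic sky sphere (illustrative sketch).
import numpy as np
import trimesh
from PIL import Image

model = trimesh.load("church_rock.obj", force="mesh")  # textured photogrammetry mesh
pano = Image.open("panorama.jpg")                      # equirectangular panorama image

# Build a large sphere around the model and flip its faces so the
# texture is visible from the inside.
radius = model.bounding_sphere.primitive.radius * 20
sphere = trimesh.creation.uv_sphere(radius=radius)
sphere.apply_translation(model.bounding_sphere.primitive.center)
sphere.invert()

# Equirectangular UV mapping: longitude -> u, latitude -> v.
x, y, z = (sphere.vertices - sphere.vertices.mean(axis=0)).T
u = 0.5 + np.arctan2(y, x) / (2 * np.pi)
v = 0.5 + np.arcsin(np.clip(z / radius, -1, 1)) / np.pi
sphere.visual = trimesh.visual.TextureVisuals(uv=np.column_stack([u, v]), image=pano)

# Export both meshes together, e.g. for upload to a web viewer such as Sketchfab.
trimesh.Scene([model, sphere]).export("church_rock_with_sky.glb")
```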
What is the advantage of using Agisoft Metashape for photogrammetry processing?
-Agisoft Metashape is advantageous for photogrammetry processing because it provides a structured workflow for creating 3D models from photographs. It allows for the estimation of camera positions, creation of a point cloud, and the generation of a textured polygonal mesh.
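For readers who want to script that workflow, here is a rough sketch using the Metashape Professional Python API (the standalone `Metashape` module). The paths and parameter values are illustrative assumptions, not the settings used in the video.

```python
# Sketch of a Metashape photogrammetry pipeline: align cameras, build a mesh, texture it.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(sorted(glob.glob("frames/*.jpg")))        # extracted video frames

chunk.matchPhotos(downscale=1)        # find tie points between overlapping images
chunk.alignCameras()                  # estimate camera positions (sparse point cloud)

chunk.buildDepthMaps(downscale=2)     # dense reconstruction from the aligned images
chunk.buildModel(source_data=Metashape.DepthMapsData)     # polygonal mesh
chunk.buildUV()                       # unwrap the mesh
chunk.buildTexture(texture_size=8192) # project the original photos onto the mesh

doc.save("church_rock.psx")
```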
How does the lighting condition affect the final model in photogrammetry?
-In photogrammetry, the lighting conditions are 'baked' into the final model's texture. This means that the final model will reflect the lighting as it was during the photography, which can affect the appearance of the model when viewed without the texture.
What is the role of a neural network in the context of a neural radiance field?
-In the context of a neural radiance field, the neural network is trained on the input data (video frames) to estimate the colors and appearance of the scene from novel viewpoints that were not part of the original data set.
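To make the "estimate colors from novel viewpoints" idea concrete, here is a toy NumPy sketch of the volume-rendering step a radiance field uses to turn per-point density and color into a pixel color along one camera ray. The `field` function stands in for the trained network and is a made-up placeholder, not the actual model.

```python
# Toy volume rendering along a single ray (illustrative only).
import numpy as np

def field(points):
    """Placeholder for the trained network: returns (density, rgb) per 3D point."""
    density = np.exp(-np.linalg.norm(points, axis=-1))   # fake geometry
    rgb = np.clip(points * 0.5 + 0.5, 0.0, 1.0)          # fake color
    return density, rgb

def render_ray(origin, direction, near=0.1, far=6.0, n_samples=64):
    t = np.linspace(near, far, n_samples)                # sample depths along the ray
    points = origin + t[:, None] * direction             # 3D sample positions
    density, rgb = field(points)

    delta = np.append(np.diff(t), 1e10)                  # spacing between samples
    alpha = 1.0 - np.exp(-density * delta)               # opacity per sample
    trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1])) # accumulated transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)          # composited pixel color

pixel = render_ray(np.array([0.0, 0.0, -4.0]), np.array([0.0, 0.0, 1.0]))
print(pixel)
```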
What is the main advantage of using neural radiance fields for virtual production?
-The main advantage of using neural radiance fields for virtual production is the ability to quickly capture a scene and later set up the perfect camera path in a virtual environment. This allows for more creative freedom and the possibility of achieving shots that may not be feasible with actual camera equipment on location.
How does the 3D Gaussian splatting method enhance the visualization of radiance fields?
-The 3D Gaussian splatting method enhances the visualization of radiance fields by representing the point cloud with visible 'splats' in a real-time environment like Unity. This provides a clearer and more detailed representation of the scene, allowing for interactive exploration and camera path setup within the game engine.
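The core of the "splat" idea is projecting each 3D Gaussian's covariance into a 2D screen-space ellipse. The sketch below shows that projection in NumPy under the usual local affine (Jacobian) approximation of the perspective camera; it illustrates the general technique, not the Unity plugin's implementation, and all values are made up.

```python
# Project a 3D Gaussian to its 2D screen-space footprint (illustrative sketch).
import numpy as np

def project_gaussian(mean_world, cov3d, W, t, fx, fy):
    mean_cam = W @ mean_world + t                 # Gaussian centre in camera space
    x, y, z = mean_cam

    # Jacobian of the perspective projection evaluated at the Gaussian centre.
    J = np.array([[fx / z, 0.0,    -fx * x / z**2],
                  [0.0,    fy / z, -fy * y / z**2]])

    # 2D covariance: J W cov3d W^T J^T.
    cov2d = J @ W @ cov3d @ W.T @ J.T
    center_px = np.array([fx * x / z, fy * y / z])
    return center_px, cov2d

# Example: an isotropic Gaussian 5 units in front of an identity camera.
center, footprint = project_gaussian(
    mean_world=np.array([0.0, 0.0, 5.0]),
    cov3d=np.eye(3) * 0.01,
    W=np.eye(3), t=np.zeros(3), fx=800.0, fy=800.0)
print(center, footprint)
```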
What are some potential applications of photogrammetry, neural radiance fields, and Gaussian splatting?
-Potential applications include creating accurate 3D models for architectural planning, archaeological documentation, virtual production for film and video, and real-time visualization in gaming or simulation environments.
Outlines
Introduction to Emerging Technologies
The video introduces three popular technologies: photogrammetry, neural radiance fields (often called NeRFs), and Gaussian splatting. The speaker clarifies that while these technologies have gained attention due to AI hype, they are not deeply related and serve different purposes. The speaker, experienced in photogrammetry but a newcomer to NeRFs and Gaussian splatting, aims to simplify these complex methods. The video compares the technologies by processing the same image set, specifically 345 frames from a video of a sandstone column named Church Rock. The speaker provides resources such as software links and a dataset for viewers to experiment with.
Photogrammetry and its Applications
The speaker discusses photogrammetry, a technique used for creating 3D models from photographs. The process involves estimating camera positions to relate each photograph to the object and to each other. Agisoft Metashape is used to process the photogrammetry, starting with a point cloud representing the surface of Church Rock. The resulting polygonal mesh is detailed, although not as high-resolution as it would be with professional photographs. The model is then textured using the original photographs to create a photorealistic appearance. Photogrammetry's strength lies in its accuracy and metric validity, allowing for measurements such as area and volume. However, it does not capture the broader scene context, focusing instead on the object of interest.
Neural Radiance Fields (NeRFs)
The video moves on to neural radiance fields, using the same video frames to train a neural network. The resulting radiance field allows for the generation of novel viewpoints that were not part of the original data. Unlike photogrammetry, which provides a 3D model, NeRFs offer a volumetric representation that can estimate colors and scene details from new viewpoints. The speaker uses Metashape's camera pose estimation as a starting point for the neural radiance field. The output includes additional scene context like the sky and distant mountains, which were not captured by photogrammetry. NeRFs are particularly useful for quickly capturing data for virtual production, where the perfect shot may not be feasible on location.
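One way that hand-off can work is to export Metashape's estimated camera poses into the Instant-NGP-style transforms.json that Nerf Studio reads. The sketch below is only an illustration of the idea under that assumption; Nerf Studio also ships its own converters (ns-process-data), the file names are placeholders, and the coordinate-convention handling is deliberately simplified.

```python
# Export Metashape camera poses to a minimal transforms.json (illustrative sketch).
import json
import Metashape

doc = Metashape.Document()
doc.open("church_rock.psx")
chunk = doc.chunk
calib = chunk.sensors[0].calibration          # assume a single camera/sensor

frames = []
for camera in chunk.cameras:
    if camera.transform is None:              # skip images that failed to align
        continue
    matrix = [[camera.transform.row(r)[c] for c in range(4)] for r in range(4)]
    frames.append({
        "file_path": camera.photo.path,
        "transform_matrix": matrix,           # camera-to-world, 4x4 (convention simplified)
    })

out = {
    "fl_x": calib.f, "fl_y": calib.f,
    "cx": calib.width / 2 + calib.cx, "cy": calib.height / 2 + calib.cy,
    "w": calib.width, "h": calib.height,
    "frames": frames,
}
with open("transforms.json", "w") as fh:
    json.dump(out, fh, indent=2)
```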
Gaussian Splatting for Real-time Visualization
The final technology discussed is Gaussian splatting, specifically real-time 3D Gaussian splatting of radiance fields. The speaker describes how the point cloud from the photogrammetry is imported into a real-time game engine, Unity, and visualized using a method that transforms the point cloud into a splatted representation. This method captures a broader scene context and lets the user move around and set up camera paths within the engine. The video showcases a rendered sequence from Unity that resembles the NeRF video but with added clarity and the option to bring further elements, such as a panoramic sky, into the scene.
Enhancing Photogrammetry with Broader Context
The speaker concludes by demonstrating how photogrammetric models can be enhanced to include broader scene context, similar to NeRFs and Gaussian splatting. By adding a panoramic sphere to the background of the photogrammetric model, the speaker achieves a wider scene representation. The model is uploaded to Sketchfab, a web viewer for 3D models, where the panoramic background is displayed. The video ends with an invitation for viewers to engage with the content, subscribe for more, and explore the speaker's other videos on related topics.
Keywords
Photogrammetry
Neural Radiance Fields (NeRF)
Gaussian Splatting
Agisoft Metashape
Nerf Studio
Polygonal Textured Mesh
Point Cloud
UV Mapping
Camera Pose Estimation
Volumetric Representation
Real-Time Rendering
Highlights
The video discusses three popular technologies: photogrammetry, neural radiance fields (NeRFs), and Gaussian splatting.
Photogrammetry is a technique for creating 3D models from photographs, which the speaker has used for 15 years.
Neural radiance fields (NeRFs) and Gaussian splatting are newer visualization technologies that the speaker has explored for only a few months.
The speaker uses Agisoft Metashape for photogrammetry processing and Nerf Studio for neural radiance field processing.
Photogrammetry involves estimating camera positions and creating a textured 3D model, which can be measured and scaled.
Neural Radiance Fields allow for the generation of novel viewpoints that were not captured in the original data.
Gaussian splatting visualizes a point cloud in a real-time game engine like Unity, offering interactivity and the ability to add additional elements to the scene.
Photogrammetry provides accurate and measurable 3D models, suitable for applications like architectural plans.
Neural Radiance Fields and Gaussian splatting are not metrically valid and are better suited for virtual production and artistic visualization.
The video demonstrates how to use camera pose estimation from Metashape as a starting point for a neural radiance field.
Nerf Studio can generate new videos with stylized camera movements that emulate the look and feel of first-person view drone videos.
Gaussian splatted point clouds can be imported into Unity for real-time visualization and further enhancement with additional elements.
Photogrammetric models can be shared easily and viewed in web viewers like Sketchfab, offering a broader context to the model.
The speaker has collected numerous photogrammetric datasets over 15 years, which can be reused for new visualizations with these technologies.
The video includes a link to a zip file containing the data used in the demonstration for viewers to try processing themselves.
The speaker invites viewers to share their results and questions in the comments section for further discussion.
The video concludes with an invitation to subscribe for more content on neural radiance fields and Gaussian splatted videos.