Unleash the power of 360 cameras with AI-assisted 3D scanning. (Luma AI)

Olli Huttunen
15 Jun 2023 · 11:50

TLDR: The video introduces the use of 360 cameras and AI-assisted 3D scanning through a technology called Neural Radiance Fields (NeRF). Hosted by Olli Huttunen, it explains how NeRF models can be created with Luma AI, a user-friendly cloud service that simplifies 3D modeling. With Luma AI, users capture video of an object from various angles and upload it to the cloud for processing. The resulting NeRF models can be manipulated in 3D space, offering new possibilities for capturing reflections and transparent objects and even for creating new camera movements within the scanned environment. The video also discusses the potential of 360 cameras for scanning, since they capture wider areas and provide unique angles for 3D modeling. Despite some limitations in accuracy and the need for further development, the technology points to an exciting future for 3D modeling and opens up new creative avenues for artists and designers.

Takeaways

  • The use of 360 cameras for 3D modeling is enhanced by AI-assisted 3D scanning technology.
  • NeRF (Neural Radiance Fields) is an advanced method for 3D scanning that uses AI to reconstruct a volume model of the environment recorded by the camera.
  • Luma AI is a user-friendly cloud service that allows users to create NeRF models through a mobile app, with processing done in the cloud.
  • After capturing video of an object from various angles, the footage is sent to Luma AI for about 30 minutes of processing, producing a 3D model that can be rotated and explored.
  • Luma AI enables the creation of new camera movements and rendering of retakes within scanned environments without returning to the original location.
  • NeRF can capture reflections and transparent objects, which are challenging for traditional photogrammetry.
  • NeRF models can be trained from footage shot with various cameras, not just smartphones; material can also be uploaded through a web browser.
  • Using a 360 camera with NeRF allows larger areas to be scanned and makes it easier to capture subjects from all sides.
  • The Insta360 camera is highlighted for its stabilization and horizon lock features, which help when scanning NeRF subjects with a circular motion.
  • Post-processing with tools like Insta360 Studio can keep subjects centered and remove unwanted elements before uploading to Luma AI.
  • Shooting in overcast weather is recommended for even lighting and fewer shadows, which improves scan quality.
  • While NeRF models may not be as accurate as traditional photogrammetry, they offer a unique perspective and are compatible with platforms like Unreal Engine for further development.

Q & A

  • What is the main topic discussed in the video?

    -The main topic discussed in the video is the use of Neural Radiance Fields (NeRF) for AI-assisted 3D scanning and modeling, particularly how it can be applied using Luma AI's cloud service and 360 cameras.

  • What is the difference between traditional photogrammetry and neural Radiance Fields (NeRF)?

    -Traditional photogrammetry builds polygon surfaces from a set of photos, while NeRF is an advanced method that uses AI to reconstruct the environment recorded by the camera as a volume model that can be explored in three-dimensional space.

  • How does Luma AI make the process of creating NeRF models more user-friendly?

    -Luma AI offers an easy-to-use app that can be downloaded to a phone, allowing users to create NeRF models by simply selecting an object, scanning it by moving around it, and then sending the video to Luma's cloud for processing.

  • What are the benefits of using a 360 camera for NeRF scanning?

    -A 360 camera can capture much wider areas and is easier to position at different heights or angles because it can be mounted on a selfie stick. It also allows scanning objects from all sides without worrying about keeping the subject in the center of the frame.

  • How does Luma AI handle the removal of the photographer from the final 3D model?

    -Luma AI uses AI to remove the photographer from the picture during the scanning process, as the photographer is constantly moving in relation to the background, leaving only the stationary objects in the final model.

  • What are the ideal conditions for shooting NeRF models with a 360 camera?

    -The best conditions for shooting NeRF models are overcast weather with few shadows and even lighting on the subjects. Direct sunlight and moving shadows can interfere with the scanning process.

  • How can full 360 images be useful in certain scenarios?

    -Full 360 images are useful when capturing tight spots or areas where it's not possible to move around objects, such as narrow alleys or corridors. They allow for a comprehensive view and can enable unique camera movements in post-processing.

  • What are the limitations of using NeRF models for 3D programs?

    -NeRF models, when exported as surface models for 3D programs, are not very accurate and can have many loose vertices that cause the model to fray. They require significant cleaning and may not be as useful in their current state due to the technology's early development.

  • How does the NeRF technology differ from typical 3D mesh models?

    -NeRF technology produces volume models that look and feel different from typical 3D mesh models. They can produce results similar to the original video from which they were built, offering a more realistic representation of the environment.

  • What are some potential applications of NeRF models in the future?

    -Potential applications of NeRF models include using them in Unreal Engine as volume models for lighting environments differently, utilizing depth of field effects, and exploring new possibilities in 3D modeling and rendering.

  • What is the speaker's recommendation for those interested in NeRF technology?

    -The speaker recommends trying out NeRF technology, especially using a 360 camera, as it can be a fun and engaging way to explore the capabilities of AI-assisted 3D scanning.

  • How long does it typically take for Luma AI to process a NeRF model after the video is uploaded?

    -After uploading the video to Luma AI's cloud for processing, it typically takes about 30 minutes for the model to be ready for rotation and exploration from different angles.

Outlines

00:00

Exploring 360 Cameras and Neural Radiance Fields in 3D Modeling

The video begins with the host, Olli Huttunen, discussing the capabilities of modern smartphones and 360 cameras in the context of 3D modeling. The focus is on Neural Radiance Fields (NeRF), a cutting-edge method that uses AI to create volume models from photographic data. This technology represents a significant advancement over traditional photogrammetry, allowing reflections and transparent objects to be rendered. Two primary methods for creating NeRF models are presented: a complex, command-line approach requiring programming skills, and a user-friendly cloud service called Luma AI, which offers a mobile app for scanning objects and generating 3D models. The process involves capturing video of an object from multiple angles, uploading it to Luma's cloud, and then manipulating the resulting model. The video also touches on the potential of using 360 cameras with NeRF, given their wide field of view and ability to capture subjects from all sides, making them an ideal tool for scanning complex environments.

05:02

🌟 Editing and Post-Processing with Luma AI and 360 Cameras

The second section delves into the post-processing of 360 camera footage for use with Luma AI. It explains how the host uses the Insta360 camera for its stabilization and horizon lock features, which are crucial for maintaining image quality during the scanning process. The video shows how to edit footage with Insta360 Studio to keep the subject centered and how Luma AI removes the photographer from the final model. The host also addresses the challenges of shooting in sunny weather, where shadows can affect scan quality, and recommends overcast conditions for even lighting. Additionally, the use of full equirectangular images is explored, particularly for capturing tight spaces where a 360-degree view is necessary. The limitations of translating NeRF models into surface models for 3D programs are acknowledged, with the host noting the roughness and distortions that can occur.

10:02

The Future of Neural Radiance Fields and 3D Modeling

In the final section, the host reflects on the potential future applications of Neural Radiance Fields. Despite the current limitations of NeRF models when exported as 3D surface models, the host expresses optimism about the technology's rapid development. The video highlights the opportunity to use NeRF models as volume models in Unreal Engine, which opens up possibilities for advanced lighting and camera effects. The host concludes by encouraging viewers to experiment with 360 cameras and Luma AI, emphasizing the fun and creative potential of this emerging technology. The video ends with a call to like, subscribe, and look forward to future content.

Keywords

360 cameras

360 cameras are devices capable of capturing a full panoramic view of the surroundings, recording in every direction simultaneously. They are used for creating immersive experiences and are particularly useful for 3D modeling as they can capture a subject from all angles. In the video, the host discusses how these cameras can be used with AI-assisted 3D scanning technology to produce detailed 3D models.

AI-assisted 3D scanning

AI-assisted 3D scanning refers to the use of artificial intelligence to assist in the creation of three-dimensional models from captured images or videos. It streamlines the process of 3D modeling by automating calculations and enhancing the accuracy of the models. The video highlights how AI can help in producing volume models that can be explored in three-dimensional space.

Neural Radiance Fields (NeRF)

Neural Radiance Fields, also known as NeRF, is a cutting-edge method in 3D modeling that uses AI to infer the continuous volume density of a scene from a collection of photographs. Unlike traditional photogrammetry, NeRF can handle complex lighting and produce more accurate 3D representations, including reflections and transparent objects. The video explains how NeRF is a significant advancement in creating 3D models from 360 camera footage.
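
For readers who want the idea in formula form (the video does not spell it out), the core of NeRF is a volume rendering integral: the color C(r) seen along a camera ray r(t) = o + t d is accumulated from the density \sigma and view-dependent color c that the network predicts along that ray,

  C(r) = \int_{t_n}^{t_f} T(t)\, \sigma(r(t))\, c(r(t), d)\, dt, \quad \text{where} \quad T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(r(s))\, ds\right),

and T(t) is the transmittance, i.e. how much of the ray survives occlusion up to depth t.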

Luma AI

Luma AI is a cloud service that offers an easy-to-use application for creating NeRF models. Users can download the app to their phones, capture video of an object from all sides, and then upload the footage to Luma AI's cloud for processing. The service simplifies the creation of 3D models, making it accessible to a broader audience. The video demonstrates how Luma AI can be used with 360 cameras to produce 3D models.

Volume model

A volume model, in the context of 3D modeling, represents a three-dimensional shape by storing information throughout the space it occupies, such as density and color at each point, rather than only describing its surface with polygons. Volume models can be rotated and explored from within three-dimensional space. The video discusses how NeRF produces volume models that allow for more dynamic and interactive exploration.
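
To make the difference from a mesh concrete, the sketch below numerically composites color along a single ray through a volume, which is a discrete version of the rendering integral shown under the NeRF keyword above. It is an illustrative toy, not Luma AI's implementation: the density and color functions here stand in for whatever a trained network would predict.

import numpy as np

def render_ray(origin, direction, density_fn, color_fn, t_near=0.0, t_far=4.0, n_samples=64):
    """Composite color along one ray by numerically integrating density and color.

    density_fn(points) -> (N,) volume density sigma at each sample point
    color_fn(points, direction) -> (N, 3) RGB at each sample point
    """
    t = np.linspace(t_near, t_far, n_samples)               # sample depths along the ray
    points = origin + t[:, None] * direction                # (N, 3) sample positions
    sigma = density_fn(points)                               # density at each sample
    rgb = color_fn(points, direction)                        # color at each sample
    delta = np.append(np.diff(t), 1e10)                      # distance between samples
    alpha = 1.0 - np.exp(-sigma * delta)                     # opacity of each segment
    trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))     # transmittance up to each sample
    weights = alpha * trans                                   # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)               # final pixel color

# Toy example: a fuzzy sphere of radius 1 at the origin with a uniform orange color.
density = lambda p: 5.0 * (np.linalg.norm(p, axis=-1) < 1.0)
color = lambda p, d: np.tile([1.0, 0.5, 0.1], (len(p), 1))
pixel = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]), density, color)
print(pixel)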

Photogrammetry

Photogrammetry is the science of making measurements from photographs, especially for recovering the exact positions of surface points. In 3D modeling, it involves creating polygon surfaces from a series of photographs. The video contrasts photogrammetry with NeRF, highlighting the latter's ability to handle more complex scenes and produce more detailed models.

Reflections and transparent objects

In 3D modeling, accurately representing reflections and transparent objects can be challenging due to the way light interacts with these surfaces. The video emphasizes that NeRF technology is capable of capturing and rendering reflections and transparent objects more effectively than traditional photogrammetry methods, which is a significant advantage for creating realistic 3D models.

Insta360 camera

The Insta360 camera is a brand of 360-degree camera mentioned in the video. It is noted for its good stabilization and horizon lock feature, which keeps the image level during rotation. These features are beneficial for scanning NeRF models, since the subject has to be captured in a circular motion from different angles.

Equirectangular image format

The equirectangular image format represents a full 360-degree panoramic image as a flat 2D image, typically with a 2:1 aspect ratio. It is the standard output format of 360 cameras and is supported by Luma AI for processing NeRF models. The video explains that this format captures the entire field of view around the camera, which is advantageous for 3D scanning.
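
As a small illustration of what the format encodes (a sketch under simple assumptions, not tied to any particular camera), each pixel column maps to a longitude and each row to a latitude, so any pixel can be converted to a 3D viewing direction; the 5760x2880 resolution used in the example is just a placeholder:

import numpy as np

def equirect_pixel_to_direction(u, v, width, height):
    """Map a pixel (u, v) in an equirectangular image to a unit viewing direction.

    u in [0, width) covers longitude -pi..pi; v in [0, height) covers latitude pi/2..-pi/2.
    """
    lon = (u / width) * 2.0 * np.pi - np.pi       # horizontal angle around the camera
    lat = np.pi / 2.0 - (v / height) * np.pi       # vertical angle above/below the horizon
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.array([x, y, z])

# The center pixel of a 5760x2880 frame looks straight ahead along +z.
print(equirect_pixel_to_direction(2880, 1440, 5760, 2880))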

Unreal Engine

Unreal Engine is a game engine widely used for creating interactive experiences and 3D simulations. In the context of the video, it is mentioned as a platform where NeRF models can be imported and further utilized, offering possibilities for dynamic lighting and advanced camera features. This integration opens up new creative avenues for 3D artists and designers.

Low poly, medium poly, high poly

These terms refer to the level of detail in a 3D model, defined by the number of polygons used to construct the surface. A low poly model uses fewer polygons, making it simpler and less detailed, whereas a high poly model uses more polygons for a smoother and more detailed surface. The video discusses the option to download NeRF models in different levels of polygon detail, which affects the model's appearance and usability in various applications.
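
Luma's export presets are not public, so as a rough illustration of what the different polygon counts mean, the sketch below uses Open3D's quadric decimation to turn a dense mesh into lighter versions, followed by the kind of clean-up the video says frayed NeRF exports need; the file names and triangle targets are placeholders, not values from the video.

import open3d as o3d

# Load a mesh exported from a NeRF service (path is a placeholder).
mesh = o3d.io.read_triangle_mesh("nerf_export_high.obj")
print(f"high poly: {len(mesh.triangles)} triangles")

# Produce rougher versions by collapsing geometry down to a target triangle count.
medium = mesh.simplify_quadric_decimation(target_number_of_triangles=50_000)
low = mesh.simplify_quadric_decimation(target_number_of_triangles=5_000)

# Basic clean-up of loose and degenerate geometry before use in a 3D program.
for m in (medium, low):
    m.remove_degenerate_triangles()
    m.remove_duplicated_vertices()
    m.remove_unreferenced_vertices()

o3d.io.write_triangle_mesh("nerf_export_medium.obj", medium)
o3d.io.write_triangle_mesh("nerf_export_low.obj", low)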

Highlights

360 cameras can be used for 3D modeling with AI-assisted 3D scanning technology.

Neural Radiance Fields (NeRF) is a method that creates volume models from camera recordings.

NeRF uses AI to reconstruct the environment and produce 3D models that can be explored in three-dimensional space.

There are two ways to create NeRF models: a complex method involving Python and terminal commands, and a user-friendly cloud service called Luma AI.

Luma AI allows users to create NeRF models through a simple app on their phones by scanning objects.

After scanning, the video is sent to Luma's cloud for processing, and the model can be viewed in about 30 minutes.

The Luma AI service enables the creation of new camera movements and rendering of retakes without returning to the shooting location.

NeRF models can display reflections and transparent objects, which are difficult to represent in traditional photogrammetry.

NeRF models can be trained with various cameras, not just phones, by uploading material through a web browser.

360 cameras are supported by Luma AI and can capture wider areas, making it easier to scan objects from all sides.

Insta360 cameras offer good stabilization and horizon lock features, which are crucial for scanning NeRF models.

Post-processing with Insta360 Studio allows for editing and keeping subjects centered in the image.

Luma AI can remove the photographer from the final model, leaving only stationary objects.

Shooting in overcast weather with few shadows and even lighting is recommended for the best scanning conditions.

Full 360 images are useful for capturing tight spots or areas where the photographer cannot move around objects.

NeRF models can be exported into 3D programs like Unreal Engine as volume models, offering new possibilities for lighting and camera features.

While NeRF models are not as accurate as traditional photogrammetry, they offer a different perspective on 3D modeling and are a rapidly developing technology.

The future of Neural Radiance Fields is fascinating and holds potential for significant advancements in 3D scanning and modeling.