3D Scan your Environment with AI (Free Tool)

Nik Kottmann
9 Dec 2023 · 04:16

TLDR: In this tutorial, the presenter demonstrates how to use a free tool called Luma AI to create NeRF (Neural Radiance Fields) scans of an environment. NeRF is an innovative method that employs artificial intelligence and can capture not only standard surfaces but also metallic and reflective ones. The process begins with recording a video of the subject on a smartphone, circling it to cover various perspectives. After the video is uploaded to Luma Labs, the AI processes it, and the 3D scan can be viewed from any angle. The presenter then guides viewers through creating a camera animation by setting keyframes and adjusting the focal length for different zoom levels; the animation can be refined by changing the duration between keyframes and distributing them evenly on the timeline. The final step is rendering the video in the desired aspect ratio and resolution. The presenter also mentions the available preset camera animations and offers a link to download the 3D assets used in the tutorial.

Takeaways

  • 🎥 **AI-Powered 3D Scanning**: The tutorial introduces Luma AI, a free tool that uses AI to create NeRF (Neural Radiance Fields) scans of environments.
  • 📱 **Equipment Requirement**: You don't need an expensive camera; an iPhone can be used to record the video for scanning.
  • 🔄 **Capturing Process**: Move around the subject in circles at least three times, capturing different heights to cover various perspectives.
  • ⏸️ **Luma AI Tips**: Luma AI provides specific capture tips; pause the video and read them before you start recording.
  • 🌐 **Uploading Video**: After recording, upload the video to Luma Labs to start the scanning process.
  • ⏳ **Processing Time**: The scan processing typically takes around 30 minutes.
  • 👀 **3D Viewport**: Once processed, the scan can be viewed from different angles in the 3D viewport.
  • 📹 **Camera Animation**: Set the focal length and create a camera animation by adding keyframes to animate the camera path.
  • ⏰ **Animation Duration**: Adjust the duration of the animation by extending the time between keyframes.
  • 🎥 **Rendering Options**: Choose the aspect ratio and resolution for rendering the video, with presets available for convenience.
  • 📁 **Free 3D Assets**: The presenter offers to share all created 3D assets for free on their Blender Kit profile.
  • ❓ **Questions Welcome**: Encourages viewers to ask further questions in the comment section.

Q & A

  • What does the term 'NeRF' stand for in the context of the video?

    -NeRF stands for Neural Radiance Fields, which is a method used to scan environments and create camera animations using artificial intelligence.

  • What is the name of the free tool mentioned in the video for creating NeRF scans?

    -The free tool mentioned in the video for creating NeRF scans is called Luma AI.

  • How does the NeRF method differ from traditional photogrammetry approaches?

    -The NeRF method differs from traditional photogrammetry approaches by utilizing artificial intelligence, which allows it to capture metallic and reflective surfaces as well.

  • What device does the presenter use to record the video for the NeRF scan?

    -The presenter uses an iPhone to record the video for the NeRF scan.

  • How many times should one move around the subject when recording the video for a NeRF scan?

    -One should move around the subject in circles at least three times to cover as many perspectives as possible.

  • What are some tips provided by Luma AI for recording a video for a NeRF scan?

    -Pause the video and read the tips provided by Luma AI before capturing; in particular, move around the subject and record it from different heights.

  • How long does it typically take for the scan to be processed after uploading the video to Luma Labs?

    -It usually takes around 30 minutes for the scan to be processed after uploading the video to Luma Labs.

  • What is the first step in creating a camera animation for the NeRF scan?

    -The first step in creating a camera animation is to set the focal length of the camera by adjusting the value to zoom in or out.

  • How does one add a keyframe for the camera animation in the 3D viewport?

    -One can add a keyframe by placing the camera where the animation should start, clicking the 'add keyframe' button, and then moving the camera to the next desired position to add another keyframe.

  • How can the speed of the camera animation be adjusted in the tutorial?

    -The speed of the camera animation can be adjusted by changing the duration between keyframes on the timeline, making the animation longer or shorter as desired.

  • What are the preset camera animations available for use in Luma AI?

    -The preset camera animations available in Luma AI include 'orbit', which makes the camera rotate around the subject, and 'oscillate', which adds a bit of movement to the camera.

  • Where can viewers find the 3D assets created in the video for free?

    -Viewers can find the 3D assets created in the video for free on the presenter's Blender Kit profile, with the link provided in the video description.

Outlines

00:00

🎥 Introduction to Creating NeRF Scans with Luma AI

The video begins with a welcome back, after which the host demonstrates how to create NeRF (Neural Radiance Fields) scans using Luma AI, a free tool. NeRF scanning is a novel method for scanning environments to produce impressive camera animations. The host mentions using this technique for a client's animation and notes its advantage over traditional photogrammetry in capturing metallic and reflective surfaces. The process starts with recording a video of the subject on a smartphone, circling it at different heights to cover various perspectives. Luma AI provides capture tips, and viewers are instructed to upload the video to Luma Labs for processing, which takes approximately 30 minutes. Once the scan is ready, it can be viewed in a 3D viewport, and the host expresses satisfaction with the scan quality.

Keywords

💡NeRF Scans

NeRF, short for Neural Radiance Fields, is a technique for scanning an environment and creating a 3D representation of it. It is a relatively new method that uses artificial intelligence to capture not only the geometry but also the appearance of a scene, including metallic and reflective surfaces. In the video, the host demonstrates how to create NeRF scans with a free tool called Luma AI, which allows for a more comprehensive capture of the environment.
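The video does not go into how NeRF works internally, but its core rendering idea can be sketched: samples taken along each camera ray are alpha-composited using the densities and colors a neural network predicts for them. A minimal NumPy sketch of that compositing step (function name and array shapes are illustrative, not Luma AI's API):

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one camera ray, as in NeRF volume rendering.

    densities: (N,) non-negative volume density at each sample point
    colors:    (N, 3) RGB color predicted at each sample point
    deltas:    (N,) distance between consecutive samples
    """
    # Opacity contributed by each sample segment
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: fraction of light that survives to reach each sample
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = alphas * transmittance
    # Weighted sum of sample colors gives the final pixel color
    return (weights[:, None] * colors).sum(axis=0)
```

Because every sample contributes a view-dependent color rather than a fixed surface point, this formulation handles reflective and metallic materials better than classic photogrammetry meshes.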

💡Luma AI

Luma AI is a free tool that facilitates the creation of NeRF scans. It is the software used in the video to process the recorded footage into a 3D scan. Its AI capabilities let it handle complex surfaces that traditional photogrammetry struggles with. The host uses Luma AI to demonstrate the scanning process and to create a 3D animation for a client.

💡Photogrammetry

Photogrammetry is a technique for creating 3D models from photographs or video. Traditionally used in surveying and mapping, it has since expanded into fields such as film and gaming. In the video, photogrammetry is contrasted with AI-driven NeRF scanning, which offers advantages such as the ability to capture metallic and reflective surfaces.

💡Metallic and Reflective Surfaces

These are surface types that traditional photogrammetry struggles to capture accurately because of their reflective properties. The video emphasizes that the AI-driven NeRF scanning method showcased here captures these surfaces effectively, allowing for more realistic and detailed 3D models.

💡3D Viewport

The 3D viewport is a feature in 3D modeling and animation software where users can view and manipulate 3D objects within a simulated 3D space. In the video, the host uses the 3D viewport to review the Nerf scan once it has been processed, showcasing the scan from various angles.

💡Camera Animation

A camera animation refers to the movement and changes in perspective of a virtual camera within a 3D environment or animation. The video demonstrates how to create a camera animation using keyframes to move the camera around the subject, resulting in a dynamic and engaging visual sequence.

💡Keyframes

Keyframes are points in an animation timeline that define the start or end of a transition in an object's properties, such as position, rotation, or scale. In the video, the host uses keyframes to set the path of the virtual camera, creating an animation that moves smoothly from one viewpoint to another.
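Luma AI performs the in-between motion automatically (and likely smooths the camera path); the basic idea of moving a camera between two keyframed positions can be sketched with simple linear interpolation. Function and parameter names here are hypothetical:

```python
import numpy as np

def interpolate_keyframes(keyframes, times, t):
    """Linearly interpolate a camera position between keyframes.

    keyframes: sequence of (x, y, z) camera positions
    times:     matching timestamps in seconds, ascending
    t:         query time in seconds
    """
    times = np.asarray(times, dtype=float)
    pts = np.asarray(keyframes, dtype=float)
    # Interpolate each axis independently; np.interp clamps outside the time range
    return np.array([np.interp(t, times, pts[:, axis]) for axis in range(3)])

# Camera travels from x=0 to x=4 over four seconds; query the midpoint
pos = interpolate_keyframes([(0, 0, 5), (4, 0, 5)], [0.0, 4.0], 2.0)
print(pos)  # [2. 0. 5.]
```

Spacing the keyframe timestamps further apart is exactly what slows the animation down, which is why the tutorial adjusts speed by stretching keyframes along the timeline.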

💡Focal Length

The focal length of a camera determines the angle of view and the magnification of the image. A higher focal length results in a narrower field of view (zoomed in), while a lower focal length offers a wider field of view (zoomed out). In the video, the host adjusts the focal length to control the perspective of the camera animation.
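The focal length/field-of-view relationship the host exploits can be made concrete with a small sketch (this assumes a full-frame, 36 mm-wide sensor; the function name is illustrative):

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view in degrees for a given focal length."""
    # fov = 2 * arctan(sensor_width / (2 * focal_length))
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(horizontal_fov_deg(18))            # 90.0  -> wide angle, zoomed out
print(round(horizontal_fov_deg(50), 1))  # 39.6  -> narrower view, zoomed in
```

Doubling the focal length roughly halves the field of view, which is why raising the value in Luma AI reads as zooming in.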

💡Rendering

Rendering is the process of generating a 2D image or video from a 3D model or animation by simulating the behavior of light. In the video, rendering is the final step, where the host exports the animated camera path as a video, choosing the aspect ratio, resolution, and frame rate of the final output.

💡Aspect Ratio

The aspect ratio is the proportional relationship between the width and the height of an image or screen. Common aspect ratios include 4:3, 16:9, and others. In the video, the host chooses an aspect ratio of 16:9, which is a widescreen format commonly used for HDTV and online video content.
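Choosing an aspect ratio fixes the frame's proportions, so picking a width determines the height. A trivial sketch (hypothetical helper name):

```python
def render_height(width, ratio_w=16, ratio_h=9):
    """Pixel height that preserves a ratio_w:ratio_h aspect ratio for a given width."""
    return width * ratio_h // ratio_w

print(render_height(1920))  # 1080 (Full HD)
print(render_height(3840))  # 2160 (4K UHD)
```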

💡Presets

Presets are pre-defined settings or configurations that can be used to quickly apply specific effects or parameters without manually adjusting each setting. In the video, the host mentions using preset camera animations such as 'orbit' and 'oscillate' to easily create a camera path around the subject.

Highlights

Luma AI is a free tool that uses Neural Radiance Fields (NeRF) to scan environments.

NeRF is a new method that, unlike traditional photogrammetry, can also capture metallic and reflective surfaces.

A high-end camera is not required; an iPhone can be used to record the subject.

To capture a subject, move around it in circles at least three times, varying heights for different perspectives.

Pause the video and read Luma AI's tips before starting to capture.

Upload the recorded video to Luma Labs to start the scanning process.

Processing a scan usually takes around 30 minutes.

After processing, the scan can be viewed from various angles in the 3D viewport.

Exporting as a video requires creating a camera animation.

Adjust the focal length of the camera for different zoom levels.

Animate the camera by adding keyframes at the desired camera positions.

The animation's speed and smoothness can be adjusted by manipulating keyframes on the timeline.

Render the video with options for aspect ratio, resolution, and frame rate.

Preset camera animations like 'orbit' and 'oscillate' can be used for convenience.

3D assets created in the tutorial are available for free download on the presenter's Blender Kit profile.

The presenter, Nik, provides a link to the Blender Kit profile in the video description.

This tutorial aims to teach viewers how to use Luma AI for creating 3D scans and animations.

Questions about the tutorial can be asked in the comment section.