Turning a VIDEO into 3D using LUMA AI and BLENDER!

Bad Decisions Studio
17 Apr 2023 · 03:18

TLDR: In this video, the presenter demonstrates how to turn a video into a 3D model using Luma AI and Blender. They capture various objects, including a payphone and a car, from different camera angles and with different devices. Despite challenges such as reflective surfaces and low light, Luma AI successfully creates 3D models from the footage. The presenter then refines the models in Blender, smoothing out sharp edges. The video concludes with a teaser for a future demonstration of how these 3D assets can be used as background elements in 3D software, with quality expected to improve as the technology advances.

Takeaways

  • 🚀 Luma AI's new feature allows turning videos into 3D models through photogrammetry, eliminating the need for individual photos.
  • ⏱️ The process is quick, with the video-to-3D conversion taking approximately 20 to 30 minutes per clip.
  • 📹 The quality of the 3D model depends on the source video, with DSLR footage providing sharper results than iPhone footage.
  • 🌆 Reflective surfaces and low light conditions present challenges, but Luma AI still manages to produce impressive results.
  • 📈 The technology is in its early stages, with expectations of continuous improvement in quality.
  • 📈 The AI automatically separates the scene from the object, showcasing its ability to discern and create detailed 3D models.
  • 📦 Luma AI offers a downloadable glTF model and an Unreal Engine plugin for further integration and use.
  • 🛠️ Post-processing in Blender can refine the 3D models by smoothing out sharp edges.
  • 🎥 The source footage can be short: a recording of just one minute and 42 seconds yielded a usable 3D model.
  • 🌟 Even with limitations like reflective paint and darkness, the car model produced was surprisingly good for background use.
  • 📅 A follow-up demonstration is planned to show how these 3D assets perform when used in 3D software for background purposes.

Q & A

  • What is the main topic of the video?

    -The main topic of the video is demonstrating how to turn a video into a 3D model using Luma AI and Blender.

  • What does Luma AI's video to photogrammetry feature allow?

    -Luma AI's video to photogrammetry feature allows users to create 3D models from videos instead of taking multiple photos to capture an object in 3D space.

  • What was the time constraint the creators faced while filming the video?

    -The creators had a time constraint as they only had a couple of minutes before it turned dark, which affected their filming.

  • How long did it take for each 3D mesh to be processed after uploading the video clips to Luma AI?

    -Each 3D mesh took about 20 to 30 minutes to be processed after uploading the video clips to Luma AI.

  • What tool did the creator use in Blender to smooth out the 3D model?

    -The creator used a smoothing tool in Blender to remove the sharp edges from the 3D model.

  • How long was the footage that the creator recorded for the cone 3D model?

    -The footage for the cone 3D model was only one minute and forty-two seconds long.

  • What are the differences in the quality of the 3D models created from iPhone and Sony DSLR footage?

    -The 3D model created from the Sony DSLR footage was sharper, not only because of the camera's image quality but also because the creator got closer to the object. The iPhone footage was captured in three loops around the object (high, mid, and low angles), as instructed on the website.

  • What challenges were faced when trying to create a 3D model of the car?

    -The challenges faced when creating a 3D model of the car included the car's reflective paint, the darkness outside, and the fact that the entire car was not visible in the footage.

  • What is the next step the creators plan to take with the 3D models?

    -The next step the creators plan to take is to use the 3D models to create a quick and short video to see how these assets perform when used in the background in 3D software.

  • What is the significance of the glTF model and Unreal Engine plugin mentioned in the video?

    -The glTF model is a file format for 3D models that the creator downloaded, and the Unreal Engine plugin is a tool that the creator plans to cover in another video, indicating the versatility and compatibility of the 3D models created with Luma AI.

  • What does the creator imply about the future of this technology?

    -The creator implies that the technology is in its early stages and that the quality of the 3D models generated is expected to improve over time.

  • What was the overall impression of the results obtained from Luma AI for the car?

    -Despite the challenges, the creator found the results for the car to be pretty impressive, especially considering it was used in the background.

Outlines

00:00

📹 Luma AI Video to Photogrammetry Discovery

The speaker expresses excitement about discovering that Luma AI can now convert videos into 3D models through photogrammetry. They mention the urgency of capturing footage before dark and the process of recording from different angles and heights. The video also covers the technical steps of uploading clips to Luma AI's website, dealing with encoding issues, and the time it takes for the AI to generate a 3D mesh. The resulting 3D models are showcased, with a focus on the impressive separation of the scene from the object. The speaker also discusses using Blender to refine the 3D models and compares the quality of models derived from different cameras, including an iPhone and a Sony DSLR. The reflective nature of windows and the challenges of capturing a long, reflective car in low light are mentioned, and the speaker concludes with a teaser for a future video showcasing the use of these 3D assets in a short clip.

Mindmap

Keywords

Luma AI

Luma AI is a technology that enables the conversion of video into 3D models through photogrammetry. In the context of the video, it is used to transform regular video footage into a 3D representation of the objects within it. This is significant as it streamlines the process of creating 3D models from visual data, which traditionally required a series of photographs taken from different angles. The video demonstrates how Luma AI can process various types of footage, from an iPhone to a Sony DSLR, to create 3D models.

Photogrammetry

Photogrammetry is the technique of making measurements from photographs, especially for recovering the exact positions of points. In the video, it is the process by which Luma AI analyzes video to create 3D models. This method is traditionally used with a series of still images but is innovatively applied to video in this case, showcasing a new application of photogrammetry in the field of 3D modeling.

3D Model

A 3D model refers to a mathematical representation of any three-dimensional surface of objects in a computer. In the video, the 3D models are the end product of Luma AI's photogrammetry process. The creation of these models from video footage allows for a wide range of applications, such as gaming, animation, or virtual reality, where the 3D objects can be manipulated and viewed from any angle.

Video Footage

Video footage is the actual recording of moving images captured by a camera. In the context of the video, different video footages from an iPhone and a Sony DSLR are used as input for Luma AI to generate 3D models. The quality and characteristics of the video footage directly impact the final 3D model's accuracy and detail, as demonstrated by the comparison between the iPhone and DSLR results.

DaVinci with H.265 Encoding

DaVinci (DaVinci Resolve) is professional video editing software, and H.265 encoding refers to a video compression standard (HEVC) that provides better data compression than its predecessor, H.264. In the video, the DSLR footage was re-encoded through DaVinci Resolve with H.265 before being uploaded to Luma AI. This step was necessary because the original DSLR footage failed to upload directly, highlighting the importance of video format compatibility in the 3D modeling process.
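The video re-encodes the footage in DaVinci Resolve but does not show the exact export settings. As a rough command-line equivalent, a sketch assuming ffmpeg is available (file names are placeholders, not from the video):

```shell
# Re-encode DSLR footage to H.265 (HEVC) before uploading to Luma AI.
# -crf 23 is a reasonable quality/size trade-off; -tag:v hvc1 improves
# player compatibility for HEVC in an MP4 container.
ffmpeg -i input.mov -c:v libx265 -crf 23 -preset medium -tag:v hvc1 output.mp4
```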

VLC Cone

The "VLC cone" is the creator's nickname for one of the 3D models generated by Luma AI: a traffic cone, presumably named after the VLC media player's traffic-cone logo. It serves as an example of the quality of the 3D models produced by Luma AI, demonstrating the AI's ability to separate the object from its background and create a clean 3D representation.

glTF Model

glTF, which stands for GL Transmission Format, is a file format for 3D models that emphasizes efficiency and speed for web-based graphics. In the video, the creator mentions downloading a glTF model, which is one of the output options provided by Luma AI. This format allows the 3D models to be easily integrated into various applications and platforms that support glTF.
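A glTF file is essentially a JSON document describing scenes, nodes, and meshes (the binary geometry lives in separate buffers), so it can be inspected with any JSON tool. A minimal sketch using only the Python standard library; the tiny document and the "payphone" names are hypothetical, for illustration only:

```python
import json

# A minimal glTF 2.0 document (hypothetical, for illustration):
gltf_text = json.dumps({
    "asset": {"version": "2.0"},
    "scenes": [{"nodes": [0]}],
    "nodes": [{"mesh": 0, "name": "payphone"}],
    "meshes": [{"name": "payphone_mesh", "primitives": []}],
})

# Loading a .gltf file is just parsing JSON, so listing its meshes
# is a one-liner once the document is loaded.
gltf = json.loads(gltf_text)
mesh_names = [mesh.get("name", "<unnamed>") for mesh in gltf["meshes"]]
print(mesh_names)  # ['payphone_mesh']
```

This portability is why a downloaded glTF model drops straight into Blender, Unreal Engine, or a web viewer.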

Unreal Engine

Unreal Engine is a widely-used game engine for developing high-quality games and 3D applications. The video mentions an Unreal Engine plugin, suggesting that Luma AI's 3D models can be used within the Unreal Engine environment. This plugin would facilitate the integration of the AI-generated 3D models into game development or other interactive 3D applications.

Blender

Blender is a free and open-source 3D creation suite that supports the entirety of the 3D pipeline—modeling, rigging, animation, simulation, rendering, compositing, and motion tracking. In the video, Blender is used to refine the 3D models generated by Luma AI, specifically to smooth out sharp edges. This indicates the software's role in post-processing and fine-tuning the AI-generated models for further use.
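The smoothing step described above works by Laplacian-style averaging: each vertex is pulled toward the mean of its neighbors, which rounds off sharp edges. A minimal one-dimensional sketch of the idea in plain Python (not Blender's actual implementation; the function and values are illustrative):

```python
def smooth(values, factor=0.5, iterations=1):
    """Pull each interior value toward the mean of its two neighbours."""
    vals = list(values)
    for _ in range(iterations):
        new = vals[:]
        for i in range(1, len(vals) - 1):
            neighbour_avg = (vals[i - 1] + vals[i + 1]) / 2
            new[i] = vals[i] + factor * (neighbour_avg - vals[i])
        vals = new
    return vals

# A sharp spike is spread out and lowered, i.e. the edge is softened:
profile = [0.0, 0.0, 10.0, 0.0, 0.0]
print(smooth(profile, factor=0.5, iterations=1))  # [0.0, 2.5, 5.0, 2.5, 0.0]
```

Repeating the pass (higher `iterations`) flattens the surface further, which mirrors how stronger smoothing in Blender trades detail for a softer shape.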

Payphone

In the video, the payphone serves as a subject for the 3D modeling process using Luma AI. The payphone is filmed from various angles and heights to create a 3D model. The choice of a payphone as an object demonstrates the versatility of Luma AI in handling different types of objects and textures, including reflective surfaces and those with intricate details.

Reflective Surface

A reflective surface is a material that bounces back a significant amount of light, often causing complications in image processing and 3D modeling due to mirror-like reflections. In the video, the creators mention challenges in modeling a car with a reflective paint job. Despite the difficulties, Luma AI still produced an impressive result, showcasing the technology's potential in handling complex surfaces.

Highlights

Luma AI has enabled video to photogrammetry, allowing 3D capture of an object from a video instead of photos.

The process was tested in a time-limited setting with natural light conditions.

Different camera angles and heights were utilized for the video capture.

DaVinci Resolve was used to process the DSLR footage with H.265 encoding before uploading it to Luma AI.

Each 3D mesh generated by Luma AI took approximately 20 to 30 minutes to complete.

The AI successfully separated the scene from the object in the video, showcasing impressive results.

A glTF model and an Unreal Engine plugin are available for further integration.

Blender was used to refine the 3D model by smoothing out sharp edges.

Footage just a minute and 42 seconds long was enough to produce a 3D model.

The technology is in its early stages, with expectations of improving quality over time.

Payphone results varied between iPhone and Sony DSLR footage, with the DSLR providing sharper quality.

Reflective surfaces and low light conditions presented challenges in the car modeling example.

Despite the challenges, the car model result was considered impressive for background use.

A quick and short video will be created using the 3D assets to demonstrate their performance in 3D software.

The potential applications of this technology are vast, offering new possibilities for video content.

Stay tuned for a follow-up video covering the Unreal Engine plugin.

The process demonstrates the potential of AI in transforming 2D video into interactive 3D models.