Turning a VIDEO into 3D using LUMA AI and BLENDER!
TLDR
In this video, the presenter demonstrates how to turn a video into a 3D model using Luma AI and Blender. They capture various objects, including a payphone and a car, using different camera angles and devices. Despite challenges like reflective surfaces and low light, Luma AI successfully creates 3D models from the footage. The presenter then refines the models in Blender, removing sharp edges for a smoother appearance. The video concludes with a teaser for a future demonstration of how these 3D assets can be used as backgrounds in 3D software, with quality expected to improve as the technology advances.
Takeaways
- Luma AI's new feature allows turning videos into 3D models through photogrammetry, eliminating the need for individual photos.
- The process is quick, with the video-to-3D conversion taking approximately 20 to 30 minutes per clip.
- The quality of the 3D model depends on the source video, with DSLR footage providing sharper results than iPhone footage.
- Reflective surfaces and low-light conditions present challenges, but Luma AI still manages to produce impressive results.
- The technology is in its early stages, with quality expected to improve continuously.
- The AI automatically separates the scene from the object, showing it can discern and build detailed 3D models.
- Luma AI offers a downloadable glTF model and an Unreal Engine plugin for further integration and use.
- Post-processing in Blender can refine the 3D models by smoothing out sharp edges.
- The source footage can be short: a one-minute, 42-second recording yielded a usable 3D model.
- Even with limitations like reflective paint and darkness, the car model was surprisingly good for background use.
- A follow-up demonstration is planned to show how these 3D assets perform as backgrounds in 3D software.
Q & A
What is the main topic of the video?
-The main topic of the video is demonstrating how to turn a video into a 3D model using Luma AI and Blender.
What does Luma AI's video to photogrammetry feature allow?
-Luma AI's video to photogrammetry feature allows users to create 3D models from videos instead of taking multiple photos to capture an object in 3D space.
What was the time constraint the creators faced while filming the video?
-The creators had a time constraint as they only had a couple of minutes before it turned dark, which affected their filming.
How long did it take for each 3D mesh to be processed after uploading the video clips to Luma AI?
-Each 3D mesh took about 20 to 30 minutes to be processed after uploading the video clips to Luma AI.
What tool did the creator use in Blender to smooth out the 3D model?
-The creator used a smooth tool in Blender to remove the sharp edges from the 3D model.
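The video doesn't name the exact tool, so as a minimal sketch of one way to do this with Blender's Python API: shade-smooth the scan and add a Smooth modifier. The object name and parameter values below are placeholder assumptions, not taken from the video.

```python
# Minimal Blender (bpy) sketch: smooth a scanned photogrammetry mesh.
# Assumes it runs inside Blender; object name and values are placeholders.
import bpy

obj = bpy.data.objects["PayphoneScan"]  # hypothetical name of the imported mesh
obj.select_set(True)
bpy.context.view_layer.objects.active = obj
bpy.ops.object.shade_smooth()  # smooth normal shading across all faces

# A Smooth modifier relaxes the sharp, noisy edges typical of scanned geometry
mod = obj.modifiers.new(name="SmoothEdges", type='SMOOTH')
mod.factor = 0.5     # smoothing strength per iteration (illustrative)
mod.iterations = 5   # more iterations give a softer result (illustrative)
```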
How long was the footage that the creator recorded for the cone 3D model?
-The creator spent only one minute and 42 seconds recording the footage for the cone 3D model.
What are the differences in the quality of the 3D models created from iPhone and Sony DSLR footage?
-The 3D model created from the Sony DSLR footage was sharper, not only because of the camera's image quality but also because the creator got closer to the object. The iPhone footage was captured in three loops around the object (at high, mid, and low angles), as the website instructs.
What challenges were faced when trying to create a 3D model of the car?
-The challenges faced when creating a 3D model of the car included the car's reflective paint, the darkness outside, and the fact that the entire car was not visible in the footage.
What is the next step the creators plan to take with the 3D models?
-The next step the creators plan to take is to use the 3D models to create a quick and short video to see how these assets perform when used in the background in 3D software.
What is the significance of the glTF model and Unreal Engine plugin mentioned in the video?
-glTF is a standard file format for 3D models, in which the creator downloaded the result, and the Unreal Engine plugin is a tool the creator plans to cover in another video. Together they indicate the versatility and compatibility of the 3D models created with Luma AI.
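As a minimal sketch, the downloaded glTF can be pulled into Blender with the built-in glTF 2.0 importer; the file path below is a placeholder assumption.

```python
# Minimal Blender (bpy) sketch: import a glTF/GLB downloaded from Luma AI.
# The file path is a placeholder assumption.
import bpy

bpy.ops.import_scene.gltf(filepath="/path/to/luma_capture.glb")

# The importer leaves the new objects selected; list them as a sanity check
for obj in bpy.context.selected_objects:
    print(obj.name, obj.type)
```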
What does the creator imply about the future of this technology?
-The creator implies that the technology is in its early stages and that the quality of the 3D models generated is expected to improve over time.
What was the overall impression of the results obtained from Luma AI for the car?
-Despite the challenges, the creator found the car results pretty impressive, especially for an asset meant for background use.
Outlines
Luma AI Video to Photogrammetry Discovery
The speaker expresses excitement about discovering that Luma AI can now convert videos into 3D models through photogrammetry. They mention the urgency of capturing footage before dark and describe recording from different angles and heights. The video also covers uploading the clips to Luma AI's website, dealing with encoding issues, and the time it takes the AI to generate each 3D mesh. The resulting 3D models are showcased, with a focus on how well the AI separates the scene from the object. The speaker then refines the models in Blender and compares the quality of models derived from different cameras, an iPhone and a Sony DSLR. The reflectivity of windows and the difficulty of capturing a long, reflective car in low light are noted, and the speaker closes with a teaser for a follow-up showing these 3D assets used in a short video.
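The creator handles the encoding step in DaVinci Resolve. As an alternative sketch (not the workflow shown in the video), the same H.265 re-encode can be scripted with ffmpeg from Python; the file names and quality settings below are assumptions.

```python
# Sketch: re-encode DSLR footage to H.265 before uploading to Luma AI.
# Not the method shown in the video (the creator uses DaVinci Resolve);
# file names and the CRF value are illustrative assumptions.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "payphone_dslr.mov",  # hypothetical source clip
    "-c:v", "libx265",          # H.265 / HEVC encoder
    "-crf", "23",               # quality target (lower means better quality)
    "-tag:v", "hvc1",           # tag for broader player compatibility
    "payphone_h265.mp4",
], check=True)
```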
Keywords
Luma AI
Photogrammetry
3D Model
Video Footage
DaVinci Resolve with H.265 Encoding
VLC Cone
GLTF Model
Unreal Engine
Blender
Payphone
Reflective Surface
Highlights
Luma AI has enabled video to photogrammetry, allowing 3D capture of an object from a video instead of photos.
The process was tested in a time-limited setting with natural light conditions.
Different camera angles and heights were utilized for the video capture.
DaVinci Resolve was used to process DSLR footage with H.265 encoding before uploading to Luma AI.
Each 3D mesh generated by Luma AI took approximately 20 to 30 minutes to complete.
The AI successfully separated the scene from the object in the video, showcasing impressive results.
A glTF model and an Unreal Engine plugin are available for further integration.
Blender was used to refine the 3D model by smoothing out sharp edges.
Footage only a minute and 42 seconds long was enough to produce a usable 3D model.
The technology is in its early stages, with expectations of improving quality over time.
Payphone results varied between iPhone and Sony DSLR footage, with the DSLR providing sharper quality.
Reflective surfaces and low light conditions presented challenges in the car modeling example.
Despite the challenges, the car model result was considered impressive for background use.
A quick and short video will be created using the 3D assets to demonstrate their performance in 3D software.
The potential applications of this technology are vast, offering new possibilities for video content.
Stay tuned for a follow-up video covering the Unreal Engine plugin.
The process demonstrates the potential of AI in transforming 2D video into interactive 3D models.