ULTIMATE FREE LORA Training In Stable Diffusion! Less Than 7GB VRAM!
Summary
TLDR: This video tutorial introduces LoRA, an efficient method for training AI models on personal images, optimized for devices with limited GPU resources. It combines elements of Dreambooth and textual inversion, producing lightweight files that work with any base model. The guide walks viewers through the process using the Kohya SS GUI, from installing the software to training a model, with detailed steps and configurations. The result is a personalized model that can be mixed and matched for unique image generation, all achievable on modest hardware.
Takeaways
- 😀 LoRA is a method for training AI models on personal images, optimized for small graphics cards with only 6-7 GB of VRAM.
- 🤖 LoRA combines elements of Dreambooth and textual inversion, creating small files that can be used with any model, unlike much larger Dreambooth checkpoints.
- 🚀 The video demonstrates the Kohya SS GUI, user-friendly software for training Dreambooth models, LoRAs, and textual inversion embeddings, and for fine-tuning models.
- 🔧 Installing the Kohya SS GUI requires Python, git, and the Visual Studio redistributables, with detailed steps shown in the video.
- 📁 LoRA training needs a specific folder structure: an image folder, a model folder, and a log folder, with the image folder containing a subfolder whose name encodes the per-image step count.
- 🖼️ Image preparation involves high-quality, varied images, optionally resized to 512x512 resolution, with BLIP used for initial captioning.
- ✍️ For more precise results, add the character's name to the beginning of each image caption.
- 🔢 Aim for at least 100 training steps per image and at least 1500 steps in total; the per-image count goes into the subfolder name.
- 📝 Two configuration files are provided: one with basic settings and one for systems with less than 8 GB of VRAM, simplifying setup.
- 🔧 Training parameters such as batch size, learning rate, and resolution can be adjusted, with memory-efficient options suggested for weak GPUs.
- 🎨 The finished LoRA can be used in Stable Diffusion via an extension, with the model's weight adjustable in prompts for varied results.
Q & A
What is LoRA in the context of this video?
-LoRA is a method for training a subject using your own images, optimized for small graphics cards; it requires only 6 to 7 gigabytes of VRAM, making it accessible to users with limited GPU resources.
How does LoRA compare to Dreambooth and Textual Inversion in terms of resource requirements?
-LoRA is more resource-friendly: it trains a subject with significantly less VRAM, making it suitable for users with less powerful GPUs.
What is the file size of the embeddings created by LoRA?
-LoRA files are small, typically around 100-300 megabytes, which is considerably less than a full Dreambooth checkpoint.
Can LoRA files be used with any model?
-Yes, LoRA files can be applied to any model in the same way as Textual Inversion embeddings, making them versatile for various applications.
Why is the Dreambooth extension not used for training LoRA in this video?
-The Dreambooth extension often breaks after AUTOMATIC1111 web UI updates, which the presenter finds annoying and prefers to avoid.
What is the Kohya SS GUI and how is it used in the video?
-The Kohya SS GUI is software that simplifies training a Dreambooth model, a LoRA, a checkpoint, or a textual inversion embedding, and even fine-tuning your own model. It is used in the video to train the LoRA efficiently and easily.
What are the system requirements for installing the Kohya SS GUI?
-You need Python and git installed, as well as the Visual Studio 2015/2017/2019/2022 redistributable libraries. If they are missing, the repository links to an installer that sets them up.
How does one set up the Kohya SS GUI for training?
-The setup involves installing the prerequisites, creating a new folder for the Kohya installation, copying and pasting the provided command lines into Windows PowerShell, and following the prompts to complete the installation.
What is the recommended image resolution for training with LoRA?
-The recommended resolution is 512x512 pixels. For higher-quality results on a powerful GPU, training at 768x768 is suggested.
What is the minimum number of images recommended for training with LoRA?
-At least 10 high-quality images. Training should run for at least 100 steps per image, with a total of at least 1500 steps.
How can one use the trained LoRA model in Stable Diffusion?
-After training, copy the saved .safetensors file into the Stable Diffusion web UI's models/Lora directory. An extension called 'Kohya-ss Additional Networks' needs to be installed to use the model within the UI.
How does one mix different LoRA models in a prompt for Stable Diffusion?
-Multiple LoRA models can be combined in a prompt by selecting them and adjusting their respective weights to control each model's influence on the final image, keeping the total weight at or below one.
What is the impact of using a lower batch size during training?
-A batch size of one can slightly improve training quality, especially when the number of images is limited. It increases training time but uses less VRAM than higher batch sizes.
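The relationship between batch size and speed described above is simple arithmetic: each optimizer update consumes `batch_size` image-steps, so a larger batch means fewer updates. A minimal sketch of that back-of-the-envelope calculation (illustrative only; it assumes step counts divide as the video describes):

```python
def optimizer_steps(total_image_steps: int, batch_size: int) -> int:
    """Number of optimizer updates: each update consumes `batch_size` image-steps."""
    return total_image_steps // batch_size

# 1500 image-steps at batch size 2 -> 750 updates (roughly half the wall time);
# batch size 1 -> 1500 updates, slower but each image gets its own update.
print(optimizer_steps(1500, 2))  # 750
print(optimizer_steps(1500, 1))  # 1500
```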
What are the benefits of training at a higher resolution like 768x768?
-Training at 768x768 can produce better-looking models when high-quality images are available, but it requires a more powerful GPU because VRAM usage increases.
How can one ensure the proper naming of folders for training steps in LoRA?
-The folder name is the number of training steps per image, followed by an underscore and the character's name (e.g. 150_Wednesday Addams), chosen so that each image gets at least 100 steps and the total is at least 1500.
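The naming rule above can be written as a small calculation. This is a sketch of the video's rule only (at least 100 steps per image, at least 1500 total, with the division truncated as the presenter does); the helper names are my own:

```python
MIN_STEPS_PER_IMAGE = 100   # the video's minimum repeats per image
MIN_TOTAL_STEPS = 1500      # the video's minimum total training steps

def repeats_per_image(num_images: int) -> int:
    """Steps (repeats) each image should get, per the rule in the video."""
    # With few images, spread the 1500-step floor across them (truncating,
    # as the video does for 1500/11 -> 136); never go below 100 per image.
    return max(MIN_STEPS_PER_IMAGE, MIN_TOTAL_STEPS // num_images)

def folder_name(num_images: int, character: str) -> str:
    """Kohya-style image subfolder name: '<repeats>_<character name>'."""
    return f"{repeats_per_image(num_images)}_{character}"

print(folder_name(10, "Wednesday Addams"))  # 150_Wednesday Addams
print(folder_name(11, "Wednesday Addams"))  # 136_Wednesday Addams
print(folder_name(20, "Wednesday Addams"))  # 100_Wednesday Addams
```

These three cases match the examples worked through in the video: 10 images need 150 repeats, 11 need 136, and at 20 or more images the 100-repeat floor takes over.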
What are the configuration files provided in the description for?
-They automatically set optimal LoRA training parameters: one file for general use and another for systems with less than 8 gigabytes of VRAM.
What is the role of the 'Enable buckets' option during training?
-'Enable buckets' is used when the training images have different dimensions, aspect ratios, and resolutions. It allows the training process to accommodate images that are not uniformly cropped or sized.
How can one modify the weight of a LoRA model in a prompt?
-Adjust the number in the model's call-out text in the prompt: a value of 1 corresponds to 100% influence, 0.9 to 90%, and so on.
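For illustration, the call-out text with its weight can be built programmatically. Note this sketch uses the `<lora:name:weight>` syntax of the AUTOMATIC1111 web UI's built-in LoRA support; the Additional Networks extension shown in the video manages weights through its own UI fields, but the weight semantics (1.0 = 100% influence) are the same. The helper name is hypothetical:

```python
def lora_tag(name: str, weight: float = 1.0) -> str:
    """Build a <lora:name:weight> call-out for an AUTOMATIC1111-style prompt."""
    return f"<lora:{name}:{weight}>"

# Two LoRAs mixed, keeping the total weight at or below one as advised:
prompt = (lora_tag("WednesdayAddams", 0.6) + " " + lora_tag("FilmGrainStyle", 0.4)
          + " a portrait photo, detailed face")
print(prompt)
```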
Outlines
🤖 Introduction to the LoRA Training Method
This paragraph introduces LoRA, a training method that uses personal images and is optimized for small graphics cards, requiring only 6-7 gigabytes of VRAM. LoRA is described as a hybrid between Dreambooth and textual inversion, creating small files usable with any model. The video does not use the Dreambooth extension because of compatibility issues with AUTOMATIC1111 updates, focusing instead on the Kohya SS GUI, user-friendly software for training and fine-tuning models. The paragraph concludes with acknowledgments to contributors who provided tips and created instructional videos for the Kohya SS GUI.
🛠️ Setting Up the Kohya SS GUI
The paragraph outlines installing the Kohya SS GUI, starting with the repository and the prerequisites Python and git. It details the installation steps, including running commands in Windows PowerShell as administrator, and points to a download for the required Visual Studio redistributable libraries if they are missing. The process involves creating a new folder for the installation, copying and pasting code from GitHub, and following the installation prompts. An optional step is mentioned for users with NVIDIA 30 or 40 series graphics cards to improve training speed.
🖼️ Preparing Images for LoRA Training
This section explains image preparation, emphasizing high-quality and varied images. It suggests using a website to resize images to 512x512 resolution and manually editing the caption of each image so it describes the picture precisely. The process uses a tool like BLIP for initial captioning, then edits the captions to include the character's name for more precise training. The paragraph also describes the required folder structure: an image folder, a model folder, and a log folder, with the training steps determined by the number of images provided.
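The caption-editing step above (one `.txt` file per image, character name prepended) is simple bookkeeping that can be sketched with the standard library. This does not invoke BLIP itself, and the function name and comma-separated prefix format are my own assumptions about how the captions are laid out:

```python
from pathlib import Path

def prepend_character(image_dir: str, character: str) -> int:
    """Prefix every caption .txt in `image_dir` with the character name
    (the extra-precision step described in the video). Returns how many
    caption files were modified."""
    count = 0
    for txt in Path(image_dir).glob("*.txt"):
        caption = txt.read_text(encoding="utf-8").strip()
        if not caption.startswith(character):
            txt.write_text(f"{character}, {caption}", encoding="utf-8")
            count += 1
    return count
```

After running this over the image folder, a caption like "a woman with braided hair" becomes "Wednesday Addams, a woman with braided hair", so the character name in a prompt links to the LoRA as the video describes.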
🔧 Configuring Training Parameters
The paragraph covers configuring training parameters, starting with choosing a base Stable Diffusion model and ticking the appropriate checkboxes for the model's version. It introduces two configuration files provided by the creator, one for general use and another for systems with less than 8 gigabytes of VRAM. The creator shows how to enter the folder paths for images, outputs, and logs, and how to set the model output name. It also touches on parameters such as batch size, learning rate, and resolution, with advice tied to the strength of the user's GPU.
🚀 Starting the LoRA Training Process
This paragraph covers initiating training by clicking the 'Train model' button once everything is configured. It explains parameters such as batch size, which affects training speed and VRAM usage, and the importance of the right settings for users with limited VRAM, including how the provided configuration files simplify setup for low-VRAM systems. Training time is highlighted: 1500 steps with a batch size of 2 take only 6 minutes, and the result is a .safetensors file that can be used in Stable Diffusion with the help of an extension.
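The 6-minute figure implies a throughput that can be used to estimate other runs. A rough sketch, assuming updates are image-steps divided by batch size; the ~2.1 updates/s rate is back-calculated from the video's numbers and applies only to the presenter's GPU:

```python
def estimated_minutes(total_image_steps: int, batch_size: int,
                      updates_per_sec: float) -> float:
    """Rough wall-time estimate: optimizer updates divided by throughput."""
    updates = total_image_steps / batch_size
    return updates / updates_per_sec / 60

# 1500 steps, batch 2 -> 750 updates; at ~2.08 updates/s that is ~6 minutes,
# matching the time reported in the video.
print(round(estimated_minutes(1500, 2, 2.08), 1))  # 6.0
```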
🎨 Using Trained LoRA Models in Stable Diffusion
The final paragraph demonstrates using the trained LoRA in Stable Diffusion: copying the model file into the appropriate folder and installing the necessary extension. It explains how to add the LoRA to prompts via the Additional Networks extension and how to adjust the model's weight to influence image generation. The creator shows examples generated at different weights and mixes two LoRA models to create unique images, emphasizing LoRA's flexibility and speed even on limited GPU resources.
Keywords
💡LoRA
💡VRAM
💡Dreambooth
💡Textual Inversion
💡Kohya SS GUI
💡Training Parameters
💡Batch Size
💡Embeddings
💡Stable Diffusion
💡Extensions
💡Regularization
Highlights
Introduction of LoRA, a method for training AI models on personal images with minimal VRAM requirements.
LoRA is optimized for small graphics cards, requiring only 6-7 gigabytes of VRAM.
Comparison of LoRA with Dreambooth and Textual Inversion, noting its smaller file size and flexibility.
Explanation of how LoRA creates small files, on the order of 100 megabytes, applicable to any model.
Mention of the Kohya SS GUI, user-friendly software for training and fine-tuning models.
Details on the Kohya SS GUI installation process and prerequisites such as Python and git.
Instructions for setting up the command line and executing the installation script.
Optional steps to improve training speed on NVIDIA 30 or 40 series graphics cards.
Description of the folder structure necessary for LoRA training.
Importance of image quality and variation for effective training.
Process of resizing images to 512x512 resolution for training.
Use of BLIP for image captioning and manual editing of captions for precise training.
How to name training folders to set the per-image step count based on the number of images.
Configuration files that preset training parameters to simplify the process.
Selection of base Stable Diffusion models and its impact on training settings.
Recommendation to use the safetensors format for Stable Diffusion models.
Discussion of training parameters such as batch size, learning rate, and resolution.
Techniques for mixing multiple LoRA models to create unique images.
Demonstration of the final results and the ability to adjust model weights for different outcomes.
Conclusion emphasizing LoRA's power for quick training on weak GPUs.
Transcripts
and in this video I will show you how
you can train your own subject using
Laura now what is Laura well Laura is a
method of training your subject using
your own images that is optimized for
small graphics card meaning that
compared to dream wolf or textual
inversion you can train a subject with
only 6 to 7 gigabytes of vram which is a
super good news for all of you who don't
have a good GPU now basically Laura is
kinda like a love child between
dreambooth and textual inversion in the
sense that it creates these small 100
megabytes file that you can use in the
exact same way as textual new version
embeddings meaning that they can be
applied to any model and the size of
these embeddings usually range between
300 megabytes in 100 megabytes which is
definitely way less than a dream Booth
model and of course just like dreamwoof
and textual inversion you can train a
style or a character absolutely anything
you want now in this video I'm not going
to be using the dreambooth extension to
train Laura now I've already explained
why in my previous Dreamboat video then
the dreambooth extension often does not
work very well with the automatic 11
because of new updates so sometimes a
new update can completely break the
training and it's very annoying and I
personally don't want to deal with this
anymore but don't worry because we're
going to be using something even better
and that is the Koya SS GUI it is a
super cool piece of software that is
super easy to use where you can train a
dreamboof model and lower a checkpoint
and texture inversion embedding or even
fine-tune your own model and thanks to
the configuration files that I'm gonna
provide you in the description down
below it is super easy and fast to set
up so let's go but before we begin let
me thank two people for making this
video possible the first one is spy BG
who gave me a lot of tips and tricks on
how you can use Laura efficiently and
the second one is Bernard malte that
made two amazing videos explaining how
to create a lower weight using the Koya
SS GUI and that is also responsible for
making the Koya SS GUI that we're gonna
be using today so again the link for
their channels will be in the
description down below alright then now
let's begin alright so to install the
Koya SS GUI you're gonna click the link
to description down below you're gonna
arrive on this page on the Korea SS GUI
repository you're gonna scroll down
you're gonna make sure that you have
Python and git installed if you already
have stable division installed it should
be already there if you follow my
tutorial video but also make sure that
you have the visual studio 2015 2017 19
2022 with distributable already
installed if you don't have it installed
already you can click on this link right
here and it will download an Excel file
and then just run the Excel file to
install the required libraries it's very
easy very simple very fast nothing to
worry about so then once you have
everything installed you're gonna come
here and select this command line and
Ctrl C to copy it then you're gonna go
into your Windows startup menu and look
for Windows Powershell and then click on
run as administrator and then just paste
the command line that we copied
previously and then press enter it's
going to ask you if you want to change
the execution policy and here you're
gonna type
Aya that basically corresponds to yes
for all and then press enter and there
we go so now that this is done you can
close the window and we're going to
create a brand new folder on our
computer so right click new folder I'm
gonna call mine Koya which is where
we're gonna put the koyan installation
then go inside that folder click on the
folder URL Ctrl C to copy it then go
back to the Windows startup again look
for Powershell but this time no need to
run it as administrator you can just run
it normally and then what we need to do
is that we need to go and design the
folder that we created right here and to
do this you're gonna type CD space and
then paste your folder URL right here
now if you created a folder that is not
on the C drive instead of Simply putting
CD you're gonna have to add slash D and
then the path of your folder let's say
your folder is on your drive e this is
what this would look like but in my case
since this is on my C drive I don't need
the slash D so I'm simply going to be
using CD and then the URL of my folder
and then press enter and as you can see
now we are inside the folder that we
just created right here so now we go
back to GitHub and we're going to click
on this button right here to copy this
entire code and then we're gonna paste
the entire code in the center Windows
Powershell window and if it gives you a
warning you can just click on test
anyway and as you can see right now it
has started running all the lines of
code that we copy and pasted before now
this might take some time so be patient
it shouldn't take too long this is not a
stable diffusion installation after all
but for some of you it might still take
some time now at the end of the
installation it will stop at the
accelerate config line code and to
finish the installation you can just
press enter so then it's going to ask
you a few questions so for the first
question you're going to choose this
machine so here just press enter no
distributed training so press enter
again here you're gonna type no press
enter torch Dynamo you're gonna type no
again then press enter they want to use
deep speed you're going to press no then
press enter again What GPU should be
used you're gonna type all then press
enter Then you wish to use fp16 or bf16
for this one you're going to be choosing
fp16 so just press the down arrow and
then press enter and now finally as you
can see the installation is done now
technically we could stop right here but
there is an additional optional stand
that you can take that will improve the
training speed if you have a 30 or 40
series graphics card for NVIDIA but
don't worry it's very simple all you
have to do is just click on this link
right here and it will download a zip
file and if for some reason this link
does not work or is too slow I will also
give you a mega link that you can use
instead so once you've downloaded the
zip archive on your video you just right
click and then extract it then you're
gonna go inside that folder select the
cud and N Windows Ctrl C to copy it then
go inside the Koya SS folder and paste
the folder right here then we go back to
GitHub click on this button right here
to copy these two lines of code go back
to the windows Powershell window and
then paste the lines of codes right here
and again if it gives you a warning you
can just click pay list anyway and then
press enter and as you can see now
everything is done we did the most
difficult part of this video
congratulations and now if we go inside
our Konya SS folder you will see a bunch
of files but don't worry because
actually there is only two files that
interest us that is the goo either but
in the upgrade.ps one for example next
time that you want to update Koya SS all
you have to do is just come here on the
upgrade.ps1 right click and then run
with Powershell and as you can see it
will automatically update the Korea SS
folder and then finally if we want to
run the Korea SS GUI all you have to do
is just double click on the gui.bat file
and to load the CSS and just like stable
diffusion it will give you a local URL
that you can easily open in your browser
but just holding down control and then
click on the local URL and there you go
now we are inside the Koya SS GUI and we
can finally begin the dreambooth Laura
training now obviously before we can
begin the training there is a few steps
that we need to do before and the first
step is of course to prepare our images
now will not be going into too much
details about that in this video because
I've already talked about it in my
previous text uni version video so
definitely watch this video before
watching this one or at least watch the
beginning of the video where I explain
how to perfectly caption an image
because the process is pretty much the
exact same but if I had to summarize
first make sure that all the images are
of high quality and that the images have
a lot of variation so you have different
lights in different angles make sure to
have at least 10 images as long as they
are of high quality it's better to have
a small amount of images of high quality
than a lot of images with bad quality
then you can use a website like berm.net
to resize your images to 512 by 512
resolution although for lower training
you don't necessarily need to resize any
images you can use images of any
resolution but if you're not confident
and you want to make sure that
everything works well just process the
images manually it will work absolutely
fine then you're gonna capture each
image using a process like blip that
will do like 80 of the world work for
you and you can do this process either
inside stable diffusion or inside the
koyagui by going into utilities and then
believe captioning it will basically do
the exact same thing it will analyze
every image and then create a text file
with the same name as your image and
then for each image you're gonna have to
go and manually modify the captioning so
that it perfectly describes the picture
now the small difference compared to
textual inversion captioning is that in
the beginning of the captioning you're
going to input the name of your
character now you don't necessarily need
to do it you could simply leave it like
that so that the next time that you use
the Laura embedding every women in your
prompt will look like your character but
if you want more Precision for your
prompt you can simply input the name of
your character in the beginning of the
caption this way when you use your name
in The Prompt the name of your character
in the lower embedding will be linked
together but that's not a big deal if
you don't want to do it it's really up
to you so now that our images are
prepared we're gonna have to create a
specific folder structure so this is
also something that is different because
compared to the textual inversion
training but again don't worry it is
fairly easy all you have to do is just
right click new folder I'm not going to
call mine Wednesday Adams Laura I'm
going to select all the files and images
Ctrl C to copy it then going to send the
Wednesday Adam's Laura folder and here
we're going to create three folder an
image folder a model folder and finally
a log folder now we're gonna go and send
the image folder and again here we're
going to create a new folder but this
time this will be a little special
because the way we name our folder will
determine the amount of training steps
that the koyagui will do and this will
depend on the amount of images that you
have so first you need to make sure that
you do at least 100 steps per image with
a final training steps of at least 1500
so for example if you only have 10
images and you need at least 1500 steps
of training you're gonna take 1500
divided by the number of images that you
have and it's going to give you 150 and
this number is the amount of training
Style types per image that we need to
input into folder name so for this
you're going to right click new folder
and you're gonna type 150 underscore and
then the name of your character so in my
case it's Wednesday Adams then you're
going inside that folder and you paste
your images right here so as I said
previously this number right here is
what determines the amount of training
steps per image that the Lora training
will do so in this example I only give
you a number if you only had 10 images
but let's say you add 20 images for
example well in that case there would be
1500 divided by 20 and gives you 75 but
as I said this number should be at least
100 so if you already have 20 images you
don't need to do this math anymore and
you can just input instead of 150 100
same if you have 25 images 30 images etc
etc this number should be at least 100
but if you have less than 15 images you
need to do this little math and for
example simple in my case I only have 11
images 11 is less than 15 so I'm gonna
have to do some math so 1500 divided by
11 gives me 136 well this is the number
that I'm going to be using for the
folder name so again instead of 100 I'm
gonna put
136 and now finally after we've prepared
your images after we prepare the folder
structures we can finally start the last
step which is training and for this I
will make your life very simple because
I prepared for you two configuration
files that you can use to automatically
determine the training parameters so for
this you're gonna click the link in
description down below and you're gonna
have two Json files the first is the
Lora basic settings which is basically
the basic lower settings that work with
pretty much every single training and
the second one is a special settings
file that you can use if you have less
than 8 gigabytes of vram so just
download these two files put them
somewhere safe then you're gonna come
here click on the configuration file
button click on open and then choose one
of these two files so here I'm just
gonna choose the Laura basic settings
file and then click open and if you
click on training parameters you can see
that some of these options were already
done for you these are the most optimal
settings that work for the Laura
training and that you can use as is so
if you don't want to waste time this is
a super quick way to start a training
but I'm also gonna explain a few
settings that you might want to modify
the padding on your training now first
before we begin let's actually choose
our base stable diffusion model
basically what stable diffusion model
are you going to be using to train a
Lora model and here in the model quick
pick you have already a bunch of
pre-selected models the 1.4 the 1.5 the
2.0 2.1 the 2.0 512 by 512 and the 2.1
512 by 512 and if you select one of
these models it will be automatically
downloaded from GitHub and also
depending on the model that you select
some options will be selected so if you
select for example the 1.5 you will see
that this checkbox will not be selected
because this is not a V2 model however
if you select the 2.0 base you will see
that this checkbox is now checked and if
you choose the 2.0 768 version you will
see that now this V parameterization
checkbox is checked also now you need to
keep these checkboxes in mind because if
you come here and click on custom you
can actually choose your own model and
if you choose a custom model you need to
know what model this was based on so for
example if I come here and I choose the
protogen V 2.2 model I know that this
model is based on the 1.5 meaning that I
don't need to check any of these
checkboxes however if I select the model
that is based on the 2.0 model I need to
check this checkbox right here and if
it's based on the 2.0 768 version I need
to check this additional checkbox right
here so this is something to keep in
mind but I personally think that most of
you will use the 1.5 model as a base for
training so you will probably not need
to use any of these check boxes and here
with the safe train model as we have a
bunch of extensions that you can choose
from but of course I highly recommend
that you keep safe tensors because it is
now the safest extension for stable
diffusion models so whenever you have
the choice just choose saved answers so
now you're gonna click on the folders
Tab and here for each section you're
going to input the folder URL so for
example for my image folder you're going
to click here and as you remember we
created a brand new folder for that and
mine is inside Wednesday Adam's Laura
folder and this is the folder that we
want to use do not go inside that folder
you need to select this image folder
right here so just click select folder
for the output folder mine is in
Wednesday Addams Laura and then select
folder and same thing with the login
folder once they add UPS Laura and then
sync folder now you can also choose a
regularization folder where you have
your regularization images but for lower
training you don't really need
regularization images since you're going
to be training one subject anyway so
here you're gonna input the model output
name so in my case I will put Wednesday
Addams then click on the training
parameters and here you're gonna see a
bunch of options but as I said since we
have loaded the configuration file you
don't actually need to touch anything if
you want to start the training right now
all you have to do is just click on the
train model button and you are done but
let's go through some of these training
parameters anyway so the train batch
size is basically the amount of images
that you're gonna train at one time so
the higher the number the faster the
training will be because you basically
divided the amount of training steps by
this number but here's a little advice
if you don't have a lot of images if you
only have like 10 or 15 images I highly
suggest that you choose a batch size of
one it actually improve a little bit the
quality of the training sure it might
take longer but the final result will be
worth it and of course a higher batch
size will also increase the amount of
vram used for the training so if you
have a weak graphics card I suggest that
you put a batch size of one otherwise
you just leave it at 2 by default so
when it comes to everything right here
you're gonna leave everything by default
the learning rate is really good you
don't really need to change anything
this basic settings basically work for
every single training so the max
resolution this is the resolution that
you're going to be training your images
at so if you only have 512 by 5 12
images You're Gonna Leave It 512 by 512
by default but if you can this is my
advice if you have a good GPU instead of
512 or 512 try training this at 768 by
768 yes you will use way more vram but
if you have high quality images your
final model will look way better
otherwise if you have a weak GPU just
leave it at 512x512 by default also
right here the enable buckets if you
often crop your images yourself and you
have images of different dimensions and
ratio and resolutions you should enable
this option right here this will make it
so it can train images with different
resolutions otherwise if you just
cropped it yourself you can just disable
this option right here so again here you
So again, here you can leave everything else at the default, but if you have a weak GPU, I suggest that you enable Memory efficient attention and Gradient checkpointing. This might increase the training time but will also use less VRAM. And if you're lazy like me and don't want to enable all of these options each time you want to train, if you have a weak GPU you can just come up here to the Configuration file section, click on Open, select the low-VRAM settings file, and click Open, and as you can see, all the low-VRAM options will be selected automatically. And then,
guess what: now we are completely done. If we want to begin the training, just scroll down and click on Train model, and as you can see, it will start the training. For 1500 steps with a batch size of 2, this training takes only 6 minutes, which is really super cool. Then, after the training is complete, if you go inside the model folder, you're going to see your final safetensors file right here. To use it inside Stable Diffusion, select it, press Ctrl+C to copy it, go inside your Stable Diffusion WebUI folder, then models, then Lora, and paste your file right there. Then you can launch Stable Diffusion, and now, to be able to
use the LoRA weights that we trained inside the Kohya GUI, we need to install a special extension. For this, you can click on Extensions, click on Available, then Load from. Then scroll down, look for Kohya-ss Additional Networks, and click Install. Then click on Installed, then Apply and restart UI, and once you see the Additional Networks tab, that means the extension was installed correctly. And then finally, to be able to
use the LoRA inside your prompt, you're going to select your model, write your prompt, and then click on this button right here to show the extra networks, and then choose Lora. As you can see, right here you have all the LoRA weights that we created previously, and if you want to use one of them inside your prompt, you can just click on it. As you can see, this line will then appear inside your prompt; you can then select it and move it to the beginning of your prompt. The way it works is that this is the text that's going to be used to call out the LoRA weights, here called Wednesday Addams, and the number that you see right here is the weight of the model. Exactly as when you select a prompt tag and put brackets around it, a higher number right here will use a higher percentage of the model: 1 corresponds to 100 percent, 0.9 to 90 percent, 0.8 to 80 percent,
etc., etc. And you can create some really cool images just by modifying the weight right here. Also, what's super cool is that you can use multiple LoRA weights together: for example, if I click on another one and put it right next to the first, we are now using two different LoRA weights together. But we need to make sure that the numbers, when added together, don't go over 1. So if I use, for example, 0.5 here, I'm going to have to use 0.5 right here; if I use 0.2 right here, I would have to use 0.8 for the other one. And the bigger the number, the more emphasis it's going to put on that LoRA weight. But for this example, let's actually do a normal one: just keep it at 1, then click on Close, and finally click on Generate. And this is the final result; it looks pretty good. As you saw, with only 6 minutes of training and less than 8 gigabytes of VRAM, you can get results like this, and that's
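The tag syntax and the "weights should add up to about 1" rule can be sketched like this; the `<lora:name:weight>` form is the one the extra-networks panel inserts into the prompt, and the model names are just examples:

```python
def lora_tag(name: str, weight: float) -> str:
    # Build a LoRA prompt tag in the same <lora:name:weight> form the UI inserts.
    return f"<lora:{name}:{weight:g}>"

# Mixing two LoRAs: keep the combined weight at 1 or below.
mix = [("WednesdayAddams", 0.6), ("ThomasShelby", 0.4)]
assert sum(w for _, w in mix) <= 1.0

prompt = " ".join(lora_tag(n, w) for n, w in mix) + ", portrait photo, detailed"
print(prompt)
```

Raising one weight (and lowering the other to compensate) shifts the result toward that subject, exactly like the 60/40 mix described below.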
really super powerful. And as I said, if we keep, for example, the same seed and we want to decrease the weight of our model, if I put something like 0.8 and then click on Generate, you're now going to get a slightly different image, maybe better, maybe worse; it kind of depends on what you're looking for. And as I showed you previously, you can easily mix it with another LoRA model. So if I take this one, for example, that we trained on Thomas Shelby, and put 60 percent of Wednesday Addams and 40 percent of Thomas Shelby, using the same exact seed, and click on Generate, it gives me this image. As you can see, we are slowly starting to get a slightly more manly look, with brighter eyes, and that is because Thomas Shelby has blue eyes, so obviously you're going to have that look influence the image. And if we go even higher, you'll see that our character starts to look more and more like Thomas Shelby and less like Wednesday Addams. But it's still super cool, because you can definitely mix and match a lot of different LoRA weights to create very interesting-looking images.
And there we have it, folks: now you can train any subject you want in only a few minutes with a very weak GPU, all of that thanks to LoRA. And there you go. Thank you guys so much for watching, don't forget to subscribe and smash the like button for the YouTube algorithm, and I'll see you guys next time. Bye bye!