2-Langchain Series-Building Chatbot Using Paid And Open Source LLMs using Langchain And Ollama

Krish Naik
1 Apr 2024 · 27:00

Summary

TLDR: In this informative video, Krish Naik demonstrates how to create chatbot applications using both paid and open-source large language models (LLMs). He focuses on the LangChain ecosystem, showcasing practical implementations with OpenAI's API and integrating open-source LLMs locally using Ollama. The tutorial covers setting up environment variables, defining prompt templates, and using LangChain's modules for streamlined development. Viewers are guided through coding a chatbot, monitoring it with LangSmith, and leveraging Ollama for cost-effective local model deployment, providing a comprehensive introduction to chatbot development.

Takeaways

  • πŸ˜€ The video is part of a LangChain series focused on creating chatbot applications using both paid and open-source LLMs (Large Language Models).
  • πŸ” The presenter, Krish Naik, emphasizes the importance of understanding how to integrate open-source LLMs through platforms like Hugging Face and the LangChain ecosystem.
  • πŸ“š The tutorial aims to be practical, guiding viewers through the process of setting up a virtual environment and using specific Python packages for chatbot development.
  • πŸ’» Environment variables are set up for the LangChain API key, the OpenAI API key, and the LangChain project name to facilitate monitoring and tracking of chatbot interactions.
  • πŸ”‘ The video demonstrates the coding process for a chatbot application, starting with foundational models and gradually increasing in complexity.
  • πŸ“ The script covers chat prompt templates, which define the initial prompt the chatbot needs before it can respond to user queries.
  • πŸ”— The integration of the model, prompt, output parser, and chain is discussed to show how these components work together in a functional chatbot.
  • πŸ› οΈ The video highlights the use of LangSmith for monitoring and tracking the chatbot's performance and API costs, emphasizing the practical application of the tool.
  • πŸ†“ The presenter introduces Ollama for running large language models locally, which can be beneficial for developers without access to paid APIs.
  • πŸ”„ The process of downloading and using open-source LLMs like Llama 2 and Gemma with Ollama is explained, showing an alternative to paid API services.
  • πŸ“ˆ The video concludes with a demonstration of how to run the chatbot locally using an Ollama-served model and how to track the interactions through the LangSmith dashboard.

Q & A

  • What is the main topic of the video?

    -The main topic of the video is creating chatbot applications using both paid APIs like OpenAI and open-source language models, with a focus on integrating these with the LangChain ecosystem.

  • What is LangChain?

    -LangChain is an ecosystem that provides components for developing AI applications, such as chatbots, and is focused on making it easier to integrate with various language models and APIs.

  • What is the purpose of the environment variables mentioned in the video?

    -The environment variables mentioned in the video, such as LangChain API key, OpenAI API key, and LangChain project, are used to store important information for accessing APIs and monitoring the application's performance.
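    For reference, a minimal sketch of how these variables are typically wired up, assuming a .env file that holds OPENAI_API_KEY, LANGCHAIN_API_KEY, and LANGCHAIN_PROJECT (the variable names follow the video; the sketch assumes all keys are actually present in the file):

        import os
        from dotenv import load_dotenv

        load_dotenv()  # read the keys from the local .env file

        os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")
        # LangSmith tracking: the API key plus the tracing flag route every
        # call to the dashboard project named by LANGCHAIN_PROJECT
        os.environ["LANGCHAIN_API_KEY"] = os.getenv("LANGCHAIN_API_KEY")
        os.environ["LANGCHAIN_TRACING_V2"] = "true"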

  • What is the significance of the 'like target' mentioned by the presenter?

    -The 'like target' is a viewer engagement goal set by the presenter to encourage viewers to like the video, which helps in promoting the video and supporting the channel.

  • How does the presenter plan to monitor the chatbot application's performance?

    -The presenter plans to use the LangSmith dashboard to monitor each call made to the chatbot application, allowing for tracking of performance and costs associated with API usage.

  • What is the role of the 'chat prompt template' in the chatbot application?

    -The 'chat prompt template' is used to define the initial prompt or system message that sets the context for the chatbot's responses, guiding how it interacts with users.
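    A sketch of what such a template looks like in LangChain, closely following the one built in the video (the system and user strings are the video's wording):

        from langchain_core.prompts import ChatPromptTemplate

        # The system message sets the chatbot's behaviour; the user message
        # carries the actual query via the {question} placeholder
        prompt = ChatPromptTemplate.from_messages(
            [
                ("system", "You are a helpful assistant. Please respond to the user queries."),
                ("user", "Question: {question}"),
            ]
        )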

  • What is the importance of the 'output parser' in processing the chatbot's responses?

    -The 'output parser' is responsible for processing the responses from the language model. It can be customized to perform tasks such as splitting text or converting text to uppercase, and is essential for formatting the output before it is displayed to the user.
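    The video itself uses the built-in StrOutputParser; a custom parser is only described, not written. As a purely hypothetical illustration of the kind of customization mentioned (upper-casing the response), a subclass of BaseOutputParser might look like this:

        from langchain_core.output_parsers import BaseOutputParser, StrOutputParser

        default_parser = StrOutputParser()  # default: return the model's text unchanged

        class UppercaseOutputParser(BaseOutputParser[str]):
            # Hypothetical custom parser: upper-cases the model's reply
            def parse(self, text: str) -> str:
                return text.upper()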

  • How does the presenter demonstrate the practical implementation of the chatbot?

    -The presenter demonstrates the practical implementation by writing code for the chatbot application, setting up the environment, defining the prompt template, and integrating with the OpenAI API and LangChain components.

  • What is the Ollama tool mentioned in the video, and how does it relate to open-source language models?

    -Ollama is a tool that allows large language models to be run locally. It supports many open-source models and is used in the video to demonstrate how to integrate these models with the chatbot application on a local machine.
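    The workflow shown in the video: pull a model once from the command line, then reference it by name from the code. The first run downloads the weights (several GB), so it takes a while:

        ollama run llama2
        ollama run gemma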

  • How can viewers support the presenter's channel?

    -Viewers can support the presenter's channel by subscribing, liking the videos, commenting, and taking a membership plan if available, which helps the presenter create more content.

Outlines

00:00

🌟 Introduction to Lang Chain Series

Krish introduces his YouTube channel and the LangChain series, focusing on creating chatbot applications. He discusses integrating with both paid APIs and open-source LLMs, mentioning Hugging Face and the LangChain ecosystem. The video aims to be practical, with a like target of 1,000 and 200 comments. Krish encourages viewers to subscribe and support the channel for more content. He also outlines the steps to create a virtual environment and set up environment variables for the LangChain API key, the OpenAI API key, and the project name.

05:01

πŸ“š Setting Up the Environment and Coding Basics

Krish details the setup process for the chatbot application, including creating a virtual environment and defining environment variables. He imports the necessary libraries from LangChain, such as ChatOpenAI, ChatPromptTemplate, and the output parsers. The focus is on practical implementation, with a demonstration of how to write the code for the chatbot application. Krish emphasizes the importance of understanding the integration process with different LLMs and the use of LangChain modules.
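The import block described here, reconstructed from the video (package names follow the langchain_openai / langchain_core split used in the tutorial):

    from langchain_openai import ChatOpenAI
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.output_parsers import StrOutputParser
    import streamlit as st
    import os
    from dotenv import load_dotenv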

10:01

πŸ€– Building the Chatbot Application with Open AI API

Krish demonstrates how to build a chatbot application using the OpenAI API. He defines the prompt template, sets up the Streamlit framework, and integrates the OpenAI LLM. The video shows how to use the LangChain components to create a functional chatbot that responds to user queries. Krish also explains how to monitor and track the chatbot's performance using the LangSmith dashboard.
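A condensed sketch of the app.py built in this segment, assuming the imports, environment setup, and prompt from the previous sections:

    llm = ChatOpenAI(model="gpt-3.5-turbo")   # paid API model, chosen for its low cost
    output_parser = StrOutputParser()
    chain = prompt | llm | output_parser       # prompt -> model -> parser

    st.title("Langchain Demo With OPENAI API")
    input_text = st.text_input("Search the topic you want")

    if input_text:
        st.write(chain.invoke({"question": input_text}))

Run it with "streamlit run app.py"; each invocation then appears as a traced request in the LangSmith project dashboard.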

15:03

πŸ” Exploring Open Source LLMs with AMA

Krish introduces Ollama, a tool for running large language models locally. He explains how to download and install Ollama and how it can be used to run open-source LLMs like Llama 2. The video covers the process of downloading models using Ollama and integrating them into the chatbot application. Krish also discusses the benefits of using Ollama for local model execution and its compatibility with LangChain.

20:05

πŸš€ Running Local LLMs with AMA and Lang Chain

Krish shows how to run local LLMs using Ollama and integrate them with LangChain. He walks through the process of downloading the Llama 2 model, setting up the environment, and running the chatbot application locally. The video demonstrates how to use the langchain_community library to call open-source models and how to monitor the application's performance. Krish also discusses the importance of having a powerful system for running these models efficiently.
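The local variant (local_llama.py in the video) differs from the paid version in essentially one line: the chat model is swapped for the Ollama wrapper from langchain_community. A sketch, assuming Llama 2 has already been pulled with "ollama run llama2":

    from langchain_community.llms import Ollama

    llm = Ollama(model="llama2")          # served by the local Ollama runtime
    chain = prompt | llm | output_parser  # prompt and parser are unchanged

Because the model runs locally, the LangSmith dashboard still records tokens and latency, but shows no cost.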

25:07

πŸ“ˆ Monitoring and Tracking Chatbot Performance

Krish concludes the tutorial by demonstrating how to monitor and track the chatbot's performance using the LangSmith dashboard. He shows how to track requests, costs, and response times for both the OpenAI API and open-source models. The video emphasizes the ease of monitoring with LangChain and the ability to customize output parsers for better tracking. Krish invites viewers to subscribe to the channel for more tutorials and thanks them for their support.


Keywords

πŸ’‘Chatbot Applications

Chatbot Applications refer to software designed to mimic human-like conversation with users through text or voice interactions. In the video, the main theme revolves around creating chatbot applications using both paid and open-source language models, demonstrating the process from setting up the environment to coding the chatbot functionalities.

πŸ’‘LangChain

LangChain is an open-source framework and ecosystem of tools for developing AI applications, particularly those involving language models. It is integral to the video's content, as the host uses LangChain components to create and monitor chatbot applications.

πŸ’‘Open Source LLMs

Open Source LLMs, or Large Language Models, are AI models that are publicly available and can be used without cost. The video discusses integrating these models into chatbot applications, emphasizing the use of tools like Hugging Face and the Lang Chain ecosystem to facilitate this integration.

πŸ’‘Paid APIs

Paid APIs, such as the OpenAI API mentioned in the script, are services that require payment for usage, often providing access to proprietary or specialized AI models. The video contrasts these with open-source models, showing how both can be used to create chatbot applications.

πŸ’‘Environment Variables

Environment Variables are settings in a computer system that can affect the behavior of running processes. In the video, the host sets up environment variables for the LangChain API key, the OpenAI API key, and the LangChain project name, which are crucial for configuring the development environment for the chatbot.

πŸ’‘Virtual Environment

A Virtual Environment in Python is an isolated space that allows a project to have its own dependencies, separate from other projects. The video creates the environment with conda rather than the standard-library 'venv' module, which is the first step in setting up the development environment for the chatbot application.
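The command used in the video, creating the environment in the project folder, is roughly:

    conda create -p venv python=3.10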

πŸ’‘Prompt Template

A Prompt Template in the context of chatbots is a predefined text or set of instructions given to the AI model to guide its responses. The video discusses creating a prompt template for the chatbot, which includes system instructions and user queries to shape the interaction.

πŸ’‘Output Parser

An Output Parser is a component that processes the output from an AI model to format it appropriately for the user. The script mentions using a string output parser as the default processor for the AI model's responses in the chatbot application.

πŸ’‘Streamlit

Streamlit is an open-source library used to create custom web apps for machine learning and data science. The video script includes using Streamlit to build the user interface for the chatbot application, allowing users to interact with the chatbot through a web-based interface.

πŸ’‘Ollama

In the video, Ollama refers to a tool that runs large language models locally, which is useful for offline work or when access to paid APIs is limited. The host demonstrates how to use Ollama to download and run open-source models like Llama 2 for the chatbot.

πŸ’‘Llama 2

Llama 2 is an open-source language model that can be run locally using Ollama. The video provides an example of how to download and use Llama 2 as an alternative to paid APIs for the chatbot application, showcasing the flexibility of open-source solutions.

Highlights

Introduction to creating chatbot applications using both paid and open-source language models.

Explanation of integrating open-source language models through Hugging Face and focusing on the LangChain ecosystem.

Demonstration of setting up a virtual environment for the project using Python 3.10.

Instructions on creating environment variables for LangChain API key, OpenAI API key, and project name.

Overview of practical implementation in the LangChain ecosystem for chatbot development.

Importing necessary libraries from LangChain for chatbot functionality.

Discussion of paid language models like OpenAI's and alternatives such as Anthropic's Claude.
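For reference, the Anthropic swap would be a one-liner, assuming the langchain_anthropic package; the model name below is illustrative rather than from the video:

    from langchain_anthropic import ChatAnthropic

    llm = ChatAnthropic(model="claude-3-sonnet-20240229")  # assumed model name

The rest of the chain stays the same.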

Introduction to using LangChain modules for chatbot development and their significance.

Explanation of dependencies required for developing a chatbot application.

Demonstration of defining a prompt template for the chatbot using LangChain's chat prompt template.

Setup of the Streamlit framework for the chatbot application.

Integration of OpenAI's GPT-3.5 Turbo model for chatbot responses.

Utilization of LangChain's output parser for processing model responses.

Introduction to monitoring and tracking chatbot interactions with LangSmith.

Explanation of the cost associated with using OpenAI's API for chatbot responses.

Demonstration of creating a local chatbot application using open-source models with the help of LangChain and Ollama.

Instructions on downloading and installing Ollama for running large language models locally.

Guide on using Ollama to download and integrate open-source language models like Llama 2.

Completion of the chatbot application using local models with LangChain and Ollama, keeping the default string output parser.

Conclusion and summary of the tutorial, highlighting the versatility of LangChain for both paid and open-source models.

Transcripts

play00:00

hello all my name is krishn and welcome

play00:02

to my YouTube channel so guys welcome to

play00:04

the fresh and updated Lang chain Series

play00:07

in this video I will be showing you how

play00:09

you can create chatbot applications with

play00:11

the help of both paid API llm along with

play00:14

that we'll also see how you can

play00:15

integrate with open source llms now you

play00:18

should definitely know both the specific

play00:20

ways how you can actually do it one way

play00:24

to basically integrate any open source

play00:26

llm is through hugging face but as you

play00:28

know that I'm focusing more on the L

play00:30

chain ecosystem and with respect to

play00:32

hugging face I've already uploaded a lot

play00:34

of videos in my YouTube channel and how

play00:36

you can actually call this kind of Open

play00:37

Source llms but since we are working

play00:39

with the langen ecosystem we will try to

play00:42

use all the components that are

play00:43

available in langen as you all know guys

play00:46

uh this is a fresh playlist and

play00:48

obviously my plan is that this month I

play00:51

will be focusing entirely on langen many

play00:53

more videos will be coming up many more

play00:55

amazing videos along with endtoend

play00:57

application fine-tuning many more things

play01:00

is going to come up so please make sure

play01:02

that we'll keep a like Target for every

play01:04

video and for this video the like Target

play01:06

is 1,000 and at least 200 comments and

play01:08

please make sure that you watch this

play01:10

video till the end because it is going

play01:12

to be completely practical oriented okay

play01:16

and uh if you really want to support

play01:17

please make sure that you subscribe the

play01:19

channel and take a membership plan from

play01:21

my YouTube channel so that it'll help me

play01:23

and with the help of those benefits I

play01:25

will be able to create more videos as

play01:27

such so let me quickly go ahead and

play01:29

share my screen so here is my screen

play01:31

over here and you'll be able to see in

play01:33

the GitHub that you'll be finding in the

play01:35

description of this particular video

play01:36

you'll be having folders like this so

play01:38

today is the third tutorial not third

play01:41

second tutorial uh in the first and

play01:43

second we just understood that what all

play01:45

things we are going to learn but in this

play01:47

is the real practical implementation

play01:49

that is probably there so as usual the

play01:51

first thing that we are going to do is

play01:53

that create our V andv environment how

play01:55

to create it cond create minus PV EnV

play01:58

python is equal to 3 1 you can probably

play02:01

take 3.10 version and I have already

play02:03

shown you how to create virtual

play02:04

environments in many number of videos

play02:06

then you'll be using Dov file so this

play02:09

will basically be my environment

play02:10

variable um in this environment variable

play02:13

I will be putting three important

play02:14

information one is Lang chain API key uh

play02:18

the second one is open a API key and

play02:20

Lang chain project you might be thinking

play02:22

this open AI API key I've kept it as

play02:25

open no it is not I've changed some of

play02:28

the numbers over here so don't try out

play02:30

it'll be of no use okay and then the

play02:33

third environment variable that I'm

play02:35

actually going to create is my Lin

play02:36

project name that is tutorial one I have

play02:39

written it over here the reason why I

play02:41

have written this because whenever I try

play02:44

to go ahead and see in my lsmith right I

play02:47

will be able to see observe the entire

play02:49

I'll be able to monitor each and every

play02:51

calls from the dashboard itself how we

play02:53

will be using this everything I will be

play02:54

discussing about it okay so all these

play02:56

things will specifically get required

play02:59

and uh all this will be used in our

play03:01

environment variable so these are the

play03:03

three parameters I have already created

play03:04

myb file so let's go ahead and start the

play03:07

coding okay and you have to make sure

play03:09

that you code along with me because this

play03:12

is the future AI engineering things are

play03:14

basically coming up I'll just show you

play03:16

initially with the foundation model

play03:18

later on this complexity will keep on

play03:19

increasing so let's go ahead and start

play03:22

our first code now what is our main aim

play03:26

what we are trying to do in our first

play03:28

project let me just discuss about

play03:30

because these are all the things that we

play03:31

going to discuss in the future but first

play03:33

thing that we will try to create is our

play03:36

normal chat GPT application okay I'll

play03:38

not say chat GPT but a normal chatbot

play03:41

okay and this chatbot will be important

play03:44

it will be helping you to probably

play03:46

create chatbot with the help of both

play03:48

paid and open open open source llm model

play03:51

so this will be the chatbot that we will

play03:53

be creating one way is that we will be

play03:56

using some paid llms now paid llms one

play03:59

example I can show it with the help of

play04:01

open AI API okay open AI

play04:05

API the second one that I will try to

play04:08

probably show it uh or you can also use

play04:11

cloudy API so that is from a company

play04:14

called as anthropic okay that you can do

play04:17

and one more I will try to use it with

play04:18

the help of Open Source

play04:20

llm see calling apis is a very easy task

play04:25

okay but the major thing is that since

play04:28

we have so many many modules we are

play04:30

going to use Lang chain as suggested

play04:32

right and in Lang chain we definitely

play04:34

have so many modules how we can use this

play04:38

modules for different different calls

play04:41

and along with this whenever we are

play04:43

developing any chatbot application what

play04:45

all dependencies we have specifically

play04:48

right

play04:49

dependencies now if you probably see

play04:52

this diagram here you'll be able to see

play04:54

there will be model prompt output parcel

play04:57

so in our video in this video I'm going

play04:59

to to see some of the features with

play05:01

respect to lsmith I'm going to see some

play05:03

of the features with respect to chains

play05:04

and agents and I'm also going to use

play05:07

some of the feature present in model and

play05:09

output parcel so all this combination we

play05:12

are going to specifically use and that

play05:13

is the reason how this is how I'm going

play05:15

to create the all the projects that we

play05:17

are doing entire videos that are

play05:20

probably going to come up will be much

play05:21

more practical oriented okay so now

play05:23

let's start our first chatbot

play05:26

application so here I will go ahead and

play05:28

write from Lang

play05:30

okay from Lang chain uncore open AI

play05:34

since I'm going to use open AI

play05:37

import chat open AI okay

play05:41

chat open AI so this is the first one

play05:45

that we're going to basically do from

play05:47

Lang chain see this three things will

play05:50

definitely be required then one is chat

play05:53

openi or whatever openi you whatever

play05:55

chat model that you're are going to use

play05:57

how to call Open Source I will also be

play05:59

discussing about that first of all we'll

play06:01

start with opening API itself okay so

play06:03

from

play06:03

linore core do prompts I'm going to

play06:08

import chat prompt template okay chat

play06:12

prompt template so this is the next

play06:14

thing that we are probably going to use

play06:16

chat prompt template okay at any point

play06:19

of time whenever you create a chat bot

play06:22

right this chat prompt template will be

play06:24

super important right here is what

play06:26

you'll you'll basically give the initial

play06:28

prompt template that is actually

play06:30

required Okay the third library that I'm

play06:32

actually going to import is from Lang

play06:34

chain uncore core do output uncore

play06:40

parsers okay

play06:42

Import St

play06:44

Str

play06:46

output parsel okay now this three are

play06:51

very important this string St Str output

play06:54

processor is the default output

play06:56

processor whenever your llm model gives

play06:58

any kind of response you can also create

play07:00

a custom output parser that also I will

play07:02

be showing you in the upcoming videos

play07:04

okay this custom output parser you can

play07:07

do anything with respect to the output

play07:09

that probably comes you want to do a

play07:10

split you want to make it as a capital

play07:12

letter anything right you can write your

play07:13

own custom code with respect to this but

play07:15

by default right now I'm going to use

play07:17

just St Str output parser now along with

play07:20

this the next thing that I'm actually

play07:21

going to do is that I'm going to use

play07:24

streamlet as St okay streamlet as St

play07:28

then I'm going to also import OS and

play07:31

since I'm also going to use from

play07:34

EnV

play07:35

import load uncore Dov so that we'll be

play07:39

able to import all our libraries okay so

play07:43

let's see whether everything is working

play07:44

fine or

play07:48

not okay U from EnV so here I'm going to

play07:52

basically write

play07:54

python load uncore dot sorry python app.

play07:59

py I'm just running it so that

play08:01

everything works fine and all our

play08:03

libraries will also get imp cannot uh

play08:06

python app.py okay I have to probably go

play08:08

to my chatbot folder CD chatbot so now

play08:12

I'll clear my screen

play08:14

python

play08:17

app. P oh

play08:23

sorry from streamlet as St okay import

play08:27

streamlet as St I have to write so that

play08:29

is the reason it was coming all the

play08:31

erors now let's see if everything is

play08:34

working fine Lang chain core so here you

play08:37

can probably see that there is a

play08:38

spelling mistake okay but I'm just going

play08:42

to keep all the errors like this so that

play08:44

you'll be able to see it python m.p if

play08:47

everything works fine uh do output

play08:50

parser okay P Capital

play08:54

now so I think my suggestion box is not

play08:58

working well and that is reason now

play09:00

everything is working fine uh here you

play09:02

can see that I'm not getting any error

play09:04

so let's start our coding and let's

play09:05

continue it okay so we have imported all

play09:08

these things right now now as I

play09:10

suggested guys since we are going to use

play09:12

three environment variables one is the

play09:13

open API key Lang chain API key and

play09:16

along with that I will also make sure

play09:18

that the tracing to capture all the

play09:21

monitoring results I will keep this

play09:23

three environment variable one is open

play09:24

API key Lang chain tracing version two

play09:28

and Lang chain API key so lanin API key

play09:30

will actually help us to know that where

play09:34

the entire monitoring results needs to

play09:36

be stored right so that dashboard you'll

play09:38

be able to see all the monitoring

play09:39

results will be over here and tracing we

play09:42

have kept it as true so it is

play09:43

automatically going to do the tracing

play09:45

with respect to any code that I write

play09:46

and this is not just with respect to

play09:48

paid apis with open source llm also

play09:50

you'll be able to do it now this is the

play09:52

second step that I have actually done

play09:54

now let's go ahead and Define my prompt

play09:55

template simple so here I'm going to

play09:58

write my prompt

play10:01

template okay prompt template so here

play10:04

I'm going to Define prompt is equal to

play10:07

chat prom template dot okay from uncore

play10:13

messages

play10:14

okay and here I'm going to Define my

play10:17

prom template in the form of list the

play10:19

first thing that with respect to my prom

play10:21

template that I'm going to give is

play10:23

nothing but system and system here I say

play10:26

that you

play10:27

are a

play10:30

helpful

play10:33

assistant

play10:35

please

play10:37

respond to the queries okay please

play10:41

respond to the questions or queries

play10:44

please response to the user queries okay

play10:47

whatever queries that I'm going to

play10:49

specifically ask a simple prompt that

play10:51

you can probably see over here the next

play10:53

statement uh after this is

play10:58

what

play11:00

so this will be my next see if I'm

play11:02

giving a system prompt I also have to

play11:04

give a user prompt right user prompt

play11:05

will be whatever question I ask so this

play11:08

will be user and here I will define

play11:11

something like question colon question I

play11:15

can also give context if I want but

play11:17

right now I'll just give it as a

play11:18

question a simple chatbot application so

play11:21

that you'll be able to start your

play11:23

practice of creating all these chatbots

play11:25

so now I will go ahead and Define my

play11:27

streamlet framework okay see the

play11:30

learning process will be in such a way

play11:32

that I will try to create more projects

play11:34

and use functionalities that are there

play11:36

right and in this way you'll be able to

play11:38

work it in an amazing way okay so here

play11:41

I'm going to basically write st. title

play11:43

Lang chain demo with the open API std.

play11:46

textor input search the text topic you

play11:49

want okay now let us go ahead and call

play11:52

my open AI llms okay open AI llm so here

play11:56

I'm going to basically write llm and

play11:58

whenever we use openi API so it will be

play12:00

nothing but chat open Ai and here I'm

play12:03

going to give my model name the model

play12:05

name will be nothing but GPT GPT 3.5

play12:09

turbo so I'm going to use turbo because

play12:12

the cost is less for this I've I've put

play12:14

$5 in my open a account okay just to

play12:17

teach you so please make sure that you

play12:19

support so that I will be able to

play12:21

explore all these tools and create

play12:22

videos for all of you okay and finally

play12:25

my output parser see always remember

play12:28

Lang chain provides you features that

play12:31

you can attach in the form of chain

play12:33

right so here three main things we have

play12:35

created one is the chat prom template

play12:37

next one is the llm and next one is the

play12:40

output parcel obviously this is the

play12:42

first thing that we require after this

play12:43

we integrate with our llm and then

play12:45

finally we get our output so string

play12:47

output parser is responsible in getting

play12:50

the output itself finally chain is equal

play12:52

to we will just combine all these things

play12:54

so here I'm going to write prompt llm

play12:58

and then finally my output parsel right

play13:01

I will show you going forward how we can

play13:03

customize this entire output parsel and

play13:05

all and finally if I write if input

play13:08

text if input undor text colon now

play13:14

whenever I write any input and probably

play13:17

press enter Then I should be able to get

play13:19

this output so st. write and here I'm

play13:21

going to just write chain. invoke and

play13:25

finally I get I give my input as

play13:29

question and that input is assigned to

play13:32

my input text input text right so this

play13:36

is what we are going to basically do

play13:38

right st. write now this is what we are

play13:41

doing a simple chatbot application but

play13:44

along with this we have implemented this

play13:46

this this feature is specifically for

play13:48

Lang Smith Langs

play13:51

Smith

play13:53

Lang Smith tracking okay this will be

play13:57

amazing for to use okay and this is the

play14:01

recent updates that are there so

play14:02

whatever code I'm writing will be

play14:04

applicable going forward in various

play14:07

things that are probably going to come

play14:08

up okay now let's go ahead and run this

play14:11

so in order to run it you'll just need

play14:14

to write nothing but streamlet

play14:18

Run app.py Okay oops that is an error

play14:24

app.py and here I'll do allow access

play14:27

okay so right now now you'll be able to

play14:29

see over here Lang chain series test llm

play14:32

but my my my project name was Project

play14:35

one okay so now if I go ahead and hit

play14:38

hey hi okay and just press enter you'll

play14:41

be able to see that we'll be getting

play14:43

this information over here and here you

play14:45

can see my project something let me

play14:48

reload

play14:49

it tutorial one right so this is the

play14:52

first request that is already been hit

play14:54

and here you'll be able to see your

play14:56

enable sequest chat prom template right

play14:59

all the chat Brom template output

play15:00

message your helpful assistance pleas

play15:02

response to the user queries right along

play15:05

with this you will be seeing chart open

play15:07

AI API and with respect to this what was

play15:09

the cost everything you are able to

play15:11

track so

play15:13

027 is the cost that actually took with

play15:16

respect to this and finally my string

play15:18

output parser how can you assist today

play15:20

with respect to this output parser it is

play15:22

just going to give me the response

play15:24

clearly now when I develop my own custom

play15:26

output parcel I'll be able to track

play15:28

everything so here what you are able to

play15:29

do you are able to monitor each and

play15:31

everything that is there right all the

play15:33

request that is probably coming up okay

play15:36

so provide me a python

play15:38

code a python code to swap two

play15:43

numbers okay so once I execute this and

play15:46

here you'll be able to see that I'm able

play15:48

to get the output and answer everything

play15:50

is over here and for this you'll be able

play15:53

to see the cost will be little bit High

play15:55

okay if you don't agree with me or let's

play15:58

see with respect to tutorial one the

play16:00

second request that I've actually got

play16:02

4.80 seconds yes it took a little bit

play16:04

more time and here the cost was

play16:07

00211 so it is based on the token size

play16:10

right for every token it is bearing some

play16:13

kind of cost perfect uh this was the

play16:15

first part of this particular tutorial

play16:17

now let's go to the second part uh the

play16:19

second part is more about making you

play16:21

understand that how you can call um open

play16:25

source llms in your local itself and how

play16:27

you can actually use it so for this

play16:29

first of all I will go ahead and

play16:31

download AMA okay AMA is an amazing

play16:34

thing because you'll be able to run all

play16:37

the large language models locally uh the

play16:39

best thing about AMA is that it

play16:41

automatically does the compression and

play16:43

probably in your local you'll be able to

play16:44

run it let's say if you have 16 GB Ram

play16:47

you will just have to wait for some

play16:49

amount of time to get the response but

play16:51

Lama 2 and code Lama you can

play16:52

specifically use it over here all the

play16:54

open source llm model and it supports a

play16:55

lot of Open Source llm models and yes uh

play16:59

in Lang chain ecosystem the integration

play17:01

has also been provided over here so what

play17:03

I'm actually going to do over here is

play17:05

that I'll show you first of all just go

play17:06

ahead and download it this is available

play17:08

both in Mac Mac Linux and windows

play17:11

wherever you want just download it after

play17:13

you downloaded it what you really need

play17:15

to do is just go ahead and install it it

play17:16

is a simple exe file for Windows MSI

play17:19

file for Mac OS and then Linux is a

play17:21

different version so you just need to

play17:23

double click it and start installing it

play17:24

once you install it here uh somewhere in

play17:27

the bottom this AMA will be start

play17:29

running okay now once AMA installation

play17:32

is done now what I will do over here I

play17:34

will create another file inside my

play17:37

chatbot okay and create another file

play17:42

local

play17:45

llama okay local Lama py now local Lama

play17:49

py what we are going to basically do

play17:51

over here is that uh with respect to the

play17:54

local llama I will first of all go ahead

play17:57

and import some of of the library see

play17:59

code will be almost same right there

play18:01

also I'll be using chat open API chat

play18:03

prom template string output parser so

play18:05

I'll copy the same thing over here I'll

play18:07

paste it over here now along with this

play18:09

what I'm going to do I have to import

play18:10

AMA right because that is the reason why

play18:12

we will be able to download all the

play18:15

specific models okay so Lang chain

play18:17

community. llm see over here whenever we

play18:20

need to do the third party integration

play18:22

so that will be available inside langin

play18:24

Community okay so AMA is third party

play18:26

cont configurations uh let's say you're

play18:29

using some Vector embeddings that is

play18:31

also third party so everything will be

play18:32

available over here okay now this is

play18:34

done langore community. LM import AMA

play18:37

and then we have this output parser

play18:39

string output parser core. prompts that

play18:42

is nothing but chat prompt template and

play18:43

everything is there okay now let's go

play18:46

ahead and write import streamlet as St

play18:50

so I'm going to going to use the

play18:52

streamlet over here along with this

play18:54

import

play18:56

OS and not only that we will also go

play18:59

ahead and import from

play19:02

EnV

play19:04

import load

play19:06

uncore

play19:08

dot loancore

play19:11

dob

play19:13

okay now we'll initialize it load

play19:17

underscore

play19:18

Dov okay once we initialize all this

play19:21

random all this uh environment variables

play19:24

as usual I will be importing this three

play19:26

things now see in my previous code when

play19:29

I was using open aipi prompt template we

play19:31

have written it over here right same

play19:33

promt template we'll also write it over

play19:34

here because it we just need to repeat

play19:37

it because the main thing is that you

play19:38

really need to understand how with the

play19:40

help of AMA I can call any open source

play19:42

models okay so here it is and then

play19:44

finally you'll be able to see where is

play19:47

my uh code to call my open a llms that

play19:51

we going to see over here so this is

play19:53

done now stream late framework also I

play19:55

will try to call it over here okay it's

play19:57

more about copy past the same thing that

play20:00

we have actually implemented and then

play20:02

you will also be seeing this is the code

play20:05

that we going to implement it okay but

play20:08

here we are calling chat open AI okay I

play20:10

specifically don't want chat open AI

play20:12

instead I will be calling AMA okay so o

play20:16

Lama whatever Library we have imported

play20:18

so o Lama okay and then here we are

play20:22

specifically going to call a Lama 2 okay

play20:25

now before calling any models now which

play20:27

all model are specific supported if you

play20:29

go ahead and see in the GitHub right of

play20:31

AMA you'll be seeing the list of

play20:34

everything every every every libraries

play20:36

that it supports like Lama 2 mral

play20:37

dolphin F 52 neural chat code Lama all

play20:40

are mostly open source GMA GMA is also

play20:43

there but before calling this what you

play20:45

really need to do is that just go to

play20:46

your command prompt let's say that I

play20:47

want to use GMA GMA model okay so what I

play20:50

have to do or I have to use Lama model

play20:52

right so in order to do this I have to

play20:55

just write AMA run whatever model name

play20:58

because initially it needs to download

play20:59

it right uh this will get downloaded

play21:01

from some open source some GitHub it can

play21:04

be GitHub it can be hugging pH somewhere

play21:06

right some location there will be there

play21:08

we have to download that entire model so

play21:10

let's say that I want to go ahead and

play21:11

write AMA run gamma so this what will

play21:14

happen it will pull the entire GMA model

play21:16

right wherever it is so here you can see

play21:18

pulling will basically happen now this

play21:20

is right now 5.2 GB right for the first

play21:22

instance you really need to do it now

play21:24

since I I I am writing the code with

play21:26

respect to Lama 2 I've already

play21:28

downloaded that model so that is the

play21:29

reason I'm showing you another example

play21:31

over here run GMA now once this entire

play21:34

downloading happens then only I'll be

play21:36

able to use the gamma model in my local

play21:38

with the help of AMA so I hope you have

play21:41

got an idea about

play21:42

it now what I'm actually going to do so

play21:45

here I've called AMA model Lama 2 okay

play21:48

then again output parser is this and I'm

play21:50

combining prompt llm and output parser

play21:52

and everything will be almost same and

play21:54

that is the most amazing thing about

play21:56

Lang chain the code will be only generic

play21:58

now only you need to replace open a or

play22:00

paid or open source it is up to you

play22:03

again I'm saying you guys the system

play22:04

that I'm currently working in has a 64GB

play22:07

Ram uh it has Nvidia Titan RTX which was

play22:10

gifted by Nvidia itself so with respect

play22:12

to this uh amazing system I will be able

play22:15

to run very very much quickly that is

play22:17

what I feel so let's go ahead and run it

play22:20

so here what I'm actually going to do

play22:21

I'm going to write

play22:24

python uh so it is streamlet so

play22:27

streamlet run

play22:29

run local Lama py so once I execute it

play22:35

here you'll be able to see now now

play22:37

instead of open AI API I should had okay

play22:40

no module name Lang chain Community

play22:41

let's see where is Lang chain Community

play22:44

okay I have to also make sure that in my

play22:46

requirement. txt I go ahead and use this

play22:51

langin community and I need to import

play22:54

this Library since I need to do that and

play22:57

that is the reason I'm getting an error

play22:59

so if I go ahead and write pip install

play23:01

minus r requirement.

play23:04

txt

play23:06

oops

play23:08

CD dot dot okay now if I go ahead and

play23:11

write pip install minus r requirement.

play23:15

txt so here you'll be able to see my

play23:17

requirement. will get installed this

play23:19

Lang chain Community will get installed

play23:21

once I'm done with this then I can

play23:23

probably go ahead and run my code okay

play23:25

so this will take some amount of time so

play23:27

if you liking this video please make

play23:28

sure that you hit like uh there are many

play23:31

things that are probably going to come

play23:32

up and it'll be quite amazing when you

play23:33

learn all these things okay so uh once

play23:36

this is done then what will happen is

play23:38

that we can and you can use any model up

play23:40

to you okay and I don't want this open a

play23:44

key also only this two information I

play23:46

specifically want I'll be able to track

play23:48

all these things okay and later on I'll

play23:51

also show you how you can create this in

play23:52

the form of apis again it some time

play23:55

it'll take this but uh let me know uh

play23:58

how do you think all these tutorials are

play24:01

blank chain I see a lot of purpose for

play24:04

this particular Library it's is quite

play24:05

amazing that people are doing um the

play24:07

company is doing amazingly well in this

play24:09

open source world and it is developing

play24:11

multiple things over there so now I will

play24:13

go ahead and write CD chatbot I will go

play24:16

inside my chatbot and then I will run

play24:18

this python local Lama dopy once I

play24:23

execute this now I don't think so it

play24:24

should be an

play24:26

error okay it should be streamlit come

play24:28

on streamlit run local Lama oops local

play24:33

Lama py not python run streamate run now

play24:38

here you have again I'll be getting open

play24:40

AI text over here let me change this

play24:42

also so that I can make it

play24:46

perfect with Lama

play24:49

2 okay so I've executed it saved it I

play24:53

will rerun it I'll say hey hi so once I

play24:57

execute it you'll be seeing that it'll

play24:59

take some amount of time in my system

play25:00

even though I have a 64 GB Ram but I'll

play25:02

get the output over here so assistant

play25:04

says hello how can I help you today now

play25:06

if I probably go ahead with respect to

play25:09

this dashboard uh let's see where it is

play25:12

so now tutorial one you'll be able to

play25:14

see that this will increase okay there

play25:16

will be one more over here right so I've

play25:20

reloaded this

play25:21

page okay and you'll be able to see it

play25:24

okay you'll be able to see the new AMA

play25:26

request see hey hi High 4.89 second

play25:30

token 39 but there is no charges because

play25:33

it is an open source model right so here

play25:36

you'll be able to see if I extend this

play25:38

there you'll be able to see chat prom

play25:39

template ama ama is over here now this

play25:42

AMA is specifically calling Lama 2 over

play25:45

there and whatever open source libraries

play25:46

that you specifically want just to call

play25:49

this it is very much simple you have to

play25:50

just go into the GitHub and download any

play25:52

model first of all just by writing o

play25:55

Lama run that particular model name once

play25:57

it is downloaded it is good that you can

play25:59

probably go ahead with and use it okay

play26:02

now I will say uh provide me a python

play26:07

code python code to swap two numbers

play26:11

okay if you want more coding well chat

play26:15

bot you can directly use code Lama if

play26:17

you want okay so here you can see all

play26:19

the examples are there and this was

play26:20

quite fast right so this is good you

play26:23

know so if you have the right kind of

play26:24

things so here you can see 4 seconds it

play26:27

has Pro taken okay AMA is over here all

play26:31

the information is probably over here

play26:33

prompt and completion and all right so I

play26:35

hope uh you like this specific video I

play26:37

hope you able to understand things uh I

play26:40

said guys again uh if you're new in this

play26:42

Channel please make sure that you

play26:43

subscribe the channel there a lot of

play26:44

tutorials that are probably going to

play26:45

come up but here I've just shown you

play26:47

multiple ways of creating chatbot

play26:49

application using both uh open Ai apis

play26:54

and open source models with the help of

play26:55

langin so yes this was it for my side

play26:57

I'll see you in the next video have a

play26:58

great great day thank you and all take

play26:59

care bye-bye
