The LangChain Cookbook - Beginner Guide To 7 Essential Concepts

Greg Kamradt (Data Indy)
29 Mar 2023 · 38:10

Summary

TL;DR: In this video, Greg introduces LangChain, a framework for developing applications powered by language models. He covers the basics, including components such as schemas, models, prompts, indexes, memory, chains, and agents, and demonstrates how LangChain simplifies model integration and gives models agency, allowing for easier development and customization. The video is paired with a LangChain Cookbook of practical examples and code snippets, aiming to get viewers building and experimenting with LangChain quickly.

Takeaways

  • 📚 The script introduces LangChain, a framework for developing applications powered by language models, aiming to simplify working with AI models and customizing their interactions.
  • 🛠️ LangChain facilitates integration by bringing external data sources into language models, and supports agency by letting models decide the next action when the path forward is unclear or unknown.
  • 🔧 The speaker, Greg, highlights the ease of swapping out components in LangChain and customizing chains, which are series of actions combined to accomplish specific tasks.
  • 💡 LangChain's speed and active community, including meetups and webinars, are praised as valuable resources for learning and development.
  • 📝 The script provides an overview of different LangChain components such as schemas, models, prompts, indexes, memory, chains, and agents, each serving a unique purpose in application development.
  • 🔑 The importance of text as a new form of programming language is emphasized, with examples of how language models can interpret and respond to natural language instructions.
  • 🗂️ The concept of document handling in LangChain is discussed, including the use of metadata to filter and manage large repositories of information.
  • 🔍 The use of embeddings for semantic text representation is explained, allowing for efficient comparison and similarity searches within language models.
  • 📑 The script demonstrates how prompts and prompt templates can be dynamically generated to interact with language models in various scenarios.
  • 🔎 The functionality of example selectors in providing in-context learning for language models is showcased, improving their ability to understand and respond accurately.
  • 📈 The power of chains in LangChain is illustrated through examples of sequential chains and summarization chains, which automate multi-step processes within applications.

Q & A

  • What is the main purpose of the video?

    -The main purpose of the video is to provide a comprehensive overview of LangChain, covering its basics with the goal of helping viewers understand and start building applications powered by language models as quickly as possible.

  • Who is the presenter of the video?

    -The presenter of the video is Greg, who has been building apps in LangChain and shares his work on Twitter.

  • What is LangChain according to the video?

    -LangChain is a framework for developing applications powered by language models, which abstracts a lot of the complexity involved in working with AI models, making it easier to integrate external data and enabling language models to interact with their environment through decision making.

  • What are the four main reasons Greg likes LangChain?

    -Greg likes LangChain for its components that simplify working with language models, the ease of customizing chains, the speed of updates and development, and the supportive community with resources like meetups and Discord channels.

  • What is the LangChain Cookbook mentioned in the video?

    -The LangChain Cookbook is a companion document to the video, designed to be a dense resource with many links for self-service learning, providing an introductory understanding of LangChain components and use cases with examples and code snippets.

  • How does LangChain handle external data integration?

    -LangChain handles external data integration by allowing users to bring in files, other applications, and API data to their language models, thus making it easier to work with external data sources.

  • What is a 'chain' in the context of LangChain?

    -In the context of LangChain, a 'chain' refers to a series of actions or calls to language models that are combined together to perform a task, allowing for automation of multiple steps in a workflow.
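To make the idea concrete, here is a minimal Python sketch of a sequential chain. The two steps are stand-in functions rather than real LLM calls (the dish lookup and recipe text are invented for illustration), but the control flow, where each step's output becomes the next step's input, is the same shape LangChain automates.

```python
def suggest_dish(location: str) -> str:
    # Stand-in for an LLM call such as "name a classic dish from {location}".
    dishes = {"Rome": "carbonara", "Tokyo": "ramen"}
    return dishes.get(location, "stew")

def write_recipe(dish: str) -> str:
    # Stand-in for a second LLM call: "write a short recipe for {dish}".
    return f"Recipe for {dish}: combine ingredients, cook, serve."

def run_chain(location: str) -> str:
    # The "chain": step one's output is piped directly into step two.
    return write_recipe(suggest_dish(location))
```

With real models, each function body would be a prompt plus a model call; the chain abstraction keeps the piping identical either way.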

  • What is an 'agent' in LangChain and how does it differ from a chain?

    -An 'agent' in LangChain is a language model that drives decision-making processes, potentially involving unknown chains that depend on user input. Unlike predetermined chains, an agent decides which tools to call based on the task at hand, making it suitable for more complex and dynamic workflows.

  • How does LangChain assist with handling long documents?

    -LangChain assists with handling long documents through text splitting, which breaks down large documents into smaller chunks that are more manageable for language models to process effectively.
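As a rough illustration, here is a minimal fixed-size splitter in plain Python. LangChain's own splitters (such as the recursive character text splitter shown later) are smarter, preferring to break on paragraph and sentence boundaries first, but the core idea of overlapping chunks looks like this:

```python
def split_text(text: str, chunk_size: int = 100, overlap: int = 20) -> list[str]:
    # Slide a window across the text. Consecutive chunks share `overlap`
    # characters so a sentence cut at a chunk boundary still appears
    # intact in at least one chunk.
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```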

  • What is the role of 'prompts' in LangChain?

    -Prompts in LangChain are the text inputs sent to the language model. They can be simple or more instructional, and often involve prompt templates that dynamically generate prompts based on the scenario, guiding the language model to provide the desired output.
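At its simplest, a prompt template is a string with placeholders that get filled in at call time. A plain-Python sketch (the travel wording below is illustrative, not the exact prompt from the video):

```python
# A prompt template: fixed instructions plus a placeholder filled per request.
TEMPLATE = (
    "I am traveling to {location}. "
    "What is one classic dish from there? Respond in one short sentence."
)

def format_prompt(location: str) -> str:
    # The formatted string is what would actually be sent to the model.
    return TEMPLATE.format(location=location)
```

LangChain's PromptTemplate class wraps this same idea, adding input-variable validation and composition with chains.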

  • Can you provide an example of how LangChain uses 'embeddings'?

    -LangChain uses 'embeddings' to convert text into a numerical representation, or vector, that captures the semantic meaning of the text. This is particularly useful for similarity searches and comparisons, as it allows for the efficient matching of documents based on their semantic content.
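The comparison itself is typically cosine similarity between vectors. The toy three-dimensional vectors below are invented for illustration; a real embedding model returns vectors with hundreds or thousands of dimensions, but the math is identical.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine of the angle between two vectors: near 1.0 means the texts
    # point in the same semantic direction, near 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend embeddings for three sentences (made-up numbers):
beach_day = [0.9, 0.1, 0.0]
ocean_trip = [0.8, 0.2, 0.1]
tax_forms = [0.0, 0.1, 0.9]
```

Here `cosine_similarity(beach_day, ocean_trip)` comes out far higher than `cosine_similarity(beach_day, tax_forms)`, which is exactly how similarity search ranks documents.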

  • What is the significance of 'retrievers' in LangChain?

    -Retrievers in LangChain are mechanisms that combine documents with language models, often through similarity searches using embeddings. They help in finding relevant documents based on a query, making it easier to work with large repositories of information.
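A vector-store retriever can be sketched end to end in a few lines. The letter-frequency "embedding" below is a deliberately crude stand-in for a real embedding model (it captures spelling, not meaning), and the class is only loosely modeled on LangChain's retriever interface; the point is the mechanics: embed every document up front, then embed the query and return the nearest matches.

```python
import math

def embed(text: str) -> list[float]:
    # Toy "embedding": a 26-slot letter-frequency vector. A real model
    # would return a dense semantic vector instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStoreRetriever:
    def __init__(self, docs: list[str]):
        # Embed every document once, at load time.
        self.docs = docs
        self.vectors = [embed(d) for d in docs]

    def get_relevant_documents(self, query: str, k: int = 1) -> list[str]:
        # Embed the query, score it against every stored vector,
        # and return the k best-matching documents.
        qv = embed(query)
        scored = sorted(zip(self.docs, self.vectors),
                        key=lambda dv: cosine(qv, dv[1]),
                        reverse=True)
        return [doc for doc, _ in scored[:k]]
```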

  • How does LangChain manage 'memory' in interactions, such as chat histories?

    -LangChain manages 'memory' through the use of chat message history models, which keep track of the conversation context. This allows language models to reference past interactions, improving the coherence and relevance of their responses in ongoing conversations.
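A chat-history memory can be as simple as an append-only list that is replayed into the prompt on every turn. A minimal sketch (the method names loosely echo LangChain's chat message history, but this is not the real class):

```python
class ChatMessageHistory:
    # Minimal chat memory: record each turn, then replay the whole
    # history as context for the next model call.
    def __init__(self):
        self.messages = []

    def add_user_message(self, text: str) -> None:
        self.messages.append(("human", text))

    def add_ai_message(self, text: str) -> None:
        self.messages.append(("ai", text))

    def as_context(self) -> str:
        # Flatten the history into text a model can be prompted with.
        return "\n".join(f"{role}: {text}" for role, text in self.messages)
```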

Outlines

00:00

📚 Introduction to LangChain Basics

The video script introduces LangChain, a framework for developing applications powered by language models, with the aim of simplifying the building process and making it enjoyable. The presenter, Greg, shares his enthusiasm for building apps with LangChain and mentions the new conceptual docs that provide a more theoretical understanding of the framework. Greg also introduces the LangChain Cookbook, a companion guide with examples and code snippets, and emphasizes the importance of understanding the components and use cases of LangChain.

05:01

🔍 Exploring LangChain Components and Models

This section delves into the various components of LangChain, starting with text and chat message schemas that help in structuring the input for language models. It then discusses different model types, such as language models for text input and output, chat models that simulate conversation, and embedding models for text similarity searches. The script provides examples of how to use these models, including creating chat messages and generating embeddings for semantic text representation.

10:03

📝 Prompts, Example Selectors, and Output Parsers

The script explains the use of prompts to instruct language models and introduces prompt templates for dynamic input. It also covers example selectors that help in choosing relevant examples for the language model to learn from, specifically mentioning the semantic similarity example selector. Furthermore, it discusses output parsers that structure the language model's response into a JSON object, making it easier to handle and work with.

15:04

🗂️ Document Loaders, Text Splitting, and Retrievers

This part of the script focuses on organizing documents for better interaction with language models. It introduces document loaders for fetching data from various sources, such as Hacker News, and text splitting techniques to break down long documents into manageable chunks. The concept of retrievers, particularly vector store retrievers, is also explained, which involves creating embeddings from document chunks and storing them for efficient similarity searches.

20:04

🧠 Vector Stores and Memory for LangChain

The script describes vector stores as databases for storing embeddings of documents, allowing for efficient semantic searches. It also discusses the importance of memory in LangChain, especially for chat history, to help language models remember past interactions and provide more contextually relevant responses.

25:06

🔗 Chains and Agents for Automating LLM Calls

The concept of chains in LangChain is introduced, which allows for combining multiple LLM calls and actions in a sequence. The script differentiates between simple sequential chains for breaking down tasks and summarization chains for processing and summarizing large texts. It also introduces agents, which are more advanced and can decide dynamically which tools to use based on the user input, without a predetermined chain of calls.

30:07

🎯 Agents in Action: A Multi-Step Question Example

In this final part, the script provides a detailed example of how an agent in LangChain can handle a multi-step question about the debut album of a band. It illustrates the agent's decision-making process, using its toolkit and language model to search for information and arrive at the correct answer, demonstrating the power of LangChain's agent-based approach.

35:08

🎉 Conclusion and Invitation for Further Exploration

The video script concludes by recapping the broad overview of LangChain's features and components. The presenter invites viewers to follow him on Twitter for updates, watch for part two of the video covering use cases, and engage with the content by leaving comments and questions.

Keywords

💡LangChain

LangChain is a framework designed for developing applications powered by language models. It simplifies the complex aspects of working with AI models by providing integration capabilities and agency, which allows language models to make decisions and interact with their environment. In the video, LangChain is the central theme, with the creator demonstrating its various components and use cases for building applications with language models.

💡Schema

In the context of the video, a schema is a data model that defines the structure and types of the data used within LangChain. It is a blueprint for organizing information, such as chat messages and documents, which helps in managing and processing language model inputs and outputs. The script mentions different schema types, such as text, chat messages, and documents, illustrating how they are used to structure data for the language models.

💡Integration

Integration in LangChain is the process of bringing external data sources, such as files, applications, or API data, into the language models. The video highlights this as one of the main ways LangChain simplifies working with AI models, allowing them to access and utilize diverse data sets to enhance their functionality and responsiveness.

💡Agency

Agency in LangChain refers to the capability of language models to make decisions and interact with their environment. The video emphasizes this concept because it allows the models to determine the next course of action when the path is unclear or unknown, showcasing how LangChain can be used for more dynamic and responsive applications.

💡Prompts

Prompts are the text inputs provided to language models to elicit responses. The video discusses how prompts can be simple or more instructional, and introduces the concept of prompt templates: dynamically generated prompts with tokens or placeholders filled in based on the scenario. This is demonstrated when the creator uses a prompt template to ask for a classic dish from a user-specified location.

💡Embeddings

Embeddings are numerical representations of text that capture the semantic meaning of the content. They are used for similarity searches and comparisons of texts within language models. The script provides an example of using an embeddings model to convert a piece of text into a vector, which can then be compared with other vectors for semantic similarity.

💡Chains

Chains in LangChain are sequences of actions or calls to language models that are combined to perform a series of tasks. The video explains how chains can break complex tasks into simpler, more manageable steps, and how they can create workflows involving multiple language model calls. An example given is the simple sequential chain used to generate a classic dish and its recipe based on a user's location.

💡Agents

Agents in LangChain are language models that drive decision-making processes, especially in applications that require dynamic and unknown chains of actions. The video describes agents as having access to a suite of tools and the ability to decide which tool to use based on the user input. An example demonstrated is an agent answering a multi-step question about a band and its debut album.
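The band-and-debut-album example can be caricatured in a few lines. Here both the search tool and its results are hard-coded stand-ins (the singer and album names are placeholders, not real data), but the shape is the real one: the agent calls a tool, feeds the observation into its next step, and only then produces the final answer.

```python
def search_tool(query: str) -> str:
    # Stand-in for a real search tool; a real agent would hit an API here.
    canned_results = {
        "lead singer of ExampleBand": "Jane Doe",
        "Jane Doe band debut album": "First Light",
    }
    return canned_results.get(query, "no result")

def toy_agent(question: str) -> str:
    # Step 1: the agent decides it is missing a fact and calls the tool.
    singer = search_tool("lead singer of ExampleBand")
    # Step 2: the observation from step 1 shapes the next tool call.
    album = search_tool(f"{singer} band debut album")
    # Step 3: enough observations gathered; return the final answer.
    return album
```

In a real agent the step-by-step "decisions" are themselves made by the language model rather than being written out in advance, which is what distinguishes an agent from a fixed chain.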

💡Document Loaders

Document loaders are components in LangChain that fetch and structure documents in a way that is more accessible for language models. The video script mentions the Hacker News data loader as an example, which is used to load and process data from a specific URL, making it easier for language models to work with the content.

💡Text Splitting

Text splitting is the process of dividing longer documents into smaller chunks that can be more effectively processed by language models. The video explains the importance of text splitting for maintaining a good signal-to-noise ratio and provides an example of using a recursive character text splitter to break an essay into smaller, more manageable pieces of text.

Highlights

Introduction to LangChain, a framework for developing applications powered by language models.

LangChain abstracts complex parts of working with AI models, making it easier to integrate external data and make decisions.

The presenter, Greg, shares his experience building apps with LangChain and invites viewers to follow along on Twitter.

Explanation of the conceptual documentation for LangChain that focuses on theoretical and qualitative aspects.

The LangChain Cookbook is introduced as a companion to the video, offering a dense document with numerous links for self-service.

LangChain's components, such as schemas, models, prompts, indexes, memory, chains, and agents, are discussed with working code samples.

The importance of text as a new programming language for instructing language models is highlighted.

Demonstration of chat messages in LangChain, showing how system, human, and AI messages interact within a conversational context.

Introduction to document schemas, emphasizing the role of metadata in managing large repositories of information.

Different model types in LangChain are explored, including language models, chat models, and text-embedding models.

The use of prompts and prompt templates to dynamically generate instructions for language models is explained.

Example selectors and their role in in-context learning by showing the language model relevant examples.

Output parsers are discussed for structuring the language model's output into a usable format like JSON.

Document loaders and their importance in structuring documents for better language model interaction are covered.

Text splitting techniques are demonstrated to manage long documents by breaking them into manageable chunks.

Retrieval methods using vector stores to find similar documents based on semantic meaning are showcased.

Memory functionality in LangChain is explained to help language models remember past interactions in a chat context.

Chains in LangChain are introduced to automate the combination of different LLM calls and actions.

Agents in LangChain are discussed as a complex concept for making dynamic decisions based on user input and available tools.

A comprehensive overview of LangChain's capabilities concludes the video, inviting viewers to engage for part two focusing on use cases.

Transcripts

00:00

Hello good people! Have you ever wondered what LangChain was? Or maybe you've heard about it and played around with a few sections, but you're not quite sure where to look next. Well, in this video we're going to be covering all of the LangChain basics, with the goal of getting you building and having fun as quick as possible. My name is Greg, and I've been having a ton of fun building out apps in LangChain. I share most of my work on Twitter, so if you want to go check it out, links in the description, and you can follow along with me.

00:31

Now, this video is going to be based off of the new conceptual docs from LangChain, and the reason why I'm doing a video here is because they take all the technical pieces and abstract them up into the more theoretical, qualitative aspects of LangChain, which I think is extremely helpful. In order to understand this a little bit better, I've created a companion for this video, and that is the LangChain Cookbook. Links in the description if you want to go check that out; please go check out the GitHub and you can follow along here. I'm going to put a lot of timestamps in the description as well. There's going to be a fair amount of content in this one, so you can watch it all the way through, or if you want to skip to a certain section, feel free to jump to that timestamp. All right, without further ado, let's jump into it.

01:12

All right, here are the new conceptual docs from LangChain. The reason why these are different is because there are the Python docs, which are the more technically focused ones, and the JavaScript docs, which are also more technical documentation. These concepts, however, are more qualitative, so you can understand what is going on in the background of these different sections. Now, we're going to focus on the components of LangChain. There's an entire section on use cases, which is when you actually put these into practice, and that is going to be part two of this video, so we won't jump into it today; that would be too long for us. We're going to run through schema, models, prompts, indexes, memory, chains, and agents, with a working code sample for each one of those. Well, without further ado, let's jump into some code.

01:54

Here we are with the LangChain Cookbook. Now, my goal is to make this a dense document with a ton of links so you can go and self-serve. Links in the description, and if you want to follow along, I encourage you to get this on your computer and go for it from there. The goal of this doc is to provide an introductory understanding of the components and use cases of LangChain in an explain-like-I'm-five way, with examples and code snippets. For use cases, check out part two, which is not made yet; that is coming soon, hopefully by the time you see this. There are a bunch of links here, and we go into what LangChain is.

02:25

So LangChain is going to be a framework for developing applications powered by language models. "Well Greg, OpenAI just came out with plugins." Yes, but there is a whole lot of other stuff you can do with language models outside of those, and LangChain helps abstract a ton of that so that you're able to work with it more easily, intermix different pieces, and customize it really how you need to.

02:50

So LangChain makes the complicated parts of working and building with AI models easier. It does this in two main ways. The first big way is going to be through integration: you can bring external data, such as your files, other applications, and API data, to your language models, which is cool. The other big way it helps is through agency: it allows your language models to interact with their environment via decision making. Basically, you're using the language model to help decide which action to take next, and you do this when the path isn't so clear, or it may be unknown. We'll get into more of that later.

03:25

So why LangChain specifically? There are four big reasons why I like LangChain. The first one is going to be the components: LangChain makes it easy to swap out the abstractions and components necessary to work with language models. Basically, they've created a ton of tools that make it super simple to work with language models like ChatGPT, or anything on Hugging Face, however you may want. Also because it allows you to customize chains really easily, so there's a ton of out-of-the-box support for using and customizing chains, basically combining series of actions together.

03:58

On the qualitative side, why LangChain is awesome is because the speed is great. Almost every day I need to go and make sure that I'm on the latest branch of LangChain, and I go and update it every time, so the speed is awesome. The other really cool part is the community: there's a ton of meetups, there's a Discord channel, and there's a ton of events, like webinars, that go on throughout the week that are really awesome learning resources for us.

04:20

Cool. Now again, to summarize all this: why do we need LangChain? Well, because language models can be pretty straightforward: it's text in, text out, and you may have experienced this yourself. However, once you start developing applications, there are a ton of friction points that LangChain is going to help you with. Now, the last thing that I'll say before we jump into it is that this cookbook isn't going to cover all of the aspects of LangChain. This isn't meant to be a replacement for the documentation online; this is meant to show you a very broad overview of the capabilities that there are, with my interpretation of them and my voiceover with it. And with that, I'm hoping that you can get to building and impact as quick as possible. I'm super curious to see what you build, so please let me know; I would love to see it.

05:10

The first thing we're going to do is import our OpenAI API key. Now, I have a hidden cell here, but you're going to replace it with your API key right here, just throw that in there. The first aspect of LangChain components that we're going to look at is the schema. Now, I almost didn't even include this one, but the first piece is going to be text. What's really cool about these language models is that text is the new programming language. Not verbatim, not per se, but we're using a lot more English language to tell language models what to do. In this case, "What day comes after Friday?" is an example of something I may go tell a language model, and it is going to respond back to me with a natural language response. Very cool.

05:47

Next up is going to be chat messages. Chat messages are similar to text, but they have different types. The first type is going to be system, and this is helpful background context that tells the AI what to do, like "you're a helpful teacher assistant bot" or something. Then we have human messages, and these are messages that are intended to represent the user, so literally user input or something that I may text to it. Then we have AI messages, and these are messages that show what the AI responded with. The cool part about this is the AI may or may not have actually responded with it, but you can tell it that it did, so that it has additional context on how to answer you.

06:24

OK, so what I'm going to do here is import ChatOpenAI and my three message types, and then I'm going to create my chat model. I'm going to do that, and then I'm going to type in two messages. The first, system, message is "You are a nice AI bot that helps a user figure out what to eat in a short sentence," and then a human message, "I like tomatoes, what should I eat?" Let me go ahead and run this, and you get an AI message back, because this is what it responds with: "You could try making a tomato salad with fresh basil and mozzarella cheese." Thanks, AI! That's cool.

06:51

What you can also do is pass in more chat history and get responses from the AI. So in this case: "You're a nice AI bot that helps a user figure out where to travel to in one short sentence." I'm saying, "I like the beaches, where should I go?" Then I'm telling it that it responded to me (it didn't actually do this, but I'm telling it that it did): "You should go to Nice, France." Cool. "What else should I do when I'm there?" The reason why I did this one is because you'll notice that I didn't say where I went. It's going to have to infer from the history where I went, and its answer mentions Nice, so it picked up where I was, because it gets the history of the chat messages. Now, if you're making a chatbot, you can see how you could append the different messages that have gone back and forth with the user. OK.
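The three message types walked through above can be sketched as plain Python classes. LangChain ships SystemMessage, HumanMessage, and AIMessage in its schema module; the dataclasses below are a simplified stand-in, using the travel example from the video:

```python
from dataclasses import dataclass

@dataclass
class SystemMessage:
    # Background context telling the AI how to behave.
    content: str

@dataclass
class HumanMessage:
    # A message representing the user.
    content: str

@dataclass
class AIMessage:
    # What the AI responded with (or what we claim it responded with).
    content: str

history = [
    SystemMessage("You are a nice AI bot that helps a user figure out "
                  "where to travel to in one short sentence."),
    HumanMessage("I like the beaches, where should I go?"),
    AIMessage("You should go to Nice, France."),
    HumanMessage("What else should I do when I'm there?"),
]
```

Passing the whole `history` list to the model on each turn is what lets it infer, from the earlier AI message, where "there" is.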

07:41

The next model that we're going to look at is going to be documents. Documents are important because they represent a piece of text along with associated metadata. Now, metadata is just a fancy word for things about that document, and in this case the text is held within a field called page_content: "This is my document, it's full of text that I've gathered from other places." Awesome. Then I'm going to pass in some metadata, and this metadata is a dictionary of key-value pairs: my document ID, which is my key here, with some random document ID value that happens to be an int (it could be whatever you want it to be); my document source, which is "The LangChain Papers"; and then my document create time, which is going to be some timestamp, whatever you want it to be, in whatever format you want. This is extremely helpful for when you're making large repositories of information and you want to be able to filter by it. So instead of just going and asking LangChain to look at all the documents in your database, you can go ahead and filter them by a certain piece of metadata. Go ahead and run this, and you can see here I get a document object with a bunch of metadata on it. Cool.

08:52

If those are the schemas that we work with, the next thing we're going to look at is the different models. Now, these are the ways of interacting with, well, different models, but the reason why this is important is because there are different model types. Let me just show an example here. The normal one that we're looking at is going to be the language model, and this is when text goes in and text comes out.
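The document-plus-metadata schema described just above can be sketched as a small dataclass: a page_content field holding the text, plus a free-form metadata dictionary. The ID, source, and timestamp values below echo the video's example but are illustrative stand-ins:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    # The text itself lives in page_content; metadata describes it.
    page_content: str
    metadata: dict = field(default_factory=dict)

docs = [
    Document("This is my document, full of text gathered from other places.",
             {"my_document_id": 234234,
              "my_document_source": "The LangChain Papers",
              "my_document_create_time": 1680013019}),
    Document("An unrelated note.",
             {"my_document_source": "Somewhere else"}),
]

# Filter by metadata instead of scanning every document in the store:
from_papers = [d for d in docs
               if d.metadata.get("my_document_source") == "The LangChain Papers"]
```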

play09:10

okay now the first thing I'll do is I'll

play09:13

import open Ai and I'll make my model

play09:15

and you'll notice here that I changed my

play09:17

model in case you ever want to change

play09:18

your model as well and so I'm going to

play09:21

pass in a regular string into this one

play09:22

into my language model what day comes

play09:25

after Friday

play09:26

go ahead and run this and I get Saturday

play09:28

comes out the other end but not all

play09:31

Not all models work this way, though — there are also chat models, which we saw in the previous example without calling it out specifically. For this one I'm going to import ChatOpenAI and my message classes again, and set temperature equal to 1, which means the model is going to get a little spicy on me. No, but really, it just means the output will have more creativity and be a bit more exaggerated. In this case the system message is "You are an unhelpful AI bot that makes jokes at whatever the user says," and the user says, "I would like to go to New York, how should I do this?" I run the model and get: "You could try walking, but I don't recommend it unless you have a lot of time on your hands. Maybe try flapping your arms really hard to see if you can fly there." So as you can see, it took that system message, understood those directions, and wasn't very helpful for me — because I told it not to be.

The last type of model we'll look at is the text embedding model. This one is important because we do a lot of similarity searching and a lot of text comparison when working with language models. OpenAI also has an embeddings model we'll use here; there are plenty of embedding models out there and you can use whichever you want — I use OpenAI's because it feels like a standard and it's very simple right now. So I pass in my API key, get my embeddings engine ready, and define a piece of text: "Hi! It's time for the beach." Then I embed that text. What that means is it takes the string — which is just a series of characters — and converts it into a vector, where a vector is simply a one-dimensional array, a list of numbers. That will be a semantic representation of the text: a fancy way of saying the meaning of the text is embedded in those numbers, which makes it really easy to compare against other texts. I put the result in a variable called text_embedding, check how long it is, and get a preview. You'll notice the length is 1536, meaning there are 1,536 different numbers in that list representing the meaning of my text. That's a lot of numbers, and I'm glad I don't have to deal with them — I'm glad the computer can. Here's a sample of what they look like in case you're curious: I only show the first five and put an ellipsis so you know the other 1,531 numbers are there.
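To make "really easy to compare" concrete, here is a minimal pure-Python sketch (not LangChain code) of how two embedding vectors are typically compared with cosine similarity. The tiny three-number vectors are made up for illustration; real OpenAI embeddings have 1,536 dimensions.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (real ones have 1,536 numbers).
beach = [0.9, 0.1, 0.0]
ocean = [0.8, 0.2, 0.1]
taxes = [0.0, 0.1, 0.9]

print(cosine_similarity(beach, ocean))  # close to 1.0: similar meaning
print(cosine_similarity(beach, taxes))  # close to 0.0: unrelated
```

Texts with similar meanings land near each other in this vector space, which is what makes semantic search possible.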

Next, let's look at prompts. A prompt is the text you send over to your language model. We've already sent some prompts, but they've been pretty simple; now we'll start writing more instructional prompts and passing those to our model. I import OpenAI — in this case using Davinci as my model — and set prompt equal to a string. I use triple double quotes because, well, I think it looks fancier — no, but really, it's just easier to work with, which is why I like it. I'm not doing anything fancy here, and I could have passed this string straight into my language model, but I made a variable for it because it's a little easier to understand. The prompt: "Today is Monday, tomorrow is Wednesday. What is wrong with that statement?" The response: "The statement is incorrect — tomorrow is Tuesday, not Wednesday." So you can see how it picked that up.

Now, why prompts are cool is because we start to get into the prompt template world. Prompt templates are important because most of the time you'll be generating your prompts dynamically — they won't just be static strings you type out; you'll be inserting tokens, or placeholders, based on the scenario you're working with. So here I import my packages again — PromptTemplate is the new one — and use Davinci again. I create a template: "I really want to travel to {location}. What should I do there? Respond in one short sentence." You'll notice my opened and closed brackets around location: that's a token I'm going to replace later. (The "one short sentence" part is because otherwise it responds with too much.) Then I create a PromptTemplate, put it in a variable called prompt, set my input variable to location — matching the name we had in the template — and pass in the whole template string. The final prompt is prompt.format, which means: go insert the values I tell you — insert the value "Rome" where it says {location}. Let's run this. The final prompt reads "I really want to travel to Rome...", which replaced location up above, and here we have our prompt template finally filled out. In terms of the output, it tells me what I should do: it took that information about Rome and responded in one short sentence, which is cool.
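The core mechanic behind the template can be sketched in pure Python — ordinary `str.format` does the same token substitution that LangChain's PromptTemplate wraps (PromptTemplate adds input-variable validation on top):

```python
# A minimal stand-in for a prompt template: hold a string with
# placeholders and fill them in later with real values.
template = (
    "I really want to travel to {location}. What should I do there?\n"
    "Respond in one short sentence."
)

final_prompt = template.format(location="Rome")
print(final_prompt)
# I really want to travel to Rome. What should I do there?
# Respond in one short sentence.
```

The filled-out string is what actually gets sent to the language model.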

All right, the next cool part we're going to look at is example selectors. Often, when constructing your prompts, you'll do something called in-context learning: you show the language model what you want it to do, and one of the main ways people do this is through examples. That could be how to answer a customer-service request, or how to respond to some nuanced question. Here I'm going to pick examples — but we have example selectors because, say you had 10,000 different examples, you don't want to throw all of them into your prompt. They may not fit, and they may not all be relevant, so you want to select which ones to use. I'm going to import a lot of things here, but the main star of the show is the SemanticSimilarityExampleSelector — a long name for functionality that selects similar examples. I get my language model going again, get my example prompt ready (it's just a prompt template like we saw above), and then define a list of examples. In this case I want to name a noun and have the language model tell me where that noun is usually found: a pirate on a ship, a pilot on a plane, a driver in a car, a tree in the ground, a bird in a nest. I run that, and then get the example selector ready. We pass it the list of examples I just defined, but we also pass it our embedding engine, because we're going to match examples on their semantic meaning — not just on similar strings, but on what they actually mean. In this case we use OpenAI embeddings to create the vectors, with FAISS — a similarity-search library open-sourced by Facebook, which is really cool — helping store them. Then we tell the selector how many examples we want back: in this case k equals 2. Next we build a new prompt template, the few-shot prompt template — the "few shot" part means there will be a few examples in the prompt for the model. We give it our example selector, the example prompt we made above, and some small strings before and after to make things easier for the model: the prefix "Give the location an item is usually found in," and a suffix with the input and output slots, filled in from whatever the user inputs via the input variable. So here I say my noun is "student." Based on that noun, the selector finds the examples above most closely related to "student," and we use those. If I print the prompt that will actually be given to the language model, you can see it found the driver and the pilot examples as the most similar to "student," which is cool. If I tried a different noun, say "flower," it would give me the tree and the bird examples instead. But I'll stick with "student": I take the prompt we just made, pass it into the language model, and all of a sudden you get "classroom."
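The selection step itself is simple to sketch in pure Python (this is not the LangChain API): embed every example, embed the query, and keep the k nearest by cosine similarity. The two-number "embeddings" here are hand-made stand-ins for real embedding vectors.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hand-made 2-d "embeddings": first axis ~ person-ness, second ~ nature-ness.
examples = {
    "pirate": [0.9, 0.1],
    "pilot":  [0.9, 0.2],
    "driver": [0.8, 0.1],
    "tree":   [0.1, 0.9],
    "bird":   [0.2, 0.9],
}

def select_examples(query_vec, k=2):
    # Rank examples by similarity to the query and keep the top k.
    ranked = sorted(examples, key=lambda name: cosine(examples[name], query_vec), reverse=True)
    return ranked[:k]

# A person-like query vector picks person-like examples,
# not the tree or bird ones.
print(select_examples([0.85, 0.15]))
```

Only the selected examples end up in the few-shot prompt, which keeps it short and relevant.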

The next thing we're going to look at is output parsers. That's kind of a complicated way of saying we need some structured output — like wanting the language model to return a JSON object back to us. Why? Because that makes it a heck of a lot easier to deal with and work with on the other side. There are two big concepts with output parsers. First are the format instructions: the piece of prompt text that tells your language model how to respond back to you, and LangChain provides conventions to generate this automatically, which is cool. Second is the parser: the tool that parses the output of your language model. The language model can only return a string, so if we want a JSON object we need to go parse that string and extract the JSON from it. So we import StructuredOutputParser and ResponseSchema, import our language model again, and define a response schema. In this case I just want a two-field JSON object: a "bad_string", which is a poorly formatted user-input string, and a "good_string", which is the nicely formatted response from the language model. I go ahead and create my output parser from that response schema; it will be able to parse responses for us, but we won't use it until just a second from now. First come the format instructions: on the output parser we call get_format_instructions and print the result. This is a piece of text that will be inserted into the prompt: "The output should be a markdown code snippet formatted in the following schema," followed by JSON with the two fields I put in up above — LangChain did the formatting for me. So let's create a prompt template: a placeholder variable for our format instructions, then a placeholder for user input — the poorly formatted string the user is going to provide — and finally "YOUR RESPONSE:" just to tell it, hey, I'm done giving you instructions, give me a response. We build the prompt template with user_input as the input variable, format_instructions as a partial variable (the format instructions we had above), and the template string, then get our prompt value — the actual value filled out with the variables I tell it. I'll say "Welcome to California!" with an exclamation point. I print out the final prompt that will be sent to the LLM — the user input "Welcome to California" plus everything from above — and run it. We get back a string that kind of looks like gobbledygook; printing it would make more sense, but before printing, let's just go ahead and parse it — and we get a nice JSON object back. Well, in this case it's going to be a dict, but you can see it's typed.
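The parsing half can be sketched in pure Python (LangChain's StructuredOutputParser does this, plus generating the schema instructions): pull the JSON out of the markdown code fence in the model's string reply and load it into a dict. The reply text below is invented for illustration.

```python
import json
import re

def parse_structured_output(llm_reply: str) -> dict:
    # The model was instructed to reply with a ```json fenced block;
    # extract whatever sits between the fences and parse it.
    match = re.search(r"```json\s*(\{.*?\})\s*```", llm_reply, re.DOTALL)
    if match is None:
        raise ValueError("No JSON code block found in model output")
    return json.loads(match.group(1))

# A made-up raw model reply (a plain string), and the dict we recover.
reply = """```json
{
    "bad_string": "welcom to califonya!",
    "good_string": "Welcome to California!"
}
```"""
result = parse_structured_output(reply)
print(result["good_string"])  # Welcome to California!
```

Once it's a dict, downstream code can access fields by name instead of scraping free text.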

The next thing we're going to look at is indexes: structuring documents so language models have an easier time working with them. One of the main ways LangChain does this is through document loaders. These are very similar to the OpenAI plugins that were just released, but LangChain supports a lot of really cool data sources that aren't yet supported in the plugin world. In this case I'm using a Hacker News data loader: all I'm doing is passing a simple URL to the loader and saying, hey, go get me that data. Then I ask how many pieces of data it found — in this case, 76 different comments within the Hacker News post. I print out a sample and we see one of the responses by the moderator, dang, plus various other comments. You can now go work with these within your language model, which is pretty cool.

Another big piece of what we do a ton of is text splitting. Oftentimes your document — your book, your essay, whatever — is going to be too long for your language model, so you need to split it up into chunks, and text splitters help with this. The reason you do this is that if you want a single answer out of a book, it wouldn't behoove you to put the entire book into the prompt: one, it's too long, and two, the signal-to-noise ratio is too poor for your language model to do its job effectively. It's much better to put just a few relevant pieces of text in there, and to get those few pieces we need to split, or chunk, the document. The one I use most often is the RecursiveCharacterTextSplitter; there are lots of different types of text splitters depending on your use case, and I encourage you to check them out. In this case I pull in a Paul Graham essay — his "What I Worked On" essay, which is quite long — and when I read the document in, I just have one big long document: a really long piece of text. So I create a RecursiveCharacterTextSplitter with chunk_size=150, meaning each chunk will have a size of 150 when I split the starting document. If you want chunk overlap, that means the Venn diagram of your chunks overlaps just a little bit. I encourage you to play with these variables to see what works best for your use case — normally I wouldn't do 150, I'd probably do a thousand or two thousand, but for demonstration purposes I'm doing 150. Running that, we had one document up above, but after splitting I now have 606 documents. If I preview them, you can see they're nice and small — super small. If I made the chunk size 50, the chunks would be a whole lot smaller — but let me go ahead and make that bigger again.
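The size/overlap mechanics can be sketched in pure Python. LangChain's RecursiveCharacterTextSplitter is smarter — it tries to break on paragraphs, then sentences, then words — but the basic sliding-window idea looks like this:

```python
def split_text(text: str, chunk_size: int = 150, chunk_overlap: int = 20):
    # Slide a window of chunk_size characters across the text, stepping
    # forward by chunk_size - chunk_overlap so neighbouring chunks share
    # a little context at their boundaries.
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# Stand-in for a long document: 1,000 characters of varied text.
essay = "".join(chr(97 + i % 26) for i in range(1000))
docs = split_text(essay, chunk_size=150, chunk_overlap=20)
print(len(docs))     # 8 chunks from one long document
print(len(docs[0]))  # 150
```

Shrinking `chunk_size` produces more, smaller chunks; `chunk_overlap` keeps context from being cut exactly at a boundary.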

The next thing we're going to look at is retrievers. Retrievers are easy ways to combine your documents with your language models. There are a lot of different types of retrievers, and the most widely supported one is the vector store retriever — most widely supported because we do so much similarity search over embeddings. Let's look at an example. We load up a Paul Graham essay just like before, do some splitting, and get a whole bunch of documents. Then I create embeddings out of those documents — for each little chunk we create a vector, its semantic meaning — and store those vectors in a document store, which I call db. Then I say: this retriever is going to be the db, set as a retriever, so it knows to go get stuff. If I look at it, you can see the VectorStoreRetriever in the output right there. Now we take our retriever and say: hey, go get me the relevant documents for "What types of things did the author want to build?" In the background it takes that string, converts it to a vector, compares that vector against the vector store you have, and finds the similar documents. Then I print a preview of the documents — a kind of complicated one-liner that just shows the first two. ("docs is not defined" — great, let's run those again.) And all of a sudden, these are the previews of the docs it found: "What I wanted was to not just build things, but build things that would last." So you can see that out of all those documents, it found the two most similar to what I was looking for, which is really cool — I wanted to know about building things. Nice.
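That retrieval loop can be sketched in pure Python. Instead of real embeddings, this toy uses word-count vectors, but the flow — vectorize the chunks once, vectorize the query, return the highest-scoring chunks — is the same one the vector store retriever performs:

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count. Real systems use dense
    # embedding vectors, but the search loop is identical.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "I wanted to build things that would last",
    "beach vacations are best in summer",
    "essays about programming and startups",
]
index = [(doc, vectorize(doc)) for doc in docs]  # the "vector store"

def get_relevant_documents(query: str, k: int = 1):
    qv = vectorize(query)
    ranked = sorted(index, key=lambda pair: cosine(pair[1], qv), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(get_relevant_documents("what types of things did the author want to build"))
# ['I wanted to build things that would last']
```

Swap in real embeddings and a real vector database and you have the production version of the same idea.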

Next, let's look at vector stores. We briefly talked about them just before this, but to go a little further: the way I think about a vector store is a table with rows holding your embeddings and the associated metadata that comes with them. Two main players in the space right now are Pinecone and Weaviate, but if you want, you can check out OpenAI's retriever documentation, which lists a whole bunch of other ones you may find awesome. Okay, let's look at this again: we import our models and get our embeddings. Based on how I split my document up above — with a thousand as the chunk size — we get 78 documents out of Paul Graham's "What I Worked On" essay. I create embeddings for those and get my embeddings list; its length is 78, and the reason is that I have one vector for each of my documents — all right, makes sense. Here's a sample of one, an example of what the embedding looks like: a numerical representation of the semantic meaning of that document. Your vector store stores your embeddings and makes them easily searchable: in this case it takes each embedding and stores it like a database.
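The "table with rows of embeddings plus metadata" mental model can be written down directly — a minimal sketch, with made-up vectors and metadata fields:

```python
# A vector store, conceptually: one row per chunk, holding the embedding
# vector plus whatever metadata you later want to filter or display on.
vector_store = [
    {"id": 0, "embedding": [0.12, -0.33, 0.98], "metadata": {"source": "worked.txt", "chunk": 0}},
    {"id": 1, "embedding": [0.05, 0.41, -0.27], "metadata": {"source": "worked.txt", "chunk": 1}},
    {"id": 2, "embedding": [0.77, 0.10, 0.22], "metadata": {"source": "disc.txt", "chunk": 0}},
]

# Metadata lets you narrow a search before (or after) comparing vectors.
from_worked = [row for row in vector_store if row["metadata"]["source"] == "worked.txt"]
print(len(from_worked))  # 2
```

Real vector databases add the similarity index on top, but each row is still an embedding plus its metadata.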

The next topic I want to look at is memory: how you help your language models remember things. The most common use case is chat history — if you're making a chatbot, you can feed it the history of messages exchanged beforehand, which makes it a whole lot better at helping your user do whatever it needs to do. So I import ChatMessageHistory and ChatOpenAI again, create my chat model, and create my history object. To the history I add an AI message, "hi!", and then a user message, "What is the capital of France?" I run that, and if I take a look at my history messages, I get the two I put in, in the right order, as we'd expect. What's cool is that I can pass my history of messages to the language model: it reads that the AI said "hi" to start, then the human asked what the capital of France is, and it responds with an AI message — "The capital of France is Paris" — which is cool. Next I want to add that AI message to my history. I shouldn't repeat this — actually, no, I'm not repeating it: I'm taking the AI response and adding just its content. Printing the messages again, you can see it adds "The capital of France is Paris" to the end of my chat history, which makes it easy for me to keep working with. Another cool piece of functionality: LangChain makes it extremely simple to save this chat history so you can load it later — a lot of really cool functionality, and I encourage you to go check it out.
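Mechanically, chat memory reduces to a growing list of (role, content) pairs that gets resent with every call — a pure-Python sketch (LangChain wraps this in AIMessage/HumanMessage classes and adds persistence helpers):

```python
# Chat "memory" is just an ordered transcript that you replay to the
# model on each call so it can see what was said before.
history = []

def add_ai_message(content):
    history.append({"role": "ai", "content": content})

def add_user_message(content):
    history.append({"role": "user", "content": content})

add_ai_message("hi!")
add_user_message("What is the capital of France?")
add_ai_message("The capital of France is Paris")  # the model's reply, appended

for msg in history:
    print(f'{msg["role"]}: {msg["content"]}')
```

Saving the history is then just serializing that list; loading it restores the conversation.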

The next concept we're going to look at is chains: combining different LLM calls and actions automatically. Say you have one input, but you want the output of that language model call to be the input to another call, and then another, and another — in that case you're using chains, which is where the "chain" in LangChain comes from. We'll cover two of them; there are a lot of more involved examples, and I encourage you again to go check out the documentation to see if one covers your use case better than what you're seeing here. The first is the SimpleSequentialChain, where I go ahead and tell it: I want you to do X, then Y, then Z. The reason this is important — or why I like to do it — is that it helps break up the task. Language models can get distracted, and if you ask one to do too many things in a row it can get confused and start to hallucinate, and that's not good for anybody. Plus, I want to make sure my thinking is sound, and this way I can check the outputs of each of my different actions. So I import SimpleSequentialChain and set up two different prompt templates. The first: "Your job is to come up with a classic dish from the area that the user suggests," with the user's location as the input variable (we'll supply it in a second). I create an LLMChain from this, called location_chain, which basically takes my language model and that prompt template. The second: "Given a meal, give a short and simple recipe on how to make that dish at home." Here we had user_location as the variable name, which isn't actually what we want — we want user_meal. (This wouldn't have mattered because the variable names matched, but the rename makes it clearer.) I do the same thing and put it into a meal_chain. So the pipeline will output a meal — a classic dish — and then output a simple recipe for that classic dish. Okay: I create my SimpleSequentialChain, specifying my chains as location_chain and then meal_chain — order matters, so be careful — and set verbose=True, which means it will tell us what it's thinking and actually print those statements out, making it easier to debug. I run the overall chain with my one input variable, "Rome", the user location that starts the whole thing. You can see it entering the new sequential chain: it ran "Rome" against the first prompt template and got me a classic dish, which is really cool, and then it gave me a recipe for how to make that dish. All of a sudden it did two different runs for me in one go, and I didn't have to write any complicated code — I could just use LangChain for that. It's pretty sweet.
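The control flow here is simply function composition: the first step's output string becomes the second step's input. A pure-Python sketch, with a stubbed-out `fake_llm` standing in for the real model call so it stays runnable:

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; canned replies keep the sketch runnable.
    if "classic dish" in prompt:
        return "Cacio e Pepe"
    return "Boil spaghetti; toss with pecorino, black pepper, and pasta water."

def location_chain(user_location: str) -> str:
    prompt = f"Your job is to come up with a classic dish from {user_location}. Name the classic dish:"
    return fake_llm(prompt)

def meal_chain(user_meal: str) -> str:
    prompt = f"Given the meal {user_meal}, give a short and simple recipe to make it at home:"
    return fake_llm(prompt)

def simple_sequential_chain(user_location: str, verbose: bool = True) -> str:
    # Each step's output feeds the next step's prompt, in order.
    meal = location_chain(user_location)
    if verbose:
        print(f"> step 1 output: {meal}")
    recipe = meal_chain(meal)
    if verbose:
        print(f"> step 2 output: {recipe}")
    return recipe

simple_sequential_chain("Rome")
```

The `verbose` flag mirrors LangChain's `verbose=True`: print each intermediate output so you can check the reasoning at every link of the chain.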

Now, the next one I want to show is one that I use quite often, which is the summarization chain. The reason this one is so cool is that if you have a long piece of text you want summarized, say an article, a tweet thread, a Hacker News post, whatever it may be, you're going to want to chunk up your longer piece of text, find summaries of those different chunks, and then get a final summary. In that case, what we're going to do is load in load summarize chain, and we're going to use Paul Graham's essay "disc" (not even sure what that one's about). Then we're going to split it up into different texts right here; the chunk size is going to be 700. Then I'm going to load the summarize chain, and the chain type I'm going to use is the one I mentioned beforehand, where you get the small summaries of the individual sections and then a summary of the small summaries. I have a whole video on different chain types, so if you're curious, go check out the video up above.
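That chunk-then-combine idea can be sketched in a few lines. This is a toy, not LangChain's implementation: `toy_summarize` stands in for a real LLM call, and the splitter is deliberately naive:

```python
# Sketch of the map-reduce summarization pattern: split long text into
# chunks, summarize each chunk (map), then summarize the summaries (reduce).

def split_text(text: str, chunk_size: int = 700) -> list:
    # Naive fixed-width splitter; real text splitters respect sentence
    # and paragraph boundaries.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def toy_summarize(text: str) -> str:
    # Stand-in "summary": first sentence, truncated. A real chain would
    # prompt the model for a concise summary instead.
    return text.split(".")[0][:80]

def map_reduce_summarize(text: str, chunk_size: int = 700) -> str:
    chunks = split_text(text, chunk_size)
    chunk_summaries = [toy_summarize(c) for c in chunks]   # map step
    return toy_summarize(". ".join(chunk_summaries))       # reduce step

essay = "First point about startups. More detail here. " * 20
final_summary = map_reduce_summarize(essay, chunk_size=200)
```

The structure is the point: the per-chunk calls and the final combining call are exactly the back-and-forth the chain handles for you in one line.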

play32:12

Let me go ahead and run this. As you can see here, the language model is asking... I'm sorry, the chain, or LangChain, is asking the language model to summarize this piece of text right here, and then this piece of text right here, because we only had two chunks we wanted to summarize, and then it's asking for a final concise summary. So here's the summary of chunk number one, here's the summary of chunk number two, and it's asking for a summary of the summaries. We finally get a summary of the summaries, which is really cool, because built into this one-liner right here were all the different calls back and forth to figure out how to do the summary of the summaries. That's one of the powers of LangChain, which is really sweet.

The last thing we're going to look at is agents, and this is one of the most complicated concepts within LangChain, which is why we're talking about it last here. But I thought the official LangChain documentation did a great job describing what agents are:

play33:05

"Some applications will not require just a predetermined chain of calls to LLMs and other tools" (what we did up above was a predetermined chain) "but potentially an unknown chain that depends on the user's input." An unknown chain (emphasis mine) means that we're not really sure what route we want to take, but we want the language model to tell us which route it thinks it should take. "In these types of chains, there is an agent which has access to a suite of tools. Depending on the user input, the agent can then decide which, if any, of these tools to call."

So, for example: hey, you have two databases you could pull information from, and they're on completely different topics. The user just asked you a question about trees. Which database should you go looking in to find your tree information? Well, an agent can decide that, which is really sweet.

play33:59

I'm going to go over the vocabulary first, and then we're going to look at an example. An agent is the language model that is going to be driving the decision making. Cool. A tool is going to be a capability of the agent: you can think of this as similar to the OpenAI plugins that just came out, or as the ability to go search Google, the ability to go check your email, whatever it may be. A toolkit is going to be a collection of tools, so an agent will have a toolkit of tools, and that's what it's going to work with.

I'm going to import load tools, I'm going to initialize the agent, and I'm going to import OpenAI as well.
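One hedged way to picture that vocabulary in code, with a made-up `Tool` type and `fake_agent_llm` standing in for the real decision-making model (the tool names and the picking rule here are invented for illustration):

```python
# Sketch of agent / tool / toolkit: a tool is a named capability, a
# toolkit is a collection of tools, and the agent is the model-driven
# decision maker that picks which tool (if any) to call.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Tool:
    name: str
    description: str
    func: Callable[[str], str]

toolkit = [
    Tool("search", "look things up on the web", lambda q: f"results for {q}"),
    Tool("email", "check your email", lambda q: "inbox is empty"),
]

def fake_agent_llm(question: str, tools: List[Tool]) -> str:
    # Stand-in decision: a real agent prompts the LLM with the tool
    # descriptions and the user input, and the LLM names a tool.
    q = question.lower()
    return "search" if ("who" in q or "what" in q) else "email"

def pick_and_run_tool(question: str, tools: List[Tool]) -> str:
    choice = fake_agent_llm(question, tools)
    tool = next(t for t in tools if t.name == choice)
    return tool.func(question)
```

This is also why loading every tool in the world hurts: the bigger the toolkit, the harder the "which tool?" decision becomes for the model.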

play34:39

With that, I'm going to create my language model. Now, I've inserted my SerpAPI key, because that's the example we're going to be running through here; it's an easy way to search Google. Then, for the toolkit, I'm going to go ahead and load the tools. In this case I'm only loading one tool, and it's SerpAPI. However, you could load in a lot of tools here, and you may naturally think, well, let me just load it up with all the tools in the world. You could, but it's just going to get difficult for the model, or the agent, to know which tool to use at which time, so you kind of only want to load the ones you know you're going to be needing at that point. I'm going to pass in my language model, and I'm going to pass in my SerpAPI key. Then I'm going to create my agent: I'm going to pass in the toolkit I just made, I'm going to pass in the language model again, and I'm going to say what type of agent it is. Now, there are different agent types for different types of tasks, and I encourage you to go check out the documentation to see which would be best for you. I'm going to say verbose equals true so we can see it thinking, and I'm also going to return the intermediate steps, which just means we get more granularity into what it's actually doing.

play35:41

With this, I'm going to say response... oh, agent is not defined. Then what I'm going to do here is pass my query to the agent itself: what was the first album of the band that Natalie Bergman is a part of? The reason I asked this question specifically is because, keep in mind, I haven't uploaded any documents here, so there's no information pre-loaded, and it's kind of a complicated question that has multiple steps that need to be answered. This is a perfect question for an agent.

So let's go ahead and run this, and let's see how the agent is thinking about it. It's entering the new agent executor class, and it said: I should try to find out what band Natalie Bergman is a part of. So it knows that it needs to go search, and it has a search tool up above, which I gave it, and it's saying "Natalie Bergman band," so it's searching for that one. Then it says "observation," which is what it observed from its action: Natalie Bergman is an American singer-songwriter; she is one half of the duo Wild Belle. Okay, cool. "I should search for the debut album of Wild Belle." It understood the band that she's a part of, and now it knows it needs to go search for that band. So it's going to search again, it's going to say "Wild Belle debut album," and it observes that the debut album is Isles. "I know the final answer," which is good; we want it to know the final answer. "Isles is the debut album of Wild Belle, the band that Natalie Bergman is a part of."

That is really cool, because that is a multi-step question, and the agent knew what it needed to go find out without me telling it the chain. This chain could have been a whole lot longer if it needed more steps, but it dynamically figured that out along the way, which is really, really cool. And if we were to print out the intermediate steps, we'd get more information about what it actually did and how it searched, and all that good information from there.
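That action/observation loop can be sketched with scripted search results standing in for the real SerpAPI calls; the planner logic below is made up purely to mirror the two search steps above:

```python
# Sketch of the agent's decide-act-observe loop. SCRIPTED_SEARCH fakes
# the search tool; `plan_next` fakes the LLM deciding the next action.

SCRIPTED_SEARCH = {
    "Natalie Bergman band": "Natalie Bergman is one half of the duo Wild Belle.",
    "Wild Belle debut album": "Wild Belle's debut album is Isles.",
}

def plan_next(observations: list) -> str:
    # Stand-in planner: a real agent asks the LLM what to do next
    # given everything it has observed so far.
    if not observations:
        return "Natalie Bergman band"
    if "Wild Belle" in observations[-1] and "Isles" not in observations[-1]:
        return "Wild Belle debut album"
    return "FINISH"

def run_agent_loop(max_steps: int = 5) -> list:
    intermediate_steps = []   # (action, observation) pairs, like
    observations = []         # return_intermediate_steps exposes
    for _ in range(max_steps):
        action = plan_next(observations)
        if action == "FINISH":
            break
        observation = SCRIPTED_SEARCH[action]
        intermediate_steps.append((action, observation))
        observations.append(observation)
    return intermediate_steps

steps = run_agent_loop()
```

Note the chain length isn't fixed anywhere: the loop runs until the planner decides it has the final answer, which is the "dynamically figured it out" behavior above.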

play37:25

And if we were to confirm this, let's go ahead and run this. Wow, yep, Wild Belle: there's Natalie Bergman, a brother-and-sister duo band. Beautiful. I would play their song if it wasn't going to give me copyright trouble, but I encourage you to go look it up; a link to my favorite song of theirs is in the description.

Well, my friends, that was a very broad overview of all of the nuts and bolts of LangChain, the tactical nuts and bolts. I congratulate you for making it to the end of this video, and if you have any questions, please let me know. I encourage you to subscribe and check out part two, where we go through actual use cases for these nuts and bolts. And again, I share a lot of tools on Twitter, so I encourage you to follow me there. Like always, please leave comments, let me know what you think of the video, and let me know if you have any questions. We'll see you later.
