Perplexica: How to Install this Free AI Search Engine, Locally?

Mervin Praison
23 Jun 2024, 09:01

Summary

TL;DR: The video introduces Perplexica, an open-source, AI-powered search engine that runs locally on your computer, offering an alternative to hosted services like Perplexity. It demonstrates how to set up Perplexica using Docker or npm for a private search experience, highlighting its focus modes for tasks such as academic research and YouTube search. The tutorial guides viewers through installation, configuration with API keys, and testing with providers such as OpenAI and Groq, emphasizing data privacy and customization.

Takeaways

  • 😲 Perplexica is a free, open-source AI search engine that can run locally on your computer.
  • 🔍 It serves as an alternative to Perplexity, a search engine powered by AI.
  • 🌐 Users can power it with models from Groq, Ollama, or OpenAI, including fully local models.
  • 💻 The demonstration shows Perplexica running on localhost:3000 with various search options.
  • 🔑 The video provides a step-by-step guide on how to install and set up Perplexica using Docker or npm.
  • 🔄 Docker is presented as the easiest option for running Perplexica; it only requires Docker to be installed on the computer.
  • 📝 The walkthrough covers cloning the repository, editing the configuration file, and setting API keys.
  • 🔍 Perplexica offers different modes, including a co-pilot mode (in development) and a normal mode with focus modes for writing, academic research, and searching platforms like YouTube and Reddit.
  • 📈 It demonstrates the capability to search for the latest AI news, images, and videos, similar to Perplexity.
  • 📚 The video also shows how to use Perplexica for academic research, providing summaries of relevant papers.
  • 🎥 The presenter regularly creates videos on artificial intelligence and encourages viewers to subscribe to their YouTube channel for updates.
  • 🛠️ For non-developers, Docker is the recommended installation route due to its simplicity, while developers may opt for npm, the Node package manager.

Q & A

  • What is Perplexica and how does it relate to Perplexity?

    -Perplexica is a free, open-source AI search engine that serves as an alternative to Perplexity, which is also an AI-powered search engine. Perplexica lets users run their own search engine locally on their computer, powered by models from Groq, Ollama, or OpenAI.

  • How can one access and use Perplexica?

    -Perplexica is accessed by running it locally on a computer and visiting http://localhost:3000. It offers search options similar to those available on Perplexity, such as academic research, writing, YouTube, and Reddit content.

  • What are the main modes in Perplexica?

    -Perplexica has two main modes: 'co-pilot mode', which is still in development, and 'normal mode', which is the primary mode of operation. Within normal mode, there are focus modes for different tasks such as writing articles, academic research, and searching YouTube and Reddit.

  • How can one install and set up Perplexica for use with local AI models?

    -Installation can be done with Docker for an easier setup, or without Docker using npm (the Node package manager). The video provides step-by-step instructions for both methods, including cloning the repository, editing the configuration file, and running the application locally.
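
    As a rough sketch of the Docker route described in the video (the repository URL is given in the video description, shown here only as a placeholder, and the sample config file name follows the video's description of a "sample config"):

      # clone the repository and move into it
      git clone <perplexica-repo-url-from-the-description>
      cd Perplexica

      # rename the sample config and fill in your API keys (see the config.toml question below)
      mv sample.config.toml config.toml

      # build and start the containers in the background
      docker compose up -d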

  • What is Docker and how is it used in setting up Perplexica?

    -Docker is a platform for developing, shipping, and running applications in containers. For Perplexica, Docker makes setup easy: it downloads the required container images and runs them automatically.
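
    If you are unsure whether Docker is ready, a quick sanity check from the terminal (assuming a recent Docker Desktop install, which bundles the Compose plugin used later):

      docker --version           # Docker engine is installed
      docker compose version     # the "docker compose" plugin is available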

  • What is the purpose of the 'config.toml' file in Perplexica?

    -The 'config.toml' file holds Perplexica's settings, including the API keys for OpenAI and Groq, as well as other settings such as the API base URL for a local Ollama instance.
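
    For illustration, the relevant entries look roughly like this; the exact key names come from the sample.config.toml shipped with the repository, so keep whatever names are already there and only fill in the values:

      OPENAI = "sk-..."                      # OpenAI API key
      GROQ   = "gsk_..."                     # Groq API key
      OLLAMA = "http://localhost:11434"      # Ollama API base URL for a local install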

  • How does Perplexica handle searching for images and videos?

    -Perplexica can search for images and videos in the same way it handles text searches: it processes the user's query and retrieves relevant media content from the web.

  • What is the co-pilot feature in Perplexica and what is its current status?

    -The co-pilot feature is intended to enhance the search experience, but at the time of the video it is still being built and may not be fully functional yet.

  • How can one switch between different AI models in Perplexica?

    -Users can switch models from the settings by selecting the desired chat model provider, such as OpenAI or Groq. They can also choose a local model (for example via Ollama) if one is installed.

  • What is the significance of SearXNG in Perplexica?

    -SearXNG is the search engine that powers Perplexica under the hood. It is essential to the application, performing the actual web searches and retrieving information from various sources for the AI models to summarize.

  • How can a non-developer install Perplexica without Docker?

    -Without Docker, installation means downloading Node.js, navigating to the UI folder in the Perplexica repository, installing the npm packages, building the project, and starting both the front-end and back-end services. The video notes this route is more involved, so non-developers are encouraged to use Docker instead.
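
    A condensed sketch of that non-Docker sequence, assuming the layout described in the video (a ui folder for the frontend and the backend at the repository root):

      # frontend
      cd ui
      cp .env.example .env     # the video renames the example file; copying works too
      npm i
      npm run build
      npm run start

      # backend, in a second terminal at the repository root
      npm i
      npm run build
      npm run start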

Outlines

00:00

🤖 Introduction to Perplexica: Local AI Search Engine

The video introduces Perplexica, a local AI search engine that serves as an alternative to the Perplexity search engine. It highlights the ability to run a personalized search engine on one's own computer using models from Groq, Ollama, or OpenAI. The script demonstrates how Perplexica can be used for various purposes, including academic research, YouTube, and Reddit searches, and how it can search for images and videos. The host also encourages viewers to subscribe to their AI-focused YouTube channel for more content.

05:01

🛠️ Setting Up Perplexica with Docker and API Keys

The script details the step-by-step process of setting up Perplexica using Docker, which simplifies installation by downloading the necessary container images and running them automatically. It explains how to clone the Perplexica repository, navigate into the folder, and edit the configuration file to include API keys for OpenAI and Groq. The video also covers how to run Perplexica without Docker using npm, the Node package manager, including installing dependencies, building the project, and starting both the front-end and back-end services.

🔍 Exploring Perplexica's Features and Customization Options

This part of the script showcases Perplexica's features, such as searching for the latest AI news, academic papers, and YouTube videos, and explains how the search engine provides summaries and detailed answers based on the content it finds. The video also discusses the co-pilot feature, which is still under development, and the option to point the Custom OpenAI setting at a local endpoint using a placeholder API key for testing. The script emphasizes the importance of privacy when using local software and the ability to run Perplexica completely locally.

Keywords

💡AI Search Engine

An AI search engine is a tool that uses artificial intelligence to enhance search capabilities, often by understanding context, providing personalized results, or filtering information more effectively. In the video, the main theme revolves around 'Perplexica,' an AI-powered search engine that can be run locally, offering an alternative to traditional search engines by leveraging AI models for various search functionalities.

💡Perplexity

Perplexity, in the context of the video, refers to a search engine that utilizes AI to improve the search experience. It is mentioned as the inspiration for 'Perplexica,' which aims to provide similar AI-driven search capabilities but with the added benefit of being locally hosted, thus potentially offering more privacy and control to the user.

💡localhost

localhost refers to a user's own machine, reached through the loopback network interface, where locally run applications are served. In the script, 'localhost:3000' is the address at which the Perplexica search engine is accessible, indicating that it runs directly on the user's machine rather than on a remote server.

💡Groq

Groq is mentioned in the script as one of the providers that can power the local search engine 'Perplexica.' It is an inference service that serves open models, such as Llama 3, through an API; in the video the presenter uses it to run the Llama 3 70B chat model.

💡Ollama

Ollama is another option referenced in the video script for use with 'Perplexica.' It is a tool for downloading and running large language models locally, which lets the search engine work with a fully local model such as Llama 3 instead of a hosted API.

💡OpenAI

OpenAI is a well-known organization in the field of AI research and deployment. In the context of the video, OpenAI models are suggested as an alternative to Groq and Ollama for powering the 'Perplexica' search engine, indicating the flexibility of the search engine to work with different AI sources.

💡Docker

Docker is a platform that allows developers to develop, ship, and run applications in containers. The script explains that 'Perplexica' can be run using Docker, which simplifies the process by handling the deployment and running of the search engine components in an isolated environment.

💡npm (Node Package Manager)

npm is a package manager for JavaScript and is used to install and manage dependencies for Node.js applications. In the video, npm is mentioned as part of the process to install and run 'Perplexica' without Docker, indicating a method for developers to set up the search engine using Node.js packages.

💡Co-Pilot Mode

Co-Pilot Mode is a feature mentioned in the script that is still in development for 'Perplexica.' While not fully detailed, it suggests an additional mode of operation for the search engine that might offer different functionalities or user experiences, distinct from the 'normal mode.'

💡Focus Modes

Focus Modes in the context of 'Perplexica' refer to specific configurations that tailor the search engine to particular tasks, such as writing articles, academic research, or searching through YouTube and Reddit. The script demonstrates how these modes can be selected to optimize the search results for different types of content.

💡SearXNG

SearXNG is the metasearch engine that powers 'Perplexica's' web results: it handles the core search functionality, querying various sources and returning the links that the AI models then read and summarize. The script notes that setting up Perplexica, particularly without Docker, also involves getting SearXNG running.

Highlights

Introduction to Perplexica, a free AI search engine alternative to Perplexity.

Ability to run a local search engine using AI models from Groq, Ollama, or OpenAI.

Demonstration of Perplexica running on localhost:3000 with multiple search options.

Explanation of how Perplexica can search for the latest AI news, images, and videos.

Perplexica is an open-source, AI-powered search engine that can use local models.

Description of the two main modes in Perplexica: Co-pilot (in development) and Normal mode.

Instructions on how to set up OpenAI, Ollama, and Groq for Perplexica.

Invitation to subscribe to the YouTube channel for more AI-related content.

Step-by-step guide on running Perplexica with Docker for ease of use.

How to clone the Perplexica repository and navigate to the folder for setup.

Editing the config.toml file to include API keys for OpenAI and Groq and the base URL for Ollama.

Using Docker Compose to pull the required container images and run Perplexica.

Testing Perplexica by searching for the latest AI news and switching between models in the settings.

Demonstration of searching for academic papers and summarizing information based on the research.

Exploring the YouTube focus mode and its ability to surface relevant video information.

Introduction to the Custom OpenAI settings for local testing and data privacy.

Instructions on installing Perplexica without Docker using npm, the Node package manager.

Details on setting up the environment variables and starting the Perplexica backend.

The need to install SearXNG as a third step for Perplexica to work without Docker.

Final thoughts on the excitement around Perplexica and plans for future videos.

Transcripts

00:00

This is amazing. Now we're going to look at Perplexica. It's a free AI search engine and an alternative to Perplexity. If you don't know about Perplexity, it's a search engine that uses the power of AI. But what if you could have your own search engine running locally on your computer? You can power the local Perplexity, that is Perplexica, using Groq, Ollama, or OpenAI models. As you can see here, it's running on localhost:3000, and you have multiple options, the same as you get with Perplexity: academic research, writing, YouTube, and Reddit. For example, if I search "give me the latest AI news", it works just like Perplexity, going through various links, as you can see here, and then providing me the latest AI news. You can also search for images and videos. That's exactly what we're going to see today. Let's get started.

01:00

Hi everyone, I'm really excited to show you Perplexica. It's an AI-powered search engine which is completely open source, and you are able to use local models. You have two main modes: co-pilot mode, which is in development, and normal mode, which is what we'll mainly focus on. You have focus modes to write an article, do academic research, and search YouTube, Wolfram Alpha, and Reddit. In this video I'm going to take you step by step through how to install it, how to set up OpenAI, Ollama, and Groq, and finally how to run it locally on your computer. But before that: I regularly create videos about artificial intelligence on my YouTube channel, so do subscribe and click the bell icon to stay tuned, and make sure you click the like button so this video can be helpful for many others.

01:40

First, we're going to see how you can run this using Docker, which is the easier option; it just means you need to download Docker on your computer. Second, we'll see how to install it without Docker, using npm, the Node package manager. So, first step: git clone this URL and press Enter (I'll provide all the information in the description below). After cloning, navigate into the folder with cd Perplexica and press Enter.

02:05

Now I'm opening this in a code editor to edit a few files. After opening it in the VS Code editor you'll see a sample config.toml; this is how it looks. Just make a copy of it, or rename it: right-click, rename, remove the "sample" part so only "config" is left, and press Enter. Now you have one config.toml file; open it. You can keep everything at its defaults; the main thing to focus on is the API keys. If you're planning to use OpenAI, enter your OpenAI API key; if you're planning to use Groq, enter your Groq API key; and if you're planning to use Ollama and you already have it installed on your computer, just change its address to localhost on a Mac. If you're running Ollama inside Docker, you might need to change it to host.docker.internal:11434 instead, but in my case I'm running it on my Mac, so I'm using localhost:11434. I've already downloaded Ollama from ollama.com and ran ollama pull llama3 to download the Llama 3 model, as you can see here. But here I'm mainly going to show OpenAI and Groq, so after entering the relevant API keys for Groq and OpenAI, just save the file.
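
To recap the Ollama-related settings from this step (values as shown in the video; keep the key names from your own config.toml):

    # Ollama installed directly on the machine (the presenter's Mac):
    OLLAMA = "http://localhost:11434"

    # Ollama running inside Docker instead:
    OLLAMA = "http://host.docker.internal:11434"

    # The model itself was pulled beforehand with:
    #   ollama pull llama3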

03:19

Now, inside the same folder, type docker compose up -d and press Enter. This will automatically download the required container images and start them running. You can then type docker ps and you'll see the Perplexica frontend, the Perplexica backend, and SearXNG, which is mainly used for the searching itself. Our main focus is the frontend, that is, the user interface: it's bound to 0.0.0.0:3000, in other words localhost:3000, so here I am at localhost:3000.
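
A quick way to confirm everything came up, matching what the video shows:

    docker ps                # should list the Perplexica frontend, backend, and a SearXNG container
    docker compose logs -f   # follow the container logs if something fails to start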

03:52

Now we are ready to test this out. You can click the settings here; as you can see, I'm using an OpenAI model first, GPT-4 omni, choosing the OpenAI text-embedding model as the embedding model, and here you enter the API key, then click save. Now I'm going to type "give me latest AI news" and press Enter, and you can see it automatically went through these links and got me the latest AI news. You can switch to Groq by clicking settings: chat model provider Groq, chat model the Llama 3 70-billion-parameter model. We're still using the embedding model from OpenAI, but you can even set up a local embedding model by clicking "local" and choosing one here; for now I'm going to stick with OpenAI. So let's test the Llama 3 70B model via Groq: say "latest news about Groq" and press Enter. You can see it goes through the latest news articles and, based on that, writes me this response.

04:50

Now let's try different options, such as Academic. This brings you academic papers and is used for research. Let's say "use of LLMs in healthcare" and press Enter. You can see the research comes from arXiv papers; it went through all these papers and, based on them, it's giving me a summary with all the relevant information: one of the primary applications of LLMs in healthcare is in clinical language understanding tasks, and LLMs can be fine-tuned to meet the unique needs of the healthcare domain. It's giving me relevant, useful information based on the research articles. I can even ask follow-up questions, and it suggests some, like "how can LLMs be fine-tuned to meet the unique needs of the healthcare domain?". Just clicking on that, it again went through all these articles and gave me this detailed answer, so you can keep on asking questions.

05:44

Next, let's look at YouTube. I'm going to ask a question by typing "Praison AI" and pressing Enter. It goes through my YouTube videos about Praison AI and gives me relevant information with all the references, and you can see the number of YouTube videos it went through. One thing to note is that the co-pilot feature is still being built, so it might not work for now, but I hope it will be improved very soon.

06:10

In the settings there is another option, Custom OpenAI. Here you can provide a model name and an API key, which could be any fake key just for testing, and then a base URL. Here I provide the Ollama base URL, but you can change this to anything, such as LM Studio, Jan AI, or Text Generation Web UI. Provide that here and you can run this completely locally on your computer, and all your data remains private if you use local software; choose the embedding model provider as "local" as well, and that will be super exciting.
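
Before pointing the Custom OpenAI settings at a local server, it can help to confirm that the server really exposes an OpenAI-compatible endpoint. This is only a hypothetical check, assuming default ports for the tools mentioned above:

    curl http://localhost:11434/v1/models    # Ollama, if your version exposes the OpenAI-compatible API
    # LM Studio's local server usually listens on http://localhost:1234/v1 instead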

06:44

Next, I'm going to show you how to install this using the Node package manager, without Docker containers. Make sure you've downloaded Node.js from nodejs.org. Then, in the VS Code editor, inside the Perplexica project there's a folder called ui; go into it and you'll find a .env.example file. We're going to rename that to .env: just rename it and press Enter. This is the content of the file. Keep your previous config.toml file as it is, because that's where the settings for the OpenAI and Groq API keys and for Ollama come from.

07:22

Now, in the terminal, navigate to the UI folder: cd ui, then press Enter. Run npm i and press Enter to install all the npm packages; you can see the packages get installed. Then npm run build and press Enter; it builds all the required packages with the environment variables, as you can see here, and it's all ready. Finally, npm run start and press Enter, and you can see it's running on port 3000, as before. One more step is to start the backend, since currently only the UI, that is the frontend, is running. I'm going to open a new terminal in the Perplexica folder and type npm i to install all the backend packages. Once the packages are installed, run npm run build to build, same as before, then npm run start and press Enter. Now we have the backend running, and if you look here the frontend is also running. I'm going to open this URL and you can see the user interface, same as before; I can change settings here and ask questions.

08:28

One more thing to note is SearXNG, the main search engine that powers Perplexica. You might need to install it as a third step, apart from running the frontend and the backend, to make everything work. This might be complicated for non-developers, so it's better to use the Docker version if you are a non-developer. You'll find all the installation steps in the documentation, as you can see here.
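
If you do take the manual route, one relatively painless way to get a SearXNG instance is its official container image; a minimal sketch, with ports and settings left at defaults, after which Perplexica's config file needs to point at the resulting URL:

    docker run -d --name searxng -p 8080:8080 searxng/searxng
    # Additional settings (base URL, enabling the JSON output format Perplexica expects)
    # may be needed; see the SearXNG and Perplexica documentation.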

08:52

I'm really excited about this, and I'm going to create more videos like it, so stay tuned. I hope you liked this video; do like, share, and subscribe, and thanks for watching.

Related Tags
AI Search, Local Hosting, Custom Models, Open Source, Docker Setup, API Integration, Content Discovery, Research Tool, YouTube Search, Healthcare AI, Data Privacy